Apr 9 2024

Software Architecture with C# 12 and .NET 8 is out!

The fourth edition of my book is out! You can buy it on Amazon


If you are an aspiring .NET software architect, or a C# developer wishing to jump into the world of enterprise applications and the cloud, this is the right book for you!

Software Architecture with C# 12 and .NET 8 puts high-level design theory to work in a .NET context, teaching you the key skills, technologies, and best practices required to become an effective .NET software architect.

This fourth edition puts emphasis on a case study that will bring your skills to life. You'll learn how to choose between different architectures and technologies at each level of the stack. You'll take an even closer look at Blazor and explore OpenTelemetry for observability, as well as a more practical dive into preparing .NET microservices for Kubernetes integration.

Divided into three parts, this book starts with the fundamentals of software architecture, covering C# best practices, software domains, design patterns, DevOps principles for CI/CD, and more. The second part focuses on the technologies, from choosing data storage in the cloud to implementing frontend microservices and working with Serverless. You'll learn about the main communication technologies used in microservices, such as REST API, gRPC, Azure Service Bus, and RabbitMQ. The final part takes you through a real-world case study where you'll create software architecture for a travel agency.

What’s new in this edition?

Topics are analyzed in greater detail and updated for .NET 8 and the latest Azure components. We have also added a new practical chapter on developing .NET applications for Kubernetes.

Finally, the book has been organized into three parts, creating a flow that will guide you in your journey to becoming a software architect: architectural fundamentals, .NET technologies, and practical coding with a great case study.

Highlights from the World Wide Travel Club case study:

  • Code examples in past editions restructured and re-organized into a case study
  • Examining user needs and managing requirements with Azure DevOps
  • Understanding the application domains and choosing cloud data storage
  • Implementing worker microservices with gRPC and RabbitMQ

How does this book differ from other books on C# 12 and .NET 8?

Although we are using .NET 8, the current Long Term Support version of .NET, and the book’s programming language is C# 12, we don’t only talk about technology. We connect the different modern topics needed to design an enterprise application, and we enable you to understand how these techniques work together. This means the book focuses more on architectures, patterns, and design techniques than on the syntax of the language and its features.

In a few words, the book assumes you already have basic knowledge of .NET and C#, driving you toward their usage for implementing cutting-edge applications based on microservices and modern architectures and design techniques.

Table of Contents

  1. Understanding the Importance of Software Architecture
  2. Non-Functional Requirements
  3. Managing Requirements
  4. Best Practices in Coding C# 12
  5. Implementing Code Reusability in C# 12
  6. Design Patterns and .NET 8 Implementation
  7. Understanding the Different Domains in Software Solutions
  8. Understanding DevOps Principles and CI/CD
  9. Testing Your Enterprise Application
  10. Deciding on the Best Cloud-Based Solution
  11. Applying a Microservice Architecture to Your Enterprise Application
  12. Choosing Your Data Storage in the Cloud
  13. Interacting with Data in C# - Entity Framework Core
  14. Implementing Microservices with .NET
  15. Applying Service-Oriented Architectures with .NET
  16. Working with Serverless - Azure Functions
  17. Presenting ASP.NET Core
  18. Implementing Frontend Microservices with ASP.NET Core
  19. Client Frameworks: Blazor
  20. Kubernetes
  21. Case Study
  22. Case Study Extension: Developing .NET Microservices for Kubernetes

DO NOT MISS IT!

Francesco


    Apr 1 2022

    Software Architecture with C# 10 and .NET 6 is out!

    The third edition of my book is out! You can buy it on Amazon


    If you are a C# developer wishing to jump into the world of enterprise applications and the cloud, this is the right book for you!

    From collecting requirements and mastering DevOps, to selecting the right cloud resources, Web API, front-end frameworks (ASP.Net MVC and Blazor), and microservices design principles and practice: this new edition updates all subjects to the latest cloud and .NET features and adds new chapters:

    • A detailed description of gRPC and of how to use it from .NET
    • A new chapter that explains in detail how to implement a worker microservice with ASP.NET + gRPC, and with .NET hosted services + RabbitMQ
    • An introduction to Artificial Intelligence and Machine Learning
    • An introduction to native clients (including a short review of .NET MAUI)

    Most chapters give enough detail to cover 90% of all practical applications, plus all the links and pointers needed to get more details. The only exceptions are the chapters about artificial intelligence and native clients, which are just introductions to big subjects. However, there too you will find complete learning paths to follow to become an expert.

    The first three chapters describe modern development processes, and how to collect and document functional and non-functional requirements. Examples of requirement collection and management with Azure DevOps are given.

    Then the book moves to the basic cloud concepts and describes how to select the right cloud resources for each application.

    Chapter 5 explains the whole theory behind microservices design, and lists the .NET resources that play a fundamental role in the .NET implementation of microservices. A self-contained description of Docker is given, too.

    Chapter 6 is dedicated to Kubernetes. There you will find all the basic concepts and enough details to cover 90% of all practical applications.

    Chapters 7 and 8 are dedicated to data storage and how to interact with it through Entity Framework Core and other clients. There, you will find the whole theory behind distributed databases, how to maximize read and write parallelism, and how to choose between SQL and NoSQL databases.

    Chapter 9 is about serverless and Azure Functions. There, you will find enough details to cover simple-to-middle complexity functions, and pointers on how to implement more complex functions.

    Chapter 10 is dedicated to the concept of pattern and describes various patterns used throughout the book.

    Chapter 11 describes Domain-Driven Design, which is the most used design methodology for microservices. Related patterns and their practical usage in .NET layered applications are given, too.

    Chapter 12 describes typical patterns of code reusability used in .NET applications.

    Chapter 14 gives a detailed description of gRPC and of its usage in .NET applications. Then, a complete implementation of a worker microservice with gRPC and ASP.NET Core is given. Finally, the same example is implemented with a .NET worker service and RabbitMQ.

    Further chapters describe SOA architectures and their implementation with ASP.NET Core (13), ASP.NET Core and ASP.NET Core MVC (15), and Blazor (17).

    Chapter 16 puts into practice all the concepts learned about ASP.NET Core MVC and Domain-Driven Design through the implementation of a front-end microservice.

    Chapter 18 is an introduction to native .NET clients that also includes a first look at .NET MAUI. The description is not detailed, since a detailed description would require another complete book, but native clients are compared with browser-based clients and frameworks (like Blazor), and complete learning paths are given.

    Chapter 19 is an introduction to artificial intelligence and machine learning. The basic principles of the main AI techniques are described, without going deep into the implementation details. Also, the basic theory about machine learning is given. Here the focus is on understanding which problems can be solved with machine learning and how many examples they require. A practical example of supervised learning is given.

    Chapter 20 is dedicated to best practices and code metrics.

    Chapters 21 and 22 are dedicated to DevOps and to the usage of Azure DevOps and GitHub Actions.

    Finally, chapter 23 is dedicated to testing, test-driven design, unit tests, and functional/acceptance tests. The chapter gives the complete theory and describes xUnit and Moq in detail. Practical examples of functional tests based on AngleSharp and Selenium are given, too.

    DO NOT MISS IT!

    Francesco


    Dec 15 2021

    Move your team quickly to ASP.NET Core 6!

    Start quickly with ASP.NET Core 6, with an interactive online course on the Educative platform.

    Takeaway Skills

    ASP.NET Core
    MVC Pattern
    Razor and Razor tag-helpers
    Localization and globalization
    Options framework
    Dependency injection and hosted services
    Authentication and Authorization

    Start Now!

    Course Contents

    1. Razor Template Language

    2. Tag Helpers

    3. Controllers and ViewModels

    4. Validation and Metadata

    5. Dependency Injection

    6. Configuration and Options Framework

    7. Localization and Globalization

    8. Factoring out UI Logic

    9. Authentication and Authorization

    10. Publishing Your Web Application

    Try it!


    Oct 4 2017

    Building Web applications with Knockout.js and ASP.NET core

    Category: Asp.net | Asp.net core | MVC | TypeScript | WebApi | JavaScript. Francesco @ 05:40

    Amongst all the client side frameworks backed by big companies, React.js and Angular.js appear to be the most popular. However, Knockout.js still maintains a good market share, thanks to its interesting peculiarities.

    Knockout is based on an MVVM paradigm similar to Angular.js’s, but unlike React.js’s. While it is adequate for complex modular applications, at the same time it is very simple to mix with server-side templating, similarly to React.js, but unlike Angular.js… Read the full article.

    Contact us if you want to learn more



    Jul 1 2016

    Asp.net Core 1.0.0 RTM Version of the Mvc Controls Toolkit Ready

    Category: Asp.net core | WebApi | MVC | Html5 fallback | Asp.net. Francesco @ 20:00

    The first Asp.net Core 1.0.0 RTM release of the Mvc Controls Toolkit is available for download! This is the link that explains how to install it, while a starting example may be downloaded here. Pay attention! You must follow all installation steps also to run the example, since the example, among other things, also has the purpose of making you familiar with installation and configuration.

    Enjoy! & Stay tuned

    Tutorials, live examples, and a complete documentation web site are coming in a short time.

    Francesco


    Jun 25 2016

    Asp.net Core RC2 Version of the Mvc Controls Toolkit Ready

    Category: Asp.net core | WebApi | Html5 fallback | MVC | Asp.net. Francesco @ 02:40

    The first Asp.net Core RC2 release of the Mvc Controls Toolkit is available for download! This is the link that explains how to install it, while a starting example may be downloaded here. Pay attention! You must follow all installation steps also to run the example, since the example, among other things, also has the purpose of making you familiar with installation and configuration.

    Enjoy! & Stay tuned

    Tutorials, live examples, and a complete documentation web site are coming in a short time.

    Francesco


    May 7 2015

    JavaScript Intensive Web Applications 4: JSON based Ajax and Single Page Applications

    Category: WebApi | MVC | JavaScript. Francesco @ 06:19

    JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

    JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

    JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax


    In this last post of the series, I discuss the use of JSON-based Ajax calls and client-side View Models. I will also propose a simple implementation of a knockout.js binding that applies a generic jQuery plug-in to an Html node. The post concludes with a short analysis of Single Page Application frameworks.

    In my previous post we saw that Html-returning Ajax calls update the needed parts of an Html page while keeping the remainder of the page unmodified. This allows a tighter interaction between user and server, because the user may work on other areas of the page while waiting for a server response, and may ask the server for supplementary information in the middle of a task without losing the whole page state.

    The user experience may be improved further if we are able to maintain the whole state of the current task on the client, because this way we further reduce the need to communicate with the server: the user may prepare all data for the server while receiving immediately all needed help and suggestions, with no need to communicate with the server in this first stage. Communication with the server is needed only after everything has been prepared. For instance, the user may modify all data contained in a grid, resorting to a detail window when needed. Entities connected to the main list by one-to-many relations may be edited in the detail view. Everything without communicating with the server! Then, when all changes have been done, the user performs a single submit and updates the global state of the system. The server answer may contain corrections performed by the server on the data supplied by the user, which are automatically applied to the client copy of the data.

    In other words, maintaining the whole state of a task on the client side allows a tighter user-machine cooperation, since this cooperation may be performed without waiting for remote server answers. However, the increased complexity of the client side requires a robust and modular architecture of the client code. In particular, since we move logic, and not only UI, to the client side, Html nodes, which are mainly UI stuff, must be supported by JavaScript models. Models and Html nodes should cooperate while keeping separation of concerns between logic and UI. This means that all processing must take place on models that are then rendered by using client-side templates. Accordingly, Ajax calls can’t return Html anymore, but must return JavaScript models.

    Summing up, all architectures where the whole state of the current task is maintained on the client should have the following features:

    1. JSON communication with the server. The format of the data exchanged between server and client might also be Xml-based, but as a matter of fact, at the moment the simpler JSON protocol is a de facto standard.
    2. Html is created dynamically by instantiating client templates, thus this kind of Web Application is not visible to search engines.
    3. The state of client and server must be kept aligned, by performing simultaneous updates on both client and server in a transactional fashion. This means, for instance, that if a server update fails for some reason, the client must be able to restore the state of the last client-server synchronization.

    As a matter of fact, at the moment point 3 has not received the needed attention even in sophisticated Single Page Application frameworks, which don’t supply general tools to face it, so the problem is substantially left to custom solutions by the developers.
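    To make point 3 concrete, here is a minimal sketch, under the assumption of a plain serializable model and a hypothetical saveChanges endpoint: the client keeps a snapshot of the last synchronized state and rolls back to it when the server rejects an update.

        // Transactional client/server alignment (all names are hypothetical).
        // A snapshot of the last synchronized state is kept, and restored on failure.
        var lastSynchronized = JSON.stringify(viewModel.data);

        function submitChanges() {
            $.ajax({
                url: '/api/saveChanges',            // hypothetical endpoint
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify(viewModel.data)
            }).done(function () {
                // Success: the current state becomes the new checkpoint.
                lastSynchronized = JSON.stringify(viewModel.data);
            }).fail(function () {
                // Failure: restore the state of the last client-server synchronization.
                viewModel.data = JSON.parse(lastSynchronized);
            });
        }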

    In the case of Html-based Ajax communication we have seen that, since the communication is substantially based on form submits, the server relies on all input fields having adequate names to build a model that is then passed to the Action methods that serve the client requests. In JSON-based communication, instead, input field names are completely irrelevant, since action methods substantially receive JavaScript models.
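    For instance, a minimal sketch of such a JSON-based call (the url and model shape are hypothetical): the posted object is bound to the action method’s parameter by the JSON formatter, so input field names play no role.

        // Post a JavaScript model to an action method; no form fields involved.
        $.ajax({
            url: '/api/customers',                  // hypothetical endpoint
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify({ Name: 'John', Surname: 'Smith' })
        }).done(function (result) {
            // result is the JavaScript model returned by the server.
            console.log(result);
        });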

    Html ids and CSS classes are also used as “addresses” to select the Html nodes to enhance with JavaScript code. Several frameworks like knockout.js and angular.js avoid the use of these ids and CSS classes as a way to attach JavaScript behavior to Html nodes. In their case, model properties are “connected” to Html nodes through so-called bindings, which are substantially communication channels between Html nodes and JavaScript properties that update one of them when the other changes. They may be one-way or two-way. Bindings may also connect Html nodes with JavaScript functions, and the developer may also define custom bindings; thus bindings completely solve the problem of connecting Html nodes with JavaScript code, with no need to provide unique ids or selection-purpose CSS classes.
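    As a minimal illustration of the idea (property names are hypothetical), markup like <input data-bind="value: name"/> establishes a two-way channel with an observable property, with no ids or selection-purpose CSS classes involved:

        // A knockout ViewModel with an observable property. The data-bind
        // attribute above keeps the input and name() synchronized both ways.
        var viewModel = { name: ko.observable('John') };
        ko.applyBindings(viewModel);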

    Below is how to use a custom knockout.js binding for applying jQuery plug-ins to Html nodes:

     

        <input type="button" value="my button" data-bind="jqplugins: ['button']"/>
        <input type="button" value="my button"
               data-bind="jqplugins: [{ name: 'button', options: {label: 'click me'}}]"/>

     

    The binding name is followed by an array whose elements may be either simple strings, in case there are no plug-in options, or objects with a name and an options property. As you can see, in knockout.js bindings are contained in the Html5 data-bind attribute.

    Below is the JavaScript code that defines the jqplugins custom binding:

     

        (function ($) {
            // Applies the jQuery plug-in called "name" to the jQuery element jElement.
            function applyPlugin(jElement, name, options) {
                if (typeof $.fn[name] !== 'function') {
                    throw new Error("unrecognized plug-in name: " + name);
                }
                if (!options) jElement[name]();
                else jElement[name](options);
            }
            ko.bindingHandlers.jqplugins = {
                update: function (element, valueAccessor, allBindingsAccessor) {
                    var allPlugins = ko.utils.unwrapObservable(valueAccessor());
                    var jElement = $(element);
                    // Each entry is either a plug-in name, or a {name, options} object.
                    for (var i = 0; i < allPlugins.length; i++) {
                        var curr = allPlugins[i];
                        if (typeof (curr) === 'string')
                            applyPlugin(jElement, curr, null);
                        else {
                            applyPlugin(jElement,
                                ko.utils.unwrapObservable(curr.name),
                                ko.utils.unwrapObservable(curr.options));
                        }
                    }
                }
            }
        })(jQuery)

     

    The code above enables the use of all available jQuery plug-ins in all knockout.js-based architectures, so that we can move to advanced client architectures based on knockout.js without renouncing our favorite widgets and CSS/JavaScript frameworks like jQuery UI, Bootstrap, jQuery Mobile, and Zurb Foundation.

     

    As a next step we may pass from storing the whole state of a single task to storing the whole application state on the client side, which implies that the whole application must live in a single physical Html page (otherwise the whole state would be lost). Such applications are called Single Page Applications.

    In a Single Page Application, virtual pages are created dynamically by instantiating client templates that substitute the Html of any previous virtual page in the same physical page. The same physical page may show several virtual pages simultaneously in different areas. For instance, a virtual page might play the role of master, and another the role of detail page.

    Most Single Page Application frameworks also have the concept of virtual link and/or of routing, and may connect the virtual pages to the browser history, so that the user may navigate among virtual pages with usual links and with the browser buttons.
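    As an illustration of virtual pages, virtual links, and history integration, below is a minimal hand-rolled sketch (template and container ids are hypothetical; real frameworks offer much richer routers). A link like <a href="#/detail">…</a> acts as a virtual link:

        // Minimal virtual-page router based on the URL hash.
        var virtualPages = {
            '#/home': 'home-template',
            '#/detail': 'detail-template'
        };

        function showVirtualPage() {
            var templateId = virtualPages[window.location.hash] || 'home-template';
            var template = document.getElementById(templateId).innerHTML;
            // Instantiating the client template replaces the previous virtual page.
            document.getElementById('virtual-page-container').innerHTML = template;
        }

        // Each hash change enters the browser history, so the back/forward
        // buttons navigate among virtual pages inside the same physical page.
        window.onhashchange = showVirtualPage;
        showVirtualPage();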

    But… why re-implement the whole browser behavior inside a single physical page? What are the advantages of Single Page Applications compared to “multiple physical pages applications” based on client View Models?

    In general having the whole application state on the client side reduces further the need to communicate with the server, thus increasing the responsiveness to the user inputs. More specifically:

    1. Once the client templates needed to create a new virtual page have been downloaded from the server, further accesses to the same virtual page become very fast. On the contrary, loading a complex client-model-based page that is able to store the whole state of a task may be time consuming, so saving this loading time improves the user experience considerably.
    2. The state of a previously visited virtual page may be maintained, so that the user finds the virtual page in exactly the same state he/she left it in. This improves the cooperation between different tasks that are somehow connected: the user may move back and forth between several virtual pages with the browser buttons while performing a complex task, without losing the state of each page.
    3. The same physical page may contain several virtual pages simultaneously in different areas. Thus, the user may move back and forth between several virtual pages in one area, while keeping the content of another area. This scenario enables advanced forms of cooperation between virtual pages.
    4. The whole Single Page Application may be designed to work also off-line. When the user has finished working, the whole application state may be saved in the local storage and restored when he/she needs to perform further changes, or when he/she can go on-line to perform a synchronization with the server (a minimal sketch follows this list).
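    Here is the promised sketch of the off-line scenario of point 4, assuming the application state is a plain serializable object (all names are hypothetical):

        // Persist the whole application state when the user stops working.
        function saveStateOffline(appState) {
            localStorage.setItem('appState', JSON.stringify(appState));
        }

        // Restore it at the next start-up; createFreshState is a hypothetical
        // factory for an empty state, used when nothing has been saved yet.
        function restoreState() {
            var saved = localStorage.getItem('appState');
            return saved ? JSON.parse(saved) : createFreshState();
        }

        // Once back on-line, the restored state can be synchronized with the server.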

    The main problem Single Page Application developers are faced with is keeping a large JavaScript codebase modular and maintainable. Since virtual pages are actually client template <-> ViewModel pairs, the concept of virtual page itself has been conceived in a way that increases modularity. However, several virtual pages also need a way to cooperate that doesn’t undermine their modularity and the independence of each virtual page from the remainder of the system.

    In particular:

    1. Each virtual page definition should not depend on the remainder of the system, to keep modularity, which, in turn, implies that virtual pages may not contain direct references to other external data structures.
    2. Notwithstanding point 1, some kind of cooperation that doesn’t undermine modularity must be achieved among model-view pairs, and among model-view pairs and the application models. A modular cooperation may be achieved by injecting interfaces that connect each model-view pair with the external environment as soon as the model-view pair is added to the page.
    3. Pointers to data structures contained inside each virtual page should be either avoided or handled by resource managers, to avoid their being used when a virtual page has been released or is not in an active state.

    Separation is ensured to some extent by the concept of ViewModel itself. Durandal.js uses AMD modules to encode ViewModels. The AMD protocol is a powerful technique both for dynamically loading and for injecting other code modules that the current module might depend on, and consequently for handling a large JavaScript codebase. However, the dependency tree is hardwired, so the injection mechanism is more adequate for injecting code than dynamic data structures that might depend on the state of the ongoing computation. Accordingly, the full achievement of point 2 requires an explicit programming effort. Angular.js uses a custom dependency injection and module loading mechanism. That mechanism is easier to use, but it is less adequate for managing large codebases (in my opinion, not adequate at all). However, the fact that the injection mechanism is less structured makes it easier to inject dynamic data structures when a model-view pair is instantiated.
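    For reference, this is what a minimal AMD ViewModel module loaded by RequireJS looks like ('services/dataService' is a hypothetical dependency); the dependency list is declared up front, which is why the tree is hardwired:

        // The loader resolves and injects the dependencies before the factory runs.
        define(['knockout', 'services/dataService'], function (ko, dataService) {
            function PageViewModel() {
                var self = this;
                self.items = ko.observableArray([]);
                // The injected module is code fixed at definition time: injecting
                // data that depends on the ongoing computation needs extra work.
                dataService.loadItems().done(self.items);
            }
            return PageViewModel;
        });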

    In general, most frameworks ensure separation with some kind of cooperation, but no framework offers a completely out-of-the-box solution for point 2, nor an out-of-the-box solution for managing the lifetime of pointers that have been injected into model-view pairs to ensure an adequate cooperation in the context of the ongoing computation (point 3). More specifically, the lifetime of injected pointers to AMD modules (or other types of dynamically loaded modules) is automatically handled, but there is no out-of-the-box mechanism for managing pointers that a model-view pair might have to data structures contained in another model-view pair, so the developer has the burden of coding all the controls needed to ensure the validity of each pointer, in order to avoid the use of pointers to data structures contained in model-view pairs that have been removed from the page.

    The need for a more robust solution to problems 2 and 3 is among the reasons that pushed me to implement a custom Single Page Application framework in the Data Moving Controls suite. The Data Moving SPA framework (see here, and here) relies on contextual rules that “adapt” each virtual page that is being loaded to the “current context”, where the “current context” includes both interface implementations that connect the virtual page to the remainder of the system and information about the current state of the application, such as whether the user is logged in or not, the current culture (that is, the browser language and culture settings), and so on. Contextual rules are also used to redirect a not-logged-in user to a login virtual page, and to verify that the user has the needed authorizations to access the current virtual page. The interface implementations passed by the contextual rules to the virtual page View Models also include all the resource managers needed for sharing data structures safely among all application virtual pages. Another communication mechanism is the possibility to pass input data to any page that is being loaded. Such input data are analogous to the input data passed in the query string; in fact, this input may also be included in virtual links.

    Another big challenge of Single Page Applications is the duplication of code on both the client and server side. In fact, the same classes, input validation criteria, and other metadata must be available on both the client and server side, and when the languages used by the two sides are different, this becomes a big problem. The Meteor framework uses JavaScript on both server and client, and allows code sharing between the two sides. The main price to pay for this solution is the use of a language that is not strongly typed also on the server side. In the Data Moving SPA framework we faced this problem by equipping the SPA server with dynamic JavaScript files implemented as Razor views. This way, JavaScript data structures may be obtained by serializing their equivalent .Net data structures into JavaScript.

    Another important problem all SPAs must solve is the data synchronization between client and server. Durandal.js works quite well with Breeze.js, which offers some synchronization services for the case where the server may be modeled as an OData source. Breeze.js may be adapted also to most other SPA frameworks, but this solution is acceptable only if there is almost no business logic between the client and the server-side database. In fact, only in this case may the server API be exposed as an OData source only, with no need for more complex communication.

    Meteor takes care of server/client synchronization in a way that is completely transparent to the developer. Such a solution facilitates the coding of simple applications, but may be inadequate for complex business systems that need to control explicitly the communication between client and server.

    The Data Moving SPA framework offers retrievalManagers to submit a wide range of (customizable) queries (including OData queries) to the server, while viewModelUpdatesManagers and updatesManagers take care of synchronizing a generic data structure with the server in a transactional fashion, by taking into account both changes in several Entity Sets (additions, modifications, and deletes) and changes in temporary data structures (core workspaces). As a result of the synchronization process they may return either errors, which are automatically dispatched to the right places in the UI in case of failure, or remote commands that apply modifications to the client-side data structure to be synchronized with the server. While the synchronization process is completely automatic, the developer has full control over which data to synchronize and when to synchronize them, plus the possibility to customize various parts of the process.

     

    That’s all! This post ends the short series about JavaScript-intensive web applications. This series is in no way a tutorial that extensively describes all the details of the techniques that have been discussed, but just a guide on how to select the right technique for each application and on how to solve some architectural issues that are not usually discussed elsewhere.

     

    Stay tuned! 

    Francesco


    Jun 21 2014

    New Versions of Mvc Controls Toolkit and Data Moving Controls Suite

    Category: Asp.net | Javascript | MVC | WebApi. Francesco @ 21:43

    New 3.0.0 release of the Mvc Controls Toolkit. See the list of changes.

    New 1.2 release of the Data Moving Controls Suite. See the list of changes.

     

    Enjoy!

    Francesco


    Feb 18 2014

    Data Moving Mvc Control Suite Available for Purchase!

    Category: WebApi | MVC | JavaScript | Asp.net. Francesco @ 06:54

    Finally, the Data Moving Controls Suite is available for purchase! Not only several powerful asp.net mvc controls, easy to configure with a fluent interface and easy to style with your favourite framework (supported: jQuery UI, jQuery Mobile, and Twitter Bootstrap), but also a complete Single Page Application framework, a sophisticated validation framework that extends the standard server/client asp.net mvc validation framework, the possibility to store control settings and reuse them in several pages, futuristic user interfaces based on Interaction Primitives, and more...

    Hurry up: 25% off till 2014-4-31!

    Try to win your license by solving the Triangles Enumeration Problem.


    Mar 18 2013

    Single Page Applications 1: Manipulating the Client Side ViewModel

    Category: Asp.net | MVC | WebApi. Francesco @ 21:46

    Data Moving Plugin Controls

    Data Moving Plugin Styling

    Data Moving Plugin Forms: Detail Views, Data Filtering UI, and Undo/Redo Stack

    Single Page Applications 1: Manipulating the Client Side ViewModel

    Single Page Applications 2: Validation Error Handling

    Single Page Applications 3: TreeIterator

    This is the first of 3 introductory tutorials about the features for handling Single Page Applications offered by the Data Moving Plugin. The Data Moving Plugin is in its RC stage and will be released to the market in about one month.

    First of all, just a little bit of theory… then we will move to a practical example. If you want, you may take a look at the video that shows the example working before reading about the theory:

    See this Video in High Resolution on the Data Moving Plug-in Site

     

    Typically an SPA allows the user to perform several tasks without leaving its physical Html page. This result may be achieved by defining several “virtual pages” inside the same physical page. During the whole lifetime of the application just one active virtual page is visible; all other pages are either hidden or completely out of the DOM, because they are created dynamically by instantiating client templates. The active page interacts with just a part of the client-side ViewModel, and only that part is kept in synchrony with the server.

    One can also use different techniques to enable the user to perform several tasks without leaving the physical Html page; in any case the client ViewModel may be partitioned into sub-parts that are the smallest “units” that may be synchronized with the server independently of the remainder of the client-side ViewModel. We call such elementary units Workspaces, because they are the “data units” manipulated by the user while he is performing one of the several tasks allowed by the SPA.

    A Workspace, in turn, is composed of two conceptually different sub-parts: a kind of short-living data structure that is used just to carry out the current task, and a set of long-living data structures that are typically connected with a database on the server side. Typically, on the client side, we don’t have all the long-living entities of a given type, but just a small window of the whole Entity Set. We call Entity Set Window the set of all long-living entities of the same type stored in the client-side ViewModel, and we call Core Workspace the part of the Workspace that remains after removing all Entity Set Windows.

    Summing up, the client-side ViewModel is composed of partially overlapping Workspaces, which are in turn composed of a Core Workspace and several Entity Set Windows, as sketched below.
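    A minimal sketch of this decomposition in JavaScript (all names are hypothetical, just to visualize the structure):

        // Hypothetical shape of a client-side ViewModel: a team-building Workspace
        // made of a Core Workspace plus two Entity Set Windows.
        var clientViewModel = {
            teamBuildingWorkspace: {
                // Core Workspace: short-living data for the current task.
                core: { proposedTeam: { leader: null, members: [] } },
                // Entity Set Windows: client-side windows over server Entity Sets.
                programmersWindow: [ /* a window of the Programmer Entity Set */ ],
                artistsWindow: [ /* a window of the Artist Entity Set */ ]
            }
            // ...other, possibly overlapping, Workspaces.
        };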

    In general, we can’t assume that all the data of the Workspace are somehow visible in the user interface. In fact, the task currently being performed by the user may be composed of several steps (just think of the steps of a wizard), and substantially just the data “used” in the current step are visible to the user. Accordingly, each Workspace may be further split into partially overlapping UI Units, where each UI Unit is a part of the Workspace that is “visible” in the user interface at a given time.

    The concept of UI Unit is very important in error handling because, while all UI Units belonging to a Workspace must be submitted simultaneously to the server, only the errors that refer to the current UI Unit can be shown to the user.

    The Data Moving Plug-in offers tools to handle properly Entity Set Windows and Core Workspaces, and to handle properly UI Units during validation error processing:

    1. Retrieval Managers take care of browsing Entity Sets in the Entity Set Windows, while updatesManagers take care of keeping the Entity Set Windows synchronized with the server, by processing the updates performed by the users on the Entity Set Windows and by dispatching the principal keys of newly created entities returned by the server.
    2. Whole-Workspace updatesManagers take care of keeping a whole Workspace synchronized with the server, by automatically issuing commands to the updatesManagers of all Entity Set Windows contained in the Workspace and by taking care “personally” of the Core Workspace.
      The communication protocol between a whole-Workspace updatesManager and the server includes the possibility for the server to issue “remote commands” that modify the Core Workspace on the client side. In fact, often, it is not possible for the server to send a whole “updated” Core Workspace to the client that completely substitutes the old one, because the Core Workspace might have “links” with UI elements and with other client-side ViewModel data, and such a substitution would break them.
    3. The Data Moving Plug-in provides a powerful dom element-to-dom element data binding engine that enables the user to trigger “interactions” between dom elements, and provides also a Reference knockout binding that maps UI elements to sub-parts of the Workspace, in such a way that the user “moves” such parts of the Workspace by simply moving, in an intuitive way, the UI elements that represent them. The dom element-to-dom element data binding engine has already been described in a previous tutorial and in a previous video, so in this tutorial we will focus mainly on the Reference binding.
    4. Error Bubbling, Entities-in-Error Filtering, and other enhancements of the standard Asp.net Mvc validation engine help in associating errors to data that are not immediately visible on the screen. Error handling will be described in the second tutorial about SPA applications: Single Page Applications 2: Validation Error Handling.

    Let’s understand better how all this works with a simple example (the same shown in the video above).

    Suppose we have a list of artists and a list of programmers that are completely stored in the client-side view model, and let’s suppose we would like to build a team, made of both artists and programmers, to face a web project. The team will have both a leader programmer and a leader artist, and not all people are entitled to cover the role of leader. Below is a screenshot with an indication of the UI elements that represent data of the Core Workspace and data of the Entity Set Windows:

    [Screenshot: the team-building UI, with the Core Workspace and Entity Set Window areas highlighted]

    In the programmers tab there is another Entity Set Window containing Programmer entities. Since, as we said, all programmers and all artists are contained in the client ViewModel, the Entity Set Windows contain the whole Entity Sets. Moreover, since both the list of all programmers and the list of all artists are paged, not all programmers and not all artists belong to the current UI Unit; this means we will have difficulties in showing possible errors related to artists and programmers that are not in the current page.

    As we can see in the video, new people may be added to the team being built by simply dragging them into the “Members” area. If a new leader is selected, the old leader is automatically moved back to the original list. People that can cover the role of leader have a yellow border, and the two leader areas accept only people entitled to cover the leader role. Moreover, the artists area of the team accepts just artists, while the programmers area of the team accepts just programmers.

    The whole team-building UI logic, with the constraints listed above, has been obtained without writing a single line of procedural code, but by just declaring Reference bindings, Drag Sources, and Drop Targets.

    For instance, below is the definition of the leader programmer area:

        <div id="leader_programmer" class='leader-container programmers ui-widget-content' data-bind="@bindings.Reference(m => m.ProposedTeam.LeaderProgrammer).Get()">
            @ch._withEmpty(m => m.ProposedTeam.LeaderProgrammer, ExternalContainerType.div, existingTemplate: "ProgrammerTemplate0")
        </div>
        @Html.DropTarget("#leader_programmer", new List<string> { "LeaderProgrammer" }, rolesDropOptions)

    The Reference binding maps the div named “leader_programmer” to the ProposedTeam.LeaderProgrammer property of the Core Workspace, while the DropTarget declaration makes it accept Drag Sources tagged as “LeaderProgrammer”. As a consequence of these two declarations, when a UI element representing a programmer entitled to cover the role of leader (i.e. one that has the “LeaderProgrammer” tag) is dragged over this area, it is “accepted”, and the data item, tied to the dragged UI element with another Reference binding, is moved into the knockout observable of the ProposedTeam.LeaderProgrammer property. This in turn triggers the instantiation of the ProgrammerTemplate0 client template, because of the _withEmpty instruction, which is an enhancement of the knockout with binding.

    ProgrammerTemplate0 is the client template automatically built by the grid on the left of the page that lists all programmers. As a consequence, the chosen leader programmer is rendered in the “Leader Programmer” area with the same appearance he had in the grid. Each member area works in a similar way:

        <div class="members-container programmers ui-widget-content" id="all_programmers" data-bind="@bindings.Reference(m => m.ProposedTeam.Programmers).Get()">
            @ch._foreachEmpty(m => m.ProposedTeam.Programmers, ExternalContainerType.div, existingTemplate: "ProgrammerTemplate0")
        </div>
        @Html.DropTarget("#all_programmers", new List<string> { "Programmer" }, rolesDropOptions)

    However, in this case the ProposedTeam.Programmers property used in the Reference binding is an observable array, so the dragged element is pushed into this array. Instead of _withEmpty, we have a _foreachEmpty, which is an enhancement of the knockout foreach binding.

     

    To make everything work properly, all programmers must be declared as Drag Sources tagged with “Programmer”. Moreover, all programmers entitled to cover the role of leader must also have the “LeaderProgrammer” tag:

        @Html.DragSourceItems(".programmers", ".simple-list-item", new List<string> { "Programmer" }, new DataDragOptions { DestroyOriginal = true, AppendTo = "body" })

    The above declaration basically says: “define all elements marked with the class “simple-list-item” that are descendants of the dom element with class “programmers” as Drag Sources with tag “Programmer””. Now, since the whole grid containing all programmers is under a div with class “programmers”, and since all rows of this grid have the class “simple-list-item”, all programmers are defined as Drag Sources.

    The declaration is extended also to future elements that will be added as descendants of the element with class “programmers”; thus, if we insert new elements in the grid, they will be automatically declared as Drag Sources.

    The “simple-list-item” class is added to each row of the grid as part of its row definition instructions with:

        .ItemRootHtmlAttributes(new Dictionary<string, object> { { "class", "simple-list-item" } })

    As for the “LeaderProgrammer” tag, it must be added to all data items with the CanBeTeamLeader property set to true. Since this property may change during processing, we must add it with a knockout binding attached to the CanBeTeamLeader property:

        .KoBindingsGenerator(bb => bb.CSS("LeaderProgrammer", l => l.CanBeTeamLeader)
            .Reference(m => m)
            .Get().ToString())

    The KoBindingsGenerator is a method of the fluent interface of the grid row definition. It accepts a function of the type

        Func<IBindingsBuilder<U>, string> knockoutBindings

    and applies the knockout bindings defined in the body of the function to all rows of the grid, by adding them to the client row template being built by the grid. We use the IBindingsBuilder interface received as argument to build a standard knockout Css binding, which adds the css class “LeaderProgrammer” whenever the CanBeTeamLeader property is true, and a Reference binding that binds each row to its associated data item. The Reference binding enables the “dragged” programmer to “release” its referred data to the data item referred by the Drop Target.

    Since in the options of the DragSourceItems declaration we set DestroyOriginal to true, a dropped programmer is removed from the programmers list.

    When we put a new leader programmer in the Leader Programmer area, the old leader programmer returns to the programmers list, because we defined the programmers list as the mirroring pool for the programmer entities (this is done in javascript):

        ko.mirroring.pool = function (obj) {
            var dis = obj["MainCompetence"];
            dis = ko.utils.unwrapObservable(dis);
            if (dis === "Artist") return TeamBuilding.ClientModel.AllArtists.Content;
            else if (dis === "Programmer") return TeamBuilding.ClientModel.AllProgrammers.Content;
            else return null;
        };

    All mirroring pools are defined by assigning a javascript function to the ko.mirroring.pool configuration variable. This function is passed all items that were removed from their places because of a Reference-binding-based interaction and that were put in no other place, so that they “disappeared” from the client-side ViewModel. This function is their last chance to find a “home”: it analyzes the properties of each item and possibly finds a new “home” for it.

    Moving either an artist or a programmer into the detail area assigns a reference to its associated data item to the CurrentDetail knockout observable in the client ViewModel, without detaching the data item from its previous place, because in this case the DestroyOriginal option of the drop target is not set to true. This triggers the instantiation of a template that shows the data item in detail mode:

        @ch._with0(m => m.CurrentDetail,
            @<text>
                <p>
                <span class='ui-widget-header'>@item.LabelFor(m => m.Name)</span>
                :
                @item.TypedEditDisplayFor(m => m.Name, simpleClick: true)
                @item.ValidationMessageFor(m => m.Name, "*")
                </p>
                <p>
                <span class='ui-widget-header'>@item.LabelFor(m => m.Surname)</span>
                :
                @item.TypedEditDisplayFor(m => m.Surname, simpleClick: true)
                @item.ValidationMessageFor(m => m.Surname, "*")
                </p>
                <p>
                <span class='ui-widget-header'>@item.LabelFor(m => m.EMail)</span>
                :
                @item.TypedEditDisplayFor(m => m.EMail, simpleClick: true)
                @item.ValidationMessageFor(m => m.EMail, "*")
                </p>
                <p>
                <span class='ui-widget-header'>@item.LabelFor(m => m.Address)</span>
                :
                @item.TypedEditDisplayFor(m => m.Address, simpleClick: true)
                @item.ValidationMessageFor(m => m.Address, "*")
                </p>
                <p>
                <span class='ui-widget-header'>@item.LabelFor(m => m.CanBeTeamLeader)</span>
                :
                @item.CheckBoxFor(m => m.CanBeTeamLeader)
                </p>
            </text>
        , ExternalContainerType.koComment, afterRender: "mvcct.ko.detailErrors", forceHtmlRefresh: true, isDetail: true)

    The _with0 instruction is a different enhancement of the knockout with binding, which accepts an in-line razor helper as client template. Among its arguments there is one named isDetail that we set to true, to inform the Data Moving Plug-in engine that the template is the detail view of a data item. This declaration triggers a synchronization behavior between the original UI of the data item and its detail view.

     

    Having finished describing how the user can manipulate the Workspace, we can move on to see how server-client interaction takes place. The two Entity Set Windows of the Workspace are implemented with two grids. For a detailed description of how to “code” grids you may refer to Data Moving Plugin Controls. Here we just point out that, since all data items are already on the client side, we must use a local retrievalManager to execute the paging, sorting, and filtering queries:

        .StartLocalRetrievalManager(m => m.AllProgrammers.Content, true, "TeamBuilding.programmersRM").EndRetrievalManager()

    The first argument is the source of all items to be queried (a property of the client-side ViewModel), the second argument, set to true, requires the execution of an initial query as soon as the page is loaded (in order to show some initial data in the grid), and the third argument is where to put the newly created retrievalManager.

    The updatesManagers of the two grids are both root updatesManagers, since our items are not children of any one-to-many relation, as in all the other examples we have seen in Data Moving Plugin Controls. However, in this case they don’t communicate directly with the server, because we will define a whole-Workspace updatesManager that will take care of collecting data from the two grids’ updatesManagers, handling the updates of the Core Workspace, communicating with the server, and dispatching the responses of the server to the two grids’ updatesManagers.

    The definition of the programmers updatesManager is:

     

        .CreateUpdatesManager<TeamBuildingDestinationViewModel>("TeamBuilding.programmersUpdater", true)
            .BasicInfos(m => m.Id, m => m.ProgrammersChanges, "TeamBuilding.DestinationModel")
            .IsRoot(Url.Action("UpdateTeam"))
        .EndUpdatesManager()

    It appears more complex than the updates managers we have seen in Data Moving Plugin Controls. The first call, to CreateUpdatesManager, contains the whole path where to store the updatesManager on the client side, instead of the name of the ViewModel property where to store it; that’s why the second optional parameter is set to true. Moreover, the method call contains a generic type instantiation, the viewmodel we will use to submit all changes to the server:

        public class TeamBuildingDestinationViewModel
        {
            public Team ProposedTeam { get; set; }
            public OpenUpdater<Employed, Guid?> ProgrammersChanges { get; set; }
            public OpenUpdater<Employed, Guid?> ArtistsChanges { get; set; }
        }

    The first property will be filled with the whole Core Workspace, while the other two properties will be filled with the programmers and artists change sets. That’s why the call to BasicInfos has two more parameters after the specification of the principal key: the first is the property of the destination ViewModel where to store the programmers change set, and the second is the property of the whole client ViewModel where to put the destination viewmodel before sending it to the server. The IsRoot method contains a fake url, since the destination ViewModel will be posted to the server by the whole-Workspace updatesManager.

    The whole-Workspace updatesManager must be defined in javascript, since it is not tied to any specific Data Moving Plugin control:

        $('#outerContainer').attr('data-update'),
         TeamBuilding.ClientModel, "ProposedTeam", TeamBuilding.DestinationModel, "ProposedTeam",
         { updatersIndices: [TeamBuilding.programmersUpdater, TeamBuilding.artistsUpdater],
             classifyEntity: function (x) {
                 if (x['Id'] && x['MainCompetence'])
                     return x.MainCompetence() == "Programmer" ? 0 : 1;
                 else
                     return null;
             },
             ........

    The first argument is the url where to submit the destination ViewModel, which is extracted from an Html5 attribute of a dom element. The second argument is the whole client ViewModel, and the third argument is the path where to find the Core Workspace within the whole client ViewModel. The fourth argument is the destination ViewModel, and the fifth argument is the path to the place where to store the Core Workspace within the destination ViewModel. Then we have an options argument with several properties. Here we analyze just three of them, since all the others are connected to error handling, which will be discussed in Single Page Applications 2: Validation Error Handling.

    updatersIndices is an array containing all the updatesManagers of the Entity Set Windows of the Workspace; in our case, the updatesManagers of the two grids.

    classifyEntity is a function that, given an entity, must return the index of its Entity Set Window’s updatesManager in the previous array. This function enables the whole-Workspace updatesManager to adequately process all the entities it finds inside the Core Workspace.

    That’s enough for everything to work properly! When the user clicks submit, the update method of TeamBuilding.updater is invoked, and the destination ViewModel is filled and submitted to the server. When the server sends the response, the parts of the response destined to the Entity Set Windows’ updatesManagers are automatically dispatched to them and processed automatically. As a consequence, modifications to the Core Workspace returned as remote commands by the server are applied, keys created for newly inserted entities are dispatched each to its entity, and errors associated with the various data elements are dispatched next to the adequate UI elements:

        $('#submit').click(function () {
            var form = $(this).closest('form');
            TeamBuilding.ClientModel.CurrentDetail(null);
            if (form.validate().form()) {
                TeamBuilding.updater.update(form);
            }
        });

    Let’s take a look at the action method:

        public HttpResponseMessage Post(TeamBuildingDestinationViewModel model)
        {
            if (ModelState.IsValid)
            {
                try
                {
                    var builder = ApiResponseBuilder.NewResponseBuilder(model, ModelState, true, "Error in client request");
                    builder.Process(m => m.ArtistsChanges, model.ProposedTeam, m => m.Id);
                    builder.Process(m => m.ProgrammersChanges, model.ProposedTeam, m => m.Id);
                    //business processing here

                    var response = builder.GetResponse();
                    return response.Wrap(Request);
                }
                catch (Exception ex)
                {
                    ModelState.AddModelError("", ex.Message);
                    return
                        Request.CreateResponse(System.Net.HttpStatusCode.InternalServerError, new ApiServerErrors(ModelState));
                }
            }
            return Request.CreateResponse(System.Net.HttpStatusCode.InternalServerError, new ApiServerErrors(ModelState));
        }

    This code is very similar to the code we have seen in the action methods that process the grid updates in Data Moving Plugin Controls: we create an ApiResponseBuilder and then we call the Process method on each of the Entity Set change sets we received in the destination ViewModel. Since our principal keys are Guids we don’t need to specify a custom key generation function, so indicating which property is the principal key suffices. However, now the Process method has 3 arguments instead of two: we used a different overload, one that accepts a Core Workspace as second argument. Why do we need this further argument? Simple: because we have to process also the changes of the entities that are contained in the Core Workspace. In fact, a programmer or artist that we added to the team might have been modified, or might be a newly inserted item. The additional argument enables the Process method to include, if necessary, also the entities contained in the Core Workspace in the change sets, in their appropriate places.

     

    Now suppose we want to modify the client-side Core Workspace by changing all programmer names with the suffix “Changed” and by adding two more programmers to the team. We need to add adequate remote commands to the response. How to build them? Quite easy! It is enough to create a changes builder object and then to mimic these operations on it:

        var changer = builder.NewChangesBuilder(model.ProposedTeam);

        changer.Down(m => m.Programmers)
            .UpdateModelIenumerable(m => m, m => m.Name, (m, i) => m.Name + "Changed");
        changer.UpdateModelField(m => m.LeaderProgrammer, m => m.Name, model.ProposedTeam.LeaderProgrammer.Name + "Changed");
        changer.Down(m => m.Programmers)
            .AddToArray(m => m[0],
                new Employed()
                {
                    Id = Guid.NewGuid(),
                    MainCompetence = "Programmer",
                    Name = "John1",
                    Surname = "Smith1",
                    Address = "New York, USA",
                    EMail = "John@dummy.us",
                    CanBeTeamLeader = true
                }, 0, true).AddToArray(m => m[0],
                    new Employed()
                    {
                        Id = Guid.NewGuid(),
                        MainCompetence = "Programmer",
                        Name = "John2",
                        Surname = "Smith2",
                        Address = "New York, USA",
                        EMail = "John@dummy.us"
                    }, 0, true);

    The first instruction modifies the names of the programmers that are simple members of the team (actually, it just creates the remote command to do this). We go down the Programmers property of the Core Workspace and then call UpdateModelIenumerable, which applies a modification to all elements of an IEnumerable. The first argument specifies the IEnumerable to be modified; since we have already moved into the IEnumerable, it is just m => m. The second argument specifies the property of each element that must be modified, and the third argument specifies how to modify it.

    The second instruction modifies the LeaderProgrammer name.  It is self-explanatory.

    Finally, the third instruction adds two more programmers. We move down the Programmers property, and then we call AddToArray twice. The first argument of each call specifies the place in the javascript array where to put the newly added element: in our case we place it at index 0… but, since the last argument of the call is set to true, the enumeration starts from the bottom, so we are just queuing the new elements at the bottom of the array.

    Now in order to include all “remote commands” in the response we must substitute:

        var response = builder.GetResponse();

    with:

        var response = builder.GetResponse(changer.Get());

    However, we have a problem: the two programmers that we added to the team might already be contained in the programmers list, so we might have an entity duplication in the client-side View Model. Luckily, the Data Moving Plug-in offers tools to enforce uniqueness. We may trigger the processing that enforces the uniqueness of entities in the onUpdateComplete callback of the whole-Workspace updatesManager (defined in its options):

        onUpdateComplete: function (e, result, status) {
            if (!e.success) return;
            var hash = {};
            mvcct.updatesManager.utils.entitiesInWorkSpace(TeamBuilding.ClientModel.ProposedTeam, hash);
            TeamBuilding.programmersUpdater.filterObservable(hash);
            TeamBuilding.artistsUpdater.filterObservable(hash);
        },

    The mvcct.updatesManager.utils.entitiesInWorkSpace method extracts all the entities contained in the Core Workspace and indexes them into a hash table. After that, each Entity Set updatesManager ensures that they are not contained in the Entity Set Window it takes care of. The task is carried out efficiently because of the indexing performed by mvcct.updatesManager.utils.entitiesInWorkSpace, as sketched below.
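    For illustration only, here is the index-then-filter idea on hypothetical plain data (the real methods above work on the plug-in’s own data structures):

        // Index the workspace entities by key, then filter a window list so that
        // entities already in the workspace are not duplicated.
        function indexByKey(entities) {
            var hash = {};
            for (var i = 0; i < entities.length; i++) hash[entities[i].Id] = true;
            return hash;
        }

        function filterWindow(windowEntities, hash) {
            // Linear time thanks to the hash lookup, instead of a quadratic scan.
            return windowEntities.filter(function (e) { return !hash[e.Id]; });
        }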

    That’s all for now!

    Stay tuned, and take a look also at all the other Data Moving Plug-in introductory tutorials and videos.

    Francesco
