May 24 2016

Html enhancements, processing, and fallback: how to organize all of them?

Category: Html5 fallback | TypeScript | JavaScript | Asp.net | Francesco @ 02:44

I already analyzed and classified JavaScript techniques for Web Applications in a previous series of posts. In this post I’ll propose a solution to the main problems associated with the enhancement of Html with jQuery plug-ins or with other Html post-processors. The solution I describe here is implemented in a JavaScript library available on both the bower and npm registries.

But let’s start the story from the beginning!

Html post-processors add widgets or further functionality (like, for instance, drag-and-drop based interactions) that either are fallbacks for features not supported by the browser, or are completely new features (features not supported by any browser and not mentioned in the Html5 specifications).

Since Html enhancements depend strongly on browser capabilities and on the usage of JavaScript they can’t be processed on the server side with some smart server side template engine, but must be processed in the browser itself.

So there are just two possibilities:

  1. Using client templates that adapt to the browser capabilities
  2. Performing Html post-processing that adds widgets and/or features.

If we don’t use an advanced JavaScript framework like Angular, React.js, Knockout, or any other client-templates based framework, we are left just with the post-processing option. This is the famous jQuery plug-ins way, where we enhance static Html by selecting target nodes with jQuery CSS-like selectors: basically a kind of super-CSS with no limits on the effects we may achieve.

Even when we use a client-side template engine, we are often forced to use Html post-processing techniques! Why? Simply because the functionality we need is often not implemented in the chosen client-side framework, so we are forced to use some jQuery plug-in in any case. This is very common, since there are several mutually incompatible advanced client-side frameworks, and their average life is quite short (they either die, or evolve in a partially incompatible way), so the community hasn’t enough time to produce a complete set of useful libraries, while, on the other side, jQuery plug-ins always survive and increase in number, since they may be easily adapted to any framework.

Moreover, Html post-processing may be applied also to the output of already existing modules, and for this reason it is often the only viable option also when using client-side templates. Suppose, for instance, you want to apply fallbacks for unsupported Html5 inputs to existing Knockout.js or Angular.js client controls: you have just two options, either rewriting all controls to include the fallback feature, or somehow applying Html post-processing.

Summing up, unluckily, Html post-processing still has a fundamental role in Web Applications development. I said “unluckily” since, while all new advanced client-side frameworks were conceived to improve the overall quality of the client-side code, the Html enhancement/post-processing paradigm suffers from well-known problems:

  1. While the initial page content is processed automatically, dynamically created content is not! So each time we receive new Html as the result of an Ajax call, or of the instantiation of client-side templates, we need to manually apply all the transformations that were applied to our initial static Html, in the same order!
  2. Transformations must be applied in the right order, and this order is not necessarily the order implied by the JavaScript file dependencies. For instance, say module B depends on module A, so A’s script must come before B’s script. In turn, this implies that all transformations defined in A are applied before the ones defined in B… but we might need B’s transformations to be applied before A’s.
  3. Usually transformations are applied on the document.ready event, so it is very difficult to coordinate them with content loaded asynchronously.

Summing up, all problems are connected with dynamic content and with operation timing, so, after analyzing the very nature of these problems, I concluded they could be solved by creating a centralized register of all the transformations to apply, and by organizing the overall processing into different stages.

Transformations register

If we register all transformations in a centralized data structure then, on one side, we may decide their order of application with no other constraints and, on the other side, we may apply all transformations, in the right order, to each chunk of Html with a single call to an “apply-all” method of the centralized data structure. That “apply-all” method might be applied at page load, after all needed asynchronous modules (if any) have been loaded, and each time new dynamic content is created.

Invoking the “apply all” method is also easy in most client frameworks based on client templates, which usually allow applying some processing to all dynamically created content.
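To make the idea concrete, here is a minimal sketch of such a register (the names are illustrative, not the actual mvcct-enhancer API):

var transformationsRegister = (function () {
    var transformations = []; // kept in registration order, which we fully control
    return {
        register: function (transform) { transformations.push(transform); },
        // the "apply-all" method: runs every transformation, in order, on a chunk of Html
        applyAll: function (rootNode, isInitialContent) {
            for (var i = 0; i < transformations.length; i++)
                transformations[i](rootNode, isInitialContent);
        }
    };
})();

// at page load, after all needed asynchronous modules have been loaded:
transformationsRegister.applyAll(document.body, true);
// and on each chunk of dynamically created content:
// transformationsRegister.applyAll(newContentRoot, false);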

Three different stages

The complete freedom in choosing the right order of application isn’t enough to deal with several interconnected cooperating modules. In fact, the overall network of relative order constraints might admit no solution! Moreover, even when the relative order constraints can be satisfied, the addition of a new module might break the stability of the application. In a few words: application order constraints undermine software modularity and maintainability.

So we need to reduce the relative order constraints by removing all order constraints that depend on the way the modules are implemented and interact, and not on the very nature of the transformations.

For instance, if a module applies a validation plug-in while another module performs fallbacks for unsupported Html5 inputs, the very nature of these two transformations implies that the validation plug-in must be applied after the fallback, since it must act on the already transformed input fields (the original ones are destroyed).

However, if modules A, B, C, and D furnish further features to the main transformation applied by M, it would be desirable that there be no order constraints among them. Imagine, for instance, that M transforms some Html tables into grids, while A, B, C, and D apply additional features like grouping, sorting, and virtual scrolling.

Order constraints among A, B, C, and D may be avoided only if they act in a coordinated way, by exploiting extension points provided by M. In other terms, when the A, B, C, D transformations are registered, instead of operating directly on their target Html they should connect to extension points of M, which then applies all of them properly, and at the right time, during the grid construction.
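The sketch below illustrates the extension-point idea with the grid example (all names are assumptions made up for the example): instead of post-processing the Html produced by M, module A registers a feature that M itself invokes at the right time while building each grid.

// hypothetical extension point exposed by the grid module M
var gridModule = {
    features: [],
    buildGrid: function (tableNode) {
        // ...turn the table into a grid...
        // then let each registered feature act during the grid construction
        for (var i = 0; i < this.features.length; i++)
            this.features[i](tableNode);
    }
};

// module A (say, sorting) hooks the extension point instead of transforming the Html directly
gridModule.features.push(function (tableNode) {
    // attach sorting behavior to the grid built from tableNode
});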

If A, B, C, D were explicitly designed as extensions of M the designer would take care of their coordination, but what if we are putting together software from different authors?

We must build wrappers around them, and we must use a general framework that takes care of the coordination problem. A three-stage protocol may do the job and ensure transformations are applied efficiently:

  1. Options pre-processing. During this stage each module connects to the extension points of the module it is supposed to extend.
  2. Options processing. During this stage each module configures itself based on its options and extensions. Since this stage is executed just once, all processing that doesn’t depend on the nodes to transform should be factored out here; this way it is executed just once, and not on each transformation call.
  3. Actual node transformation. This is the step we previously called “apply all”. It is automatically invoked on the initial content of the page, and then it must be invoked on the root of each newly created content.

All above steps are applied in sequence (all modules perform 1, then all modules perform 2, etc.)

Since module coordination is handled in stage 1, there is no need to impose order constraints to ensure the coordination of cooperating modules. This way order constraints are minimized: only the “natural” constraints, easy to figure out just from the definition of each transformation, should remain.

In the software implementation of the above ideas, called mvcct-enhancer, registering a module means specifying three functions, each executing one of the three stages described above. All module options are specified within an overall options object where each module has its own section (i.e. property). Connections to the extension points of other modules take place by adding option values (which may also be functions) to the options section of the target module. The functions that process stages 1 and 2 receive just the overall options object as an argument, while the actual transformation function receives two arguments: the root node of the chunk to transform, and a bool set to true only for the initial processing of the page’s static content.
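A hedged sketch of what such a registration might look like follows; the function name and options sections below are assumptions based on the description above, not the exact mvcct-enhancer API:

enhancer.register(
    // stage 1 - options pre-processing: hook the extension points of other modules
    function (options) {
        options.fallback = options.fallback || {};
        options.fallback.dateWidget = function (input) { $(input).datepicker(); };
    },
    // stage 2 - options processing: one-time configuration, factored out of the transforms
    function (options) { /* compute everything that doesn't depend on the target nodes */ },
    // stage 3 - actual node transformation, called by "apply all" on each Html chunk
    function (node, isInitialContent) { /* enhance the nodes contained in node */ }
);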

mvcct-enhancer has functions for applying all transformations to a dynamic node, for initiating the three-stage protocol, and for waiting for asynchronous JavaScript loading. It also has a basic Html5 input support detection and fallback module with extension points for adding jQuery widgets.

We implemented also a module that provides widget extensions based on Bootstrap, called bootstrap-html5-fallback. It is a good example of how to build wrappers around existing software to use it with mvcct-enhancer. In fact, the main purpose of mvcct-enhancer is to provide a framework for coordinating several already existing modules written by different authors, at the price of writing a few lines of wrapper code.

That’s all for now!

In the next post we will see an example of how to use mvcct-enhancer to provide Html5 fallback and globalized validation to Asp.net Core Mvc, with the help of the new Asp.net Core version of the Mvc Controls Toolkit.

Francesco


May 7 2015

JavaScript Intensive Web Applications 4: JSON based Ajax and Single Page Applications

Category: WebApi | MVC | JavaScript | Francesco @ 06:19

JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax


In this last post of the series, I discuss the use of JSON based Ajax calls and client side View Models. I will propose also a simple implementation of a knockout.js binding to apply a generic jQuery plug-in to an Html node. The post is concluded with a short analysis of Single Page Application frameworks.

In my previous post we have seen that Html-returning Ajax calls update the needed parts of an Html page while keeping the remainder of the page unmodified. This allows a tighter interaction between user and server, because the user may work on other areas of the page while waiting for a server response, and he/she may ask the server for supplementary information in the middle of a task without losing the whole page state.

The user experience may be improved further if we are able to maintain the whole state of the current task on the client, because this way we further reduce the need to communicate with the server: the user may prepare all data for the server while receiving immediately all needed help and suggestions, with no need to communicate with the server in this first stage. Communication with the server is needed only after everything has been prepared. For instance, the user may modify all data contained in a grid, resorting to a detail window when needed. Entities connected with one-to-many relations to the main list may be edited in the detail view. Everything without communicating with the server! Then, when all changes have been done, the user performs a single submit and updates the global state of the system. The server answer may contain corrections performed by the server on the data supplied by the user, which are automatically applied to the client copy of the data.

In other words, maintaining the whole state of a task on the client side allows a tighter user-machine cooperation, since this cooperation may be performed without waiting for remote server answers. However, the increased complexity of the client side requires a robust and modular architecture of the client code. In particular, since we move logic, and not only UI, to the client side, Html nodes, which are mainly UI stuff, must be supported by JavaScript models. Models and Html nodes should cooperate while keeping separation of concerns between logic and UI. This means that all processing must take place on models that are then rendered by using client-side templates. Accordingly, Ajax calls can’t return Html anymore, but must return JavaScript models.

Summing up, all architectures where the whole state of the current task is maintained on the client should have the following features:

  1. JSON communication with the server. The format of the data exchanged between server and client might also be Xml based, but, as a matter of fact, at the moment the simpler JSON protocol is a kind of standard.
  2. Html is created dynamically by instantiating client templates, thus this kind of Web Application is not visible to search engines.
  3. The state of client and server must be kept aligned, by performing simultaneous updates on both client and server in a transactional fashion. This means, for instance, that if a server update fails for some reason, the client must be able to restore the state of the last client-server synchronization.

As a matter of fact, at the moment point 3 has not received the needed attention even in sophisticated Single Page Application frameworks, which don’t supply general tools to face it, so the problem is substantially left to developers’ custom solutions.
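A minimal sketch of the transactional idea behind point 3, assuming a plain JSON-serializable client model and a hypothetical /api/sync endpoint:

// snapshot taken at the last successful client-server synchronization
var lastSync = JSON.parse(JSON.stringify(model));
function submitChanges() {
    return $.ajax({
        url: '/api/sync', type: 'POST',                  // hypothetical endpoint
        contentType: 'application/json',
        data: JSON.stringify(model)
    }).done(function () {
        lastSync = JSON.parse(JSON.stringify(model));    // commit: new synchronized state
    }).fail(function () {
        model = JSON.parse(JSON.stringify(lastSync));    // roll back to the last synchronized state
    });
}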

In the case of Html-based Ajax communication we have seen that, since the communication is substantially based on form submits, the server relies on all input fields having adequate names to build a model that is then passed to the action methods that serve the client requests. In JSON-based communications, instead, input field names are completely irrelevant, since action methods substantially receive JavaScript models.

Html ids and CSS classes are also used as “addresses” to select Html nodes to enhance with JavaScript code. Several frameworks like knockout.js and angular.js avoid the use of these ids and CSS classes as a way to attach JavaScript behavior to Html nodes. In their case, model properties are “connected” to Html nodes through the so-called bindings, which are substantially communication channels between Html nodes and JavaScript properties that update one of them when the other changes. They may be one-way or two-way. Bindings may also connect Html nodes with JavaScript functions, and the developer may also define custom bindings; thus bindings completely solve the problem of connecting Html nodes with JavaScript code, with no need to provide unique ids or selection-purpose CSS classes.

Below is how to use a custom knockout.js binding for applying jQuery plug-ins to Html nodes:

 

<input type="button" value="my button" data-bind="jqplugins: ['button']"/>
<input type="button" value="my button"
       data-bind="jqplugins: [{ name: 'button', options: {label: 'click me'}}]"/>

 

The binding name is followed by an array whose elements may be either simple strings, in case there are no plug-in options, or objects with a name and an options property. As you can see, in knockout.js bindings are contained in the Html5 data-bind attribute.

Below the JavaScript code that defines the jqplugins custom binding:

 

(function ($) {
    function applyPlugin(jElement, name, options) {
        if (typeof $.fn[name] !== 'function') {
            throw new Error("unrecognized plug-in name: " + name);
        }
        if (!options) jElement[name]();
        else jElement[name](options);
    }
    ko.bindingHandlers.jqplugins = {
        update: function (element, valueAccessor, allBindingsAccessor) {
            var allPlugins = ko.utils.unwrapObservable(valueAccessor());
            var jElement = $(element);
            for (var i = 0; i < allPlugins.length; i++) {
                var curr = allPlugins[i];
                if (typeof curr === 'string')
                    applyPlugin(jElement, curr, null);
                else {
                    applyPlugin(jElement,
                        ko.utils.unwrapObservable(curr.name),
                        ko.utils.unwrapObservable(curr.options));
                }
            }
        }
    };
})(jQuery);

 

The code above enables the use of all available jQuery plug-ins in all knockout.js based architectures, so that we can move to advanced client architectures based on knockout.js without renouncing our favorite widgets and CSS/JavaScript frameworks like jQuery UI, Bootstrap, jQuery Mobile, and Zurb Foundation.
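Using the binding requires no further code beyond the usual knockout.js bootstrap; for instance (the view model here is just a placeholder):

var viewModel = { /* ...the page observables... */ };
ko.applyBindings(viewModel);
// from now on every node with data-bind="jqplugins: ..." is enhanced with the listed plug-ins,
// and the update callback re-runs whenever an observable used in the binding changes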

 

As a next step we may pass from storing the whole state of a single task to storing the whole application state on the client side, which implies that the whole application must live in a single physical Html page (otherwise the whole state would be lost). Such applications are called Single Page Applications.

In a Single Page Application, virtual pages are created dynamically by instantiating client templates that substitute the Html of any previous virtual page in the same physical page. The same physical page may show simultaneously several virtual pages in different areas. For instance, a virtual page might play the role of master, and another the role of detail page.

Most Single Page Application frameworks also have the concept of virtual link and/or of routing, and may connect the virtual pages to the browser history, so that the user may navigate among virtual pages with usual links and with the browser buttons.

But… why re-implement the whole browser behavior inside a single physical page? What are the advantages of Single Page Applications compared to “multiple physical pages applications” based on client View Models?

In general, having the whole application state on the client side further reduces the need to communicate with the server, thus increasing the responsiveness to user inputs. More specifically:

  1. Once the client templates needed to create a new virtual page have been downloaded from the server, further accesses to the same virtual page become very fast. On the contrary, loading a complex client-model based page that is able to store the whole state of a task may be time consuming, so saving this loading time improves the user experience considerably.
  2. The state of a previously visited virtual page may be maintained, so that the user finds the virtual page in exactly the same state he/she left it. This improves the cooperation between different tasks that are somehow connected: the user may move back and forth between several virtual pages with the browser buttons while performing a complex task, without losing the state of each page.
  3. The same physical page may contain simultaneously several virtual pages in different areas. Thus, the user may move back and forth between several virtual pages in one area while keeping the content of another area. This scenario enables advanced forms of cooperation between virtual pages.
  4. The whole Single Page Application may be designed to work also off-line. When the user has finished working, the whole application state may be saved in the local storage, and restored when he/she needs to perform further changes, or when he/she can go on-line to perform a synchronization with the server.

The main problem Single Page Application developers are faced with is keeping a large JavaScript codebase modular and maintainable. Since virtual pages are actually client template <-> ViewModel pairs, the concept of virtual page itself has been conceived in a way that increases modularity. However, several virtual pages also need a way to cooperate that doesn’t undermine their modularity and the independence of each virtual page from the remainder of the system.

In particular:

  1. Each virtual page definition should not depend on the remainder of the system, to keep modularity; this, in turn, implies that virtual pages may not contain direct references to other external data structures.
  2. Notwithstanding point 1, some kind of cooperation that doesn’t undermine modularity must be achieved among model-view pairs, and between model-view pairs and the application models. A modular cooperation may be achieved by injecting interfaces that connect each model-view pair with the external environment as soon as the model-view pair is added to the page.
  3. Pointers to data structures contained inside each virtual page should be either avoided or handled by resource managers, to avoid their being used when a virtual page has been released or when it is not in an active state.

Separation is somehow ensured by the concept of ViewModel itself. Durandal.js uses AMD modules to encode ViewModels. The AMD protocol is a powerful technique both for dynamically loading and injecting other code modules that the current module might depend on, and consequently for handling a large JavaScript codebase. However, the dependency tree is hardwired, so the injection mechanism is more adequate for injecting code than dynamic data structures that might depend on the state of the ongoing computation. Accordingly, the full achievement of point 2 requires an explicit programming effort. Angular.js uses a custom dependency injection and module loading mechanism. That mechanism is easier to use, but it is less adequate for managing large codebases (in my opinion, not adequate at all). However, the fact that the injection mechanism is less structured makes it easier to inject dynamic data structures when a model-view pair is instantiated.

In general, most frameworks ensure separation with some kind of cooperation, but no framework offers a completely out-of-the-box solution for point 2, nor an out-of-the-box solution for managing the lifetime of pointers that have been injected into model-view pairs to ensure an adequate cooperation in the context of the ongoing computation (point 3). More specifically, the lifetime of injected pointers to AMD modules (or other types of dynamically loaded modules) is automatically handled, but there is no out-of-the-box mechanism for managing pointers that a model-view pair might have to data structures contained in another model-view pair, so the developer has the burden of coding all the controls needed to ensure the validity of each pointer, in order to avoid the use of pointers to data structures contained in model-view pairs that have been removed from the page.

The need for a more robust solution to problems 2 and 3 is among the reasons that pushed me to implement a custom Single Page Application framework in the Data Moving Controls suite. The Data Moving SPA framework (see here, and here) relies on contextual rules that “adapt” each virtual page that is being loaded to the “current context”, where the “current context” includes both interface implementations that connect the virtual page to the remainder of the system and information about the current state of the application, such as whether the user is logged in or not, the current culture (that is, the browser language and culture settings), and so on. Contextual rules are used also to redirect a not-logged-in user to a login virtual page, and to verify if the user has the needed authorizations to access the current virtual page. The interface implementations passed by the contextual rules to the virtual page View Models include also all the resource managers needed for sharing data structures among all application virtual pages safely. Another communication mechanism is the possibility to pass input data to any page that is being loaded. Such input data are analogous to the input data passed in the query string; in fact, this input may also be included in virtual links.

Another big challenge of Single Page Applications is the duplication of code on both the client and server side. In fact, the same classes, input validation criteria, and other metadata must be available on both the client and server side, and when the languages used by the two sides are different this becomes a big problem. The Meteor framework uses JavaScript on both server and client, and allows code sharing between the two sides. The main price to pay for this solution is the use of a language that is not strongly typed also on the server side. In the Data Moving SPA we faced this problem by equipping the SPA server with dynamic JavaScript files implemented as Razor views. This way JavaScript data structures may be obtained by serializing into JavaScript their equivalent .Net data structures.

Another important problem all SPAs must solve is data synchronization between client and server. Durandal.js works quite well with Breeze.js, which offers some synchronization services for the case where the server may be modeled as an OData source. Breeze.js may be adapted also to most other SPA frameworks, but this solution is acceptable only if there is almost no business logic between the client and the server-side database. In fact, only in this case may the server API be exposed just as an OData source, with no need of more complex communication.

Meteor takes care of server/client synchronization in a way that is completely transparent to the developer. Such a solution facilitates the coding of simple applications, but may be inadequate for complex business systems that need to control communication between client and server explicitly.

The Data Moving SPA framework offers retrievalManagers to submit a wide range of (customizable) queries (including OData queries) to the server, while viewModelUpdatesManagers and updatesManagers take care of synchronizing a generic data structure with the server in a transactional fashion, by taking into account both changes in several Entity Sets (additions, modifications, and deletes) and changes in temporary data structures (core workspaces). As a result of the synchronization process they may return either errors, which are automatically dispatched to the right places of the UI in case of failure, or remote commands that apply modifications to the client-side data structure to be synchronized with the server. While the synchronization process is completely automatic, the developer has full control over which data to synchronize and when to synchronize them, and also the possibility to customize various parts of the process.

 

That’s all! This post ends the short series about JavaScript intensive web applications. This series is in no way a tutorial that extensively describes all the details of the techniques that have been discussed, but just a guide on how to select the right technique for each application and on how to solve some architectural issues that are not usually discussed elsewhere.

 

Stay tuned! 

Francesco


Dec 21 2013

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax

Category: Javascript | Francesco @ 21:43

JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript


What are the advantages and drawbacks of using Ajax to update web pages? How to decide if Ajax calls should return Html or JSON? In this post I will give some answers to the above questions, and some tricks to enhance with jQuery plug-ins also the Html created dynamically as a consequence of Ajax calls.

Most people that use Ajax, when asked why they are using it, answer that Ajax calls improve performance and user experience… Well, for sure they improve the user experience, but I don’t know if the possible performance improvements are relevant… Finding the right answer to all these questions is the first step toward an optimized use of Ajax-based techniques.

What are the times that compose the total response time of a server request? A network latency time is needed to establish the connection with the server, a transmission time is needed to send all the bytes (it depends on the available bandwidth), a server response time, and a browser re-drawing time. Now, if the server is well designed, and if we are not sending tons of Html, the bottlenecks are the latency time and the browser re-drawing time. With today’s continuous technological improvements, bandwidth will impact performance less and less, and the browser re-draw time too, so the request response time will be more and more tied to the network latency. Accordingly, redrawing the whole page or just a part of it requires almost the same communication time. Moreover, re-drawing a not-negligible area of the page (say 25% of the page) requires almost the same time as re-drawing the whole page, since a whole-page re-draw is more efficient than a partial re-draw.

As a conclusion, in most cases Ajax techniques don’t imply any appreciable improvement in the total response time! So why use Ajax?

  1. If we need to refresh just a small part of the page, as in the case of an auto-complete that writes suggestions under the textbox we are typing in, there is a not-negligible improvement of the response time.
  2. During the Ajax update the state of the remainder of the page is maintained. What does this mean? From the user experience point of view this means something like: the browser will not lose the scrolling we have done, and the textbox the user is writing in will not lose the focus… otherwise it would be impossible to have auto-complete and similar widgets working properly. One might object that we might restore the whole state also after a standard page redraw… Yes, it is true… but before the page has been completely redrawn the user would see the page returning to the top of the document, the textbox disappearing, and so on… and similar unacceptable effects.
    Maintaining the whole state is important also for more macroscopic state information. Think, for instance, of a grid with a detail view that retrieves row details from the server when a row is selected, and shows them in a separate area of the page. A complete page refresh might cause the loss of the whole grid state, that is, the data page shown, sorting and filtering, the possible scrolling of the grid body, etc. Now, in theory, it is possible to rebuild the whole state also after a whole-page refresh, but this might confuse the user, who would see the grid disappearing, being re-drawn in a different position, and then being scrolled till reaching the previous position. However, this isn’t the only drawback: if grid data are taken from a shared database (as is usual…), the same page with the same sorting and filtering might show different data, completely disorienting the user. Moreover, any attempt to take into account all this state information on the server side might undermine the modularity of our application, turning our Controllers into “spaghetti code”.

So when  is it convenient to use Ajax techniques?

Simple, either when we need to update just a small part of the page or when we need to keep the state of a part of the page. Also the reason for implementing our application as a Single Page Application, that is, an application that never leaves the same physical page, is always the same: keeping state information in the physical page. In the next post dedicated to Single Page Applications we will see also other reasons for keeping state information in the page but the fact remains that …the reason why Single Page Applications exist is… keeping state information in the page.

Now, when our pages become more and more complex, keeping state information inside Html nodes may lead to “spaghetti code”, so, in a way that is completely analogous to the Mvc pattern on the server side, it is more and more convenient to store information inside a client-side ViewModel, and then use that ViewModel to render adequate Html. On the server side we use Razor Views to turn models into Html, while on the client side we use client templates to create Html dynamically from a JavaScript model… Well, this is the main reason to use Ajax calls where client and server exchange JSON!

JSON-based Ajax techniques will be discussed in greater detail in the next post of this series. Here it is worth pointing out just when they should be preferred to standard Ajax techniques where the server returns the needed Html immediately. Since JSON techniques conform to the idea of using a client-side ViewModel, which in turn ensures a better modularity, one might draw the conclusion that they should always be preferred to standard Html-returning Ajax techniques!… NO… false: JSON-based techniques should be preferred only when you may use client-side templates! Below are the typical reasons that, in some circumstances, prevent the use of client templates:

  1. Pages created with client templates are not visible to search engines.
  2. Some slow mobile devices might not be able to render client templates with an acceptable performance.

You might object that also in case Ajax calls return Html, that Html is not visible to search engines. TRUE… but IRRELEVANT because, when we use Ajax calls returning Html, the initial page is not rendered with Ajax: the same action method that serves an Ajax request may be called also when the initial page is rendered, but without using Ajax, using @Html.Action(…) and @Html.RenderAction(…) instead. This way, the initial page is completely visible to search engines. Accordingly, if we make a clever use of Html-returning Ajax Mvc controllers we may produce web applications that are completely visible to search engines. Here “clever use” means, for instance, that when we change the page of a grid we don’t do it with Ajax but with a link-based pager (possibly... with a smart encoding of the page number in the URL). In other terms, we should use Ajax only for those operations that are not performed by search engines. So, for instance, we may show a detail area in a grid page, but then the same detail page must be available also as a separate page, either through a "pretty url" or through a link.

 

Let’s see in detail how we may avoid initial Ajax calls when we render the whole page, with the simple example of the grid with a detail view.

Suppose we have a PlannerController with a ToDoList action method that fills a ViewModel with a paged list of ToDoItems, and a DisplayDetailToDo action method that fills a ViewModel with the details of a single ToDoItem. Suppose that we display the ViewModel filled by the ToDoList action method in a ToDoList View containing a grid and a detail area, and suppose that initially the detail area should contain the first item of the grid. Then the detail area of the ToDoList View should be something like:

<div id="detailArea" data-update-url="@Url.Action("DisplayDetailToDo", "Planner")">
    @Html.Action("DisplayDetailToDo", "Planner", new {ItemId=Model.Items[0].ItemId})
</div>

Then, whenever the user selects the ToDoItem with Id selectedId in the grid, we perform the Ajax call:

var ajaxRoot = $('#detailArea');
ajaxRoot.load(ajaxRoot.attr("data-update-url") + "?ItemId=" + selectedId);

We may place this call in a click handler (my previous post shows how to add click handlers modularly) that catches all events bubbled by the rows of the grid. The grid-side code depends on the chosen grid, but we may take selectedId from an Html5 attribute of the button, link, or other Html node used to select the grid row, as in the sketch below.
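For instance, a delegated click handler might look like this (the grid id and the data-item-id attribute are assumptions for the example):

$('#toDoGrid').on('click', '[data-item-id]', function () {
    var selectedId = $(this).attr('data-item-id'); // id stored on the row-selection node
    var ajaxRoot = $('#detailArea');
    ajaxRoot.load(ajaxRoot.attr('data-update-url') + '?ItemId=' + selectedId);
});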

In both cases we use the same action method that should be something like:

public ActionResult DisplayDetailToDo(int ItemId)
{
    var model = repository.GetToDo(ItemId);
    ...
    ...
    ...
    return PartialView(model);
}

Where I omitted all error handling code.

Our problem now is how to enhance also the Html returned by the Ajax call with jQuery widgets. We may use basically the same technique I showed in my previous post, based on the widgetsHelpers.initialize method, since when we insert new Html with the jQuery .html method all the JavaScript contained in the Html string is executed. However, the widgetsHelpers.initialize method contains the .ready jQuery method… which doesn’t work with dynamically added content. This problem is easily solved with a temporary substitution of the .ready jQuery method with a custom method during the processing of the Ajax response:

var delayedExecution = [];
var newReady = function (x) {
    delayedExecution.push(x);
};
var oldReady = jQuery.fn.ready;
jQuery.fn.ready = newReady;
try {
    //response processing here
}
finally {
    jQuery.fn.ready = oldReady;
}
for (var i = 0; i < delayedExecution.length; i++)
    delayedExecution[i]();

 

Where in most cases the response processing is just the call to the jQuery .html method. Thus, we may define a widgetsHelpers.dynamicHtml(jTarget, html) method that does the job of attaching an Html string to a jTarget node while ensuring that all the JavaScript enhancements contained in the Html string are properly applied:

widgetsHelpers.dynamicHtml = function (jTarget, html) {
    var delayedExecution = [];
    var newReady = function (x) {
        delayedExecution.push(x);
    };
    var oldReady = jQuery.fn.ready;
    jQuery.fn.ready = newReady;
    try {
        jTarget.html(html);
    }
    finally {
        jQuery.fn.ready = oldReady;
    }
    for (var i = 0; i < delayedExecution.length; i++)
        delayedExecution[i]();
};


However, we have another problem, too: avoiding that the jQuery plug-ins we apply to the newly added Html are re-applied to the remainder of the Html page. In fact if, for instance, we enhance all input fields contained in our dynamic Html that have the “datetime” CSS class with a Bootstrap datepicker, the datepicker plug-in would be re-applied also to the input fields of the remainder of the page with the same class. Often jQuery plug-ins are robust, and re-applying them to the same nodes doesn’t produce any effect. However, you can’t rely on this robustness and, in any case, a similar solution would be very inefficient. The only way out is using different names… however, as we have seen in my previous post, the code for generating datepickers is all contained in a unique Date.cshtml partial view that is called both by our initial page and by any other Ajax request.

Actually, this is not the only “name convention” problem of Ajax-provided content that we must solve! Normally, in Asp.net Mvc all input fields have names that MUST be strictly tied to the position where their content will be inserted in the ViewModel of the action method that receives the data posted by the client. So, for instance, a date that must be inserted in the DateOfBirth property of a Person instance contained in the PersonalInfos property of the ViewModel MUST be rendered in an input field with name PersonalInfos.DateOfBirth and id PersonalInfos_DateOfBirth, otherwise the default model binder wouldn’t be able to fill the ViewModel properly. The dot in the name is turned into an underscore in the id because the id can’t contain dots. Now, if the ViewModel used to render the page is the same as the model used to receive the post, all the above conventions are automatically enforced by the Asp.net Mvc Html helpers (TextBoxFor, etc.).

However, in general the ViewModel used by the Ajax controller differs from the one used for the initial page, since the Ajax call furnishes just a part of the page data. So, for instance, in our previous example, if the Ajax call returns just the Html obtained by rendering a Person object, the name of our date field would be DateOfBirth instead of PersonalInfos.DateOfBirth. Now, if the Person data are submitted separately with another Ajax call to an action method that uses a Person ViewModel, all works ok; but if the Person data must be submitted together with the main page data, we must somehow add the PersonalInfos prefix.

Adding the PersonalInfos prefix to all input fields rendered by the partial view used by the controller that responds to the Ajax request is quite easy. It is enough to add the following code at the beginning of the View:

Html.ViewData.TemplateInfo.HtmlFieldPrefix = "PersonalInfos";

The prefix should be added just to the top-level Partial View, since each time we call EditorFor and DisplayFor the Asp.net Mvc engine takes care of defining the right prefix for the child Partial Views. However, in general the server doesn’t know this prefix, since the prefix depends on the role that the Ajax content will play in the overall page ViewModel. Suppose, for instance, that the Html returned by the Ajax call must be used to add a new row to a grid that contains Person data. The prefix to add should be something like AllPersons[i], where i is the 0-based index of the new row in the grid. So if the grid already contains 10 rows, i=10; if the grid already contains 15 rows, i=15; and so on. In other terms, only the client may know our prefix! So we must add the prefix as a further parameter of the Ajax call.

Unluckily the previous prefix, in general, cannot be used also to solve the problem we have with the datetime CSS class, because in this second case the CSS class must be unique within each Ajax call, not within a specific position in the ViewModel. Accordingly, for the CSS classes used to enhance the Ajax Html we might use a different prefix, based on a count of all the Ajax calls made to the server from the current Html page:

(function ($) {
    ...
    ...
    ...
    var ajaxCount = 0;
    widgetsHelpers.newClassPrefix = function () {
        return "classprefix" + (ajaxCount++);
    };
})(jQuery)

The two prefixes must be added to the parameters of the Ajax call together with the original request parameters, say,  personId, to get the final request URL:

ajaxRoot.attr("data-update-url")+"?personId="+personId+"&htmlPrefix="+htmlPrefix+"&classPrefix="+classPrefix
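Putting the pieces together on the client side (the grid markup and container ids are assumptions for the example):

// hypothetical example: loading a new Person row with both prefixes
var i = $('#personsGrid tbody tr').length;           // 0-based index of the new row
var htmlPrefix = 'AllPersons[' + i + ']';            // only the client knows this prefix
var classPrefix = widgetsHelpers.newClassPrefix();   // unique per Ajax call
var ajaxRoot = $('#newRowContainer');
$.get(ajaxRoot.attr('data-update-url'),
    { personId: personId, htmlPrefix: htmlPrefix, classPrefix: classPrefix },
    function (html) {
        widgetsHelpers.dynamicHtml(ajaxRoot, html);  // applies the JavaScript enhancements too
    });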

On the server side, any Ajax enabled controller must take care of receiving the two prefixes:

public ActionResult PersonData(int personId, string htmlPrefix, string classPrefix)
{
    if (!string.IsNullOrWhiteSpace(htmlPrefix)) ViewData["htmlPrefix"] = htmlPrefix;
    if (!string.IsNullOrWhiteSpace(classPrefix)) System.Web.HttpContext.Current.Items["classPrefix"] = classPrefix;
    var model = repository.GetPersonById(personId);
    ...
    ...
    ...
    return PartialView(model);
}

 

The classPrefix has been added to the HttpContext dictionary, since it must be used by all Partial Views called in the current request, while the htmlPrefix has been added to the ViewData since it must be used just by the top level Partial View.

Now in the top level Partial View:

@{
    Html.ViewData.TemplateInfo.HtmlFieldPrefix = ViewData["htmlPrefix"] as string ?? "";
    string classPrefix = System.Web.HttpContext.Current.Items.Contains("classPrefix") ?
        System.Web.HttpContext.Current.Items["classPrefix"] as string + "-" :
        "";
}

In the Date.cshtml Partial View, and in general in all Partial Views that might be involved in an Ajax call:

 

@{
    string classPrefix = System.Web.HttpContext.Current.Items.Contains("classPrefix") ?
        System.Web.HttpContext.Current.Items["classPrefix"] as string + "-" :
        "";
}

 

and then in each CSS class enhanced input field:

@Html.TextBoxFor(m => m.DateOfBirth, new {@class=classPrefix+"datetime"})

 

That’s all for now!

In the next post: JSON-based Ajax calls and Single Page Applications.

Stay tuned!

Francesco


Dec 10 2013

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

Category: Javascript | Francesco @ 06:35

JavaScript Intensive Web Applications 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax


There are mainly three ways you may improve your application with JavaScript, each with its advantages and disadvantages:

  1. Enhancing the page Html with JavaScript widgets
  2. Refreshing Html page areas with fresh Html returned by Ajax calls
  3. Creating Html dynamically using JSON returned by Ajax calls

In this post I will speak about the first technique, which is the only one that has substantially no drawbacks. The other ones will be discussed in further posts of the same series.

In this and in all other posts of this series I assume that your web Application is implemented with Asp.net Mvc.

If we suppose that the application submits the user inputs contained in Html input fields with standard form submits, JavaScript becomes just a tool that may improve the appearance of the page and help the user to fill the input fields more easily. In other terms, it becomes a sort of “turbo CSS” we may use to improve the appearance and the user experience. This is the main idea behind all jQuery widgets, which select Html nodes with CSS selectors and enhance them in a way similar to what a CSS rule would do.

Unluckily, the “pseudo-styles” applied by jQuery widgets are not automatically enforced on newly added Html, so jQuery widgets create problems when they are used together with Ajax techniques. We will analyze these problems in detail, and how to solve them, in the posts of this series dedicated to Ajax. In what follows I assume that no dynamic Html is added to the page, or that any small piece of Html that might be added automatically by some jQuery widget doesn’t need further enhancements by other jQuery widgets.

Do JavaScript enhancements have drawbacks? Since all browsers support JavaScript, …substantially no… if some simple cautions are adopted:

  1. You pay attention to cross-browser compatibility. If you use jQuery and jQuery-based frameworks like jQuery UI, jQuery Mobile, Bootstrap, and Zurb Foundation, this should be quite automatic.
  2. All widgets that you use just enhance existing Html. The basic functionality should be available, maybe with an awful, unacceptable appearance, also if JavaScript is not supported. This requirement is not added for compatibility with browsers that don’t support JavaScript… which don’t exist anymore, but for compatibility with the search engines. If your application is an intranet, or if your page should not be available to search engines, you may drop this point. Again, if you use the above-mentioned jQuery frameworks, look at the specifications of the further jQuery plug-ins you might use (most of the existing jQuery plug-ins conform to this requirement), and design your custom jQuery plug-ins properly, this point too should not be a problem.
  3. JavaScript enhancements must not undermine the accessibility of the page. This means all widgets must use the right Html tags and, if needed, ARIA attributes. For instance, something that has the semantics of a list must be rendered with <li> tags also if it is enhanced with JavaScript. <table>…<tr>…<td> must not be used for layout but only for tabular data; if you need a table-like layout, please use adequate CSS like display: table, and similar, instead. All widgets included in the jQuery frameworks I listed previously are ARIA compliant and conform to the requirements of this point.
  4. You use a well-defined architecture, to avoid JavaScript “spaghetti code”. Architectures based on the idea of jQuery plug-ins help a lot, but you need an effective way to organize all the JavaScript modules used by the various pages. I will show you a trick based on require.js and partial views to add modularly as many JavaScript and CSS widget files as you like, without undermining the maintainability of the application.
  5. You pay attention to the development time of each page, and you avoid falling into an endless loop of improvements with new, or better, widgets.

All instructions that enhance the Html must be executed after the DOM is ready; thus they may be inserted either at the end of the page’s Html body, or in the page header enclosed in a jQuery $(document).ready(…) handler.

Now, several “influencers” in the area suggest inserting all JavaScript at the end of the Html body and avoiding the use of .ready(). The reason is that any JavaScript placed in the page header slows down the page rendering. However, most of the time I prefer that the user sees a white page loading rather than a page before it has been enhanced by JavaScript, because when you use complex widgets (a Tab widget is enough to show the phenomenon) the page may be unacceptable before its enhancement, even if a search engine is able to understand its content :). For this reason I usually place all JavaScript libraries in the header (they are slow to load) and the page-enhancing code at the end of the page body. This way, since the page-enhancing code is usually quite fast, the user sees a blank page first, while all the JavaScript libraries are loading, and, after a fast adjustment (while the page-enhancing code is executing), the final page.

I suggest including all page-enhancing code in a separate file that should contain just lines of the type:

$("<selector>").myPlugin();

The file should contain just lines like the one above, to keep the semantics of a “pseudo-CSS” file. This means that if you define custom widgets, the widget code should be included in a different JavaScript library file, which may be included in the page header together with all the other JavaScript libraries.
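For instance, a whole page-enhancing file might be as small as this (clickHandler is the custom extension defined later in this post; the other plug-in names are just examples):

$('.datepicker').datepicker();
$('.click-operation').clickHandler();
$('.menu-widget').menuWidget(); // hypothetical custom widget defined in a library file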

The call to each plug-in should not contain any argument: all plug-in parameters should be inserted in Html5 attributes. This way all enhancement calls become “standard” and may be created automatically by general-purpose JavaScript code (see below). However, this implies that whenever you substitute a widget with another widget that performs the same job, you must also modify the Html; usually this is not a problem if you enclose the Html to be enhanced in a single server module that is called from the remainder of the Html. The example below, involving a Bootstrap datepicker, shows how to proceed:

Html:

<input type="text" class="datepicker" value="02/16/12" data-date-format="mm/dd/yy" id="dp2" >

Enhancing JavaScript code:

$('.datepicker').datepicker();


The single line of JavaScript enhancing code above enhances all input fields with a datepicker CSS class. If you are using Asp.net Mvc, input fields with the datepicker class may be generated automatically with Html.EditorFor(…) if you define a Date.cshtml Mvc template and if you decorate all DateTime properties that represent pure dates with a DataTypeAttribute with a Date value. This way any change to the datepicker parameters requires just a change to the Date.cshtml file.

jQuery Mobile, Bootstrap, and Zurb Foundation assign predefined classes and/or Html5 attributes to all predefined widgets and enhance them automatically on the .ready event, so you need to add an enhancing JavaScript file only if you use custom widgets. We will see the drawbacks of this approach when discussing Ajax techniques.

Event handlers may be attached by specialized jQuery extensions, like in the example below:

 

$('.click-operation').clickHandler();

 

The click-operation class may be applied to all nodes that need a click handler. Then, each single node might contain a data-event-operation Html5 attribute that specifies the specific operation to be carried out on that node. A possible implementation of the clickHandler jQuery extension is:

jQuery.fn.clickHandler = function () {
    this.click(function (evt) {
        switch (jQuery(evt.target).attr("data-event-operation")) {
            case "op1": ....; break;
            case "op2": ....; break;
            ....
        }
        evt.stopPropagation();
    });
    return this;
}

I used evt.target, so the click handler may be used also for bubbled click events. Moreover, I called stopPropagation to prevent the event from bubbling up to a possible ancestor clickHandler.

Returning to the datepicker example: since it is not part of the default widgets Bootstrap comes with, we might decide to substitute it with another widget. Imagine also that, analogously, we would like to substitute other widgets with better implementations… Wow… not an easy job… we should modify a lot of JavaScript files included in all the pages that contain the widgets we have substituted. If we were able to include the references to the datepicker JavaScript file in the same Date.cshtml partial view that contains the Html of the datepicker, it would be enough to make a few modifications to this file, and in 10 minutes we would have a different datepicker working. This way we might also be able to test several widgets easily.

The problem described above is a conceptual problem that is intrinsic to the pseudo-CSS approach used to manage the widgets. Widgets are conceptually different from style rules, because style rules are part of a “closed specification” while widgets are not: there are different widgets that do the same job, and new widgets appear every day. The only way to deal with an “open set” is by enforcing modularity and by defining interfaces. In other terms, we must enclose all the code of a widget in a unique module that offers a standard interface to the remainder of the system.

Below is a simple trick that solves the problem. Let’s add to the bottom of our Date.cshtml file the following snippet of code:

<script type="text/javascript">
    widgetsHelpers.initialize(["@Url.Content("~/Scripts/bootstrap-datepicker.js")"],
                              ["@Url.Content("~/Content/datepicker.css")"],
                              "datepicker",
                              ".datepicker")
</script>

The first argument contains all the JavaScript files with the needed code (it is an array), the second argument the possibly null list of CSS Urls that might be needed, the third argument the name of the jQuery plug-in method to call, and finally the selector that characterizes all the inputs that must be enhanced with the datepicker.

The implementation of the widgetsHelpers.initialize function uses the require.js library to load asynchronously the JavaScript files and is straightforward:

(function ($) {
    window["widgetsHelpers"] = window["widgetsHelpers"] || {};
    var widgetsHelpers = window.widgetsHelpers;
    widgetsHelpers.modules =
        {
            css: {},
            js: {},
            widgets: {}
        };
    function loadCss(url) { //loads a Css file and adds it to the page
        widgetsHelpers.modules.css[url] = true;
        var link = document.createElement("link");
        link.type = "text/css";
        link.rel = "stylesheet";
        link.href = url;
        document.getElementsByTagName("head")[0].appendChild(link);
    }
    widgetsHelpers.initialize = function (js, css, widget, selector) {
        if (!widgetsHelpers.modules.widgets[selector]) {
            widgetsHelpers.modules.widgets[selector] = true;
            $(document).ready(function () {
                if (css) {
                    for (var i = 0; i < css.length; i++)
                        if (!widgetsHelpers.modules.css[css[i]]) loadCss(css[i]);
                }
                if (js) {
                    var nJs = [];
                    for (var i = 0; i < js.length; i++)
                        if (!widgetsHelpers.modules.js[js[i]]) {
                            nJs.push(js[i]);
                            widgetsHelpers.modules.js[js[i]] = true;
                        }
                    if (nJs.length)
                        require(nJs, function () {
                            $(selector)[widget]();
                        });
                    else {
                        $(selector)[widget]();
                    }
                }
                else {
                    $(selector)[widget]();
                }
            });
        }
    };
    widgetsHelpers.loadCss = loadCss;
})(jQuery)

We create a namespace, then we create the dictionary widgetsHelpers.modules to “remember” the JavaScript files, CSS files, and widget modules that have already been loaded. The loadCss function loads the CSS files, which cannot be loaded with require.js.

Finally, the initialize function verifies whether another call has already required the same widget and, if not, on the .ready event it loads both the needed CSS and JavaScript files (if not null and not already loaded); then it applies the widget to the provided selector.

In case a single partial view needs a JavaScript module containing the definitions of several widgets, we may use widgetsHelpers.initializeAll instead:

widgetsHelpers.initializeAll = function (js, css, widgetsArray, selectorsArray) {
    widgetsHelpers.initialize(js, css, widgetsArray[0], selectorsArray[0]);
    for (var i = 1; i < widgetsArray.length; i++) widgetsHelpers.initialize(
        null, null, widgetsArray[i], selectorsArray[i]);
};

widgetsArray and selectorsArray are arrays that contain, respectively, all the widget names and all the jQuery selectors used to reference these widgets from the Html nodes. The JavaScript and CSS files are passed just in the first call to initialize, while all the other calls are needed just to create the pseudo-CSS rules.
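
For instance, a partial view whose widgets are all defined in a single JavaScript bundle might contain something like the sketch below; the bundle, widget, and selector names are hypothetical, just to show the shape of the call:

<script type="text/javascript">
    widgetsHelpers.initializeAll(
        ["@Url.Content("~/Scripts/my-widgets-bundle.js")"],
        ["@Url.Content("~/Content/my-widgets.css")"],
        ["datepicker", "timepicker"],
        [".datepicker", ".timepicker"]);
</script>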

The same partial view may contain several calls to initialize and/or initializeAll in case the widgets are split across different files.

That’s all for now!

In the next post all secrets of Ajax based applications…and new useful tricks.

Stay tuned!

Francesco


Dec 2 2013

JavaScript Intensive Web Applications 1: Getting JavaScript Intellisense

Category: Javascript | Francesco @ 03:43

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax

This is the first of a series of tutorials on the use of client techniques in Web Applications. We will discuss when it is convenient to use Ajax, JavaScript intensive Web pages, Json communication, or Single Page Applications, and how to solve some typical “nightmares” that these techniques bring with them.

In this first tutorial we will try to remove (or at least lower…) one of the main barriers that discourage the development of large JavaScript codebases: the absence of syntax checks and of Visual Studio Intellisense comparable with what we have in strongly typed languages.

Actually, Visual Studio and a lot of other JavaScript editors are able to signal pure syntax errors immediately. The main problem is that they are not as smart at inferring types, and consequently at furnishing adequate intellisense. The reasons for this limitation are basically two:

  1. JavaScript is a dynamic, not strongly typed language. This means that the same variable or function parameter may store different data types, so the JavaScript editor cannot rely on the variable/parameter data type to perform type checking and to give adequate intellisense (see the sketch after this list).
  2. JavaScript has no concept of module reference and/or linking, so a JavaScript file comes to know all the details about external functions and prototypes only at run time, when all the needed modules are actually available.
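
As a minimal sketch of the first point (the variable name below is hypothetical), consider:

var data = 42;                    // here data is a number
data = "forty-two";               // now the same variable holds a string
data = { mult: function (x, y) { return x * y; } }; // now an object
// data.mult(2, 3) is legal only after the last assignment,
// but no static check can tell in which state data actually is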

Visual Studio offers tools for easily resolving the second problem: when you are in a JavaScript file, you may add references to the other JavaScript files used by the current module by using the syntax of Xml comments. Xml comments are JavaScript comments composed of /// followed by adequate Xml expressions. Since they are comments, they are ignored by both JavaScript minifiers and JavaScript interpreters.

The syntax for a JavaScript reference Xml comment is basically:

/// <reference path="/path/subpath/..../JavascriptFileToReference.js" />

We may also use “~” to denote the root of our web application.
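
For instance (the file name here is just a hypothetical example):

/// <reference path="~/Scripts/myModule.js" />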

When we are editing a JavaScript file, it is enough to drag the file we would like to include from the Solution Explorer to the file we are editing to get automatically the reference Xml comment.

If a JavaScript file is included in an Html page or .cshtml page, there is no need to also reference it with a reference Xml comment to get JavaScript help on its code. However, Html or .cshtml files often use JavaScript files that they don’t include directly, for several reasons: 1) they might use code retrieved via AMD; 2) the JavaScript files might be included in a _Layout page, or in another .cshtml page in case they are partial views; 3) the .cshtml file might be used to produce a dynamic JavaScript file instead of an Html page.

In all the above cases we may use a reference Xml comment inside the <script> tags that enclose the JavaScript code, as in the sketch below. However, unluckily, in this case we can’t drag the file to reference, but have to insert the reference Xml comment manually.
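
A minimal sketch, assuming a hypothetical ~/Scripts/myModule.js that is actually loaded elsewhere (for instance in the _Layout page):

<script type="text/javascript">
    /// <reference path="~/Scripts/myModule.js" />
    // here the editor can give intellisense on the functions defined in myModule.js
</script>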

So now we are able to reference JavaScript libraries to get intellisense… the problem now is to actually get intellisense on each JavaScript variable. While JavaScript is not strongly typed, starting from Visual Studio 2012 the JavaScript intellisense improved a lot, and now Visual Studio is able to infer the type that should be contained in a variable from the previous code. For instance, if you write:

(function () {
    var simpleOperation = function () {
        this.mult = function (x, y) {
            return x * y;
        };
    };

and then:

var operation = new simpleOperation();

Then we get help on the variable operation:

[Figure jsNewIntellisense: the intellisense popup showing the mult method of the operation variable]

We get the same help if the object is returned by a factory function:

(function () {
    var simpleOperationFarm = function () {
        return {
            mult: function (x, y) {
                return x * y;
            }
        };
    };
    var operation = simpleOperationFarm();

[Figure jsFarmIntellisense: the same intellisense popup, for a variable returned by a factory function]

In general, Visual Studio 2012 and later does its best to infer a type from a static analysis of the code. However, very often static analysis is not able to infer types in a dynamic language like JavaScript.

In these cases we may use a couple of tricks to “pass” to Visual Studio the information on the type contained in a variable or parameter.

The first trick may be applied to the parameters of a function: immediately after the parameter declarations we may place a param Xml comment:

function (operation) {
    /// <param name = "operation" value = "new simpleOperation()"/>

The value attribute may contain any JavaScript expression, but typically we put a new expression, a factory function call, a simple value (such as an integer or a string), an array, an object, or nested arrays and objects. Below, a suitable value to get help on objects that are elements of an array:

[Figure paramsIntellisense1: intellisense obtained through a param Xml comment whose value is an array of objects]

Notwithstanding some syntax errors… we get our intellisense!

We might obtain a similar result also with:

function (operation) {
    /// <param name = "operation" value = "[{mult: function(x, y){}}]"/>

Now we are able to get help on each function parameter… but often knowing the types of the function parameters is not enough to infer the type of each variable that is local to the function, or the type of an object manipulated by the function (for instance, because it was not passed as a parameter but is part of the function closure). Moreover, sometimes JavaScript functions accept parameters that may take several different types.

Now, we may call a method or read/set a property of an object in a given place of a JavaScript function only if we know that in that part of the code a member or variable must necessarily contain a given type, because we must be sure the property or method we are referring to actually exists! Thus, let us suppose we know that in some part of our code the type of the variable operation, or of the member myObject.operation, must be simpleOperation; then we may enclose that part of our code in a function:

(function (..., operation, ...) {
    /// <param name = "operation" value = "new simpleOperation()"/>
    // now we may get intellisense
    ...
    ...
    ...
})(..., myObject.operation, ...);

In case we can’t enclose the code inside a function, we may use this other trick:

myObject.operation = myObject.operation || new simpleOperation();

Since we supposed we are sure that myObject.operation contains a simpleOperation, the second operand of || will never be evaluated, so our instruction does simply… nothing, except helping Visual Studio to infer the type of myObject.operation.

Needless to say, the second operand of the above || may contain the same kind of expressions as the value attribute of the param Xml comment.
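
For instance, assuming a hypothetical member myObject.items that we are sure always contains an array of operation-like objects, we might write:

// myObject.items is assumed to be always non-null here, so the second operand
// is never evaluated: the statement only helps Visual Studio infer the element type
myObject.items = myObject.items || [{ mult: function (x, y) { return x * y; } }];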

The above tricks enable us to get intellisense in any situation! Thus the main nightmare of JavaScript coding has been “mitigated”!

That’s all for now!

In the next post a deeper analysis of JavaScript intensive Web techniques.

Stay tuned!

Francesco
