Live Portal Trace for Enonic CMS

[Screenshot: Live Portal Trace showing the longest page requests]

There is nothing more exciting than seeing how fast Enonic CMS can perform, and nothing more valuable than discovering which pages run slow.

OK - maybe a bit of an exaggeration, but this is how I feel after living in the world of Live Portal Trace for quite a while now. I have been designing it, building it, honing it and using it to resolve customer problems.

For those who are new to LPT - it gives you the ability to:

  • Get an overview of the slowest requests since restart.
  • Inspect why a certain request (typically a page) performs slowly. Every potentially tedious part of a request is traced, and in the user interface you can drill down to find that slow-performing content query.
  • See how many requests are completed per second, with a graph of the history
  • Browse statistics for the Entity Cache, the Page Cache and Java memory

The first version of LPT was made using:

  • Freemarker to generate both HTML and JSON
  • jQuery for requests in the background, JSON parsing and base64 decoding
  • Sparklines for the graphs

Only the rows in the history table were transferred as JSON; the other tables, for the longest requests, were pre-generated HTML on the server side. The trace detail info was not only pre-generated HTML, but also base64-encoded to avoid issues with quotes, since it was transferred to the client as part of a JSON document. In the browser, the base64-encoded trace detail was decoded and stored on the table row. When the user clicked the row, the HTML was displayed in the trace detail popup.
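
Roughly, the client side of that scheme looked like this (a sketch reconstructed from the description above, not the actual code; the response shape is assumed, and atob stands in for the jQuery base64 plugin):

```javascript
// Old scheme (sketch): each row carried its pre-rendered detail HTML
// base64-encoded inside the JSON to dodge quote-escaping problems
$.each(response.rows, function (i, row) {
    $('<tr/>')
        .data('detail', atob(row.detailBase64)) // decode, store on the row
        .appendTo('#history tbody');
    // clicking a row displayed the stored HTML in the detail popup
});
```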

This design not only proved error-prone and difficult to maintain, but loading several hundred traces/rows also performed quite badly in most browsers. As a workaround I had to restrict the total number of rows inserted into the table to a livable amount - once the limit was reached, the last row was removed.

You could say that the first version(s) of LPT were characterized by the urgent need to discover performance issues on customer installations - not by good code.

So, with a strong urge to make up for these sins - and to learn more about JavaScript - I chose Live Portal Trace as my LAB project, with the following goals:

  • Better handling of high frequency of traces and large amounts of traces [in progress]
    • All trace data transferred as JSON, no more generating HTML on the server side except for the page itself [done]
    • No more slow base64 encoding and decoding [done]
    • Preferences for the number of traces to keep in the browser [not started]
  • Ability to load a large amount of traces much faster in the browser [done]
  • Better presentation of the trace details [in progress]
    • Present the trace data in an easy-to-use third-party tree table [done]
    • Enhance the presentation of trace details [in progress]
  • Export of trace data so it can be used in a spreadsheet without any manual work [not started]
  • Ability for a trace to differentiate between the total time spent doing application logic vs. reading from the persistence layer [not started]
  • Ability to present detailed graphs for specific URLs so you can discover variations in performance of the same request over a period of time [not started]

So, when deciding how to generate the JSON for the traces, I concluded that the best way is to have one model. I already had a more or less excellent domain model in the Java code for generating the HTML on the server side - so why shouldn't the same model work for the JSON on the client side? Maintaining two models, one based on the other, means more maintenance work. Having one external and one internal model might make the external one less prone to change, but at the cost of it starting to lie about the truth (the internal model) as that changes. So I landed on generating JSON directly from the Java classes using the Jackson JSON processor, knowing it is very important to get the first version right.

Generating JSON from your Java objects is like cooking with gas - it quickly gives results:
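
A minimal sketch (PageRequestTrace and its fields are invented stand-ins for LPT's real domain classes, and the import is Jackson 2's package - the 1.x package name differs, but the call is the same):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Invented stand-in for the real trace domain class
public class PageRequestTrace {

    public String url;
    public long durationMs;

    public PageRequestTrace(String url, long durationMs) {
        this.url = url;
        this.durationMs = durationMs;
    }

    public static void main(String[] args) throws Exception {
        // One call turns the whole object graph into JSON
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(new PageRequestTrace("/news", 123));
        System.out.println(json); // {"url":"/news","durationMs":123}
    }
}
```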

The nice thing about feeding the browser JSON instead of HTML is that it is very versatile - it's almost like sending over the Java objects themselves. This means I can use the data for a lot more than just presenting it in raw form: information can easily be aggregated, numbers summarized and advanced graphs presented - all without asking the server for a different view on the data, as long as it is painless to do in JavaScript, of course.
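
For example, a per-URL total can be computed in a few lines on the client (a sketch - the trace field names are my assumptions):

```javascript
// Total time per URL, aggregated entirely in the browser
// (trace.url and trace.durationMs are assumed field names)
var perUrl = {};
for (var i = 0; i < traces.length; i++) {
    var trace = traces[i];
    perUrl[trace.url] = (perUrl[trace.url] || 0) + trace.durationMs;
}
```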

Generating HTML from the JSON on the client side using JavaScript proved to be quite fast and less memory-consuming. Now hundreds - thousands! - of requests load fast, and it's possible to click around even at high request frequencies.
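
One way to keep it fast is to build all the rows as strings first and touch the DOM only once (a sketch - the table id and field names are assumptions):

```javascript
// Build every row as a string, then insert them in one DOM operation
// ('#history' and the trace fields are assumptions)
var rows = [];
for (var i = 0; i < traces.length; i++) {
    var trace = traces[i];
    rows.push('<tr><td>' + trace.url + '</td><td>' + trace.durationMs + ' ms</td></tr>');
}
$('#history tbody').append(rows.join(''));
```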

Was it as simple to use the JSON as it was fast? Yes - parsing it is straightforward using jQuery.
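
For example (a sketch - the endpoint URL is an assumption, not LPT's actual one):

```javascript
// jQuery fetches and parses the JSON in a single call
$.getJSON('lpt/traces.json', function (traces) {
    // 'traces' is already plain JavaScript objects and arrays
    console.log('Loaded ' + traces.length + ' traces');
});

// Or, given the JSON as a raw string (jsonString assumed to hold it):
var traces = $.parseJSON(jsonString);
```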

Using it is too. Looping over the arrays in the JSON can be done the old-fashioned way:
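
```javascript
// the field name (durationMs) is an assumption about the trace JSON
var total = 0;
for (var i = 0; i < traces.length; i++) {
    var trace = traces[i];
    total += trace.durationMs; // e.g. sum up the durations
}
```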

or using the jQuery iterator:
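
```javascript
// the same loop, with jQuery driving the iteration
var total = 0;
$.each(traces, function (index, trace) {
    total += trace.durationMs;
});
```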

So when will these things become available? Possibly as early as 4.5.7.
