Web App performance benchmarking – Realtime and End-to-End

Last night I was thinking about benchmarking the performance of each component of a web app, e.g. client, app server, and data services.
There are great APMs available in the market which give you a good amount of information to start with, but I still felt something was missing and wanted more. The APM I have tried and used is AppDynamics.

So now, what more do we want? AppDynamics provides analytics capabilities, but they come at a cost. So which open-source tools can help?

ELK: this is a really good centralized log-analysis stack which you can configure to meet your expectations. It has a built-in search engine (Elasticsearch), data visualization (Kibana), and, to receive logs and data, Logstash, which supports multiple technologies. Using ELK one can get a high-level view of application health, e.g. transaction success rate, average response time, slow-performing requests, application usage details, etc. With ELK you are covered for application-server and data-services performance.
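As an illustration of that setup (not taken from the original configuration), a minimal Logstash pipeline shipping application logs into Elasticsearch might look like this; the file path, grok pattern, and index name are placeholders:

```conf
input {
  # Tail the application log file (path is a placeholder)
  file { path => "/var/log/app/application.log" }
}
filter {
  # Split each line into timestamp, level, and message fields
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  # Index into Elasticsearch, one index per day
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```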

Into ELK we can push the Apache web access logs, which give you visibility into the usage and performance of the application.

Using MDC filters, one can push service/method performance details to ELK, and exception details as well.
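To illustrate the idea (this is not the actual project code): slf4j's MDC is the usual tool here, but the pattern can be sketched with a stdlib-only stand-in, where a per-thread context map is stamped onto every log line so Logstash can parse the fields:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for slf4j's MDC: a per-thread map of context fields
// appended to every log line shipped to Logstash. Class and field names
// are hypothetical.
public class MdcTimingSketch {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    // Time a unit of work and emit one structured line containing the
    // context fields; real code would pass this to logger.info(...).
    public static String timed(String method, Runnable work) {
        long start = System.nanoTime();
        try {
            work.run();
            put("status", "OK");
        } catch (RuntimeException e) {
            put("status", "ERROR");
            put("exception", e.getClass().getSimpleName());
        }
        put("method", method);
        put("tookMs", Long.toString((System.nanoTime() - start) / 1_000_000));
        return CONTEXT.get().toString();
    }

    public static void main(String[] args) {
        System.out.println(timed("getCustomer", () -> { /* service call */ }));
    }
}
```

The same idea with the real MDC: put the fields in a servlet filter or AOP interceptor, and let the logback/Logstash encoder carry them into Elasticsearch.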

OK, all this is configured and available; what next? So that we don't have to keep watching the logs and data we are capturing, we can configure alerts (email) and dashboards. But my recommendation (if you ask): monitor the logs for at least a week to see whether your new setup is capturing the details you expect, and then tweak the configuration accordingly.

Now the challenge is how to monitor client performance, and do we really need to monitor it in real time?

My thought is: at least monitor it for the pilot release to see how users adapt to your new application and whether there are any issues. I feel it is even more critical than server performance, as most of your testing, of every kind, is done in a controlled environment (machines, network, internet speed, browser types, and even user behavior). So to answer the question of how actual users are using your app and what challenges they face, real-time browser performance metrics/logs will be a real help.

These are my thoughts on real-time monitoring of the client components of a web application.


Redis PROD Setup – Part 1

Recently I worked on an analysis of using Redis as a cache for REST services. I performed the basic configuration of Redis, ran benchmark tests on it, and the results were amazing. I used redis-benchmark for the benchmarking: https://redis.io/topics/benchmarks

First test, where I used a 568-byte data size for GET/SET:

redis-benchmark -q -n 100000 -c 50 -P 12 -r 16 -d 568
PING_INLINE: 337837.84 requests per second
PING_BULK: 331125.84 requests per second
SET: 284090.91 requests per second
GET: 318471.31 requests per second
INCR: 444444.47 requests per second
LPUSH: 349650.34 requests per second
RPUSH: 352112.66 requests per second
LPOP: 392156.88 requests per second
RPOP: 390624.97 requests per second
SADD: 425531.91 requests per second
SPOP: 401606.44 requests per second
LPUSH (needed to benchmark LRANGE): 346020.75 requests per second
LRANGE_100 (first 100 elements): 354609.94 requests per second
LRANGE_300 (first 300 elements): 337837.84 requests per second
LRANGE_500 (first 450 elements): 343642.59 requests per second
LRANGE_600 (first 600 elements): 317460.31 requests per second
MSET (10 keys): 62227.75 requests per second

Second test, where I used a 1000-byte data size for GET/SET; there was still no huge decline in throughput.

redis-benchmark -q -n 100000 -c 50 -P 12 -r 16 -d 1000
PING_INLINE: 369003.69 requests per second
PING_BULK: 416666.69 requests per second
SET: 277777.78 requests per second
GET: 367647.03 requests per second
INCR: 423728.81 requests per second
LPUSH: 277777.78 requests per second
RPUSH: 277777.78 requests per second
LPOP: 462962.94 requests per second
RPOP: 432900.41 requests per second
SADD: 373134.31 requests per second
SPOP: 403225.81 requests per second
LPUSH (needed to benchmark LRANGE): 251889.16 requests per second
LRANGE_100 (first 100 elements): 318471.31 requests per second
LRANGE_300 (first 300 elements): 317460.31 requests per second
LRANGE_500 (first 450 elements): 335570.47 requests per second
LRANGE_600 (first 600 elements): 325732.88 requests per second
MSET (10 keys): 41666.66 requests per second

My Redis setup is 1 master and 4 slaves, and I have configured Sentinel to monitor the instances and elect a new master on failure.
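A sentinel.conf sketch for this topology might look like the following; the IP, port, quorum, and timeouts are placeholders, not the values from my setup:

```conf
# Monitor the master named "mymaster"; a quorum of 2 sentinels must agree it is down.
sentinel monitor mymaster 192.168.1.10 6379 2
# Consider the master down after it is unreachable for 5 seconds.
sentinel down-after-milliseconds mymaster 5000
# Abort a failover attempt that takes longer than 60 seconds.
sentinel failover-timeout mymaster 60000
```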


    • Custom IP address bind configuration: use the private IP address instead of the public one.
    • Change the default port to a custom one.
    • Select the timeout setting carefully; this setting tells the Redis server when to disconnect a client that has been idle for N seconds. If you are considering this setup for use with Spring Cache, align this setting with your Jedis connection-pool settings.
    • The log level for a PROD instance needs to be kept on the lower side, as you may face server issues if your logs occupy more disk space than necessary.
    • Limit the maximum number of databases your server has to handle.
    • The snapshotting section is a critical one: if you have a number of slaves, you need to select the snapshot frequency accordingly. Redis syncs asynchronously with the slaves, but you need to find a good balance between CPU and memory usage and the time to reach an eventually consistent state. One can have multiple conditions to trigger a snapshot, e.g.
        • save 900 1 (after 900 seconds if at least 1 key changed)
        • save 300 10 (after 300 seconds if at least 10 keys changed)
        • save 60 10000 (after 60 seconds if at least 10000 keys changed)
    • Configure the slave's master using the "slaveof" and "masterauth" settings.
    • slave-priority is another key configuration you need to consider if you are planning to use Sentinel. If the master is down, the Sentinel process identifies the next master based on this configuration; the instance with the lowest priority value is promoted. NOTE: do not set it to 0; that value indicates that this Redis instance will never be promoted to master and will always remain a slave.
    • If you have separate private and public IPs, then based on the challenges I faced, always bind the Redis server to the private IP, and use the announce-ip configuration to announce the server's public IP address.
    • Always set the maxclients configuration.
    • Two more very important configurations one should consider while configuring a PROD server; we found these very useful when we load-tested Redis with huge data sets. "maxmemory" sets the maximum memory allocation for Redis, and "maxmemory-policy" tells Redis what to do when the maxmemory threshold is reached; Redis provides different eviction strategies you can choose from.
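Pulled together, the settings above map to a redis.conf fragment roughly like the following; all values are illustrative, not my production ones:

```conf
# Bind to the private IP, not the public one
bind 192.168.1.10
# Custom port instead of the default 6379
port 7379
# Disconnect clients idle for N seconds (align with the Jedis pool settings)
timeout 300
# Keep PROD logging on the lower side
loglevel warning
# Limit the number of databases
databases 4
# Snapshot triggers
save 900 1
save 300 10
save 60 10000
# On slave instances only
slaveof 192.168.1.10 7379
masterauth <password>
# Lower value is preferred for promotion by Sentinel; never set 0
slave-priority 10
maxclients 10000
# Memory ceiling and eviction strategy
maxmemory 2gb
maxmemory-policy allkeys-lru
```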

NEXT : Detailed configuration of Redis Master, Slave, and Sentinel


Date field considerations for REST with different TimeZone

I can see two different scenarios for using a Date field with time zones in mind: one where the user enters a date and you need to display the same date on screen, meaning in the same time zone; and another where the date is entered by an admin-type user in his time zone and you need to display the saved date in each user's own time zone.

The simple workflow is:

1) Select the date and time from the UI with a time-zone option. (You can ignore the time zone if you don't want to display it to the user; while submitting or making the REST request you can add the time-zone details.)

2) On the server, deserialize the date to UTC to persist the date field.

3) When a GET request is made to a resource which has a Date field, retrieve the date from wherever you persisted it and return it with the time-zone details.

4) On the client side, create a Date instance from the passed-in date string; the JavaScript Date object will by default convert it to the client-specific time zone. You can then use the Date object to display your date in the desired format.
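The server side of this round-trip (steps 2 and 3) can be sketched in plain Java. This uses java.time rather than the java.util.Date/Jackson code shown later, and the format pattern adds an explicit am/pm marker, so treat it as an illustration, not the post's exact code:

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// Round-trip sketch: client sends a date with an offset, the server stores
// UTC, and GET returns it with an explicit offset again.
public class DateRoundTrip {
    // Similar to the post's "MM/dd/yyyy hh:mm Z" but with an am/pm marker,
    // which java.time requires for the 12-hour "hh" field.
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("MM/dd/yyyy hh:mm a Z", Locale.US);

    // Step 2: deserialize the incoming string and normalize to UTC.
    public static Instant toUtc(String clientDate) {
        return OffsetDateTime.parse(clientDate, FMT).toInstant();
    }

    // Step 3: serialize the persisted instant back with the UTC offset.
    public static String toWire(Instant stored) {
        return stored.atOffset(ZoneOffset.UTC).format(FMT);
    }

    public static void main(String[] args) {
        Instant utc = toUtc("12/21/2014 05:30 AM +0530");
        System.out.println(toWire(utc)); // prints 12/21/2014 12:00 AM +0000
    }
}
```

DateTimeFormatter is immutable and thread-safe, which sidesteps the SimpleDateFormat issue discussed below.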

In your bean, annotate the field's accessors as

	@JsonSerialize(using = JsonDateSerializer.class)
	public Date getJoinDate() {
		return joinDate;
	}

	@JsonDeserialize(using = JsonDateDeserializer.class)
	public void setJoinDate(Date joinDate) {
		this.joinDate = joinDate;
	}
Java Code to Deserialize JSON to Date

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import com.fasterxml.jackson.core.JsonLocation;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.ObjectCodec;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
public class JsonDateDeserializer extends JsonDeserializer<Date> {

    private static final SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy hh:mm ZZ");

    @Override
    public Date deserialize(JsonParser jp, DeserializationContext ctxt)
            throws IOException, JsonProcessingException {
        ObjectCodec oc = jp.getCodec();
        JsonNode node = oc.readTree(jp);
        String dateString = node.asText();
        try {
            return dateFormat.parse(dateString);
        } catch (ParseException e) {
            throw new JsonParseException(e.getMessage(), JsonLocation.NA);
        }
    }
}

In the above snippet I used a static final SimpleDateFormat; note that SimpleDateFormat is not thread-safe. FastDateFormat is a good alternative to SimpleDateFormat and is used in the serialize snippet listed below. You can also use the Joda-Time API for date and calendar manipulations. Ideally you should not use SimpleDateFormat and should instead move to the Apache Commons Lang date implementations; if you need full parsing and formatting support, use the Joda-Time library.

To serialize from Date to JSON

import java.io.IOException;
import java.util.Date;
import java.util.TimeZone;
import org.apache.commons.lang.time.FastDateFormat;
import org.springframework.stereotype.Component;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
public class JsonDateSerializer extends JsonSerializer<Date> {

    private final FastDateFormat fastDateFormat = FastDateFormat.getInstance("MM/dd/yyyy hh:mm Z", TimeZone.getTimeZone("UTC"));

    @Override
    public void serialize(Date date, JsonGenerator gen, SerializerProvider provider)
            throws IOException, JsonProcessingException {
        String formattedDate = fastDateFormat.format(date);
        gen.writeString(formattedDate);
    }
}

In the above code I created the FastDateFormat instance with the UTC time zone, and to add the time-zone details I added "Z", which appends the time zone in RFC 822 format, i.e. +0530.

In my example I displayed the date as it is converted to a string by default by the JavaScript built-in Date object.

If you want to display the date (particularly with time) on the client per the locale and in a very specific format, I suggest doing the formatting on the Java side based on the user's locale and time zone, as formatting a date in JavaScript to a specific format is tricky. JavaScript has good support for formatting a date to the locale, but if you want a different format it becomes tricky; e.g. with en-US the date-time will be displayed as 12/21/2014, 12:00 AM GMT+5:30, and if you now want to remove the comma or use a different time-zone format, it becomes awkward.

For more details: JavaScript Date Tutorial



Actually, this has nothing to do with REST specifically; how HTTP PUT works and how POST works is what I want to demonstrate through code.

Why is REST mentioned? Because we usually get confused while developing a REST API about when to use PUT and when to use POST for updating and inserting a resource.

Let's start with the actual definitions of these methods (copied from http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):


The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line.

The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.

The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.

If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header (see section 14.30).

Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.


The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI. If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request. If the resource could not be created or modified with the Request-URI, an appropriate error response SHOULD be given that reflects the nature of the problem. The recipient of the entity MUST NOT ignore any Content-* (e.g. Content-Range) headers that it does not understand or implement and MUST return a 501 (Not Implemented) response in such cases.

If the request passes through a cache and the Request-URI identifies one or more currently cached entities, those entries SHOULD be treated as stale. Responses to this method are not cacheable.

The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request — the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource. If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request.

Let's go back to our REST example.

OK, now to make it clearer in REST terms, let's consider a Customer and Order scenario: we have an API to create/modify/get a customer, but for orders we only have "create order for customer", and when we call the GET /CustomerOrders API we get the customer's orders.

APIs we have

GET /Customer/{custID}

PUT /Customer/{custID}

POST /Customer (custID will be part of the HTTP body; this is included to demonstrate the difference between POST and PUT, otherwise for the stated requirement it wouldn't be required)

POST /Order/{custID}

GET /CustomerOrders/{custID}

I have enabled the browser cache by adding the "Cache-Control" header. So let's first see the flow of PUT and GET for a customer.

On the initial load, I called PUT /Customer/1, which placed a new resource on the server, and then called GET /Customer/1, which returned the customer I had placed. Now when I call GET /Customer/1 again, I get the browser's cached instance of the customer.

Now you call PUT /Customer/1 with updated customer values and then call GET /Customer/1; you will observe that the browser calls the server to get the new values. And if you add a breakpoint or increase the wait time in your PUT and make a parallel GET request (Ajax), the GET request will be pending until the PUT is served; the browser marks the cached instance of the resource as stale.

In the case of POST, the new resource will be posted to the server, but if the POST request has not yet been served and you request the same resource using GET, the cached instance will be returned. Once the POST is successful and you make a GET call to the resource, the browser will hit the server to get the new resource.

I added a delay of 100 milliseconds in both PUT and POST and made requests as follows:

1) Called GET /Customer/1 multiple times to check that I was getting the cached resource. Then I called PUT and immediately called GET, and the GET was pending until the PUT was served. Below is the screenshot which explains it.


2) Called GET /Customer/1 multiple times to check that I was getting the cached resource. Then I called POST and immediately called GET, and the GET was served from the cache. Below is the screenshot which explains it.


So the example explains it all: in our customer-order case, the customer should use PUT, both for creating a new customer and for updating one, as we retrieve the customer using the same resource URI; but for orders we used POST, as we don't have the same URI for getting the orders.
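The semantic difference can be sketched with a small in-memory stand-in (a hypothetical class, not the downloadable example code): PUT stores the entity at the URI the client chose and is idempotent, while POST creates a new subordinate resource with a server-assigned URI on every call:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of PUT vs POST semantics over a resource store.
public class PutVsPost {
    private final Map<String, String> store = new LinkedHashMap<>();
    private int nextId = 1;

    // PUT /Customer/{custID}: the client names the resource.
    // Repeating the call overwrites the same entry (idempotent).
    public String put(String uri, String body) {
        boolean existed = store.containsKey(uri);
        store.put(uri, body);
        return existed ? "200 OK" : "201 Created";
    }

    // POST /Order: the server assigns the URI of the new subordinate
    // resource; repeating the call creates a new resource each time.
    public String post(String body) {
        String uri = "/Order/" + nextId++;
        store.put(uri, body);
        return uri; // would be returned in the Location header
    }

    public List<String> uris() {
        return new ArrayList<>(store.keySet());
    }
}
```

Calling put("/Customer/1", ...) twice leaves one resource; calling post(...) twice leaves two, which is exactly why a repeated PUT is safe to retry and a repeated POST is not.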

One More Example for PUT

You have a site which hosts different articles and documents. A client has sent a request to create a new document with the title "WhyToUsePutForCreation"; the request will look like PUT /article/WhyToUsePutForCreation, and once the application creates it, it will respond with 201, i.e. resource created. Now from the client I can list the new document in the documents list, and it will be fetched by calling GET /article/WhyToUsePutForCreation.

Download Code

Strange browser behavior: if I increase the delay in the PUT method to 1000 milliseconds, make a request to PUT /Customer/1, and immediately send a request for GET, the browser waits for the PUT to complete, and after that the GET request either returns the cached resource or calls the server to get the new resource.

I tried different delay times, and the browser's behavior is not fixed regarding when it returns the cached (old) resource and when it requests the resource from the server.

I am not sure if this is correct behavior, but I think it is a problem with the browser, if I consider the description of PUT.


Spring’s Device Detection (Spring Boot) Vs WURFL Device detection

I like the Spring Boot idea very much; it's very impressive. Excellent use of embedded servers and auto-configuration, and best of all, you can complete the functionality with less code.

When I was going through the basic guides for Spring Boot, I tried the Device Detection guide, which detects the device from which a request is made; it is a basic implementation which tells you whether the request came from a mobile, tablet, or PC.

Spring's device detection uses the "User-Agent", "x-wap-profile", and "Accept" HTTP headers to detect the device type; if all of the listed headers fail to identify the device type, the code finally iterates through all the headers to see if the request is from the "Opera Mini" browser, which is used by many mobile users.

Spring's device detection uses the basic algorithm from WordPress's Mobile Pack, which works for a large number of mobile browsers.

If you need to know the full capabilities of the phone, like OS type, touch-screen support, browser type, whether XHTML-MP is supported, etc., then the WURFL API is a good option.

The only thing is that the WURFL API is not updated for Spring Boot and uses old versions of commons-lang and other dependent libraries. As the source code is provided, I modified it a bit and was able to use the WURFL API for device detection.

WURFL has a commercial license available and also has a cloud-based service if you want to try it.

You can download the code from WURFL's repository. To create the GeneralWURFLEngine class as a Spring component, I added the @Component annotation:

@Component(value = "WURFLEngine")
public class GeneralWURFLEngine implements WURFLEngine, WurflWebConstants {

It doesn't have a default constructor, so I added one with the following lines, where the wurfl.zip file contains the XML file with all the mobile-device data:

 static URL filePath = GeneralWURFLEngine.class.getClassLoader().getResource("wurfl.zip");

 public GeneralWURFLEngine() {
     this(new XMLResource(filePath.getPath()));
 }

When I started the Spring Boot application, I added the package of the GeneralWURFLEngine class to the component-scan path. One more major change: as the Maven repository for WURFL is only available if you have a license, I added the jar in the /lib folder.

Below are the changes made to the Spring Boot device-detection guide.

package com.ykshinde.controller;
import javax.annotation.Resource;
import javax.servlet.http.HttpServletRequest;
import net.sourceforge.wurfl.core.WURFLEngine;
import org.springframework.mobile.device.Device;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
@Controller
public class DeviceDetectionController {

    @Resource(name = "WURFLEngine")
    WURFLEngine engine;

    // mapping path assumed; the original post did not show it
    @RequestMapping("/detect-device")
    public @ResponseBody String detectDevice(Device device, HttpServletRequest request) {
        net.sourceforge.wurfl.core.Device device2 = engine.getDeviceForRequest(request);
        StringBuffer deviceCapabilities = new StringBuffer();
        deviceCapabilities.append(" DEVICE_ID : ").append(device2.getId()).append("<br>")
        .append(" DEVICE_OS : ").append(device2.getCapability("device_os")).append("<br>")
        .append(" DEVICE_OS_VERSION : ").append(device2.getCapability("device_os_version")).append("<br>")
        .append(" IS_TABLET : ").append(device2.getCapability("is_tablet")).append("<br>")
        .append(" IS_WIRELESS_DEVICE : ").append(device2.getCapability("is_wireless_device")).append("<br>")
        .append(" MOBILE_BROWSER : ").append(device2.getCapability("mobile_browser")).append("<br>")
        .append(" MOBILE_BROWSER_VERSION : ").append(device2.getCapability("mobile_browser_version")).append("<br>")
        .append(" POINTING_METHOD : ").append(device2.getCapability("pointing_method")).append("<br>")
        .append(" PREFERRED_MARKUP : ").append(device2.getCapability("preferred_markup")).append("<br>")
        .append(" RESOLUTION_HEIGHT : ").append(device2.getCapability("resolution_height")).append("<br>")
        .append(" RESOLUTION_WIDTH : ").append(device2.getCapability("resolution_width")).append("<br>")
        .append(" UX_FULL_DESKTOP : ").append(device2.getCapability("ux_full_desktop")).append("<br>")
        .append(" XHTML_SUPPORT_LEVEL : ").append(device2.getCapability("xhtml_support_level")).append("<br>");
        String deviceType = "unknown";
        if (device.isNormal()) {
            deviceType = "normal";
        } else if (device.isMobile()) {
            deviceType = "mobile";
        } else if (device.isTablet()) {
            deviceType = "tablet";
        }
        deviceCapabilities.append(" DEVICE TYPE (SPRING BOOT) : ").append(deviceType);
        return deviceCapabilities.toString();
    }
}
And below is the response displayed when the request was emulated as coming from a "Samsung Tab":

DEVICE_ID : samsung_galaxy_tab_ver1_subschi800
DEVICE_OS : Android
IS_TABLET : true
MOBILE_BROWSER : Android Webkit
POINTING_METHOD : touchscreen
PREFERRED_MARKUP : html_web_4_0

NOTE: If you are going to use the WURFL API commercially, please do check its licensing.

 Download Code


Cross-site request and Server configuration

While developing a REST web service, we came across the error "No 'Access-Control-Allow-Origin' header is present on the requested resource" with HTTP status code 403. After exploring it further, I found the details of CORS and how browsers support it. Obviously, as a user (mostly a developer) I could disable the security and make the request, but let's understand the ideal way of handling it.

What is CORS?

This document defines a mechanism to enable client-side cross-origin requests. Specifications that enable an API to make cross-origin requests to resources can use the algorithms defined by this specification. If such an API is used on http://example.org resources, a resource on http://hello-world.example can opt in using the mechanism described by this specification (e.g., specifying Access-Control-Allow-Origin: http://example.org as response header), which would allow that resource to be fetched cross-origin from http://example.org.

You can find more details on : http://www.w3.org/TR/cors/

So if you are making a request to the same server but with a different URL origin, the request will be treated as cross-domain; e.g. http://localhost and http://localhost:8080 count as different origins. The specification is implemented by browsers to enforce the same-origin policy and security. When the browser makes a request, it checks the page's origin against the URL of the request being made; if they don't match (protocol + domain + port number must all be the same), a pre-flight request is made to the server, which is nothing but the same request with the HTTP OPTIONS method. E.g. if you make a call from http://localhost:8080/app/index.html to a resource on a different origin, a request with the OPTIONS method is sent first.

Pre-flight Request 

OPTIONS /app/home HTTP/1.1 
Connection: keep-alive 
Cache-Control: max-age=0 
Access-Control-Request-Method: GET 
Origin: http://localhost:8080 
Access-Control-Request-Headers: accept, content-type 
Accept: */* 
Referer: http://localhost:8080/app/index.html 


Pre-flight Response

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost:8080 
Access-Control-Allow-Credentials: true 
Access-Control-Max-Age: 1800 
Access-Control-Allow-Methods: GET 
Access-Control-Allow-Headers: content-type,access-control-request-headers,access-control-request-method,accept,origin,x-requested-with 
Content-Length: 0


Only if your server returns the Access-Control-Allow-Origin header can the client access the targeted site's content; when the pre-flight request is successful, the browser makes the original request. Once the request succeeds, the browser caches the origin, target URL, max-age, and header details, so subsequent requests to the same URL are served directly, and in that case no pre-flight request is sent.

On the server side, how can I support cross-site requests?

We use Tomcat; from Tomcat version 7.0.40 the CorsFilter is included by default, and you just need to enable it for your application. The minimal configuration is:
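Going by the Tomcat documentation, the minimal web.xml registration looks like this:

```xml
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```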


You can specify optional parameters by adding init parameters; follow this link for more details.



If you have Apache or any other HTTP server between the client and your application server, then make sure you have implemented cross-site support on those servers.
If your application is used by, or open to, a limited set of applications, it is recommended to use a specific list of domain names for "Allowed Origin"; if you have a global web API, then you can use *.
More on the OPTIONS method

This method allows the client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action or initiating a resource retrieval. Mostly, OPTIONS requests do not have a body, but the specification is open, and in the future an OPTIONS body may be supported to make a detailed query on the server.

BTW: WebSocket doesn't fall under the same-origin/cross-site policy, so if you create a WebSocket to a different URL it will work.

Download code from GIT


Running Sonar Analysis Using Maven

In the last post I covered how to configure SonarQube. Let's see how you can run a Sonar code analysis using Maven.

You need to enable the sonar profile; for that, add the following profile configuration to your settings.xml.
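A typical sonar profile for settings.xml looks like the following; the JDBC URL, credentials, and host URL are placeholders to adapt to your SonarQube/MySQL setup:

```xml
<profiles>
  <profile>
    <id>sonar</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <!-- MySQL database used by SonarQube -->
      <sonar.jdbc.url>jdbc:mysql://localhost:3306/sonar?useUnicode=true&amp;characterEncoding=utf8</sonar.jdbc.url>
      <sonar.jdbc.username>sonar</sonar.jdbc.username>
      <sonar.jdbc.password>sonar</sonar.jdbc.password>
      <!-- SonarQube web application -->
      <sonar.host.url>http://localhost:9000</sonar.host.url>
    </properties>
  </profile>
</profiles>
```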


You can add the above code to the global settings.xml, which is kept in the c:\users\\.m2 folder. Or you can create a settings file in your parent project and run Maven with mvn --settings settings.xml; Maven's --settings option is used to provide a custom settings file.

In your parent pom.xml, you can specify sonar specific properties.
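For example, properties such as the project sources and language can go in the parent pom's <properties> section; the values below are illustrative:

```xml
<properties>
  <sonar.language>java</sonar.language>
  <sonar.sources>src/main/java</sonar.sources>
  <sonar.exclusions>**/generated/**</sonar.exclusions>
</properties>
```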


To run the SonarQube analysis run

mvn sonar:sonar

Maven will connect to the MySQL server and run the code analysis on your code using the configured rules; once the instrumentation is done, the data is persisted to the MySQL server. Once the build is successful, you can access the report in your SonarQube application.
