Points load tester should consider

For the last 6-7 months, I have been working closely on load testing a couple of applications. While working on them I realized a few capabilities a load tester should have.

1) One should have a good understanding of the application domain, even more than a functional tester. To craft the load test plan, one should be aware of how users actually use the application and should validate any assumptions about how the system will be used. A recommended way is to observe a few users to better understand application usage.

2) One should have a solid understanding of the application architecture: how do the components interact? What is the difference between the non-prod and prod systems from an infrastructure perspective? What will be the network bandwidth? What will be the average think time? And so on.

3) One should be technically strong and creative enough to gather the load test data, or find a way to generate it. This is critical for building independent, low-maintenance, automated load tests. One can have a utility which runs prior to the load test to generate test data. A process for test data management is the key to successful continuous delivery with zero human interaction.

4) One should be able to identify the peak load details based on historical data. If historical data is not available, one needs to come up with approximate numbers. As a load test may run for 1-2 hours, one needs a good mix of different transactions. NOTE: once you baseline the performance of each transaction through the full test (1-2 hours), in CD you can run shorter tests just to verify that a build is not regressing the baseline numbers. For major builds, one can run the full test.

5) And the most important one: expert knowledge of the load testing tool, its protocols, command-line and configuration options, how the tool works, etc.


Web App performance benchmarking – Realtime and End-to-End

Last night I was thinking about benchmarking the performance of each component of a web app: client, app server, and data services.
There are great APMs available in the market which give you a good amount of information to start with, but I still felt something was missing and wanted more. The APM I tried and used is AppDynamics.

So what more do we want? AppDynamics provides analytics capabilities, but they come at a cost. 😁 So which open-source tools can help?

ELK: this is a really good centralized log analysis stack which you can configure to meet your expectations. It has a built-in search engine (Elasticsearch) and data visualization (Kibana), and to collect the logs or data you have Logstash, which supports multiple technologies for receiving data. Using ELK one can get the high-level health of the application, e.g. transaction success rate, average response time, slow-performing requests, application usage details, etc. With ELK you are covered for application server and data services performance.

Into ELK we can push the Apache web access logs, which give you visibility into the usage and performance of the application.
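As an illustrative sketch, a minimal Logstash pipeline for Apache access logs could look like this; the log path, Elasticsearch host, and index name are placeholder assumptions to adjust for your environment.

```conf
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}
filter {
  # parse the standard Apache combined log format
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # use the request timestamp from the log, not ingestion time
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-access-%{+YYYY.MM.dd}"
  }
}
```

With fields like response code and response time parsed out, the Kibana dashboards for success rate and slow requests mentioned above become straightforward.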

Using MDC filters one can push service/method performance details to ELK, and yes, exception details too.

OK, all this is configured and available; what next? So that we don't have to keep watching the logs and data we are capturing, we can configure alerts (email) and dashboards. But my recommendation (if you ask 😎): monitor the logs for at least a week to see whether your new setup is capturing the details you expect, and then tweak the configuration accordingly.

Now the challenge is how to monitor client performance, and do we really need to monitor it in real time?

My thought is: at least monitor it for the pilot release, to see how users are adapting to your new application and whether there are any issues. I even feel it's more critical than server performance, as most of your testing, of all kinds, is done in a controlled environment (machines, network, internet speed, browser types, and even user behavior). So to answer the question of how actual users are using your app and what challenges they are facing, real-time browser performance metrics/logs will be a real help.
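As one hedged sketch of how such browser metrics could be captured, the Navigation Timing API exposes the raw timestamps. The helper below derives a few metrics from a timing-like object; keeping the timing object a parameter (rather than reading `window.performance.timing` directly) makes the calculation testable outside a browser, and the beacon endpoint named in the comment is hypothetical.

```javascript
// Derive real-user page-load metrics from a Navigation Timing style object.
// In a browser, pass window.performance.timing.
function computeMetrics(timing) {
  return {
    // time to first byte: network plus server processing
    ttfb: timing.responseStart - timing.navigationStart,
    // DOM ready and full page load as experienced by the real user
    domReady: timing.domContentLoadedEventEnd - timing.navigationStart,
    pageLoad: timing.loadEventEnd - timing.navigationStart
  };
}

// In the browser, the metrics could then be sent home, e.g.:
//   navigator.sendBeacon('/rum-metrics',
//       JSON.stringify(computeMetrics(performance.timing)));
// ('/rum-metrics' is a hypothetical endpoint you would have to implement,
//  and its output could feed the same ELK stack described above.)
```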

Those are my thoughts on real-time client-side performance monitoring of web applications.


Redis PROD Setup – Part 1

Recently I worked on an analysis of using Redis as a cache for REST services. I performed the basic configuration of Redis, ran a benchmark test on it, and the results were amazing. I used redis-benchmark for the benchmarking: https://redis.io/topics/benchmarks

First test, where I used a 568-byte data size for GET/SET:

redis-benchmark -q -n 100000 -c 50 -P 12 -r 16 -d 568
PING_INLINE: 337837.84 requests per second
PING_BULK: 331125.84 requests per second
SET: 284090.91 requests per second
GET: 318471.31 requests per second
INCR: 444444.47 requests per second
LPUSH: 349650.34 requests per second
RPUSH: 352112.66 requests per second
LPOP: 392156.88 requests per second
RPOP: 390624.97 requests per second
SADD: 425531.91 requests per second
SPOP: 401606.44 requests per second
LPUSH (needed to benchmark LRANGE): 346020.75 requests per second
LRANGE_100 (first 100 elements): 354609.94 requests per second
LRANGE_300 (first 300 elements): 337837.84 requests per second
LRANGE_500 (first 450 elements): 343642.59 requests per second
LRANGE_600 (first 600 elements): 317460.31 requests per second
MSET (10 keys): 62227.75 requests per second

Second test, where I used a 1000-byte data size for GET/SET; there is still no huge decline in throughput.

redis-benchmark -q -n 100000 -c 50 -P 12 -r 16 -d 1000
PING_INLINE: 369003.69 requests per second
PING_BULK: 416666.69 requests per second
SET: 277777.78 requests per second
GET: 367647.03 requests per second
INCR: 423728.81 requests per second
LPUSH: 277777.78 requests per second
RPUSH: 277777.78 requests per second
LPOP: 462962.94 requests per second
RPOP: 432900.41 requests per second
SADD: 373134.31 requests per second
SPOP: 403225.81 requests per second
LPUSH (needed to benchmark LRANGE): 251889.16 requests per second
LRANGE_100 (first 100 elements): 318471.31 requests per second
LRANGE_300 (first 300 elements): 317460.31 requests per second
LRANGE_500 (first 450 elements): 335570.47 requests per second
LRANGE_600 (first 600 elements): 325732.88 requests per second
MSET (10 keys): 41666.66 requests per second

My Redis setup is 1 master and 4 slaves, and I have configured Sentinel to monitor the instances and elect a new master on failure. The key configuration points:


    • Custom IP address bind configuration: used the private IP address instead of the public one.
    • Changed the default port to a custom one.
    • One needs to select the timeout setting carefully; it tells the Redis server to disconnect a client if it is idle for N seconds. If you are considering this setup for use with Spring Cache, align this setting with your Jedis connection pool settings.
    • The log level for a PROD instance needs to be kept on the lower side, as you may face server issues if your logs occupy more disk space than necessary.
    • Limit the maximum number of databases your server needs to handle.
    • The snapshotting section is a critical one: if you have a number of slaves, you need to select the snapshot frequency accordingly. Redis syncs asynchronously with the slaves, but you need a good balance between CPU and memory usage and the time to reach an eventually consistent state. One can have multiple conditions to trigger a snapshot, e.g.
    • save 900 1 (after 900 seconds if at least 1 record changed)
    • save 300 10 (after 300 seconds if at least 10 records changed)
    • save 60 10000 (after 60 seconds if at least 10,000 records changed)
    • Configure the slave's master using the “slaveof” and “masterauth” settings.
    • slave-priority is another key configuration you need to consider if you are planning to use Sentinel. If the master is down, the Sentinel process identifies the next master based on this configuration; the slave with the lower priority will be chosen as the next master. NOTE: do not set it to 0; that value indicates that this Redis instance will never be promoted to master and will always remain a slave.
    • If you have separate private and public IPs, then based on challenges I faced: always bind the Redis server to the private IP, and use the announce-ip configuration to announce the Redis server's public IP address.
    • Always set the maxclients configuration.
    • Two more very important configurations one should consider for a PROD server (we found these very useful when we load tested Redis with huge data sets): “maxmemory” tells Redis the maximum memory allocation, and “maxmemory-policy” tells Redis what to do when the maxmemory threshold is reached; Redis provides different strategies to choose from.
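Pulling the points above together, a sketch of the relevant redis.conf entries might look like this. The IP addresses, port, passwords, and limits are placeholders for illustration, not recommendations; the announce directive name varies by Redis version (slave-announce-ip in older releases, replica-announce-ip later).

```conf
# Bind to the private IP, not the public one
bind 10.0.0.5
port 6380                  # custom port instead of the default 6379
timeout 300                # disconnect clients idle for N seconds
loglevel warning           # keep PROD logging on the lower side
databases 4                # limit the number of databases

# Snapshotting: balance CPU/memory cost against consistency lag
save 900 1
save 300 10
save 60 10000

# Replication (on the slave)
slaveof 10.0.0.5 6380
masterauth yourMasterPassword
slave-priority 10          # lower value is preferred by Sentinel; never 0
slave-announce-ip 203.0.113.7   # public IP to announce, if IPs differ

maxclients 10000
maxmemory 2gb
maxmemory-policy allkeys-lru
```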

NEXT : Detailed configuration of Redis Master, Slave, and Sentinel


Protobuf Performance Comparison and points to make decision

What is Protobuf?

Developed by Google for object serialization, Protobuf is an open-source library available for multiple languages. It does object serialization much like XML, but it is faster, produces smaller payloads, and its serialization and deserialization are quicker than most other available approaches.

What is the procedure?

One needs to define the object structure by writing a .proto file, which defines the required and optional fields of the object.

Once the .proto file is written, one needs to use the supplied code generator; this utility is language specific and generates language-specific code. For Java, you can think of this utility as generating the Java POJOs for serialization and deserialization.

Now, using the supplied library, the generated beans/models, and the .proto files, one can serialize or deserialize the response.
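For illustration, a minimal .proto definition might look like the following; the message and field names are hypothetical. Running `protoc --java_out=. person.proto` would generate the corresponding Java classes.

```protobuf
// person.proto - proto2 syntax, where required/optional apply
syntax = "proto2";

message Person {
  required int64 id = 1;
  required string name = 2;
  optional string email = 3;
}
```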

Why should I use it?

  • JSON and XML transmit data along with metadata, which adds weight to the payload and requires more memory compared to Protobuf. Protobuf produces dense, compact data: compared to XML it takes roughly 1/3rd the size, and compared to JSON roughly 1/2.
  • JSON and XML are human readable, which may be undesirable when transmitting data over the network. If you don't want your response to be readable by the user, you can use Protobuf (keeping in mind this is obfuscation, not encryption).
  • The consumer of the service needs the .proto file to deserialize the object stream.
  • Less CPU and memory is consumed for serialization and deserialization, so processing time on mobile devices is faster compared to JSON.


Here I considered a web application which sends data using a REST service, and a web page which renders the data on screen. I measured the total time to render a page using JSON and Proto, end to end, to make sure I was covering serialization, data transmission, deserialization, and DOM rendering. I compared different network speeds: broadband, 3G, and 2G.


Test 1 (large payload)

Network       Metric         JSON     Proto
Broadband     Time           555 ms   359 ms
Broadband     Payload size   1.2 MB   684 KB
3G (1Mb/s)    Time           7.93 s   4.6 s
3G (1Mb/s)    Payload size   1.2 MB   684 KB
2G            Time           22 s     13.73 s
2G            Payload size   1.2 MB   684 KB

Test 2 (medium payload)

Network       Metric         JSON     Proto
Broadband     Time           288 ms   293 ms
Broadband     Payload size   512 KB   292 KB
3G (1Mb/s)    Time           2.91 s   1.86 s
3G (1Mb/s)    Payload size   512 KB   292 KB
2G            Time           9.80 s   6.06 s
2G            Payload size   512 KB   292 KB

Test 3 (small payload)

Network       Metric         JSON     Proto
Broadband     Time           229 ms   233 ms
Broadband     Payload size   302 KB   269 B
3G (1Mb/s)    Time           318 ms   331 ms
3G (1Mb/s)    Payload size   302 KB   269 B
2G            Time           723 ms   808 ms
2G            Payload size   302 KB   269 B

Points to consider

  • If the payload is larger than 300 KB, then one can gain more from a speed and performance perspective.
  • If the application needs to send smaller chunks of data (the IoT case), then one needs to think about whether the system really needs the status in real time, or whether the triggered events can be merged and the payload uploaded after an interval. Ask the question: which is more applicable, sending 40 KB payloads 10 times, or sending 400 KB once?
  • Does the application need object serialization which is platform independent, not human readable, and takes less memory? If yes, then go for Protobuf.
  • I haven't tested serialization and deserialization performance on smaller devices like mobile and IoT ones; that will definitely be one more aspect to consider.
  • It's not limited to REST services which return JSON or XML; one can also use Protobuf for MQ, RPC, etc.

I used Spring Boot for the REST service, bytebuffer.js on the JS side, and the Google Protocol Buffers libraries.


Raspberry Pi 2 – Sonic Pi

I ordered a Raspberry Pi 2 as soon as I came to know it was available for sale 🙂

First impression: it's really fast (if you have used the previous version, you will agree). I tried both Linux distributions, Raspbian and Snappy Ubuntu. I didn't get much time to explore the Ubuntu one, but my kid liked Raspbian the most, because of Sonic Pi and Mathematica.

Sonic Pi is an awesome open-source programming tool for kids, and it's fun while you learn programming. It covers loops, conditionals, concurrency, and data structures.

Raspberry in action :


Snappy Ubuntu on Sony :


A few updates in the Raspberry Pi 2 which I loved:

A new 900MHz quad-core processor, 1GB memory, and a combined 3.5mm audio jack and composite video.

Using a male 3.5mm to 3 RCA AV audio/video male converter cable, you can connect your Raspberry Pi to any TFT screen or to a TV which supports RCA video; the cheaper ones are those used in cars for rear-view cameras.


Or you can opt for a TFT LCD screen



Maven – Quick Start

Many of my friends were facing similar kinds of problems while configuring Maven, so I thought I would cover this topic in a simple form.

Disclaimer: please consider this a quick start guide, not an in-depth one; I have tried to cover the topic in a different way.

How I look at Maven?

Maven is the tool which helps me download and manage all the required APIs (libraries) for my project.

In the old days, if you needed to build a simple application which handles MS Office files (Excel, Word), you had to search for the API, then go to the page on the API's home page which details all the jars required to use it. A few API providers also had a ZIP file containing all the dependent jar files; if there was any other jar dependency, you had to download it from that project's site. Many times you would face compilation or runtime issues because some jar was missing or its version was not compatible.

To fix this, Maven and other build tools came into the picture: you tell the tool the details of the repository and the API you want to use, and the API provider adds the details of the dependencies needed to use their API.

Repository: it's the server which stores all the required jar files. One of them is Nexus, which stores the required files with their binaries, sources, and versions. Nexus provides a facility to add and publish new dependencies, jar files, or any dependent files which will be used by projects. You can publish your own jar files to this server so that they can be reused in other applications.

POM file: a POM file has the details of the dependencies needed to run or use an API or project. This file can also have details about the project, JVM version, team members, etc. The POM also holds plugin configurations; there are multiple plugins available to build, report, measure code quality, analyse dependencies, and deploy.

Configuration and Getting the dependencies

Settings.xml: this file holds the configuration which is common to all Maven projects, like proxy details, repository server details, and authentication details. When you run the mvn command to build a project, Maven searches for the settings file in <Maven folder>/conf/ and <user dir>/.m2/. If both files exist, their contents get merged, with the user-specific settings.xml being dominant.
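For illustration, a minimal settings.xml with a proxy and a repository mirror might look like this; the host names and credentials are placeholders.

```xml
<settings>
  <proxies>
    <proxy>
      <id>corp-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus</id>
      <username>deployer</username>
      <password>secret</password>
    </server>
  </servers>
</settings>
```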

Now Maven knows how to connect to the server/internet (if required) and download the required dependencies, plugins, etc.

POM.xml: in this file one adds the dependency details.
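A typical dependency entry looks like this (commons-collections, which also appears in the commands below, used as the example):

```xml
<dependencies>
  <dependency>
    <groupId>commons-collections</groupId>
    <artifactId>commons-collections</artifactId>
    <version>3.2.2</version>
  </dependency>
</dependencies>
```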


The groupId, artifactId, and version identify the dependency uniquely, and based on these details the required file gets downloaded to the .m2 directory, which is your local repository and from where the file is referenced in your project.

What is SCOPE in dependency tag?

Most of the time I found that people face problems because they haven't tried to understand what this scope is for.

compile – this is the default scope, used if none is specified. Compile dependencies are available in all classpaths. Furthermore, those dependencies are propagated to dependent projects.

provided – this is much like compile, but indicates you expect the JDK or a container to provide it at runtime. It is only available on the compilation and test classpath, and is not transitive.

runtime – this scope indicates that the dependency is not required for compilation, but is for execution. It is in the runtime and test classpaths, but not the compile classpath.

test – this scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases.

system – this scope is similar to provided except that you have to provide the JAR which contains it explicitly. The artifact is always available and is not looked up in a repository.

So based on the scope, jar or package files are referenced in the build process; e.g. provided- and test-scoped dependencies won't be added to your WAR file.

For more details: http://maven.apache.org/pom.html

You can exclude some transitive dependencies which you don't need to be downloaded or included from a child dependency.
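For example, to pull in a library but exclude one of its transitive dependencies (the artifact names here are illustrative):

```xml
<dependency>
  <groupId>org.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```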

Basic commands

mvn clean install
cleans previous build output, builds the project, and installs the artifact into your local repository.

mvn eclipse:configure-workspace
is used to add the classpath variable M2_REPO, which points to your local repository, to Eclipse, and optionally to configure other workspace features.

mvn eclipse:eclipse
generates the Eclipse configuration files.

mvn eclipse:clean
is used to delete the files used by the Eclipse IDE.

Few more commands

mvn dependency:tree -Dverbose -Dincludes=commons-collections

mvn verify

mvn dependency:analyze-only verify

mvn dependency:analyze-duplicate

mvn dependency:analyze-report

mvn site

Quick Start Guide: http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html


Formatting Date in JavaScript

You can use the built-in JavaScript Date object to do date formatting in JS; you can also use locales to format your date. I wanted to show the date as mm/dd/yyyy hh:mm Z, so I used a simple trick: the format I am expecting is the US format, and JS provides an API to convert a Date to a specific locale. So I passed en-US as the locale parameter and used options to get the desired output.

A few of the options to instantiate Date:

var today = new Date(); 
var myDate = new Date(dateString);

where dateString is a string representing an RFC 2822 or ISO 8601 date format, e.g. 12/21/2014 00:00 GMT+5:30

Converting to a Locale

    var myDate = new Date();
    var options = { timeZoneName: 'short', hour: '2-digit', minute: '2-digit', hour12: false };
    var dateStr = myDate.toLocaleString('en-US', options);

In the options, I specified timeZoneName, which selects a short or long timezone name; hours; minutes; and hour12, which specifies whether the time should be shown in 12-hour or 24-hour format.

Other available options

  • weekday : [“narrow” | “short” | “long”]
  • era: [“narrow” | “short” | “long”]
  • year : [“2-digit” | “numeric”]
  • month : [“2-digit” | “numeric” | “narrow” | “short” | “long”]
  • day : [“2-digit” | “numeric”]
  • hour : [“2-digit” | “numeric”]
  • minute : [“2-digit” | “numeric”]
  • second : [“2-digit” | “numeric”]
  • timeZoneName : [“short” | “long”]

And if you need to format a Date to a more specific format, then you can use the methods provided by the JS Date object to get specific date elements like day, year, month, time, etc.

   function padZero(dateArg) {
     if (dateArg < 10) {
       return '0' + dateArg;
     }
     return dateArg;
   }

   function formatDate(date) {
     return date.getUTCFullYear() +
         '-' + padZero(date.getUTCMonth() + 1) +
         '-' + padZero(date.getUTCDate()) +
         ' ' + padZero(date.getUTCHours()) +
         ':' + padZero(date.getUTCMinutes()) +
         ':' + padZero(date.getUTCSeconds());
   }

JSFiddle Code link
