What is Protobuf?
Developed by Google for object serialization, Protobuf (Protocol Buffers) is an open-source library available for multiple languages. It is a binary serialization format: you can think of it as playing the same role as XML, but it is faster, produces smaller payloads, and serializes and deserializes more quickly than most other available approaches.
What is the procedure?
First, one needs to define the object structure. This is done in a .proto file, which declares the required and optional fields of each message.
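For illustration, a minimal .proto file might look like the following (proto2 syntax, which is where the `required` and `optional` keywords come from; the package, option values, and field names are assumptions for this example):

```proto
syntax = "proto2";

package demo;

// Where the generated Java class should live (illustrative values).
option java_package = "com.example.demo";
option java_outer_classname = "PersonProtos";

message Person {
  required int32 id = 1;       // every Person must have an id
  required string name = 2;    // and a name
  optional string email = 3;   // email may be omitted
}
```

Each field is assigned a number (1, 2, 3 above); it is this number, not the field name, that is written on the wire, which is one reason the encoding is so compact.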
Once the .proto file is written, one runs the supplied code generator, `protoc`. This utility is language-specific: given a target language it generates the corresponding code. For Java, running something like `protoc --java_out=. yourfile.proto` generates the Java POJOs used for serialization and deserialization.
Now, using the supplied runtime library, the generated beans/models, and the .proto files, one can serialize or deserialize the response.
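To make the serialization step concrete, the sketch below hand-encodes and then decodes a two-field message in the Protobuf wire format. Real applications would use the classes generated by `protoc` rather than this hand-rolled code; the field numbers and values here are illustrative assumptions.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hand-rolled sketch of the Protobuf wire format for a message like:
//   message Person { required int32 id = 1; required string name = 2; }
// For illustration only; protoc-generated classes do this for you.
public class WireFormatSketch {

    // Varints store 7 bits per byte; the high bit marks "more bytes follow".
    static void writeVarint(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // Reads a varint starting at pos[0], advancing pos[0] past it.
    static int readVarint(byte[] buf, int[] pos) {
        int result = 0, shift = 0;
        while (true) {
            byte b = buf[pos[0]++];
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
            shift += 7;
        }
    }

    // A field key on the wire is (fieldNumber << 3) | wireType.
    static byte[] encode(int id, String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, (1 << 3) | 0);          // field 1, wire type 0 (varint)
        writeVarint(out, id);
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, (2 << 3) | 2);          // field 2, wire type 2 (length-delimited)
        writeVarint(out, nameBytes.length);
        out.write(nameBytes, 0, nameBytes.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] bytes = encode(150, "test");
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) hex.append(String.format("%02x ", b));
        System.out.println(hex.toString().trim());  // 08 96 01 12 04 74 65 73 74

        // Deserialize the same bytes back.
        int[] pos = {0};
        readVarint(bytes, pos);                      // key of field 1
        int id = readVarint(bytes, pos);             // 150
        readVarint(bytes, pos);                      // key of field 2
        int len = readVarint(bytes, pos);            // 4
        String name = new String(bytes, pos[0], len, StandardCharsets.UTF_8);
        System.out.println(id + " " + name);         // 150 test
    }
}
```

Note the entire message fits in 9 bytes; there are no field names or closing tags on the wire, only field numbers and values.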
Why should I use it?
- JSON and XML transmit data along with metadata (field names, tags), which inflates the payload and requires more memory compared to Protobuf. Protobuf produces dense, compact data: compared to XML it takes roughly one third of the size, and compared to JSON roughly one half.
- JSON and XML are human-readable, so data transmitted over the network can be read as-is. If you don't want the response to be readable by the user, Protobuf's binary encoding obscures it (note this is obfuscation, not encryption).
- The consumer of the service needs the .proto file to deserialize the object stream.
- Less CPU and memory are consumed for serialization and deserialization, so processing time on mobile devices is faster compared to JSON.
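A quick way to sanity-check the size claim is to encode the same record both as JSON text and as Protobuf wire bytes and compare. The record, field numbers, and sizes below are illustrative assumptions, not the payloads from the benchmark that follows.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Compares the size of one record serialized as JSON text versus the
// equivalent Protobuf wire bytes. The sample record is an assumption
// made up for this sketch.
public class SizeComparison {

    static void writeVarint(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    static void writeStringField(ByteArrayOutputStream out, int fieldNumber, String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, (fieldNumber << 3) | 2);  // wire type 2: length-delimited
        writeVarint(out, utf8.length);
        out.write(utf8, 0, utf8.length);
    }

    public static void main(String[] args) {
        int id = 150;
        String name = "Arthur Dent";
        String email = "arthur@example.com";

        // JSON repeats every field name on the wire.
        String json = "{\"id\":" + id + ",\"name\":\"" + name
                + "\",\"email\":\"" + email + "\"}";
        int jsonSize = json.getBytes(StandardCharsets.UTF_8).length;

        // Protobuf writes a one-byte key per field instead of a name.
        ByteArrayOutputStream proto = new ByteArrayOutputStream();
        writeVarint(proto, (1 << 3) | 0);  // field 1 (id), wire type 0: varint
        writeVarint(proto, id);
        writeStringField(proto, 2, name);
        writeStringField(proto, 3, email);
        int protoSize = proto.size();

        // JSON: 60 bytes, Protobuf: 36 bytes
        System.out.println("JSON: " + jsonSize + " bytes, Protobuf: " + protoSize + " bytes");
    }
}
```

Even on this tiny record the binary form is noticeably smaller; the gap grows with repeated records, since the field names are repeated in JSON but never appear in Protobuf.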
For the comparison below, I used a web application that sends data via a REST service and a web page that renders that data on screen. I measured the total end-to-end time to render the page using JSON and using Protobuf, so the figures cover serialization, data transmission, deserialization, and DOM rendering. I repeated the comparison with three response sizes and at different network speeds: broadband, 3G, and 2G.
Large payload:

| Metric | Network | JSON | Protobuf |
| --- | --- | --- | --- |
| Time | Broadband | 555 ms | 359 ms |
| Payload size | Broadband | 1.2 MB | 684 KB |
| Time | 3G (1 Mb/s) | 7.93 s | 4.6 s |
| Payload size | 3G (1 Mb/s) | 1.2 MB | 684 KB |
| Time | 2G | 22 s | 13.73 s |
| Payload size | 2G | 1.2 MB | 684 KB |

Medium payload:

| Metric | Network | JSON | Protobuf |
| --- | --- | --- | --- |
| Time | Broadband | 288 ms | 293 ms |
| Payload size | Broadband | 512 KB | 292 KB |
| Time | 3G (1 Mb/s) | 2.91 s | 1.86 s |
| Payload size | 3G (1 Mb/s) | 512 KB | 292 KB |
| Time | 2G | 9.80 s | 6.06 s |
| Payload size | 2G | 512 KB | 292 KB |

Small payload:

| Metric | Network | JSON | Protobuf |
| --- | --- | --- | --- |
| Time | Broadband | 229 ms | 233 ms |
| Payload size | Broadband | 302 B | 269 B |
| Time | 3G (1 Mb/s) | 318 ms | 331 ms |
| Payload size | 3G (1 Mb/s) | 302 B | 269 B |
| Time | 2G | 723 ms | 808 ms |
| Payload size | 2G | 302 B | 269 B |
Points to consider
- If the payload is larger than about 300 KB, the gains in speed and performance are significant.
- If the application sends small chunks of data (as in IoT scenarios), consider whether the system really needs real-time status updates, or whether the triggered events can be merged and the payload uploaded at an interval. Ask which is more suitable: sending a 40 KB payload 10 times, or a 400 KB payload once?
- Does the application need object serialization that is platform-independent, not human-readable, and memory-efficient? If yes, go for Protobuf.
- I haven't tested serialization and deserialization performance on smaller devices such as mobile phones or IoT hardware; that would definitely be one more aspect to consider.
- It's not limited to REST services that return JSON or XML: Protobuf can also be used over message queues (MQ) and RPC.
- Protobuf makes even more sense when the same web application or REST services are consumed by both desktop and mobile devices.
I used Spring Boot for the REST service, bytebuffer.js on the JavaScript side, and the Google Protocol Buffers libraries.