It is apparent from the plots that the multi-threaded model fares better in terms of CPU utilization and also stabilizes at a lower memory footprint. Memory utilization is perfectly constant for both models, which I found a bit unexpected.
Let’s see the plots for a multi-client setup with 5 clients.
The observations here are the same as in the previous case.
The lower memory footprint of the thread-based model was no surprise to me, although I didn’t expect it to have better CPU utilization than the message-passing model. I’m still skeptical about whether it will keep the upper hand with a larger number of clients, and I’ll continue experimenting in case I find something interesting to share.
It should be noted that the measurements for RabbitMQ might not be precise. I’ll update this post if I find a better approach to getting accurate metrics for RabbitMQ.
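One reason RabbitMQ is awkward to measure is that the broker runs inside the Erlang VM, so it shows up in the process table as beam.smp rather than anything named rabbitmq. The sketch below is only an illustration of how one might locate that process with psutil; it is an assumption on my part, not the tooling used for the benchmarks above.

```python
# Hypothetical helper, not the script used for the benchmarks above.
# RabbitMQ runs inside the Erlang VM, so the process is typically named
# "beam.smp"; matching on the command line is more reliable than the name.
import psutil

def find_rabbitmq_process():
    """Return the Erlang VM process hosting RabbitMQ, or None if not found."""
    for proc in psutil.process_iter(['name', 'cmdline']):
        name = (proc.info['name'] or '').lower()
        cmdline = ' '.join(proc.info['cmdline'] or []).lower()
        if 'beam' in name and 'rabbit' in cmdline:
            return proc
    return None
```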
Update: I experimented a bit more with an increased number of clients. For the case with 20 clients, the benchmarks are available here. The performance of both models in this case is terrible (lagging and unsynchronised video), so I’m not including those plots here.
The more compelling case is the one with 10 clients. The performance of both models is comparable to the single-client case, but the resource utilization is quite interesting.
To make more sense of the data, this time I recorded the observations for twice as long as before (40 seconds).
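To give an idea of what recording the observations amounts to, here is a rough sketch of polling a process’s CPU and memory once a second for a fixed duration. Again, this assumes psutil and is purely illustrative, not the script behind the plots.

```python
# Illustrative sampling loop (not the script behind the plots above).
import time
import psutil

def sample_usage(proc: psutil.Process, duration: int = 40, interval: float = 1.0):
    """Poll CPU percentage and resident memory every `interval` seconds."""
    samples = []
    proc.cpu_percent(None)  # prime the counter; the first reading is meaningless
    end = time.time() + duration
    while time.time() < end:
        time.sleep(interval)
        samples.append((proc.cpu_percent(None), proc.memory_info().rss))
    return samples
```

With a 1-second interval, a 40-second run like the one above yields roughly 40 samples per process.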
A few observations -