Performance impact of LiveData stream mapping

Current software architectures, however different, have one thing in common: the programmer is advised to keep coupling between modules to a minimum. One way to achieve this is to use proprietary (module-specific) classes to represent data. Doing so brings compelling advantages: new modules can be developed out of order, merge conflicts are minimised, the programmer is not steered towards solutions that stop working after the next update somewhere else, and reusing a module is usually just a matter of updating its interface.

For Android apps, however, this may seem like overkill. Unlike in business applications, the data model tends to be small and most of it is needed in the presentation layer anyway; whole-module reuse is not all that likely, with the exception of user handling; and most of the logic that can change is handled server-side. So why write mapping methods for basically identical data classes? And why increase app complexity when the user expects the lowest possible response times and the devices the app must run on may have the performance of a calculator? But is the mapping impact even measurable? This experiment aims to answer that last question.


These days, the architecture Google recommends for Android app development is Model-View-ViewModel (MVVM) built with Architecture Components: the ViewModel component serves as the view model, an Activity or Fragment as the view, and a custom implementation as the model. This defines two logical boundaries for data mapping. A specific app design may well contain more, but the principle stays the same. For the purpose of this experiment, a simple app with one Activity and one ViewModel was created.

For simplicity, a separate model class was not created. The primary measured quantity was the time between delivery of a new data batch in the original LiveData stream and in the stream produced by a chain of Transformations.map(LiveData&lt;X&gt; source, Function&lt;X, Y&gt; mapFunction) operations. A second set of measurements was taken to show the difference in allocated memory between the app with and without the transformed stream.
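The timing measurement can be sketched without the Android framework. In this plain-Kotlin stand-in (all names are hypothetical), Stream plays the role of LiveData and mapStream the role of Transformations.map; a timestamp is taken in an observer on each stream, and the difference is the mapping delay that was measured:

```kotlin
// Minimal stand-in for LiveData: observers are notified synchronously on post().
class Stream<T> {
    private val observers = mutableListOf<(T) -> Unit>()
    fun observe(observer: (T) -> Unit) { observers += observer }
    fun post(value: T) = observers.forEach { it(value) }
}

// Stand-in for Transformations.map: forwards each value through f into a new stream.
fun <T, R> Stream<T>.mapStream(f: (T) -> R): Stream<R> {
    val out = Stream<R>()
    observe { out.post(f(it)) }
    return out
}

fun main() {
    val source = Stream<List<Int>>()
    var sourceSeenAt = 0L
    var mappedSeenAt = 0L

    // Timestamp the original stream first, then the mapped one.
    source.observe { sourceSeenAt = System.nanoTime() }
    val mapped = source.mapStream { list -> list.map { it + 1 } }
    mapped.observe { mappedSeenAt = System.nanoTime() }

    source.post(List(1_000) { it })
    println("Mapping delay: ${(mappedSeenAt - sourceSeenAt) / 1_000} µs")
}
```

In the real app the same idea applies, except LiveData delivers values on the main thread and the timestamps are taken inside Observer callbacks.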

Data for the test was generated as a thousand-item list of objects, each consisting of a random string and five random integers. In each mapping function a new object was created with one of the integers removed. Test runs were performed on the Android Emulator and on a set of physical devices to minimise the impact of device performance.
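The data generation and the per-batch mapping described above can be sketched as follows. The class and function names are hypothetical; the shape of the payload (one random string, five random integers, one integer dropped in the mapped class) follows the experiment description:

```kotlin
import kotlin.random.Random
import kotlin.system.measureNanoTime

// Source payload: a random string plus five random integers.
data class SourceItem(val name: String, val a: Int, val b: Int, val c: Int, val d: Int, val e: Int)

// Mapped payload: same object with one integer (e) removed.
data class MappedItem(val name: String, val a: Int, val b: Int, val c: Int, val d: Int)

fun randomString(length: Int): String {
    val chars = ('a'..'z') + ('A'..'Z') + ('0'..'9')
    return (1..length).map { chars.random() }.joinToString("")
}

fun generateBatch(size: Int = 1_000): List<SourceItem> =
    List(size) {
        SourceItem(randomString(32),
                   Random.nextInt(), Random.nextInt(), Random.nextInt(),
                   Random.nextInt(), Random.nextInt())
    }

// The work done inside each Transformations.map step: build a new object
// for every list element, dropping one field.
fun mapBatch(batch: List<SourceItem>): List<MappedItem> =
    batch.map { MappedItem(it.name, it.a, it.b, it.c, it.d) }

fun main() {
    val batch = generateBatch()
    val elapsedNs = measureNanoTime { mapBatch(batch) }
    println("Mapped ${batch.size} items in ${elapsedNs / 1_000} µs")
}
```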


Allocated memory with non-mapped streams

Allocated memory with mapped streams

The test results showed the same consistent pattern across all devices. The delay between reception of a new data set in the original and in the mapped stream was on the order of milliseconds (on the emulator and newer devices) to tens of milliseconds (on low-end devices). Memory allocation was equal, even with a large string as part of the data objects. The results show that stream mapping definitely has some performance impact, but it can be considered negligible.

More complex mapping methods would increase the time difference and perhaps change memory allocation for the duration of processing, but that would most likely mean some part of the logic is included in them - so this processing would have to take place somewhere anyway.

My conclusion is that trying to optimise performance by passing the same object all over the app is not worth the trouble. If you are planning mobile application development, proprietary data classes are the way to go.