Memory performance of Redis data structures

Redis has some excellent documentation on time complexity of operations on the various data structures. However, it’s harder to find information on memory performance of Redis data structures. In the absence of good documentation on this, I’ve taken a more pragmatic approach and run some benchmarks.

Benchmark

I’ve created a simple Redis benchmark program in Python. It adds a fixed amount of data to each of the Redis data structures under test and then uses the MEMORY USAGE command to determine how much memory each structure consumes. A sketch of the approach appears below.

My benchmark has the following characteristics:

  • Data structures tested: List, Set, Sorted Set (zset) and Hash
  • Data inserted: 1 million 16 byte entries into a single key
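A minimal sketch of such a benchmark, assuming a local Redis instance and the redis-py client (the key name, batch size, and the exact hash and sorted set layouts here are illustrative assumptions, not necessarily what the original program did):

```python
import redis

ENTRIES = 1_000_000
VALUE_SIZE = 16

r = redis.Redis(host="localhost", port=6379)

def fill(structure: str) -> None:
    """Insert ENTRIES fixed-size values into a single key, pipelined in batches."""
    r.delete("bench")
    pipe = r.pipeline(transaction=False)
    for i in range(ENTRIES):
        value = str(i).zfill(VALUE_SIZE)      # a 16 byte entry
        if structure == "list":
            pipe.rpush("bench", value)
        elif structure == "set":
            pipe.sadd("bench", value)
        elif structure == "zset":
            pipe.zadd("bench", {value: i})    # score chosen arbitrarily
        elif structure == "hash":
            pipe.hset("bench", value, "")     # 16 byte field, empty value
        if i % 10_000 == 9_999:               # flush the pipeline in batches
            pipe.execute()
    pipe.execute()

for structure in ("list", "set", "zset", "hash"):
    fill(structure)
    total = r.memory_usage("bench", samples=0)  # MEMORY USAGE bench SAMPLES 0
    print(f"{structure}: {total} bytes total, {total / ENTRIES:.2f} bytes per entry")
```

Pipelining here only speeds up the inserts; the MEMORY USAGE figure measured at the end is unaffected by how the data was loaded.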

Results

Structure     Entry count   Total memory (bytes)   Memory per entry (bytes)   Overhead per entry (bytes)
List          1,000,000     18,135,208             18.14                      2.14
Set           1,000,000     56,388,736             56.39                      40.39
Sorted Set    1,000,000     102,789,424            102.79                     86.79
Hash          1,000,000     64,388,736             64.39                      48.39
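The two derived columns follow directly from the totals; taking the sorted set row as an example:

```python
total_memory = 102_789_424          # MEMORY USAGE result for the sorted set
entries = 1_000_000
per_entry = total_memory / entries  # 102.79 bytes per entry
overhead = per_entry - 16           # 86.79 bytes beyond the 16 byte payload
```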

Results vary slightly for different entry counts and entry sizes. However, the overheads stay broadly in line with the results above.

Memory performance and overhead

If memory usage is a concern, a simple list is by far the best choice. I found it required on average 18 bytes to store a 16 byte value: just over 2 bytes of overhead.

A sorted set is by far the worst. For my data set it averaged 103 bytes of memory per 16 byte value, more than five times the memory a list requires.

Redis sets, sorted sets and hashes offer a great deal of value, with behaviours such as ordering and uniqueness as well as diff, union and intersection operations. However, if memory performance is your main concern, they’re best avoided.
