Jan 22 2010
.NET Caching Solutions – A Pre-Velocity Review
Caching is one of the vital aspects of enterprise applications. The ASP.NET Cache is not enough, and it is not even suitable for the service layer, which is now predominantly occupied by WCF. A robust and scalable memory caching solution is required to cache different types of data; I'm a believer in “RAM is disk” for the service layer. The Enterprise Library Caching Application Block (CAB) from the Microsoft patterns & practices team provides in-process memory caching, but it is neither scalable nor particularly robust. Its provider pattern, however, is an elegant model for application design, because it lets you plug in any caching solution.
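To make that provider model concrete, here is a minimal sketch of typical CAB usage; the cache manager name "MyCache", the Customer type and the LoadFromDatabase call are placeholders of mine rather than anything from this evaluation, and the backing store behind the cache manager is whatever the configuration plugs in:

```csharp
using Microsoft.Practices.EnterpriseLibrary.Caching;

public class CustomerLookup
{
    // CacheFactory resolves the cache manager named in app.config/web.config;
    // which backing store sits behind it (in-memory, database, or a custom
    // provider) is purely a configuration concern.
    private readonly ICacheManager cache = CacheFactory.GetCacheManager("MyCache");

    public Customer Find(string id)
    {
        // GetData returns null when the key is absent or expired.
        Customer customer = cache.GetData(id) as Customer;
        if (customer == null)
        {
            customer = LoadFromDatabase(id); // hypothetical data-access call
            cache.Add(id, customer);
        }
        return customer;
    }

    private Customer LoadFromDatabase(string id)
    {
        return new Customer { Id = id }; // placeholder for real data access
    }
}

public class Customer
{
    public string Id { get; set; }
}
```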
Recently, I needed to find a suitable caching solution for our WCF-based service layer. Although Microsoft has yet to release “Windows Server AppFabric” to the .NET/WCF world (which may finally fulfill a decade-old expectation of .NET folks), there are already some caching solutions available from third parties.
I did a small evaluation on the following caching solutions:
- NCache Express
- Shared Cache
- Memcached
- Enterprise Library 4.1
Disclaimer: The test was done at a very small scale and did not include scalability factors. Memory and CPU consumption were not measured.
NCache supports different caching topologies and types, with many features such as an Object Query Language and remote clustering; I took its Express edition for this evaluation. Shared Cache is written 100% in C#, offers distributed caching with three different topologies, and supports the WCF DataContractSerializer in addition to binary serialization. Folks from the *nix world know the de facto caching solution, memcached. It has a Win32/64 avatar too, and some open-source managed client libraries are also available. I used the Danga Interactive 32-bit build of memcached 1.2.6 and the BeIT Memcached managed client; a 64-bit Windows build is also available.
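For reference, a minimal sketch of how items can be pushed into memcached through the BeIT managed client; the instance name "perf-test" and the payload are placeholders, and the Setup/GetInstance/Set/Get calls should be checked against the client's own samples:

```csharp
using System;
using BeIT.MemCached;

class MemcachedSmokeTest
{
    static void Main()
    {
        // Register a named client pointing at the local memcached service
        // (default port 11211). Setup/GetInstance are the BeIT entry points;
        // verify the exact signatures against the client library you download.
        MemcachedClient.Setup("perf-test", new[] { "localhost:11211" });
        MemcachedClient client = MemcachedClient.GetInstance("perf-test");

        // memcached stores raw bytes; the managed client serializes the value,
        // so cached types must be primitive or [Serializable].
        client.Set("item-1", "hello from memcached");
        string value = client.Get("item-1") as string;
        Console.WriteLine(value);
    }
}
```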
I did some small-scale performance testing on the above solutions. In addition, I used the Enterprise Library 4.1 CAB provider model and implemented a provider for each of them (a sketch of one such provider follows the list below). So, the following solutions were tested:
- Enterprise Library 4.1 CAB
- Memcached
- Enterprise Library 4.1 based Memcached
- Shared Cache
- Enterprise Library 4.1 based Shared Cache
- NCache Express
- Enterprise Library 4.1 based NCache Express
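The CAB extension point I wired these solutions into is its backing-store contract. The sketch below shows the general shape of a memcached-backed provider; the IBackingStore member list is reproduced from memory and should be verified against the EntLib 4.1 assemblies, and the BeIT client calls are the same assumed ones shown earlier:

```csharp
using System;
using System.Collections;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations;

// Adapter that pushes CAB cache items into memcached through the BeIT client.
public class MemcachedBackingStore : IBackingStore
{
    private readonly BeIT.MemCached.MemcachedClient client =
        BeIT.MemCached.MemcachedClient.GetInstance("perf-test");

    // memcached does not expose a cheap per-client item count.
    public int Count
    {
        get { return 0; }
    }

    public void Add(CacheItem newCacheItem)
    {
        client.Set(newCacheItem.Key, newCacheItem.Value);
    }

    public void Remove(string key)
    {
        client.Delete(key);
    }

    public void UpdateLastAccessedTime(string key, DateTime timestamp)
    {
        // memcached maintains its own LRU metadata; nothing to do here.
    }

    public void Flush()
    {
        // memcached's flush_all could be issued here via the client if required.
    }

    public Hashtable Load()
    {
        // memcached cannot enumerate its contents, so start with an empty map.
        return new Hashtable();
    }

    public void Dispose() { }
}
```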
The following three tests were run against every solution, 21 tests in total (a sketch of the timing harness follows the list):
- Adding 1000 items & Getting these items
- Adding 1000 items & Getting 5th element
- Adding 1000 items & Getting 995th element
The testing was performed on an Intel Pentium 4 3.2 GHz with 2 GB RAM, running Windows XP SP2.
Let us see the results, and the pros and cons, of these solutions.
Enterprise Library 4.1
Not surprisingly, it shows excellent performance due to in-process memory caching, but it is not suitable for enterprise scale. It supports primitive and Serializable objects.
| EntLib 4.1 | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 21 | 13 | 23 | 12 | 17.25 |
| Getting 1000 items (in ms) | 9 | 5 | 9 | 5 | 7 |
| Get 5th item (1000 items) (in ms) | 4 | 3 | 3 | 2 | 3 |
| Get 995th item (1000 items) (in ms) | 4 | 3 | 3 | 2 | 3 |
The “R” denotes “Run”; I did 4 runs for every test.
Memcached
Since the OS calls used by *nix memcached and Windows memcached differ, some bloggers cautioned that memcached on Windows might not be a win. I could not find any degradation in my testing, though. It has a small memory footprint and low CPU usage, and it supports primitive and Serializable objects. However, the Win32 service version does not have flexible configuration.
| MemCached | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 166 | 160 | 203 | 164 | 173.25 |
| Getting 1000 items (in ms) | 154 | 140 | 175 | 138 | 151.75 |
| Get 5th item (1000 items) (in ms) | 5 | 5 | 5 | 5 | 5 |
| Get 995th item (1000 items) (in ms) | 5 | 5 | 5 | 5 | 5 |
| EntLib MemCached | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 178 | 164 | 202 | 193 | 184.25 |
| Getting 1000 items (in ms) | 5 | 3 | 6 | 3 | 4.25 |
| Get 5th item (1000 items) (in ms) | 2 | 0 | 0 | 0 | 0.5 |
| Get 995th item (1000 items) (in ms) | 2 | 0 | 0 | 0 | 0.5 |
Shared Cache
Like the Enterprise Library 4.1 CAB, this solution uses a provider-like model and ships with IndexusDistributionCache, which supports the WCF DataContractSerializer in addition to the binary serializer.
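Before the numbers, a rough usage sketch; I am reconstructing the IndexusDistributionCache entry point and its Add/Get members from memory and the Shared Cache samples, so treat the names as assumptions to verify:

```csharp
using System;
using SharedCache.WinServiceCommon.Provider.Cache;

class SharedCacheSample
{
    static void Main()
    {
        // The static SharedCache entry point distributes keys across the server
        // nodes listed in the configuration; namespace and member names here are
        // recalled from the Shared Cache samples, so verify before relying on them.
        IndexusDistributionCache.SharedCache.Add("item-1", "hello from Shared Cache");
        string value = IndexusDistributionCache.SharedCache.Get<string>("item-1");
        Console.WriteLine(value);
    }
}
```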
| SharedCache | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 4723 | 3441 | 3904 | 3592 | 3915 |
| Getting 1000 items (in ms) | 3357 | 3633 | 4582 | 3722 | 3823.5 |
| Get 5th item (1000 items) (in ms) | 12 | 14 | 17 | 13 | 14 |
| Get 995th item (1000 items) (in ms) | 13 | 14 | 13 | 14 | 13.5 |
| EntLib SharedCache | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 3649 | 3612 | 4379 | 4992 | 4158 |
| Getting 1000 items (in ms) | 5 | 2 | 5 | 3 | 3.75 |
| Get 5th item (1000 items) (in ms) | 3 | 0 | 0 | 0 | 0.75 |
| Get 995th item (1000 items) (in ms) | 3 | 0 | 0 | 0 | 0.75 |
NCache
This is a more mature solution, tailored for the .NET world. The best part is that the cached object does not necessarily have to be Serializable.
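A rough usage sketch follows; the cache id "mycache" is a placeholder for whatever NCache Express registers on your machine, and the InitializeCache/Insert/Get calls should be double-checked against the NCache documentation for your edition:

```csharp
using System;
using Alachisoft.NCache.Web.Caching;

class NCacheSample
{
    static void Main()
    {
        // Connect to the cache id configured in NCache Express.
        Cache cache = NCache.InitializeCache("mycache");

        cache.Insert("item-1", "hello from NCache");
        string value = cache.Get("item-1") as string;
        Console.WriteLine(value);

        cache.Dispose();
    }
}
```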
| NCache | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 365 | 282 | 382 | 296 | 331.25 |
| Getting 1000 items (in ms) | 257 | 248 | 287 | 234 | 256.5 |
| Get 5th item (1000 items) (in ms) | 12 | 8 | 10 | 8 | 9.5 |
| Get 995th item (1000 items) (in ms) | 12 | 8 | 8 | 8 | 9 |
| EntLib NCache | R1 | R2 | R3 | R4 | Avg |
|---|---|---|---|---|---|
| Adding 1000 items (in ms) | 317 | 277 | 367 | 278 | 309.75 |
| Getting 1000 items (in ms) | 5 | 2 | 5 | 3 | 3.75 |
| Get 5th item (1000 items) (in ms) | 2 | 0 | 0 | 0 | 0.5 |
| Get 995th item (1000 items) (in ms) | 3 | 0 | 0 | 0 | 0.75 |
The results show that Memcached came out ahead for its consistent performance and simplicity. The surprising fact is that all of the above solutions performed better through the Enterprise Library 4.1 CAB provider model than in their default mode.