I have been using DNN as a platform for web applications for years. We have run versions 3 through 7 and have always found DNN's performance to be only just quick enough.
Through extensive application and hardware tuning we have always managed to make the performance acceptable, but I kept having the feeling something was wrong...
In the application we host in a DNN portal we have used object caching with great success for years, in a setup where DNN has its own cache and our application has another. Recently we decided to integrate the two and create a new caching provider in DNN that can be used by both DNN and our own application. This was actually quite easy to create thanks to some excellent examples that communicate with a Redis cache server.
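For reference, the core of such a provider is little more than a subclass of DNN's CachingProvider base class that delegates to a Redis client. Below is a minimal sketch, assuming the StackExchange.Redis client and JSON serialization; the RedisCachingProvider name and connection string are my own, and error handling, cache dependencies and sliding expiration are omitted:

```csharp
using System;
using System.Web.Caching;
using DotNetNuke.Services.Cache;
using Newtonsoft.Json;
using StackExchange.Redis;

public class RedisCachingProvider : CachingProvider
{
    // One multiplexer for the whole app domain (example connection string)
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("redis-server:6379"));

    private static IDatabase Cache
    {
        get { return Connection.Value.GetDatabase(); }
    }

    public override object GetItem(string cacheKey)
    {
        // Every call crosses the network and deserializes the payload,
        // which is exactly why duplicate lookups per postback hurt.
        RedisValue value = Cache.StringGet(cacheKey);
        return value.HasValue
            ? JsonConvert.DeserializeObject(value)
            : null;
    }

    public override void Insert(string cacheKey, object itemToCache,
        DNNCacheDependency dependency, DateTime absoluteExpiration,
        TimeSpan slidingExpiration, CacheItemPriority priority,
        CacheItemRemovedCallback onRemoveCallback)
    {
        TimeSpan? ttl = absoluteExpiration > DateTime.MinValue
            ? absoluteExpiration - DateTime.UtcNow
            : (TimeSpan?)null;
        Cache.StringSet(cacheKey, JsonConvert.SerializeObject(itemToCache), ttl);
    }

    public override void Remove(string cacheKey)
    {
        Cache.KeyDelete(cacheKey);
    }
}
```

The provider is then registered in web.config like any other DNN caching provider, so neither DNN nor the application code needs to change.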
When we installed this on our test web server, where the cache runs on another machine in the network, the DNN (7.4.2) portal slowed to a crawl. A single menu click took two minutes to render. Something was terribly wrong.
The advantage of having a separate cache server is that you can easily monitor all requests to it. I was rather surprised by what I saw...
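On the Redis side, monitoring is a one-liner: the MONITOR command streams every command the server receives. Filtering for reads shows every key DNN requests during a single postback (the host name below is an example):

```shell
# Stream every command the cache server receives and keep only the reads
redis-cli -h redis-server MONITOR | grep '"GET"'
```

Triggering one postback in the portal while this runs makes the duplicate lookups immediately visible. Note that MONITOR itself costs the server some throughput, so use it on a test environment.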
Every postback of a DNN page requests hundreds of mostly duplicate keys from the cache! I understand that getting an object from memory on a web server is much quicker than retrieving it from the database, but even so, requesting a single object more than 20 times from the cache on a single postback is ridiculous. I hope the developers of the DNN core understand that getting an object from cache still requires it to be deserialized, and when the cache server runs on another machine it has to be transferred across the network as well.
The solution is pretty simple if you stick to some guidelines...
1. Do not get an object from cache inside a loop if the cache key does not change within the loop.
2. Store objects retrieved from cache in variables that live as long as the postback.
3. Prevent loading objects from cache for functionality that is not active (the online user list when it is turned off, or Google Analytics when no tracking code is provided).
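Guideline 2 in particular is cheap to apply in ASP.NET, because HttpContext.Current.Items already provides a dictionary that lives exactly as long as one request. A minimal sketch of a request-scoped wrapper; the RequestScopedCache name and the fallback delegate are my own invention, while DataCache.GetCache is the existing DNN call:

```csharp
using System;
using System.Web;
using DotNetNuke.Common.Utilities;

public static class RequestScopedCache
{
    // Fetches an object from the DNN cache at most once per request;
    // later calls within the same postback read from HttpContext.Items
    // instead of hitting the (possibly remote) cache server again.
    public static T Get<T>(string cacheKey, Func<T> fallback) where T : class
    {
        var items = HttpContext.Current.Items;
        if (items.Contains(cacheKey))
        {
            return (T)items[cacheKey];
        }

        var value = (T)DataCache.GetCache(cacheKey) ?? fallback();
        items[cacheKey] = value;
        return value;
    }
}
```

With this in place, twenty lookups of the same key during one postback cost one round trip to Redis instead of twenty.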
But who can help analyse and improve the DNN Source code?
When I look at the source of 7.4.2 I find some pointers on where to start looking. For example:
DNN Platform\Library\Common\Utilities\DataCache.cs
from line 566:
public static TObject GetCachedData<TObject>(CacheItemArgs cacheItemArgs, CacheItemExpiredCallback cacheItemExpired)
{
    // declare local object and try and retrieve item from the cache
    return GetCachedData<TObject>(cacheItemArgs, cacheItemExpired, false);
}

internal static TObject GetCachedData<TObject>(CacheItemArgs cacheItemArgs, CacheItemExpiredCallback cacheItemExpired, bool storeInDictionary)
{
    object objObject = storeInDictionary
                           ? GetCachedDataFromDictionary(cacheItemArgs, cacheItemExpired)
                           : GetCachedDataFromRuntimeCache(cacheItemArgs, cacheItemExpired);

    // return the object
    if (objObject == null)
    {
        return default(TObject);
    }
    return (TObject)objObject;
}
The storeInDictionary parameter is the interesting one. In the public overload it defaults to false. Setting it to true would store the object in a local in-memory dictionary instead of the external cache. Not a real solution, but it might help to improve performance.
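To illustrate, a caller inside the Library assembly (the overload is internal, so it is not reachable from module code) could opt in to the dictionary path like this. The cache key, timeout and loader callback below are made up for the example:

```csharp
using System.Web.Caching;
using DotNetNuke.Common.Utilities;
using DotNetNuke.Services.Cache;

// Inside DNN Platform\Library the internal overload is reachable, so an
// item can be kept in the in-process dictionary instead of going through
// the external caching provider on every lookup:
var settings = DataCache.GetCachedData<MySettings>(
    new CacheItemArgs("MySettingsKey", 20, CacheItemPriority.Default),
    args => LoadSettingsFromDatabase(args),  // hypothetical loader
    true);                                   // storeInDictionary
```

Of course this sidesteps cache invalidation across servers in a web farm, which is presumably why it is not the default, so it should only be used for data that is safe to hold per process.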
I hope this helps find solutions for making DNN a faster-running CMS!