Craig M wrote:
Jason Peterson wrote:
Craig M wrote:
I am finding the performance of 6.1.5 really good, and I think the client resource management is a great idea.
However, I am running a new 6.1.5 website with no extra modules installed and it is using up to 500MB of memory on my VPS. DNN 5.x.x installs on the same server seem to use around 200MB max, and they are much more popular websites. Can anyone explain the huge memory increase using 6.1.5 or suggest any ideas to lower memory usage?
What do you mean by 'performance is really good'? Can you offer up some metrics so we know what the good performance is relative to? I did see a performance fix for 6.1.5 in Gemini, but no clues about how performance was affected by either the defect or the fix. I do intend to set up a test site and evaluate.
Using 6.1.5 with client resource management I am getting a Google PageSpeed score of 91 and a YSlow score of 82. The best scores using these tools on any of my 5.x.x websites are a Google PageSpeed score of 79 and a YSlow score of 75 so I think this is a good improvement. Load times using Firebug/YSlow with 6.1.5 also seem to be marginally quicker.
However, as mentioned in my previous post, memory usage has more than doubled using 6.1.5 versus 5.x.x, which is not good. I have a VPS with 2GB of memory and 7 DNN websites, and it has been working flawlessly for a couple of years running 5.x.x sites (even under heavy usage memory gets to about 1.5GB max). But with the first website I have set up using 6.1.5 consuming so much memory (around 500MB), if I upgrade all my other sites to 6.1.5 I will be maxing out my 2GB of RAM in no time. So if anyone can explain the huge memory increase with 6.1.5 or suggest ideas to lower memory usage, that would be great.
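For anyone wanting to confirm figures like these per site, below is a minimal sketch, assuming Python and the psutil package are available on the Windows VPS. The w3wp.exe process name and the app-pool argument in the command line are typical IIS conventions rather than anything DNN-specific, so adjust as needed for your setup.

# Minimal sketch: report working-set memory of IIS worker processes (w3wp.exe).
# Assumes Python + psutil on the Windows VPS; the process name and the app-pool
# argument in the command line are standard IIS conventions, not DNN specifics.
import psutil

total_mb = 0.0
for proc in psutil.process_iter(attrs=["name", "cmdline", "memory_info"]):
    info = proc.info
    if (info["name"] or "").lower() != "w3wp.exe" or info["memory_info"] is None:
        continue
    rss_mb = info["memory_info"].rss / (1024 * 1024)
    label = " ".join(info["cmdline"] or []) or "w3wp.exe"
    print(f"{label}: {rss_mb:.0f} MB")
    total_mb += rss_mb

print(f"Total w3wp working set: {total_mb:.0f} MB")

Running this before and after upgrading a site would at least show whether the extra memory sits in that site's worker process or somewhere else on the box.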
Craig,
Thanks for posting numbers. Don't get carried away with the YSlow and Google PageSpeed scores; they can be deceiving and should be considered tools rather than performance metrics, as they do not measure and rate overall performance. For example, a web page can take 20 seconds to load and still get an excellent score, because these tools simply rate how well certain best practices are followed, such as limiting HTTP requests, using compression, etc. They do not measure the turnaround time for the server to receive a request and send back HTML, so they will not penalize poor server-side efficiency such as inefficient queries and code.

Also realize that, to some degree, client-side performance issues are mitigated by modern browsers and protocols. For instance, where a site has many large JavaScript files, the browser is intelligent enough to cache those files so it does not incur the full payload hit on each request. That said, an HTTP request can still be made as the browser checks whether the file has changed.

So all the client-side performance work done with the client resource management API is very encouraging, but in real-world usability terms server-side efficiency is equally important and has greater potential for negative impact: you could easily write an inefficient query or piece of code that adds a few seconds to the page load, whereas it would be much harder to create a delay that is visible to the user via client-side elements.
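To illustrate the point about the browser still checking whether a cached file has changed, here is a minimal sketch in Python using the requests library; the URL is just a placeholder. It mimics the conditional GET a browser issues when revalidating a cached script: a 304 response carries no body, so the payload is avoided, but the round trip still happens.

# Minimal sketch of a conditional GET, roughly what a browser does when it
# revalidates a cached .js file. The URL is a placeholder, not a real DNN site.
import requests

url = "http://example.com/js/dnn.js"

first = requests.get(url)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")
print("first request:", first.status_code, len(first.content), "bytes")

# Revalidate: send the validators back; a 304 means "use your cached copy".
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

second = requests.get(url, headers=headers)
print("revalidation:", second.status_code, len(second.content), "bytes")
# 304 Not Modified => no payload transferred, but an HTTP round trip still occurred.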
I did test a fresh install of 6.1.5. Performance looked quite poor, so I stripped out the mega menu (I am not sure about its performance, since I don't use the menus that ship) and also stripped out all server controls such as login, search, etc., and many of the control registrations. The end result was that my stripped-down 6.1.3 page (though the installation itself is not stripped down; it has quite a few custom modules and very large amounts of data) took about 250+ ms to retrieve, while the 6.1.5 fresh install took about 300+ ms. Both sites use public authenticated cacheability, IIS 7 dynamic and static compression, 'Page' page state persistence, and heavy caching. Realize these metrics are relative to my distance from my server and network latency, so the actual millisecond values cannot be compared to someone else's results; however, the percentage difference between the two does represent a performance decrease that would likely be replicated in further testing by others.
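For anyone who wants to reproduce this kind of comparison, below is a minimal sketch in Python that averages time-to-first-byte over several requests, which is closer to the "time to retrieve the page" figure above than a PageSpeed score. The two URLs are placeholders, and as noted the absolute numbers will depend on your own network latency; only the relative difference between the two sites is meaningful.

# Minimal sketch: average time-to-first-byte for two sites, to compare
# server-side turnaround. URLs are placeholders; absolute values depend on
# network latency, so only the relative difference is meaningful.
import time
import urllib.request

def avg_ttfb_ms(url, runs=10):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read(1)  # stop timing once the first body byte arrives
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

for url in ("http://site-on-6.1.3.example/", "http://site-on-6.1.5.example/"):
    print(f"{url}: {avg_ttfb_ms(url):.0f} ms average TTFB")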
In conclusion, and taken together with the results posted by Will in his blog showing page load times going all the way back to 5.0, it is clear there are insufficient performance control mechanisms in the DNN workflow to mitigate the natural performance deterioration that results from constant additions and changes by many parties doing their own thing. I think this must be the case, as opposed to DNN simply not giving a rip about performance. This is why on my team every single task, no matter the size, passes through me as it comes and goes. There is a lot of room for improvement, and I think the best solution is a multi-angle approach of a) creating an initial set of parent and resulting sub-tasks to investigate and improve performance, and b) implementing a workflow mechanism to ensure performance either improves or remains the same across future builds.

The worrying thing to me about the load times is not necessarily the speed so much as what the speed tells us about the server resources utilized for a single request, and what that means for scalability. I have been running a 5.6.2 site that gets a few hundred thousand page views/month and it has performed acceptably, however these are still very small numbers. We are about to launch a major public site a couple of years in the works where at least a million uniques/month and beyond are a given after a short time, and my devs and I have major concerns about scalability. I am quite sure we will need to get into the core and fix the issues ourselves; we probably won't upgrade at this point, but it really would be nice to see some of this logged in Gemini so we could easily apply fixes to our version of the core.