It always makes me wince when I see core developers, or any developers, assume that .NET caching methodology is always the best way to optimize an application.
Over the years, I've been involved with some of the largest intranet applications developed here in Canada - one is still arguably among the largest running (roughly 1M burst transactions and 25M pages per day, with more than 10,000 concurrent users). The application had to be built for that burst volume simply because that's when they make their money. Caching was never the issue, nor even a consideration - it was well thought out application design and coding that ensured bottlenecks would not happen. And no, we didn't do what everyone touts as the mark of "great" and fast DNN sites: we didn't toss a cluster of servers at it. Having to continually throw more hardware at an application that can't handle a medium-sized website is poor application design.
Application tuning first and foremost comes from the application or core API set. .NET caching will, in most cases, give you some improvement, but it quickly pushes the bottleneck down into .NET code. For years Microsoft has been telling us that SQL Server's caching algorithms are some of the best, and its TPS ratings indicate as much. Why double cache? Is the prevalent theory that .NET CLR code execution is faster than the tight C++ code SQL Server uses? Caching on the code execution side should always be done carefully and abstracted at the right layers - caching raw data is rather silly when it's already cached in SQL Server anyway. Caching uplevel, after complex business logic, is far more beneficial. Rule #1 of performance optimization: tune the application first, and cache only if you have no other choice, or don't have the $$ or time to do the first part.
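To make that concrete, here's a minimal sketch (in C#, with made-up names like DashboardService and BuildDashboardSummary - this is not DNN core API) of what I mean by caching uplevel: cache the finished result of the expensive business logic, and let SQL Server keep doing what it already does well with the underlying data pages.

using System;
using System.Web;
using System.Web.Caching;

public class DashboardSummary
{
    // Computed / aggregated fields would go here.
}

public class DashboardService
{
    public DashboardSummary GetDashboardSummary(int portalId)
    {
        string key = "DashboardSummary:" + portalId;

        // Check the ASP.NET cache for the already-computed result.
        DashboardSummary summary = HttpRuntime.Cache[key] as DashboardSummary;
        if (summary == null)
        {
            // Expensive part: the queries plus the business rules on top of them.
            summary = BuildDashboardSummary(portalId);

            // Cache the finished object for a short window; SQL Server's own
            // buffer cache already handles the raw data underneath.
            HttpRuntime.Cache.Insert(key, summary, null,
                DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration);
        }
        return summary;
    }

    private DashboardSummary BuildDashboardSummary(int portalId)
    {
        // ...run the queries and apply the business logic here...
        return new DashboardSummary();
    }
}

Caching the summary object after the business logic saves the CPU spent rebuilding it; caching the raw rows before that logic mostly just duplicates what SQL Server is already doing.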
Objectively, the code needs to be analyzed and proper coding methodologies put in place for code-level optimization: ascx controls replaced by pure server controls, and the entire framework vetted for wasteful lines of code execution - and there is a lot of that in the core code.
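As a rough illustration of the ascx point (again my own sketch in C#, not core code), the output of a trivial .ascx can come from a small compiled server control that writes straight to the output stream, skipping the user-control parse step and the extra control-tree overhead:

using System.Web.UI;
using System.Web.UI.WebControls;

// A compiled control renders directly to the response; there is no .ascx to
// parse and no child-control tree to build and databind on every request.
public class CopyrightNotice : WebControl
{
    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("&copy; " + System.DateTime.Now.Year + " Example Inc.");
    }
}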
Performance hurts everyone, whether you host the sites (fewer sites per server), develop for them (I would estimate we've lost well over 20% in development productivity moving from the DNN 2 code base to the DNN 3 code base), or are an end user (general frustration and loss of satisfaction), and it moves completely away from what DotNetNuke's original theme was - a phpnuke replacement.
As someone who has invested literally tens of thousands of hours in DNN module code, I find the performance trend very worrisome. Last time I checked, phpnuke could easily run hundreds if not thousands of sites on a single server. This, more than features, will spell a decline in DNN interest over time. Features are nice, but really, in most cases - ML sites excepted - most users we've had are quite happy with DNN 2.1.2.
If there weren't so much variation across all the DNN code bases now, we'd move our development back to DNN 2; however, with so many fundamental changes, and things at the API level that should work but don't, we're forced to develop under DNN 3.x. That said, we implement sites based on the following theory: sites that need to be fast still use DNN 2.1.2; sites needing more enhanced features, i.e. ML, use DNN 3/4.
Cheers,
Richard