CarlosRafi wrote
Mitchel:
I don't know if you saw a post a couple of days ago from somebody who was also having performance problems; they found it was related to the log table. I find all this very difficult to comprehend. Honestly, 7,000 or 8,000 records in SQL Server should not make such a dent in the performance of any application; it's like we are talking dBase III Plus here. I don't know what it is, but something is very wrong if an increase of 1,000 records can significantly affect performance. Maybe it is the way the DAL is dealing with those inserts, but SQL Server should not break a sweat dealing with 8,000 records in a table, for any operation.
What do you think?
Carlos
Carlos,
I agree with you 100% that it SHOULDN'T make a difference, but for some reason it does. I work with SQL Server every day in the financial industry and regularly work with tables containing millions of rows without seeing any performance issues; however, with DotNetNuke I have noticed the issues that others are experiencing with the log tables.
My only guess is twofold.
1. Maybe there are too many indexes on this table. Since it is rarely reported on (at least typically), the indexes may not be worth the hit they impose on every insert.
2. As you mentioned, maybe there is an inefficiency in the insert process that causes it to take longer than expected.
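To make point 1 concrete, here is a minimal sketch of how extra indexes tax inserts. It uses Python's built-in sqlite3 rather than SQL Server, and the table and column names (EventLog, LogType, etc.) are hypothetical stand-ins, not DotNetNuke's actual log schema; the general effect, though, is the same in any relational engine, since every index must be updated on each insert.

```python
import sqlite3
import time

def time_inserts(num_indexes, rows=8000):
    """Insert `rows` log-style records into an in-memory table carrying
    `num_indexes` extra single-column indexes; return elapsed seconds."""
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE TABLE EventLog ("
        "LogId INTEGER PRIMARY KEY, LogType TEXT, "
        "UserId INTEGER, PortalId INTEGER, CreatedOn TEXT)"
    )
    cols = ["LogType", "UserId", "PortalId", "CreatedOn"]
    for i in range(num_indexes):
        # Each extra index adds bookkeeping work to every insert below.
        con.execute(f"CREATE INDEX ix_{i} ON EventLog({cols[i % len(cols)]})")
    start = time.perf_counter()
    for i in range(rows):
        con.execute(
            "INSERT INTO EventLog (LogType, UserId, PortalId, CreatedOn) "
            "VALUES (?, ?, ?, datetime('now'))",
            ("ADMIN_ALERT", i % 50, i % 5),
        )
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

bare = time_inserts(0)
indexed = time_inserts(4)
print(f"no extra indexes: {bare:.3f}s, four indexes: {indexed:.3f}s")
```

On most machines the four-index run takes measurably longer, which is why a write-heavy, rarely-queried log table is often better off with few or no secondary indexes.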
I have not researched either of these ideas, but they are some of the first things that come to mind. The only other factor I can think of is the impact of shared SQL Server hosting. With the extra load hosting providers place on these servers, there may not be enough resources for SQL Server to perform as expected. That was actually my first thought, but then I would expect the slowdown to show up across the board, not just with these two tables.