Richard,
If you dig into the "dirty details" of the benchmark used, it seems the bottom line is on this page - https://azure.microsoft.com/en-us/doc... :
"The number of users is determined by the database size (in scale-factor units). There is one user for every five scale-factor units. Because of the pacing delay, one user can generate at most one transaction per second, on average.
For example, a scale-factor of 500 (SF=500) database will have 100 users and can achieve a maximum rate of 100 TPS. To drive a higher TPS rate requires more users and a larger database."
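The quoted rule is simple enough to sketch as code. Here is a quick illustration of the arithmetic (the function names are mine, not Microsoft's - this just restates the one-user-per-five-SF-units and one-TPS-per-user rules from the docs):

```python
# Sketch of the benchmark sizing rule quoted above (assumptions:
# one user per five scale-factor units; due to the pacing delay,
# each user averages at most one transaction per second).

def users_for_scale_factor(sf: int) -> int:
    """One benchmark user per five scale-factor units."""
    return sf // 5

def max_tps(sf: int) -> int:
    """Max sustainable rate: 1 TPS per user, on average."""
    return users_for_scale_factor(sf)

# The example from the docs: SF=500 -> 100 users -> at most 100 TPS.
print(users_for_scale_factor(500), max_tps(500))
```

So driving a higher TPS rate means scaling both the user count and the database size together, as the docs say.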
So a Basic tier SQL Azure database can sustain 5 users at 720 MB. A Standard tier (S0) database can sustain 10 users at 1 GB.
The tiers are then rated on transactions completed per unit of time:
Basic = transactions per hour, 80th percentile at 2.0 seconds
Standard = transactions per minute, 90th percentile at 1.0 seconds
Today I added a banner of Google Ads to one of my pages. That sent the DTU/CPU% shooting up to 60%, along with several SQL Database Driver errors indicating a problem loading the banner module, and it ended with an HTTP 502 Gateway error. Within a minute the site recovered and the DTU/CPU went back down to less than 5%. But that one spike essentially brought the entire site to its knees. And the admin account was tagged in the errors, so it was not due to guest or user access.
Tonight I am boosting the pricing tier of the database from Basic to Standard, which provides 10 DTUs and 250 GB. I do not need the extra storage space, so I will leave the size at 2 GB. In case others want to know: supposedly you can change the pricing tier online without any disruption in access to the database. Tonight I will find out whether that is true.