Well... when the electrons decide to quit flowing across the silicon... what are you going to do?
This morning we experienced several database server failures. Each time we brought the server back up, it would work fine for a few minutes and then suddenly decide to go blue side up. What was nice to see is that in each case, our hosts MaximumASP were on top of the issue before I even made the call (Thanks, guys!). They had plenty of spare equipment; all we had to do was narrow down where the issue was. Since we run on a RAID 5 array, and keep a software-based mirror to boot, we weren't too worried about loss of data... just downtime.
The guys finally traced the issue to a faulty backplane. It's the first time in my career that I've ever seen one of those go down, but I've learned never to be surprised in the technology business. The best news was that it was a simple matter to swap in a new backplane, reinstall the existing drives, and bring the server back around.
We had some additional strange issues reconnecting to the server after it came back up... we're still trying to understand those, but ultimately we believe it was the RAID array evening itself out. After rebooting the DB server, everything is back to normal.
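For what it's worth, those reconnection hiccups looked like transient refusals while the array was still settling. A minimal sketch of the kind of retry-with-backoff wrapper that can smooth that window over (the connect callable and timings here are hypothetical, not our actual setup):

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0):
    """Keep calling connect() until it succeeds, backing off between tries.

    `connect` stands in for whatever function your DB driver uses to open
    a connection (an assumption for this sketch). Retries paper over
    short-lived refusals, e.g. while a RAID array is still rebuilding.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:  # narrow this to your driver's error type
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Connect attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
```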
Sorry for the outage today. We are moving into MaximumASP's new datacenter facilities within the next few weeks and should be implementing more failover protection.
Cheers!