In this season of recession and financial meltdowns, a common question seems to be, "How big is ‘too big to fail'?" Titans of the financial industry made big bets with lots of risk and, when those bets didn't pan out, American society as a whole had to pay the price. But, that aside, the very scale of our financial system, by just about every metric, has reached amazing heights, whether that's the number of financial transactions per second, the number of traders, the number of funds traded, or the amount of money changing hands—you name it.
This might seem like a tangent to the point of databases in general and SQL Server in particular, but there are actually quite a few similarities in my mind. My inspiration for this lead-in paragraph was an advertisement from Western Digital for a new one-terabyte hard disk for only $103. Yes, that's right—one terabyte for just over $100. It doesn't seem that long ago that we were paying thousands of dollars for a dishwasher-sized hard disk that held 10 megabytes.
It seems like everything these days has to scale to enormous heights—financial systems, web systems, storage systems, and database systems. The good news is that SQL Server is well poised to handle even the largest applications and data sets. If you're looking for information about scaling SQL Server into a size best described using scientific notation, I encourage you to start at www.sqlcat.com. This is the website of the Microsoft SQL Server Customer Advisory Team. It's hard to find people who are more talented and capable than this group of individuals. They are called in when all other forms of support are exhausted—the team assigned to the most difficult customer scenarios and problems. Based on these experiences, they write many of the case studies and white papers found on technet.microsoft.com and msdn.microsoft.com.
Just recently, SQLCAT published an excellent blog post about preventive maintenance on multi-terabyte systems, at sqlcat.com/technicalnotes/archive/2009/08/13/dbcc-checks-and-terabyte-scale-databases.aspx. In that same vein, SQLCAT wrote a new white paper about performing multi-terabyte backups across the network, at sqlcat.com/whitepapers/archive/2009/08/13/a-technical-case-study-fast-and-reliable-backup-and-restore-of-a-vldb-over-the-network.aspx.
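To give you a flavor of the techniques those resources discuss, here is a minimal T-SQL sketch of a lighter-weight consistency check and a striped network backup. The database name and the \\backupserver shares are hypothetical placeholders; treat this as a starting point under your own maintenance windows, not a prescribed SQLCAT recipe.

```sql
-- On a multi-terabyte database, a full DBCC CHECKDB may not fit the
-- maintenance window. PHYSICAL_ONLY limits the run to allocation and
-- page-level checks; pair it with periodic full logical checks.
DBCC CHECKDB (N'VeryLargeDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Backing up a VLDB across the network: striping the backup across
-- multiple destination files lets SQL Server write them in parallel,
-- and CHECKSUM verifies page integrity as the backup is written.
BACKUP DATABASE VeryLargeDB
TO DISK = N'\\backupserver\share1\VeryLargeDB_1.bak',
   DISK = N'\\backupserver\share2\VeryLargeDB_2.bak'
WITH CHECKSUM, STATS = 10;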
Another great source for information is the writing of Paul Randal, a former lead of the Microsoft SQL Server Storage Engine team, now an independent consultant and trainer. A good example of Paul's VLDB wisdom can be found at www.sqlskills.com/blogs/paul/post/CHECKDB-From-Every-Angle-Consistency-Checking-Options-for-a-VLDB.aspx. I strongly encourage you to put his blog on your must-read list.
You can also find good information from the most prominent SAN vendors, such as EMC. In many cases, their studies and white papers offer wisdom that is universal. Often, you can take advantage of their advice even if you're not using their products. (On the other hand, don't forget that SANs require practically the same amount of tuning, balancing, and administration as any other high-end I/O subsystem.) Check out their SQL Server-related information at www.emc.com/solutions/application-environment/microsoft/solutions-for-sql-server-business-intelligence.htm.
The big keep getting bigger in America, whether they are simple database systems or huge multinational financial systems. Another less often remembered truism is that, sooner or later, they will require a tune-up, reconfiguration, or major disaster recovery. Set up your SQL Server systems now to handle problems of scale and maintenance by taking advantage of the wealth of information already available at sites like SQLCAT and SQLskills, and from the major SAN vendors.