In part one of this tip, Jon William Toigo discussed some issues associated with backing up large-scale databases, and offered insight into what one company planned to do about it through the use of reference data segregation and a pre-staging methodology. Part two gets to the root of the problems associated with large-scale backups.
The root of the problem
Database administrators and designers have long had the ability to construct their databases so that "reference data" could be neatly tucked away into well-defined subset constructs. Comparatively few, however, have built this functionality into their database architecture. Why? The explanation is the same as the explanation for why so many n-tier client-server applications lack common middleware standards, a design factor that inhibits their recoverability: No one asked them to.
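To make the idea concrete, here is a minimal sketch of reference data segregation. The table names, file names, and sample data are hypothetical, and SQLite's `ATTACH DATABASE` stands in for whatever mechanism (tablespaces, partitions, filegroups) a production DBMS would use. The point is that static reference data lives in its own physical store, so it can be backed up on a different, far less frequent schedule than the volatile transactional data, while the application still sees one logical database.

```python
import sqlite3

# Hypothetical example: volatile transactional data in one file,
# rarely-changing reference data in another. Each file can then sit
# in a different backup class.

# Transactional store: changes constantly, backed up frequently.
txn = sqlite3.connect("transactions.db")
txn.execute(
    "CREATE TABLE IF NOT EXISTS orders "
    "(id INTEGER PRIMARY KEY, product_code TEXT, qty INTEGER)"
)

# Reference store: near-static, backed up or archived far less often.
ref = sqlite3.connect("reference.db")
ref.execute(
    "CREATE TABLE IF NOT EXISTS products (code TEXT PRIMARY KEY, name TEXT)"
)
ref.execute("INSERT OR IGNORE INTO products VALUES ('A1', 'Widget')")
ref.commit()
ref.close()

# The application sees a single logical database by attaching the
# reference file to the transactional connection.
txn.execute("ATTACH DATABASE 'reference.db' AS refdata")
txn.execute("INSERT INTO orders (product_code, qty) VALUES ('A1', 5)")
txn.commit()

# Queries join across both physical stores transparently.
row = txn.execute(
    "SELECT o.qty, p.name FROM orders o "
    "JOIN refdata.products p ON o.product_code = p.code"
).fetchone()
print(row)  # (5, 'Widget')
txn.close()
```

The design choice being illustrated: because `reference.db` holds only data that rarely changes, it can be excluded from the nightly backup window entirely and protected by an occasional archival copy, shrinking the volume that must move through the backup pipeline every night.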
Generally speaking, DBAs have a bad rap. They often take it on the chin from storage guys who view them as out-and-out resource hogs. Storage administrators frequently complain that the DBA doesn't understand storage resource management. He mismanages the resources he has and often requests much more capacity than he actually needs, compromising capacity allocation efficiency strategies. At the end of the day, most storage guys throw up their hands in disgust and just give the DBA whatever he wants, especially if his application is mission critical.
Disaster recovery planners have adopted an even more laissez-faire approach, simply accepting whatever instructions the DBA gives them regarding the capacity and platform requirements for database recovery. DBAs almost always want real-time mirroring or low-delta journaling systems to safeguard their assets. From their perspective, it is the simplest way to cover their data stores, regardless of whether it is also the most expensive and inflexible approach.
What has always been missing is a collaborative strategy that would give storage managers and DR planners chairs at the application and database development tables. Without their input at the earliest design phases and throughout the design review process, the management and recovery criteria for database and application design typically go unstated and unaddressed in the resulting product.
Of course, the idea of introducing personnel from storage and DRP into the database design process will likely raise the hairs on the necks of DBAs everywhere. Database and application designers have their own lingo and diagrammatic conventions, most of which seem alien to non-DBAs. Anyone who doesn't talk the talk can't communicate effectively with the DBA, let alone specify requirements in terms and language that the DBA will understand.
Some retraining might help to bridge the gaps. But, to really address the systemic problems, a complete retooling of IT professional disciplines is in order: combine the data protection skills and knowledge of the DRP guy with the storage administration skills and knowledge of the storage guy with the database design and administration skills and knowledge of a database guy and you will produce the "data management professional." But that would require chimeric gene splicing in the extreme and would probably violate the Harvard protocols on genetic engineering.
In the absence of such sweeping systemic and procedural changes, solving the problems of large-scale database backup will require a conscientious effort to get the DBAs and data protection folk talking to one another so they can come up with recoverable designs. In the final analysis, this is probably a more fruitful approach than trying to find a silver bullet technology for ferreting out all the cells from all the columns and all the rows that seem to have the characteristics of reference data.
For more information:
Tip: The problems backing up big databases
Tip: Get top performance from database storage
Tip: Treat databases the SAME
About the author: Jon William Toigo heads an international storage consulting group, Toigo Partners International, and has authored hundreds of articles on storage and technology, along with his monthly SearchStorage.com "Toigo's Take on Storage" expert column and backup/recovery feature. He is a frequent site contributor on the subjects of storage management, disaster recovery and enterprise storage. Toigo has authored a number of storage books, including Disaster recovery planning: Preparing for the unthinkable, 3/e. For detailed information on the nine parts of a full-fledged DR plan, see Jon's web site at www.drplanning.org/phases.html.