Many out there will have different ideas about this: some using procs, some using a function, others using a shell script. Well, I didn’t want to spend much time on it, so I decided a GROUP_CONCAT would be enough. There is no genius here, rather laziness: but what if you have a hundred databases and you want to drop them all?
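A minimal sketch of that lazy approach (the `test\_%` name pattern and the variable sizing are my assumptions, not from the original post):

```sql
-- Assumed pattern: generate DROP statements for every database named test_*.
-- Raise the GROUP_CONCAT limit so a long list is not silently truncated.
SET SESSION group_concat_max_len = 1024 * 1024;

SELECT GROUP_CONCAT('DROP DATABASE `', schema_name, '`' SEPARATOR ';\n')
FROM information_schema.schemata
WHERE schema_name LIKE 'test\_%';
```

Paste the resulting statement list back into the client (or pipe it through `mysql`) to do the actual dropping.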
The PAGER option is most intriguing.
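For context, the mysql command-line client can route query output through an external pager; a quick session sketch (`less` is just one choice of pager):

```
mysql> pager less -SFX
PAGER set to 'less -SFX'
mysql> SELECT * FROM some_big_table;
...
mysql> nopager
PAGER set to stdout
```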
I am running a MySQL 5.1 DB for a WordPress application with the HyperDB plugin enabled. I have 93,000 blogs, and at 8 tables per blog that makes 744,000 MyISAM tables in the DB. I created a backup script to back up each blog separately, and it works fine. The problem is timing: mysqldump takes almost 2 to 3 seconds per blog, and most of that time is consumed by the “SHOW TABLES LIKE” issued by the get_actual_table_name function in the source code. I changed it to query INFORMATION_SCHEMA rather than SHOW TABLES, and now each blog takes at most 1 second. I want to know if you guys have any other good ideas for this huge 300 GB DB backup, so that I can go with the default source code.
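The INFORMATION_SCHEMA substitution described above might look something like this (the database name and per-blog table prefix are hypothetical):

```sql
-- Instead of: SHOW TABLES LIKE 'wp\_123\_%';
-- query the data dictionary directly:
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'wordpress'     -- assumed database name
  AND table_name LIKE 'wp\_123\_%';  -- assumed per-blog prefix
```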
But the first two joins, the inner join and the left exclusion join, are logically equivalent to a left outer join, so we can write:
SELECT * FROM a LEFT JOIN b ON a.id=b.id
UNION ALL
SELECT * FROM a RIGHT JOIN b ON a.id=b.id WHERE a.id IS NULL;
+------+------+------+------+
| id   | name | id   | name |
+------+------+------+------+
|    1 | a    | NULL | NULL |
|    2 | b    |    2 | b    |
| NULL | NULL |    3 | c    |
+------+------+------+------+
Why doesn’t MySQL implement FULL OUTER JOIN syntax for this? We don’t know.
via Common Queries Tree.
DBAs facing the problem of corporate data explosion have an excellent new tool to help them in the MySQL 5.0 Archive storage engine. Whether it’s a data warehousing, data archiving, or data auditing situation, MySQL Archive tables can be just what the doctor ordered when it comes to maintaining large amounts of standard or sensitive information, while keeping storage costs at a bare-bones minimum.
Should probably investigate using ARCHIVE storage for the multi-year history tables.
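A minimal sketch of what that could look like (table and column names are hypothetical; note that ARCHIVE tables of this era support only INSERT and SELECT, and the 5.0 engine allows no indexes):

```sql
-- Hypothetical append-only history table on the ARCHIVE engine.
-- Rows are compressed on insert; UPDATE and DELETE are not supported.
CREATE TABLE order_history (
  logged_at  DATETIME     NOT NULL,
  order_id   INT UNSIGNED NOT NULL,
  status     VARCHAR(32)  NOT NULL
) ENGINE=ARCHIVE;
```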
MySQL replication can stop if the slave fails to execute a SQL statement from the binary log. From that moment, the slave prints the last error and waits for replication recovery. If the master has a consistent snapshot, it is only necessary to re-point the slave to the new master position. This can be done with CHANGE MASTER TO or sql_slave_skip_counter.
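Re-pointing the slave could look like the following sketch (the file name and position are placeholders you would read from the master's SHOW MASTER STATUS):

```sql
STOP SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000042',  -- assumed binlog file
  MASTER_LOG_POS  = 107;                 -- assumed position
START SLAVE;
```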
However, sometimes you want to apply binary logs to a MySQL instance without having those changes written to its own binary logs. One option is to restart the server with binary logging disabled and, after the load is finished, restart it with binary logging re-enabled. This is not always possible or desirable, so there’s a better way that works in at least versions 4.1 and up:
via Applying binary logs without adding to the binary log | The Pythian Blog.
SET SESSION sql_log_bin=0;
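In practice that might look like the following session (the dump file path is a placeholder; the session needs the SUPER privilege to change sql_log_bin):

```sql
SET SESSION sql_log_bin = 0;
-- apply the statements, e.g. a file produced by mysqlbinlog:
SOURCE /tmp/replay.sql;
SET SESSION sql_log_bin = 1;
```

Because sql_log_bin is changed only for this session, other connections keep writing to the binary log normally.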
When you do a SHOW SLAVE STATUS\G and you see errors, you need to resolve them. One way is to simply skip the offending statement.
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
Repeat as desired.
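Putting the whole cycle together, a sketch of the skip-and-resume loop (the counter can only be set while the slave threads are stopped):

```sql
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;  -- skip one offending event
START SLAVE;
SHOW SLAVE STATUS\G                     -- check whether replication resumed
```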