Zabbix is a great tool for monitoring and reporting across multiple servers, and its installation is covered here. In this instance, sar and kSar will be used as a standalone data-collection setup, since the server in question falls outside the physical and network reach of my Zabbix server.
sar is a neat little tool that is part of the sysstat package; more information can be found on the author's website. In my case, it will be used to collect data on CPU, memory, swap, network and all the other metrics that can make or break a Linux-based service.
kSar is a separate Java-based tool that generates some lovely graphs from the collected sar data, because a picture paints a thousand words, or in this case, a graph summarizes a crapload of data.
This tutorial covers the installation of both, as well as a practical usage scenario.
0. Got root/ sudo?
1. Install the packages
yum install sysstat java
2. Set the sysstat cron to run
nano /etc/cron.d/sysstat
Ensure the following lines are active / not commented out. The first line specifies how often the tool should take a snapshot, the second is when the daily summary is processed.
*/10 * * * * root /usr/lib/sa/sa1 1 1
53 23 * * * root /usr/lib/sa/sa2 -A
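As a quick sanity check, sa1 writes one binary file per day under /var/log/sa, named by the zero-padded day of month. A small sketch of where today's data should land (assumes the default sysstat log directory):

```shell
# Compute the file sa1 should be writing to today (default sysstat layout).
# The name uses the zero-padded day of month, e.g. /var/log/sa/sa08
f="/var/log/sa/sa$(date +%d)"
echo "expecting sar data in: $f"
```

Within ten minutes of the first cron run, that file should exist and start growing.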
3. Configure sar to keep a month's worth of data
By default, sar keeps 7 days' worth of data. Since we need monthly data, the configuration needs updating.
nano /etc/sysconfig/sysstat
Update the line HISTORY=7 to read:
HISTORY=31
4. Download kSar
Located at http://sourceforge.net/projects/ksar
Create a folder to store kSar and Monthly text files
mkdir /mon
Extract kSar into /mon
Change permissions to make kSar executable
chmod +x /mon/kSar
5. View Daily Server Data (default)
Prep report:
LC_ALL=C sar -A > /mon/sardata.txt
See the graphs
cd /mon/kSarx.x.x/
./run.sh
Click "Data" Menu option -> Load from text file
Select /mon/sardata.txt
6. View Monthly Server Data (see below for actual script)
Prep report:
cd /mon/
./sarprep_monthly.sh
(Ensure script is executable before running!)
See the graphs
cd /mon/kSarx.x.x/
./run.sh
Click "Data" Menu option -> Load from text file
Select /mon/sarmonthly_July.txt (use appropriate month name)
7. All done!
sarprep_monthly.sh
#!/bin/sh
#cleanup old monthly file
rm -f /mon/sarmonthly_$(date +"%B").txt
#loop through all 31 possible days, merging the daily files into one
#(sysstat names the daily files by zero-padded day of month: sa01 .. sa31)
for i in $(seq -w 1 31); do
LC_ALL=C sar -A -f /var/log/sa/sa$i >> /mon/sarmonthly_$(date +"%B").txt
done
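The month token in the output filename comes from date's %B format (the full month name). A quick way to preview what the script will write on your system, using the /mon path from the steps above:

```shell
# Preview the monthly output filename, e.g. /mon/sarmonthly_July.txt
outfile="/mon/sarmonthly_$(date +%B).txt"
echo "$outfile"
```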
That's it! Til next time...
-noveck
Content moved to tucuche-consulting.com as of April 2019
Thursday, September 26, 2013
Upgrading / Migrating a MySQL 5.0 Database to MySQL 5.5 [InnoDB]
This post is an ultra, no, make that uber paranoid method of upgrading/migrating a relatively large (20+ GB on file) InnoDB database from MySQL version 5.0 to MySQL 5.5. Some might consider it overkill, but as it relates to a database of this size and maturity, I'd prefer not to take any unnecessary risks.
This assumes that the old server is running MySQL 5.0 on CentOS 5.x and that MySQL 5.5 is installed on a new server running CentOS 6, using the remi repositories. This is covered here.
Phase 1 - Prepare Data on the Old Server
1. Execute Database Check to ensure tables are clean
From terminal:
mysqlcheck -c mydbname -u root -p
<enter password when prompted>
2. Re-index tables before the dump
From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql
3. Export the database as a dump file
From shell:
mysqldump -u root -p -e -c --verbose --default-character-set=utf8 --skip-set-charset --max_allowed_packet=100M --single-transaction --databases mydbname -r /root/Desktop/mydbdump.sql
4. Copy to new DB server
scp /root/Desktop/mydbdump.sql root@new.db.srv.ip:/root/Desktop/
Phase 2 - Import to New Server
1. Create empty database shell for import
From mysql:
create database mydbname character set utf8 collate utf8_unicode_ci;
2. Issue Grant permissions to new DB (I hope you have this documented, else you might need to dump/restore the mysql.user table to new DB)
3. Import the SQL file (but first set a really high value for max_allowed_packet to handle the large data import)
set global max_allowed_packet = 1000000000;
source /root/Desktop/mydbdump.sql
4. Check mysql for transaction warnings
from mysql:
show warnings\G
5. Run upgrade script
From shell:
mysql_upgrade -u root -p --force
6. Rebuild the InnoDB tables, which forces them to upgrade
(source: http://www.mysqlperformanceblog.com/2010/05/14/mysql_upgrade-and-innodb-tables/ )
From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql
7. Execute Database Check to ensure newly imported/upgraded tables are clean
From shell:
mysqlcheck -c mydbname -u root -p
Phase 3 - Compare old and new database
Check data consistency by comparing an exact record count for every table on both servers.
http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html
1. On Old db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';
From shell:
nano /tmp/TableAnalysisQuery.sql
remove the LAST UNION from the end of the last line in the file.
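If you'd rather not edit the file by hand, the trailing UNION can be stripped with a one-liner (GNU sed shown; the demo below uses a scratch file, so point it at /tmp/TableAnalysisQuery.sql for the real edit):

```shell
# Build a two-line stand-in for the generated query file
printf 'SELECT "t1" UNION\nSELECT "t2" UNION\n' > /tmp/demo_query.sql
# Strip " UNION" from the last line only ($ addresses the final line)
sed -i '$ s/ UNION$//' /tmp/demo_query.sql
tail -n 1 /tmp/demo_query.sql
```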
2. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt
3. On New db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';
From shell:
nano /tmp/TableAnalysisQuery.sql
remove the LAST UNION from the end of the last line in the file.
4. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt
5. Copy both text files to a third machine for comparison
On OLD db server, from shell:
scp /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt root@third.machine.ip:/root/Desktop
On NEW db server, from shell:
scp /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt root@third.machine.ip:/root/Desktop
ON third server
from shell:
diff -a /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt
No output from the previous command means the data is consistent (at least as far as row counts per table go) on both servers, and the new database can be made active / brought into production.
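Since diff exits 0 when the files are identical, the comparison can also be scripted. A sketch with stand-in files (substitute the two TableAnalysisResults files):

```shell
# diff exit status: 0 = identical, non-zero = differences found
f1=/tmp/old_counts.txt
f2=/tmp/new_counts.txt
printf 'mytable 12345\n' > "$f1"   # stand-in row-count output
cp "$f1" "$f2"
if diff -a "$f1" "$f2" > /dev/null; then
    echo "row counts match"
else
    echo "MISMATCH - investigate before going live"
fi
```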
<EOF>
That's it!
-noveck
Monday, August 19, 2013
Installing MySQL 5.5 on CentOS 6.x
This post covers the installation of MySQL 5.5 on CentOS 6 (64bit)
By default, CentOS 6 ships with MySQL 5.1, but to take full advantage of the more recent versions, it is generally recommended to use 5.5 or 5.6 if possible, especially on a new server.
0. Login as root/ su
1. Open Terminal Interface
2. Go to Temporary Folder
cd /tmp
3. Get and Install EPEL Repo (this example uses 64bit, get the 32bit rpm if needed! )
wget http://fedora.mirror.nexicom.net/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
4. Get and Install REMI Repo
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm
5. Check MySQL version to be installed
yum --enablerepo=remi list mysql mysql-server
**If the following error is observed during this step, see below for resolution
ERROR: Cannot retrieve metalink for repository: epel. Please verify its path and try again.
FIX: Edit epel.repo and change all https references in "mirrorlist" sections to http
cd /etc/yum.repos.d/
nano epel.repo
Find: mirrorlist=https://mirrors.fedorapro……
Change to: mirrorlist=http://mirrors.fedorapro…..
6. Install mysql 5.5
yum install --enablerepo=remi mysql mysql-server
7. Start MySQL and configure to start on boot
service mysqld start
chkconfig mysqld on
8. Run mysql upgrade script
mysql_upgrade -u root -p
9. Change Mysql default Password
/usr/bin/mysqladmin -u root password 'yourpasswordhere'
10. Check to ensure that the mysql is at the desired version
mysql --version
11. Set proper permissions on /tmp
chown -R root:root /tmp
chmod -R 1777 /tmp
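For the curious, mode 1777 is world-writable plus the sticky bit: anyone can create files in /tmp, but only a file's owner (or root) can remove them. A harmless demo on a scratch directory:

```shell
# The sticky bit shows up as a trailing "t" in the mode string
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d" | cut -c1-10   # drwxrwxrwt
rmdir "$d"
```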
12. Secure MySQL
Optional but recommended for production servers
See link for details: http://dev.mysql.com/doc/refman/5.5/en/mysql-secure-installation.html
/usr/bin/mysql_secure_installation
13. Restart mysql service
service mysqld restart
That's it!
-noveck
Wednesday, July 17, 2013
CentOS 5 top menu vanished after update
Well this is a tickler. After I performed an update on a CentOS system, the top menu pulled a Houdini on me. The panel was totally empty and non-responsive, save for the time.
This is a quick and easy fix, so don't panic. Just force the gnome panel to reload.
0. Login as root or sudo.
You should find the Terminal via right-click on the desktop. Thankfully.
1. Reload the gnome panel (forcefully)
killall gnome-panel
2. That's it!
The menu items should return. If that didn't work, you might need further help.
-noveck
Tuesday, June 11, 2013
Reset MySQL root password on CentOS 5.x
I had one of those oh-crap moments and forgot the mysql root password in one of my development/test machines.
This is a reblog of someone else's post in case it ever gets deleted. I must say it saved my bacon (or at the very least a couple hours of hair-pulling and reinstalling).
Credit to: http://gettechgo.wordpress.com/2012/05/10/how-to-reset-mysql-root-password-linux-o-s/
0. Login as root/su
1. Stop the MySQL service
service mysqld stop
2. Start MySQL Safe mode with skip grant tables option
mysqld_safe --skip-grant-tables &
(press ctrl+z to exit, if required)
3. Start the MySQL service
service mysqld start
4. Log into the MySQL server without any password
mysql -u root mysql
5. Reset the password for ‘root’ user
UPDATE user SET password=PASSWORD('new-password') WHERE user='root';
6. Flush privileges
flush privileges;
7. Restart the MySQL service
service mysqld restart
8. Log-in with the new password
mysql -u root -p
<enter new password when prompted>
Cheers,
Noveck
Monday, May 13, 2013
Removing a repository from CentOS
Let's chalk this one up to the Monday Morning Blues.
I'm running through a CentOS 6.x test server and needed to install the EPEL repos. For some reason I managed to install the RHEL5 rpm instead of the RHEL6. (D'Oh!)
This is a quick fix on removing the repo completely to install the proper one.
0. Got root? (or sudo)
1. Remove Repo
From terminal
rpm -qa | grep epel
Expected output (in my case)
epel-release-5-4
yum remove epel-release-5-4
yum clean all
That's it!
-noveck
Monday, April 8, 2013
Issues with ip6tables
Whilst troubleshooting a VM hanging issue with an ESXi 5.0 guest running CentOS 5, I noticed a strange error after my last kernel update.
It had me a bit confused, as I distinctly recall disabling all IPv6 support while building the machine.
Anyhow, since I don't need IPv6 support at the moment and to avoid any unnecessary red flags while booting, I'll just go ahead and disable it completely.
0. Login as root / su
1. Check to see if it loads on boot.
chkconfig --list | grep ip6tables
//in my instance, it was enabled for levels 2,3,4 and 5;
2. Disable ip6tables
chkconfig ip6tables off
3. Stop ip6tables service (if running)
service ip6tables status (if the process is running, stop it with the next line; if not, skip it)
service ip6tables stop
That's it!
It may or may not help my VM hanging issue, but at least it'd clear that pesky red flag.
Cheers,
noveck
Monday, March 11, 2013
InnoDB Restore Backup Procedure
My previous post elaborated on some scripts on backing up a MySQL Database with the InnoDB storage engine.
This post documents the restore procedure using the actual backup generated. Steps 7 and 8 are optional, but recommended. Better safe than sorry!
Assumptions:
· Target restore candidate server has Percona Xtrabackup installed.
· Target database server (restore candidate), has no active/production databases running.
Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto
Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly
0. Got root?
1. Locate appropriate backup file
Assumption that it is compressed like the Daily or Monthly in my previous post.
cp /bkp/Daily/mydbnamexxxxxx.tgz /root/Desktop
2. Uncompress backup file
cd /root/Desktop
tar xzf mydbnamexxxxxx.tgz
3. Prepare the backup file
cd /path/to/extracted/backup/
ls
(expected output: the ibdata1 file, a folder named after the database, and the other xtrabackup files)
innobackupex --apply-log /root/Desktop/*_dbxxxx/201xxxxx/
4. Stop the MYSQL Daemon and copy the files from the desktop to the mysql directory
service mysqld stop
rsync -rvt --exclude 'xtrabackup_checkpoints' --exclude 'xtrabackup_logfile' * /var/lib/mysql
5. Change ownership of the restored files and restart the MYSQL Daemon
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start
6. Login to mysql and ensure the database has been restored via a table count
(It helps to have this documented in your production environment)
mysql -u root -p
<enter password when prompted>
show databases;
use mydbname; (use appropriate database name from list)
show tables;
Quit MYSQL:
\q
7: Execute a mysqlcheck with check flag to determine any data inconsistencies
mysqlcheck -c mydbname -u root -p
<enter password when prompted>
8: Execute a mysqlcheck with optimize flag to optimize restored table data
mysqlcheck -o mydbname -u root -p
<enter password when prompted>
--eof
-noveck
Monday, February 18, 2013
Monitoring and reporting on a remote MySQL database.
Depending on your environment, you won't grant every single server on your network the ability to send email. This sort of security precaution had me in a bit of a pickle, as I wanted to step up the mysqlchecks I currently have scripted via cron on my database servers. If your database server has the ability to send email, this solution may not be the best option for you.
I already had a Zabbix Server sitting around, cheerfully gathering data and pumping reports on my little server farm and I thought about giving it some more work to do, without any extra pay. The zabbix server already had specific port access to these servers, so the security experts at my organization would not give me hell for this particular solution.
The logic was simple. Execute scripted checks from a remote host, run some data processing, if everything is good, just take a timestamp of execution. If everything is not good, send an email for the sysadmin or DBA to action immediately. The nice thing about this script is that it lists the tables that are not flagged as OK by MySQL.
Simple stuff, really.
Assumptions:
Database Server is 192.168.1.10
Monitoring Server is 192.168.1.11
Database to be monitored = mydb
Location of user scripts = /scripts
SSMTP is installed on the monitoring server. Installation Instructions here.
Both servers should already see each other or be able to ssh to/from one another. If not, seek assistance from your company's network gurus (walk with chocolates).
0. Login as root on the Database server.
You'll need to create an account that the remote server will access over the network.
Login as mysql root.
mysql -u root -p
<enter password>
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,INDEX,ALTER on mydb.* to mydbuser@'192.168.1.11' identified by 'mydbuserpassword';
flush privileges;
1. Test that the remote connection works
From terminal on the monitoring server
mysql -u mydbuser -h 192.168.1.10 -p
<enter password>
You should be presented with a mysql prompt.
2. Install script on the Monitoring server
Copy and paste the following into a remotedbcheck.sh file in your scripts directory. Ensure the executable flag is set
chmod +x remotedbcheck.sh
Actual Script
#!/bin/sh
# A Script to execute mysqlcheck -c from a remote server
# and send an email if any tables are flagged as NOT OK.
# The tables with issues are sent in the body of the
# email for admin action.
# Written by: Noveck Gowandan
# 09-01-2013
# Version 1.1
# Added script timer which will be appended to new logfile in mylocation (see variables)
##Variables
# Location of Results and Status Files (default: /tmp)
mylocation=/tmp
# Database to check
mydbname=mydb
# Database user with appropriate mysql access
mydbuser=mydbuser
# DB pass
mydbpass=mydbpassword
# Specify remote database host (IP preferred, hostname if entries are in /etc/hosts)
mydbhost=192.168.1.10
# Email variables (valid Recipient and Sender email required)
notifyemail=myemailaddress@myorg.com
serveremail=monitoring.server@myorg.com
# These variables can be left as is
resultsfilename=mysqlcheck-c-results.txt
statusfilename=mysqlcheck-c-status.txt
status=0
# Start timer
time_start=`date +%s`
#Remove any results file that exists
rm -f $mylocation/$mydbhost-$resultsfilename
#Remove any status file that exists
rm -f $mylocation/$mydbhost-$statusfilename
#Execute mysqlcheck with -c flag only
mysqlcheck -c $mydbname -u $mydbuser -p$mydbpass -h $mydbhost > $mylocation/$mydbhost-$resultsfilename
#Check results file and generate status file (count of tables not flagged OK)
grep -v OK $mylocation/$mydbhost-$resultsfilename | wc -l > $mylocation/$mydbhost-$statusfilename
#Build the email body from the tables that were not flagged OK
#(must happen AFTER mysqlcheck runs, or the message would use stale data)
msgtext=`grep -v OK $mylocation/$mydbhost-$resultsfilename`
#Check status file, send email if anything needs attention
status=`cat $mylocation/$mydbhost-$statusfilename`
if test $status -ge 1
then
ssmtp $notifyemail << EOF
To: $notifyemail
From: $serveremail
Subject: PROBLEM: MYSQLCHECK $mydbhost
$msgtext
EOF
else
echo "Last Check OK!" >> $mylocation/$mydbhost-$mydbname-check.log
fi
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$(( time_end - time_start ))
# Log output: datestamp and time takes to execute
date >> $mylocation/$mydbhost-$mydbname-check.log
echo "Execution Time was $total_time seconds." >> $mylocation/$mydbhost-$mydbname-check.log
echo "____________" >> $mylocation/$mydbhost-$mydbname-check.log
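The timing portion of the script is just epoch-second subtraction; the same pattern in isolation:

```shell
# Capture epoch seconds before and after the work, then subtract
time_start=$(date +%s)
sleep 1                    # stand-in for the real mysqlcheck run
time_end=$(date +%s)
echo "Execution Time was $(( time_end - time_start )) seconds."
```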
3. Add script to cron
Depending on your organization and criticality of the service, this may need to be done daily. Given the size of the database in my case and the length of time it takes to run, this example is weekly on Sundays at 2:01 AM.
nano /etc/crontab
Append to file
01 2 * * 0 root /bin/sh /scripts/remotedbcheck.sh
That's it!
The script will execute quietly and will not send any email if everything reports as OK. Be prepared if you do get any email from this script, you should act immediately!
Cheers,
-noveck
I already had a Zabbix Server sitting around, cheerfully gathering data and pumping reports on my little server farm and I thought about giving it some more work to do, without any extra pay. The zabbix server already had specific port access to these servers, so the security experts at my organization would not give me hell for this particular solution.
The logic was simple. Execute scripted checks from a remote host, run some data processing, if everything is good, just take a timestamp of execution. If everything is not good, send an email for the sysadmin or DBA to action immediately. The nice thing about this script is that it lists the tables that are not flagged as OK by MySQL.
Simple stuff, really.
Assumptions:
Database Server is 192.168.1.10
Monitoring Server is 192.168.1.11
Database to be monitored = mydb
Location of user scripts = /scripts
SSMTP is installed on the monitoring server. Installation Instructions here.
Both servers should already see each other or be able to ssh to/from one another. If not, seek assistance from your company's network gurus (walk with chocolates).
0. Login as root on the Database server.
You'll need to create an account that the remote server will access over the network.
Login as mysql root.
mysql -u root -p
<enter password>
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,INDEX,ALTER on mydb.* to mydbuser@'192.168.1.11' identified by 'mydbuserpassword';
flush privileges;
1. Test that the remote connection works
From terminal on the monitoring server
mysql -u mydbuser -h 192.168.1.10 -p
<enter password>
You should be presented with a mysql prompt.
2. Install script on the Monitoring server
Copy and paste the following into a remotedbcheck.sh file in your scripts directory. Ensure the executable flag is set
chmod +x remotedbcheck.sh
Actual Script
#!/bin/sh
# A Script to execute mysqlcheck -c from a remote server
# and send an email if any table are flagged as NOT OK.
# The tables with issues are sent in the body of the
# email for admin action.
# Written by: Noveck Gowandan
# 09-01-2013
# Version 1.1
# Added script timer which will be appended to new logfile in mylocation (see variables)
##Variables
# Location of Results and Status Files (default: /tmp)
mylocation=/tmp
# Database to check
mydbname=mydb
# Database user with appropriate mysql access
mydbuser=mydbuser
# DB pass
mydbpass=mydbpassword
# Specify remote database host (IP preferred, hostname if entries are in /etc/hosts)
mydbhost=192.168.1.10
# Email variables (valid recipient and sender email required)
notifyemail=myemailaddress@myorg.com
serveremail=monitoring.server@myorg.com
# These variables can be left as is
resultsfilename=mysqlcheck-c-results.txt
statusfilename=mysqlcheck-c-status.txt
status=0
# Start timer
time_start=`date +%s`
#Remove any resultsfile that exists
rm -rf $mylocation/$mydbhost-$resultsfilename
#Remove any statusfile that exists
rm -rf $mylocation/$mydbhost-$statusfilename
#Execute mysqlcheck with -c flag only
mysqlcheck -c $mydbname -u $mydbuser -p$mydbpass -h $mydbhost > $mylocation/$mydbhost-$resultsfilename
#Check results file and generate status file
grep -v OK $mylocation/$mydbhost-$resultsfilename | wc -l > $mylocation/$mydbhost-$statusfilename
#Capture the problem tables now that the results file is fresh
msgtext=`grep -v OK $mylocation/$mydbhost-$resultsfilename`
#Check Status file, send email
status=`cat $mylocation/$mydbhost-$statusfilename`;
if test $status -ge 1
then
ssmtp $notifyemail << EOF
To: $notifyemail
From: $serveremail
Subject: PROBLEM: MYSQLCHECK $mydbhost

$msgtext
EOF
else
echo "Last Check OK!" >> $mylocation/$mydbhost-$mydbname-check.log
fi
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$(( time_end - time_start ))
# Log output: datestamp and time taken to execute
date >> $mylocation/$mydbhost-$mydbname-check.log
echo "Execution Time was $total_time seconds." >> $mylocation/$mydbhost-$mydbname-check.log
echo "____________" >> $mylocation/$mydbhost-$mydbname-check.log
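To see how the OK-count logic behaves, here is a standalone sketch run against fabricated mysqlcheck output (the table names and warning text are invented for illustration):

```shell
# Fabricated sample of mysqlcheck -c output (invented table names)
cat > /tmp/sample-results.txt << 'EOF'
mydb.users                                         OK
mydb.orders
warning  : Table is marked as crashed
mydb.sessions                                      OK
EOF

# Same logic as the script: every line without "OK" counts as a problem
problems=`grep -v OK /tmp/sample-results.txt | wc -l`
msgtext=`grep -v OK /tmp/sample-results.txt`
echo "problem lines: $problems"
```

Note that the pattern matches "OK" anywhere on the line, so a hypothetical table whose name happened to contain "OK" would slip past the check even if it were broken.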
3. Add script to cron
Depending on your organization and the criticality of the service, this may need to run daily. Given the size of the database in my case and the length of time the check takes, this example runs weekly on Sundays at 2:01 AM.
nano /etc/crontab
Append to file
01 2 * * 0 root /bin/sh /scripts/remotedbcheck.sh
That's it!
The script will execute quietly and will not send any email if everything reports as OK. Be warned: if you do get an email from this script, you should act immediately!
Cheers,
-noveck
Labels:
email,
MySQL,
mysqlcheck,
remote server,
SSMTP,
Zabbix
Wednesday, January 16, 2013
Issues with CentOS 6 and EPEL
I was running into some issues with the EPEL repo on CentOS 6.x. I had just installed the OS and the EPEL/remi repos, and whilst trying to search for a package, the same error kept repeating:
Cannot retrieve metalink for repository: epel
and it kept failing each time.
The fix involved editing the EPEL repo and making a simple change from https to http. Why or how this broke is beyond me, but this fix may help someone else.
Assuming you have root or su:
1. Edit the epel repo
nano /etc/yum.repos.d/epel.repo
Find line:
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
Change to
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
Save and exit.
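If you prefer a non-interactive edit, the same https-to-http change can be made with sed (the `-i` in-place flag here is the GNU form). The sketch below runs against an invented scratch copy so you can verify the substitution before touching the real /etc/yum.repos.d/epel.repo:

```shell
# Scratch copy mimicking the relevant lines of epel.repo
cat > /tmp/epel.repo.test << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
EOF

# Swap https for http on the mirrorlist line only
sed -i 's|^mirrorlist=https://|mirrorlist=http://|' /tmp/epel.repo.test
grep '^mirrorlist=' /tmp/epel.repo.test
```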
2. Clean yum
yum clean all
That did it for me!
Cheers,
noveck
Labels:
CentOS,
CentOS 6,
EPEL,
repos,
troubleshooting
Wednesday, January 9, 2013
InnoDB Backup Scripts
Following up on my earlier post about converting the storage engine of a MySQL database from MyISAM to InnoDB, I'd like to share the following backup scripts with rotation built in.
The next blog entry should be a step-by-step restore procedure using the actual backups below.
Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto
Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly
Script Output Log Directory
/tmp
Script Directory
/scripts
Disclaimer: I do not guarantee this is the BEST way of doing this, but it works for me. Copy and paste the following into a *.sh file in your scripts directory. Ensure the executable flag is set
chmod +x scriptname.sh
Actual Scripts
innodb_backup_monthly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database Monthly
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Monthly folder with datestamp
cd /bkp/
mkdir M_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/M_db_$(date +%Y%m%d)
# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz M_db*
# Backup rotation section
rm -rf M_db*
rm -rf /bkp/Monthly/$(date +"%B")
mkdir /bkp/Monthly/$(date +"%B")
mv mydbname* /bkp/Monthly/$(date +"%B")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$(( time_end - time_start ))
# Log output: datestamp and time taken to execute
echo "____________" >> /tmp/db_backup_monthly.log
date >> /tmp/db_backup_monthly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_monthly.log
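The rotation scheme in these scripts hinges on date's format strings: %B gives twelve monthly slots, %A seven daily slots, and %H twenty-four hourly slots, each silently overwritten on the next cycle. A quick sketch of the folder names the scripts generate (the actual values depend on when you run it):

```shell
stamp=$(date +%Y%m%d)   # archive datestamp, e.g. 20130109
month=$(date +%B)       # monthly slot, e.g. January (12 slots, overwritten yearly)
day=$(date +%A)         # daily slot, e.g. Wednesday (7 slots, overwritten weekly)
hour=$(date +%H)        # hourly slot, e.g. 14 (24 slots, overwritten daily)
echo "M_db_$stamp -> /bkp/Monthly/$month"
echo "D_db_$stamp -> /bkp/Daily/$day"
echo "H_db_$stamp -> /bkp/Hourly/$hour"
```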
innodb_backup_daily.sh
#!/bin/sh
# An InnoDB Backup Script to backup database DAILY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Daily folder with datestamp
cd /bkp/
mkdir D_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/D_db_$(date +%Y%m%d)
# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz D_db*
# Backup rotation section
rm -rf D_db*
rm -rf /bkp/Daily/$(date +"%A")
mkdir /bkp/Daily/$(date +"%A")
mv mydbname* /bkp/Daily/$(date +"%A")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$(( time_end - time_start ))
# Log output: datestamp and time taken to execute
echo "____________" >> /tmp/db_backup_daily.log
date >> /tmp/db_backup_daily.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_daily.log
innodb_backup_hourly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database HOURLY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Hourly folder with datestamp
cd /bkp/
mkdir H_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/H_db_$(date +%Y%m%d)
# Backup rotation section
rm -rf /bkp/Hourly/$(date +"%H")
mkdir /bkp/Hourly/$(date +"%H")
mv H_db* /bkp/Hourly/$(date +"%H")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$(( time_end - time_start ))
# Log output: datestamp and time taken to execute
echo "____________" >> /tmp/db_backup_hourly.log
date >> /tmp/db_backup_hourly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_hourly.log
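The timer wrapped around each of the three scripts is plain epoch arithmetic. A minimal standalone sketch, with a sleep standing in for the backup work:

```shell
time_start=$(date +%s)
sleep 1                              # stand-in for the backup work
time_end=$(date +%s)
total_time=$(( time_end - time_start ))
echo "Execution Time was $total_time seconds."
```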
Labels:
CentOS,
innobackupex,
InnoDB,
Linux,
MySQL,
Percona,
xtrabackup