How to back up the Zabbix database and partition its tables

2025-05-27

The Zabbix server in our test environment runs on fairly low-spec hardware and regularly hits performance bottlenecks (mainly the database and disk I/O), which pushed me to find ways to ease the pressure.

The main culprit was the old backup script: it did a full dump of the zabbix database with MySQL's bundled mysqldump tool. Once the data volume grew, a full dump took so long that it held read locks on the tables; the Zabbix server then concluded MySQL had died and fired off a pile of alerts.

It turned out that most of the growth came from a few large data-storage tables in the zabbix database. Skipping those tables during backup cuts the backup time dramatically (PS: a full backup used to take around ten minutes; skipping the big tables, it now finishes in about one second).
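As an alternative to filtering the output of `show tables` (the approach the script below takes), the same effect can be had with mysqldump's own `--ignore-table` option. A minimal sketch; the table list follows this article, while the credentials and output path are placeholders you would adjust:

```shell
#!/bin/sh
# Build one --ignore-table=db.table option per large history/trends table,
# so mysqldump skips them and dumps only the small configuration tables.
DB=zabbix
BIG_TABLES="history history_log history_str history_text history_uint trends trends_uint acknowledges alerts auditlog events service_alarms"

IGNORE_OPTS=""
for t in $BIG_TABLES; do
    IGNORE_OPTS="$IGNORE_OPTS --ignore-table=${DB}.${t}"
done
echo "$IGNORE_OPTS"

# The actual dump would then be (not run here; placeholder credentials):
# mysqldump --opt -uzabbix -pzabbix $IGNORE_OPTS $DB > zabbix_config_$(date +%Y%m%d).sql
```

Note that `--ignore-table` must name each table individually as `database.table`, which is why the option list is built in a loop.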

Here is a script, written by an expert, dedicated to backing up and restoring the Zabbix database:

```bash
#!/bin/bash
#author: itnihao

red='\e[0;31m'    # red
RED='\e[1;31m'
green='\e[0;32m'  # green
GREEN='\e[1;32m'
blue='\e[0;34m'   # blue
BLUE='\e[1;34m'
purple='\e[0;35m' # purple
PURPLE='\e[1;35m'
NC='\e[0m'        # no color

source /etc/bashrc
source /etc/profile

MySQL_USER=zabbix
MySQL_PASSWORD=zabbix
MySQL_HOST=localhost
MySQL_PORT=3306
MySQL_DUMP_PATH=/opt/backup
MYSQL_BIN_PATH=/opt/software/mysql/bin/mysql
MYSQL_DUMP_BIN_PATH=/opt/software/mysql/bin/mysqldump
MySQL_DATABASE_NAME=zabbix
DATE=$(date '+%Y%m%d')

MySQLDUMP () {
    [ -d ${MySQL_DUMP_PATH} ] || mkdir ${MySQL_DUMP_PATH}
    cd ${MySQL_DUMP_PATH}
    [ -d logs ]    || mkdir logs
    [ -d ${DATE} ] || mkdir ${DATE}
    cd ${DATE}

    #TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables" | egrep -v "(Tables_in_zabbix)")
    TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables" | egrep -v "(Tables_in_zabbix|history*|trends*|acknowledges|alerts|auditlog|events|service_alarms)")
    for TABLE_NAME in ${TABLE_NAME_ALL}
    do
        ${MYSQL_DUMP_BIN_PATH} --opt -u${MySQL_USER} -p${MySQL_PASSWORD} -P${MySQL_PORT} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} ${TABLE_NAME} > ${TABLE_NAME}.sql
        sleep 0.01
    done
    # Capture the exit status once; testing $? twice in a row would test
    # the result of the previous test, not of the loop.
    status=$?
    [ "${status}" == 0 ] && echo "${DATE}: Backup zabbix succeed"     >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
    [ "${status}" != 0 ] && echo "${DATE}: Backup zabbix not succeed" >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log

    cd ${MySQL_DUMP_PATH}/
    rm -rf $(date +%Y%m%d --date='5 days ago')
    exit 0
}

MySQLImport () {
    cd ${MySQL_DUMP_PATH}
    DATE=$(ls ${MySQL_DUMP_PATH} | egrep "\b^[0-9]+$\b")
    echo -e "${green}${DATE}"
    echo -e "${blue}what DATE do you want to import, please input date: ${NC}"
    read SELECT_DATE
    if [ -d "${SELECT_DATE}" ]; then
        echo -e "you select is ${green}${SELECT_DATE}${NC}, do you want to continue? if so, input ${red}(yes|y|Y)${NC}, else exit"
        read Input
        [[ 'yes|y|Y' =~ "${Input}" ]]
        status="$?"
        if [ "${status}" == "0" ]; then
            echo "now import SQL....... Please wait......."
        else
            exit 1
        fi
        cd ${SELECT_DATE}
        for PER_TABEL_SQL in $(ls *.sql)
        do
            ${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} < ${PER_TABEL_SQL}
            echo -e "import ${PER_TABEL_SQL} ${PURPLE}........................${NC}"
        done
        echo "Finish import SQL, please check the Zabbix database"
    else
        echo "Don't exist ${SELECT_DATE} DIR"
    fi
}

case "$1" in
MySQLDUMP|mysqldump)
    MySQLDUMP
    ;;
MySQLImport|mysqlimport)
    MySQLImport
    ;;
*)
    echo "Usage: $0 {(MySQLDUMP|mysqldump)|(MySQLImport|mysqlimport)}"
    ;;
esac
```

The script's original source is here: https://github.com/itnihao/zabbix-book/blob/master/03-chapter/Zabbix_MySQLdump_per_table_v2.sh

What I actually run is a version of that script modified for my own backup needs; feel free to adapt it to yours. The effect is as described above: a full backup used to involve roughly 4 GB of data, while backing up only the configuration tables is now under 10 MB, a huge saving in both time and space.

The trade-off is that the skipped data is no longer backed up at all. I am considering Percona XtraBackup for incremental backups of that data; I haven't set it up yet and will get to it in a few days.
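For the incremental part, here is a rough sketch of what an XtraBackup-based scheme might look like. I haven't implemented this yet, so the layout (one full backup per week, daily incrementals) and all paths below are assumptions, not something from this setup:

```shell
#!/bin/sh
# Hypothetical layout (assumption): weekly full backup on Sunday,
# incremental backups against the previous backup on other days.
BACKUP_ROOT=/opt/backup/xtra
TODAY=$(date +%Y%m%d)

# Decide the target directory for today's backup.
if [ "$(date +%u)" = "7" ]; then
    TARGET="$BACKUP_ROOT/full_$TODAY"   # Sunday: full backup
else
    TARGET="$BACKUP_ROOT/inc_$TODAY"    # other days: incremental
fi
echo "$TARGET"

# The actual calls would look roughly like this (not run here;
# <previous_backup_dir> is the most recent full or incremental directory):
# full:        xtrabackup --backup -uzabbix -pzabbix --target-dir="$TARGET"
# incremental: xtrabackup --backup -uzabbix -pzabbix --target-dir="$TARGET" \
#                  --incremental-basedir="$BACKUP_ROOT/<previous_backup_dir>"
```

Restoring from such a chain requires preparing the full backup and then applying each incremental in order, so this is only the directory-selection half of the job.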

With backups sorted, the large tables still need to be partitioned. I followed an article on the Zabbix wiki, https://www.zabbix.org/wiki/Docs/howto/mysql_partition, which is worth a read; below I've condensed it into one place for convenience.

Partitioning physically splits a large table into multiple files, while logically it remains a single table, transparent to the application. Splitting one huge table into many small partitions makes queries faster, and old partitions can be dropped at any time to purge expired data. This suits tables with large data volumes but relatively few queries; for large tables that are also heavily queried, sharding across databases and tables is the better option.

Enough talk; let's get to work.

First, log in to the database (I won't demonstrate that step).

Then switch to the zabbix database and alter the structure of two tables:

```sql
use zabbix;
ALTER TABLE history_text DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_text_2, ADD INDEX history_text_2 (itemid, id);
ALTER TABLE history_log  DROP PRIMARY KEY, ADD INDEX (id), DROP INDEX history_log_2,  ADD INDEX history_log_2 (itemid, id);
```

Once that's done, create the four stored procedures from the wiki article:

```sql
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           PARTITIONNAME = The name of the partition to create
        */
        /*
           Verify that the partition does not already exist
        */
        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_description >= CLOCK;

        IF RETROWS = 0 THEN
                /*
                   1. Print a message indicating that a partition was created.
                   2. Create the SQL to create the partition.
                   3. Execute the SQL from #2.
                */
                SELECT CONCAT("partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")") AS msg;
                SET @SQL = CONCAT('ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));');
                PREPARE STMT FROM @SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
```
```sql
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);

        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
                SELECT partition_name
                FROM information_schema.partitions
                WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        /*
           Create the basics for when we need to drop the partition.  Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";

        /*
           Start looping through all the partitions that are too old.
        */
        OPEN myCursor;
        read_loop: LOOP
                FETCH myCursor INTO drop_part_name;
                IF done THEN
                        LEAVE read_loop;
                END IF;
                SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
                /*
                   1. Build the SQL to drop all the necessary partitions.
                   2. Run the SQL to drop the partitions.
                   3. Print out the table partitions that were deleted.
                */
                SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
                PREPARE STMT FROM @full_sql;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;

                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
                /*
                   No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
                   that no changes were made.
                */
                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;
```
```sql
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

        SET @__interval = 1;
        create_loop: LOOP
                IF @__interval > CREATE_NEXT_INTERVALS THEN
                        LEAVE create_loop;
                END IF;

                SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
                SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
                CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
                SET @__interval = @__interval + 1;
        END LOOP;

        SET OLDER_THAN_PARTITION_DATE = DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
```
```sql
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;

        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_name IS NULL;

        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
                /*
                 * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
                 * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
                 * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
                 * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
                 */
                SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
                SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

                -- Create the partitioning query
                SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
                SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

                -- Run the partitioning query
                PREPARE STMT FROM @__PARTITION_SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
```

With the four stored procedures above in place, you can run

```sql
CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>);
```

to partition any table you want. Let me explain the parameters.

Here is an example:

```sql
CALL partition_maintenance('zabbix', 'history_uint', 31, 24, 14);
```

zabbix_db_name: the database name

table_name: the table name

days_to_keep_data: how many days of data to keep

hourly_interval: how often (in hours) a new partition is created

num_future_intervals_to_create: how many partitions to create in this run

So this example keeps at most 31 days of data in history_uint, creates one partition every 24 hours, and creates 14 partitions in this run.
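As a sanity check on the naming scheme, the `p%Y%m%d%H00` partition names that `partition_maintenance` generates can be reproduced with GNU `date` (the timestamp below is an arbitrary example, not from this article):

```shell
#!/bin/sh
# Reproduce the partition naming used by partition_maintenance: each
# partition is named for the midnight boundary of its lower bound,
# formatted as p%Y%m%d%H00.
CLOCK=1400000000   # example Unix timestamp: 2014-05-13 16:53:20 UTC

# Round down to midnight of that day, as the procedure does with
# UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00')).
DAY_START=$(date -u -d "@$CLOCK" +%Y%m%d)
PARTITION_NAME=$(date -u -d "$DAY_START" +p%Y%m%d%H00)
echo "$PARTITION_NAME"   # p201405130000
```

With a 24-hour interval every partition name ends in 0000, which is exactly why `partition_verify` anchors the first partition at the start of a day.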

You can save the four stored procedures above as a single file and import it into the database (I'll attach the file later). The command is: mysql -uzabbix -pzabbix zabbix < partition_call.sql

Then wrap the CALL statements into one file as well; its contents are:

```sql
DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
        CALL partition_maintenance(SCHEMA_NAME, 'history', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_log', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_str', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_text', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 31, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'trends', 180, 24, 14);
        CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 180, 24, 14);
END$$
DELIMITER ;
```

Import this file into the database too, with: mysql -uzabbix -pzabbix zabbix < partition_all.sql

At this point you can partition the tables with the following command:

```
mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
+----------------+--------------------+
| table          | partitions_deleted |
+----------------+--------------------+
| zabbix.history | N/A                |
+----------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_log | N/A                |
+--------------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_str | N/A                |
+--------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_text | N/A                |
+---------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_uint | N/A                |
+---------------------+--------------------+
+---------------+--------------------+
| table         | partitions_deleted |
+---------------+--------------------+
| zabbix.trends | N/A                |
+---------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | N/A                |
+--------------------+--------------------+
```

Output like this confirms that all seven tables were partitioned; you can also see the new partition files in MySQL's data directory. (PS: it's best to empty the history_uint table before running this command; otherwise, with that much data, the conversion takes a very long time. Empty it with: truncate table history_uint;)

That's it; the tables are partitioned.

Add the command to cron like so:

```bash
crontab -l | tail -1
01 01 * * * /opt/software/mysql/bin/mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
```

It runs at 01:01 every night. The backup script from earlier also needs a cron entry, running the backup at 00:01 every morning:

```bash
crontab -l | tail -2 | head -1
01 00 * * * /usr/local/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump
```

And we're done. Open the Zabbix web UI again; doesn't it feel faster than before?

Source: 快网idc优惠网, https://www.kuaiidc.com/63857.html