
MongoDB 4.4 Sharded Cluster (3 Shards) Setup Guide

2022-05-03 23:57 | Original | MongoDB
Author: dave

1 Environment Overview


In an earlier post, we walked through building a 3-node replica set:

MongoDB 4.4 Replica Set (3 Nodes) Setup Guide
https://www.cndba.cn/dave/article/107967

In this post, we build a 3-shard sharded cluster on top of that same environment.

[dave@www.cndba.cn_3 ~]# cat /etc/hosts
127.0.0.1   localhost
172.31.185.120 mongodb1
172.31.185.165 mongodb2
172.31.185.131 mongodb3

The architecture is as follows:

3 × mongos + config server (3-member replica set) + 3 shards (each a 3-member replica set)

Port assignments:

Mongos:27000
Config server:27001
Shard1:27018
Shard2:27019
Shard3:27020

2 Installing and Configuring the Sharded Cluster Components


The sharded cluster involves installing and configuring three groups of components:

  1. Three shard replica sets
  2. One config server replica set
  3. Three mongos instances

Because this test runs on three machines, every component is present on each host. Before proceeding, create the required directories.

The steps largely mirror the earlier replica set installation:

MongoDB 4.4 Replica Set (3 Nodes) Setup Guide
https://www.cndba.cn/dave/article/107967

Create the required directories on every node:

[dave@www.cndba.cn_1 ~]#mkdir -p /data/mongodb/{data,etc,logs,run}
[dave@www.cndba.cn_1 ~]#mkdir -p /data/mongodb/data/{shard1,shard2,shard3,configdb}

Create the security keyfile:

[dave@www.cndba.cn_1 ~]#openssl rand -base64 753 > /data/mongodb/etc/mongo.keyfile
[dave@www.cndba.cn_1 ~]#chmod 600 /data/mongodb/etc/mongo.keyfile

Install the MongoDB software:

MongoDB 4.4.6 Installation and Configuration Guide on RedHat 7.7
https://www.cndba.cn/dave/article/4542

Create the shard configuration files:

[dave@www.cndba.cn_1 etc]# cat shard1.conf
systemLog:
  destination: file
  path: "/data/mongodb/logs/shard1.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/data/mongodb/data/shard1"
net:
  bindIp: 0.0.0.0
  port: 27018
security:
  keyFile: /data/mongodb/etc/mongo.keyfile
  authorization: "enabled"
processManagement:
  fork: true
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr


[dave@www.cndba.cn_1 etc]# cat shard2.conf
systemLog:
  destination: file
  path: "/data/mongodb/logs/shard2.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/data/mongodb/data/shard2"
net:
  bindIp: 0.0.0.0
  port: 27019
security:
  keyFile: /data/mongodb/etc/mongo.keyfile
  authorization: "enabled"
processManagement:
  fork: true
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr


[dave@www.cndba.cn_1 etc]# cat shard3.conf
systemLog:
  destination: file
  path: "/data/mongodb/logs/shard3.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/data/mongodb/data/shard3"
net:
  bindIp: 0.0.0.0
  port: 27020
security:
  keyFile: /data/mongodb/etc/mongo.keyfile
  authorization: "enabled"
processManagement:
  fork: true
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr
[dave@www.cndba.cn_1 etc]#

Create the config server configuration file:

[dave@www.cndba.cn_1 etc]# cat configdb.conf
systemLog:
  destination: file
  path: "/data/mongodb/logs/configdb.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/data/mongodb/data/configdb"
net:
  bindIp: 0.0.0.0
  port: 27001
security:
  keyFile: /data/mongodb/etc/mongo.keyfile
  authorization: "enabled"
processManagement:
  fork: true
replication:
  replSetName: configdb
sharding:
  clusterRole: configsvr
[dave@www.cndba.cn_1 etc]#

Create the mongos configuration file:

[dave@www.cndba.cn_1 etc]# cat mongos.conf
systemLog:
  destination: file
  path: "/data/mongodb/logs/mongos.log"
  logAppend: true
net:
  bindIp: 0.0.0.0
  port: 27000
security:
  keyFile: /data/mongodb/etc/mongo.keyfile
processManagement:
  fork: true
  pidFilePath: /data/mongodb/run/mongos.pid
sharding:
  configDB: configdb/172.31.185.120:27001,172.31.185.165:27001,172.31.185.131:27001

[dave@www.cndba.cn_1 etc]#

Copy configdb.conf, mongos.conf, shard1.conf, shard2.conf, and shard3.conf to the other two nodes:

[dave@www.cndba.cn_1 etc]# pwd
/data/mongodb/etc
[dave@www.cndba.cn_1 etc]# ll
total 28
-rw-r--r-- 1 root root  390 May  3 22:42 configdb.conf
-rw-r--r-- 1 root root  340 Apr 30 15:59 mongo.conf
-rw------- 1 root root 1020 Apr 30 13:50 mongo.keyfile
-rw-r--r-- 1 root root  350 May  3 22:49 mongos.conf
-rw-r--r-- 1 root root  383 May  3 22:36 shard1.conf
-rw-r--r-- 1 root root  383 May  3 22:36 shard2.conf
-rw-r--r-- 1 root root  383 May  3 22:37 shard3.conf
[dave@www.cndba.cn_1 etc]#
[dave@www.cndba.cn_1 etc]# scp configdb.conf mongos.conf shard1.conf shard2.conf shard3.conf mongodb2:`pwd`
dave@www.cndba.cn_2's password:
configdb.conf                                                                                100%  390   132.3KB/s   00:00
mongos.conf                                                                                  100%  350   154.3KB/s   00:00
shard1.conf                                                                                  100%  383   116.6KB/s   00:00
shard2.conf                                                                                  100%  383   207.5KB/s   00:00
shard3.conf                                                                                  100%  383   216.5KB/s   00:00
[dave@www.cndba.cn_1 etc]# scp configdb.conf mongos.conf shard1.conf shard2.conf shard3.conf mongodb3:`pwd`
dave@www.cndba.cn_3's password:
configdb.conf                                                                                100%  390   162.0KB/s   00:00
mongos.conf                                                                                  100%  350   183.5KB/s   00:00
shard1.conf                                                                                  100%  383   190.6KB/s   00:00
shard2.conf                                                                                  100%  383   251.2KB/s   00:00
shard3.conf                                                                                  100%  383   264.4KB/s   00:00
[dave@www.cndba.cn_1 etc]#

Start the processes on all nodes.

Start the shards and the config server:

[dave@www.cndba.cn_1 etc]# mongod -f /data/mongodb/etc/shard1.conf
[dave@www.cndba.cn_1 etc]# mongod -f /data/mongodb/etc/shard2.conf
[dave@www.cndba.cn_1 etc]# mongod -f /data/mongodb/etc/shard3.conf
[dave@www.cndba.cn_1 etc]# mongod -f /data/mongodb/etc/configdb.conf

The commands above are for node 1 only; repeat the same steps on the other two nodes.

3 Initializing the Shard and configdb Replica Sets


Initialize the shard replica sets

The three shard replica sets are identical apart from the port number, so only shard1 is shown here.

[dave@www.cndba.cn_1 etc]# mongo 127.0.0.1:27018/admin
> config={
    _id:"shard1", 
    members:[
        {_id:0, host:'172.31.185.120:27018'},
        {_id:1, host:'172.31.185.165:27018'}, 
        {_id:2, host:'172.31.185.131:27018'}
    ]};
> rs.initiate(config);
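The config document passed to rs.initiate() has the same shape for all four replica sets; only the set name and the host:port list change. A small sketch that produces it (the `buildReplSetConfig` helper is our own, for illustration, not a MongoDB API):

```javascript
// Sketch: build the rs.initiate() config document for any of the
// replica sets (shard1/shard2/shard3/configdb).
// buildReplSetConfig is an illustrative helper, not a MongoDB API.
function buildReplSetConfig(name, hosts) {
  return {
    _id: name,
    members: hosts.map((host, i) => ({ _id: i, host: host }))
  };
}

// e.g. buildReplSetConfig('shard2',
//   ['172.31.185.120:27019', '172.31.185.165:27019', '172.31.185.131:27019'])
```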

Create a superuser for authentication later. Note that the prompt may still read SECONDARY right after rs.initiate(); createUser must run once a PRIMARY has been elected:
shard1:SECONDARY> db.createUser({user:"root",pwd:"root",roles:[{role:"root",db:"admin"}]})

Initialize the configdb replica set:

[dave@www.cndba.cn_1 etc]# mongo 127.0.0.1:27001/admin
config={
    _id:"configdb", 
    members:[
        {_id:0, host:'172.31.185.120:27001'},
        {_id:1, host:'172.31.185.165:27001'}, 
        {_id:2, host:'172.31.185.131:27001'}
    ]};
rs.initiate(config);

configdb:SECONDARY> db.createUser({user:"root",pwd:"root",roles:[{role:"root",db:"admin"}]})

4 Starting mongos and Adding the Shards


Start mongos on all nodes:

[dave@www.cndba.cn_1 etc]# mongos -f /data/mongodb/etc/mongos.conf

Connect to a mongos instance and add the shards:

[dave@www.cndba.cn_3 etc]# mongo 127.0.0.1:27000/admin

mongos> db.auth('root','root')

>sh.addShard( "shard1/172.31.185.120:27018,172.31.185.165:27018,172.31.185.131:27018")
>sh.addShard( "shard2/172.31.185.120:27019,172.31.185.165:27019,172.31.185.131:27019")
>sh.addShard( "shard3/172.31.185.120:27020,172.31.185.165:27020,172.31.185.131:27020")
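sh.addShard() takes a seed string of the form replSetName/host1:port,host2:port,…. The helper below (our own, illustrative) assembles it from the node IPs and the per-shard port, which keeps the three calls above consistent:

```javascript
// Sketch: build the "replSet/host:port,..." seed string that
// sh.addShard() expects. shardConnString is an illustrative helper,
// not a MongoDB API.
function shardConnString(name, ips, port) {
  return name + '/' + ips.map(ip => ip + ':' + port).join(',');
}

const nodes = ['172.31.185.120', '172.31.185.165', '172.31.185.131'];
// shardConnString('shard1', nodes, 27018)
//   -> "shard1/172.31.185.120:27018,172.31.185.165:27018,172.31.185.131:27018"
```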

Check the cluster status:

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("627147d1763e74c93e6af56a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.31.185.120:27018,172.31.185.131:27018,172.31.185.165:27018",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.31.185.120:27019,172.31.185.131:27019,172.31.185.165:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.31.185.120:27020,172.31.185.131:27020,172.31.185.165:27020",  "state" : 1 }
  active mongoses:
        "4.4.13" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                26 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  998
                                shard2  13
                                shard3  13
                        too many chunks to print, use verbose if you want to force print
mongos>

5 Configuring Sharding Rules


Create a test database and its user. Note that this is done through mongos:

mongos> use cndba
switched to db cndba
mongos> db.createUser({user: "cndba",pwd: "cndba",roles: [ { role: "dbOwner", db: "cndba" } ]});
Successfully added user: {
        "user" : "cndba",
        "roles" : [
                {
                        "role" : "dbOwner",
                        "db" : "cndba"
                }
        ]
}
mongos>

Enable sharding on the target database:

mongos> sh.enableSharding("cndba")
{
        "ok" : 1,
        "operationTime" : Timestamp(1651592207, 21),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1651592207, 21),
                "signature" : {
                        "hash" : BinData(0,"iTjlyHl7JvRt5KPJpJWCoR0TAZU="),
                        "keyId" : NumberLong("7093529851058978835")
                }
        }
}

Define the sharding rule for the collection. Here the user collection in the cndba database is sharded on its _id field using a hashed key. Once set, a collection's shard key cannot be changed; to use a different key, you must drop the collection and shard it again.

mongos> sh.shardCollection("cndba.user", { _id : "hashed" } )
{
        "collectionsharded" : "cndba.user",
        "collectionUUID" : UUID("c2a04e80-e643-42b7-b2ca-fd257b0a0c8c"),
        "ok" : 1,
        "operationTime" : Timestamp(1651592234, 28),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1651592234, 28),
                "signature" : {
                        "hash" : BinData(0,"ZjTWHESWwpkOlXMSsG6h2fzO1F0="),
                        "keyId" : NumberLong("7093529851058978835")
                }
        }
}

Insert test data:

mongos> for(var i=1;i<=100000;i++){db.user.save({_id:i,"name":"cndba"})};

Check the stats of the user collection:

>use cndba
>db.user.stats()
……
     "shards" : {
                "shard2" : {
                        "ns" : "cndba.user",
                        "size" : 1126862,
                        "count" : 33143,
                        "avgObjSize" : 34,
                        "storageSize" : 425984,
                        "freeStorageSize" : 155648,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
……
                "shard1" : {
                        "ns" : "cndba.user",
                        "size" : 1147670,
                        "count" : 33755,
                        "avgObjSize" : 34,
                        "storageSize" : 487424,
                        "freeStorageSize" : 143360,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
……
                "shard3" : {
                        "ns" : "cndba.user",
                        "size" : 1125468,
                        "count" : 33102,
                        "avgObjSize" : 34,
                        "storageSize" : 442368,
                        "freeStorageSize" : 98304,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
……

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("627147d1763e74c93e6af56a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.31.185.120:27018,172.31.185.131:27018,172.31.185.165:27018",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.31.185.120:27019,172.31.185.131:27019,172.31.185.165:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.31.185.120:27020,172.31.185.131:27020,172.31.185.165:27020",  "state" : 1 }
  active mongoses:
        "4.4.13" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                439 : Success
  databases:
        {  "_id" : "cndba",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("7b17dc2d-343a-46e6-a982-10fe903a083a"),  "lastMod" : 1 } }
                cndba.user
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  2
                                shard2  2
                                shard3  2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  585
                                shard2  219
                                shard3  220
                        too many chunks to print, use verbose if you want to force print
mongos>

In total we inserted 100,000 documents, with roughly 33,000 on each shard, so the data is distributed quite evenly.
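The six chunk ranges printed by sh.status() can be read as a lookup table from hashed key value to owning shard. A sketch of that lookup (the helper is our own; BigInt is needed because the boundaries exceed JavaScript's safe integer range):

```javascript
// Sketch: map a hashed key value to its shard using the six chunk
// ranges shown by sh.status() above. Illustrative helper, our own.
// BigInt is required: the boundaries exceed Number.MAX_SAFE_INTEGER.
const CHUNKS = [
  { min: null,                  max: -6148914691236517204n, shard: 'shard1' },
  { min: -6148914691236517204n, max: -3074457345618258602n, shard: 'shard1' },
  { min: -3074457345618258602n, max: 0n,                    shard: 'shard2' },
  { min: 0n,                    max: 3074457345618258602n,  shard: 'shard2' },
  { min: 3074457345618258602n,  max: 6148914691236517204n,  shard: 'shard3' },
  { min: 6148914691236517204n,  max: null,                  shard: 'shard3' },
];

function shardForHash(h) { // h: BigInt hashed key value
  const c = CHUNKS.find(c =>
    (c.min === null || h >= c.min) && (c.max === null || h < c.max));
  return c.shard;
}
```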

6 Sharded Cluster Commands


These can be listed directly from the built-in help:

mongos> sh.help()
        sh.addShard( host )                       server:port OR setname/server:port
        sh.addShardToZone(shard,zone)             adds the shard to the zone
        sh.updateZoneKeyRange(fullName,min,max,zone)      assigns the specified range of the given collection to a zone
        sh.disableBalancing(coll)                 disable balancing on one collection
        sh.enableBalancing(coll)                  re-enable balancing on one collection
        sh.enableSharding(dbname, shardName)      enables sharding on the database dbname, optionally use shardName as primary
        sh.getBalancerState()                     returns whether the balancer is enabled
        sh.isBalancerRunning()                    return true if the balancer has work in progress on any mongos
        sh.moveChunk(fullName,find,to)            move the chunk where 'find' is to 'to' (name of shard)
        sh.removeShardFromZone(shard,zone)      removes the shard from zone
        sh.removeRangeFromZone(fullName,min,max)   removes the range of the given collection from any zone
        sh.shardCollection(fullName,key,unique,options)   shards the collection
        sh.splitAt(fullName,middle)               splits the chunk that middle is in at middle
        sh.splitFind(fullName,find)               splits the chunk that find is in at the median
        sh.startBalancer()                        starts the balancer so chunks are balanced automatically
        sh.status()                               prints a general overview of the cluster
        sh.stopBalancer()                         stops the balancer so chunks are not balanced automatically
        sh.disableAutoSplit()                   disable autoSplit on one collection
        sh.enableAutoSplit()                    re-enable autoSplit on one collection
        sh.getShouldAutoSplit()                 returns whether autosplit is enabled
        sh.balancerCollectionStatus(fullName)       returns wheter the specified collection is balanced or the balancer needs to take more actions on it
mongos>

Copyright notice: this is an original post; do not repost without the author's permission.
