
Deploying multiple Seata services with Docker on a single Linux server (Part 1) [Seata cluster] _会飞的小蜗


Contents
Part 1: Preface - Nacos preparation
Part 2: Installing the first Seata service
Part 3: In Seata cluster mode, storing the Seata configuration in Nacos
Part 4: Installing the second Seata service

Part 1: Preface - Nacos preparation

1. Create a namespace. Once it is created, we can take a look:
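If you prefer the command line to the console UI, the namespace can also be created through Nacos's open API. This is a sketch, assuming a Nacos 1.x server; the address and namespace ID are the ones reused throughout this guide.

```shell
# Assumption: Nacos 1.x console API; adjust the host to your setup.
NACOS_ADDR="192.168.56.10:8848"
NS_ID="9d34df17-51c7-4fde-9ae8-8dd064fa4bd0"   # custom namespace ID, reused below

# POST /nacos/v1/console/namespaces creates a namespace with a chosen ID
curl --connect-timeout 3 -X POST "http://${NACOS_ADDR}/nacos/v1/console/namespaces" \
  -d "customNamespaceId=${NS_ID}" \
  -d "namespaceName=seata" \
  -d "namespaceDesc=namespace for the seata cluster" || true
```

Creating the namespace with a fixed `customNamespaceId` makes the rest of the configuration reproducible, since the same ID is referenced in registry.conf and the import script later on.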

Part 2: Installing the first Seata service

1. Pull the image (note: use exactly this version to match the rest of this guide)

docker pull seataio/seata-server:1.4.2

2. Run the first container

docker run --name seata-server01 -p 8091:8091 -d seataio/seata-server:1.4.2

3. Create the directory and grant it full permissions (!! this step matters; without it the server may fail to start later)

mkdir -m 777 /mydata/seata/

4. Copy the configuration files from the container to the host

docker cp seata-server01:/seata-server /mydata/seata

5. Stop the first container

docker stop seata-server01

6. Delete the first container

docker rm -f seata-server01

7. Re-run the service. At this point the service is up; next we edit the corresponding configuration in the /mydata/seata/seata-server directory (this run enables restart-on-boot and mounts the key configuration into a local directory so it is easy to edit)

docker run -d --restart always --name seata-server01 -p 8091:8091 -v /mydata/seata/seata-server:/seata-server -e SEATA_IP=192.168.56.10 -e SEATA_PORT=8091 seataio/seata-server:1.4.2

8. Check the container logs (!!! Note: if the container will not start, check that you have permissions on the configuration directory mapped to the host !!!)

docker logs seata-server01

9. Switch to Seata's configuration directory

cd /mydata/seata/seata-server/resources

10. Edit the registry.conf file and switch the registry type to Nacos

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "192.168.56.10:8848"
    group = "SEATA_GROUP"
    namespace = "9d34df17-51c7-4fde-9ae8-8dd064fa4bd0"
    cluster = "s_cluster"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "SEATA_GROUP"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "nacos"

  nacos {
    serverAddr = "192.168.56.10:8848"
    namespace = "9d34df17-51c7-4fde-9ae8-8dd064fa4bd0"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

11. Edit the file.conf file and change the store mode to MySQL

## transaction log store, only used in seata-server
store {
  ## store mode: file, db
  mode = "db"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size, if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size, if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://192.168.56.10:3306/seata?useUnicode=true&characterEncoding=utf8&autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai"
    user = "root"
    password = "root"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}

Part 3: In Seata cluster mode, storing the Seata configuration in Nacos
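One prerequisite before going further: the db store points at a MySQL database named seata, and that database must already contain the global_table, branch_table, and lock_table tables. A sketch of creating them, assuming (unverified, check for your version) that the 1.4.2 DDL lives at script/server/db/mysql.sql in the Seata repository:

```shell
# Assumption: the 1.4.2 server DDL is at script/server/db/mysql.sql in the seata repo.
DDL_URL="https://raw.githubusercontent.com/seata/seata/1.4.2/script/server/db/mysql.sql"

# Download the DDL, then apply it to the database referenced in file.conf
curl --connect-timeout 3 -fsSL "$DDL_URL" -o seata-mysql.sql || true
mysql -h 192.168.56.10 -P 3306 -uroot -proot seata < seata-mysql.sql || true
```

If these tables are missing, the server starts but fails as soon as it tries to persist a global transaction.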

This is the download location for the configuration import scripts; download the config-center folder:

https://github.com/seata/seata/tree/1.4.2/script/config-center

After downloading, upload the config-center folder to the server. For example, I put mine at /mydata/seata.

In the downloaded folder you will see these files. Next we need to edit the config.txt configuration file:

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
# !!! changing this one is optional
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=192.168.56.10:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
# change this to db so that all nodes share transaction state through MySQL
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
# change this
store.db.dbType=mysql
# change this
store.db.driverClassName=com.mysql.cj.jdbc.Driver
# change this
store.db.url=jdbc:mysql://192.168.56.10:3306/seata?useUnicode=true&characterEncoding=utf8&autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai
# change this
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

Change into the directory you uploaded it to:

cd /mydata/seata/config-center/nacos

Run the import script:

# SEATA_GROUP is the group name; 9d34df17-51c7-4fde-9ae8-8dd064fa4bd0 is the ID of the namespace created in Nacos in the preface
sh nacos-config.sh -h 192.168.56.10 -p 8848 -g SEATA_GROUP -t 9d34df17-51c7-4fde-9ae8-8dd064fa4bd0
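To spot-check that the import worked, individual entries can be read back through Nacos's config API. This is a sketch, assuming the Nacos 1.x /v1/cs/configs endpoint; the import script pushes each key from config.txt as its own dataId in the SEATA_GROUP group.

```shell
NACOS_ADDR="192.168.56.10:8848"
NS_ID="9d34df17-51c7-4fde-9ae8-8dd064fa4bd0"

# Read back one imported key; a successful import returns its value (here: db)
curl --connect-timeout 3 -fs \
  "http://${NACOS_ADDR}/nacos/v1/cs/configs?dataId=store.mode&group=SEATA_GROUP&tenant=${NS_ID}" || true
```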

Once it finishes, these configuration entries are visible in Nacos. Then we restart the seata-server01 container:

# restart the container
docker restart seata-server01

Part 4: Installing the second Seata service

Create and run the second Seata container. The first container uses port 8091, so remember to change this one to SEATA_PORT=8092; otherwise the instance simply will not show up in Nacos. The container itself will start, but the instance count in Nacos will not increase.

docker run -d --restart always --name seata-server02 -p 8092:8092 -v /mydata/seata/seata-server:/seata-server -e SEATA_IP=192.168.56.10 -e SEATA_PORT=8092 seataio/seata-server:1.4.2
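The same pattern extends to further nodes; the only things that change are the container name and the port. A sketch (ports 8093 and 8094 are arbitrary choices, not from the original setup):

```shell
# Each extra node gets its own host port, passed both to -p and SEATA_PORT,
# so every instance registers with a distinct address in Nacos.
for i in 3 4; do
  port=$((8090 + i))
  docker run -d --restart always --name "seata-server0${i}" \
    -p "${port}:${port}" \
    -v /mydata/seata/seata-server:/seata-server \
    -e SEATA_IP=192.168.56.10 -e "SEATA_PORT=${port}" \
    seataio/seata-server:1.4.2 || true
done
```

All nodes share the same mounted configuration directory and the same MySQL store, which is what makes them one cluster rather than independent servers.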

Once it has started, check the logs:

# check the first one
docker logs seata-server01
# check the second one
docker logs seata-server02

Once both are up, check the instance count in Nacos. The cluster deployment is now complete.
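The registered instance count can also be checked from the shell instead of the console. A sketch, assuming Nacos 1.x's /v1/ns/instance/list endpoint and the seata-server service name from registry.conf:

```shell
NACOS_ADDR="192.168.56.10:8848"
NS_ID="9d34df17-51c7-4fde-9ae8-8dd064fa4bd0"

# Lists all instances registered under the seata-server service in our namespace;
# with both containers up, the response should contain two hosts (8091 and 8092)
curl --connect-timeout 3 -fs \
  "http://${NACOS_ADDR}/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP&namespaceId=${NS_ID}" || true
```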


