E-commerce business overview

In a traditional e-commerce system, the core flow for processing a user's order is as follows:
Hands-on Case 1

1. Case description

This case is an e-commerce demo project that uses Seata's AT mode, based on the Seata distributed transaction solution (shown in the figure below). For ease of demonstration, this case by default uses Nacos only as the registry center, not as the configuration center; that is, the file.conf configuration file stores the TC (Seata Server) configuration.
1.1 Versions

All examples in this article use the following component versions:

Component               Version
MySQL                   5.7
Nacos Server            1.4.0
Seata Server            1.4.0
Spring Boot             2.3.2.RELEASE
Spring Cloud            Hoxton.SR8
Spring Cloud Alibaba    2.2.3.RELEASE
Special note

The versions of Spring Boot, Spring Cloud, and Spring Cloud Alibaba must be mutually compatible, otherwise all kinds of problems can occur; see the official version compatibility notes for details.
1.2 Case goals

This case creates three services, an order service, a storage (inventory) service, and an account service, with the following call flow between them:

1) When a user places an order, the order service creates an order, then calls the storage service via a remote call (OpenFeign) to deduct the stock of the purchased product
2) The order service then calls the account service via a remote call (OpenFeign) to deduct the balance from the user's account
3) Finally, the order service marks the order as finished

These operations span three databases and involve two remote calls, so a distributed-transaction problem clearly exists. The overall project structure is:

```
seata-transaction-demo
├── seata-common-api        # API module
├── seata-account-service   # account module, port 2002
├── seata-storage-service   # storage module, port 2000
└── seata-order-service     # order module, port 2001
```
1.3 Code download

Due to space constraints, this article only shows the core code and configuration of each module; the complete case code (basic version) can be downloaded here. The latest revision of this article adds a tutorial on integrating TC (Seata Server) with Nacos as the configuration center; the complete case code (config-center version) can be downloaded here.

2. Preparation

2.1 Initialize the databases

MySQL is used here to store Seata Server (TC)'s global transaction session data, so run the SQL initialization script (complete) to create the database Seata needs, plus the business databases. Because Seata's AT mode relies on the UNDO_LOG rollback table, an UNDO_LOG table must also be created in every business database. All the databases and business tables used are shown in the figure below:
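The UNDO_LOG table mentioned above is created with the DDL shipped in Seata's script directory. The sketch below follows the MySQL schema used by Seata 1.x; verify it against the script bundled with your Seata release before using it:

```sql
-- AT mode rollback log table; create one in EVERY business database
-- (schema as shipped with Seata 1.x -- check your version's script)
CREATE TABLE `undo_log` (
  `branch_id`     BIGINT(20)   NOT NULL COMMENT 'branch transaction id',
  `xid`           VARCHAR(100) NOT NULL COMMENT 'global transaction id',
  `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
  `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
  `log_status`    INT(11)      NOT NULL COMMENT '0: normal status, 1: defense status',
  `log_created`   DATETIME     NOT NULL COMMENT 'create datetime',
  `log_modified`  DATETIME     NOT NULL COMMENT 'modify datetime',
  UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8 COMMENT = 'AT transaction mode undo table';
```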
2.2 Create a Nacos namespace

Create a new namespace in the Nacos console. Its ID will later be written into the registry.conf configuration file so that Seata Server registers itself with Nacos.
3. Configure Seata Server

3.1 Create file.conf

file.conf is Seata Server (TC)'s configuration file and holds the TC-related settings. Its core content is:

```
service {
  vgroupMapping.seata-order-service-tx-group = "default"
  vgroupMapping.seata-storage-service-tx-group = "default"
  vgroupMapping.seata-account-service-tx-group = "default"
}

store {
  mode = "db"

  db {
    ### the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ### mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 100
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}
```
The complete reference content of file.conf is:

```
transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  #thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    boss-thread-size = 1
    #auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  vgroupMapping.seata-order-service-tx-group = "default"
  vgroupMapping.seata-storage-service-tx-group = "default"
  vgroupMapping.seata-account-service-tx-group = "default"
  default.grouplist = "127.0.0.1:8091"
  enableDegrade = false
  disable = false
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
  disableGlobalTransaction = false
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

### transaction log store, only used in seata-server
store {
  ### store mode: file, db, redis
  mode = "db"

  ### file store property
  file {
    ### store location dir
    dir = "sessionStore"
    # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size, if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size, if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ### database store property
  db {
    ### the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ### mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 100
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }

  ### redis store property
  redis {
    host = "127.0.0.1"
    port = "6379"
    password = ""
    database = "0"
    minConn = 1
    maxConn = 10
    maxTotal = 100
    queryLimit = 100
  }
}

lock {
  ### the lock store mode: local, remote
  mode = "remote"
  local {
    ### store locks in user's database
  }
  remote {
    ### store locks in the seata's server
  }
}

recovery {
  #schedule committing retry period in milliseconds
  committing-retry-period = 1000
  #schedule asyn committing retry period in milliseconds
  asyn-committing-retry-period = 1000
  #schedule rollbacking retry period in milliseconds
  rollbacking-retry-period = 1000
  #schedule timeout retry period in milliseconds
  timeout-retry-period = 1000
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
  undo.log.save.days = 7
  #schedule delete expired undo_log in milliseconds
  undo.log.delete.period = 86400000
  undo.log.table = "undo_log"
}

### metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multi exporters use comma divided
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}

support {
  ### spring
  spring {
    # auto proxy the DataSource bean
    datasource.autoproxy = false
  }
}
```
3.2 Create registry.conf

registry.conf specifies TC's registry center and TC's configuration source. Here Nacos is used as the registry center, but TC's configuration is still read directly from the file.conf file. The core content is:

```
registry {
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "seata_demo"
    namespace = "ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d"
    cluster = "default"
    username = ""
    password = ""
  }
}

config {
  type = "file"

  file {
    name = "file.conf"
  }
}
```
The complete reference content of registry.conf is:

```
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "seata_demo"
    namespace = "ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d"
    cluster = "default"
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
```
3.3 Copy the configuration files

1) Copy the file.conf and registry.conf files above into Seata Server's conf directory, directly overwriting the original files
2) Since no configuration center (such as Nacos) is used here, the file.conf file above must also be copied into the src/main/resources directory of every Maven sub-module

4. Create the Maven parent project

Create a Maven parent project and configure the shared parent-level dependencies in it, to centralize management and simplify configuration.
```xml
<modules>
    <module>seata-common-api</module>
    <module>seata-order-service</module>
    <module>seata-storage-service</module>
    <module>seata-account-service</module>
</modules>

<properties>
    <junit.version>4.12</junit.version>
    <log4j.version>1.2.17</log4j.version>
    <mysql.version>8.0.21</mysql.version>
    <spring.cloud.version>Hoxton.SR8</spring.cloud.version>
    <spring.boot.version>2.3.2.RELEASE</spring.boot.version>
    <spring.cloud.alibaba>2.2.3.RELEASE</spring.cloud.alibaba>
    <seata.spring.boot.version>1.4.0</seata.spring.boot.version>
    <druid.spring.boot.version>1.2.4</druid.spring.boot.version>
    <mybatis.spring.boot.version>2.1.3</mybatis.spring.boot.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring.boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>${spring.cloud.alibaba}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>${druid.spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>${mybatis.spring.boot.version}</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>${log4j.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```
5. Create the order service project

5.1 Create pom.xml

The order project's Maven configuration is:

```xml
<parent>
    <groupId>com.seata.study</groupId>
    <artifactId>seata-transaction-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>

<dependencies>
    <dependency>
        <groupId>com.seata.study</groupId>
        <artifactId>seata-common-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
        <exclusions>
            <exclusion>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-spring-boot-starter</artifactId>
        <version>${seata.spring.boot.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
    </dependency>
</dependencies>
```
5.2 Create bootstrap.yml

Since version 1.1.0, the Seata client supports YAML configuration in place of the xxxx.conf files. The bootstrap.yml below uses seata.registry to configure the registry center that Seata Server uses, so there is no longer any need to copy Seata Server's registry.conf file into each Maven sub-module's src/main/resources directory.

Special note: the Seata settings in bootstrap.yml must match Seata Server's registry.conf and file.conf exactly, otherwise the application will be unable to connect to Seata Server after startup:

- seata.registry.nacos.group must equal registry.nacos.group in Seata Server's registry.conf
- seata.registry.nacos.namespace must equal registry.nacos.namespace in Seata Server's registry.conf
- seata.registry.nacos.server-addr must equal registry.nacos.serverAddr in Seata Server's registry.conf
- seata.registry.nacos.application must equal registry.nacos.application in Seata Server's registry.conf
- seata.tx-service-group must equal the xxxx in service.vgroupMapping.xxxx = "default" in Seata Server's file.conf; file.conf supports multiple service.vgroupMapping.xxxx = "default" entries, one per microservice application

```yaml
nacos:
  server-addr: 127.0.0.1:8848
  namespace: ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d
  group: seata_demo
  seata:
    application: seata-server
    tx-service-group: seata-order-service-tx-group

server:
  port: 2001

spring:
  application:
    name: seata-order-service
  cloud:
    nacos:
      discovery:
        server-addr: ${nacos.server-addr}
        namespace: ${nacos.namespace}
        group: ${nacos.group}
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/seata_order?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false
    username: root
    password: 123456

mybatis:
  mapperLocations: classpath*:mapper/*.xml
  type-aliases-package: com.seata.study.domain

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: ${nacos.seata.tx-service-group}
  enable-auto-data-source-proxy: false
  registry:
    type: nacos
    nacos:
      application: ${nacos.seata.application}
      server-addr: ${nacos.server-addr}
      namespace: ${nacos.namespace}
      group: ${nacos.group}
      username: ""
      password: ""
  config:
    type: file

feign:
  hystrix:
    enabled: false

logging:
  level:
    io:
      seata: info
```
5.3 Inject the proxy datasource

Seata implements branch transactions by proxying the datasource. Both MyBatis and JPA need io.seata.rm.datasource.DataSourceProxy injected; MyBatis additionally needs an org.apache.ibatis.session.SqlSessionFactory. In Spring Boot Seata Starter 2.2.0.RELEASE and later, Seata injects the proxy datasource automatically, so it no longer has to be configured by hand. To let Seata auto-inject the proxy datasource, add support.spring.datasource.autoproxy=true to the project's file.conf file. The manual approach looks like this:

```java
@Configuration
public class DataSourceProxyConfig {

    @Value("${mybatis.mapperLocations}")
    private String mapperLocations;

    @Value("${mybatis.type-aliases-package}")
    private String typeAliasesPackage;

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        sqlSessionFactoryBean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        sqlSessionFactoryBean.setTypeAliasesPackage(typeAliasesPackage);
        sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory());
        return sqlSessionFactoryBean.getObject();
    }
}
```
5.4 Add the global transaction annotation

Add @GlobalTransactional on the order-creation entry method to control the distributed transaction; OpenFeign is used to call the storage service and account service endpoints:

```java
@Service
public class OrderServiceImpl implements OrderService {

    @Resource
    private OrderMapper orderMapper;
    @Resource
    private AccountClient accountClient;
    @Resource
    private StorageClient storageClient;

    @Override
    @GlobalTransactional(name = "create-order", rollbackFor = Exception.class)
    public CommonResult createOrder(Order order) {
        // create the order
        orderMapper.create(order);
        // deduct the product stock via the storage service
        storageClient.decrease(order.getProductId(), order.getCount());
        // deduct the account balance via the account service
        accountClient.decrease(order.getUserId(), order.getMoney());
        // mark the order as finished
        orderMapper.update(order.getId(), OrderStatus.FINISHED.getValue());
        return new CommonResult();
    }
}
```
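The StorageClient and AccountClient used above are ordinary OpenFeign interfaces. They are not shown in the original article, so the following is only an illustrative sketch: the request paths and parameter names are assumptions, while the service names match the spring.application.name values registered in Nacos.

```java
// Illustrative sketch only: the /storage/decrease and /account/decrease
// paths are assumed, not taken from the actual project code.
@FeignClient(value = "seata-storage-service")
interface StorageClient {

    @PostMapping("/storage/decrease")
    CommonResult decrease(@RequestParam("productId") Long productId,
                          @RequestParam("count") Long count);
}

@FeignClient(value = "seata-account-service")
interface AccountClient {

    @PostMapping("/account/decrease")
    CommonResult decrease(@RequestParam("userId") Long userId,
                          @RequestParam("money") BigDecimal money);
}
```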
5.5 Create the main application class

```java
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableDiscoveryClient
@EnableFeignClients
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
```
6. Create the storage service project

6.1 Create pom.xml

The storage project's Maven configuration is:

```xml
<parent>
    <groupId>com.seata.study</groupId>
    <artifactId>seata-transaction-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>

<dependencies>
    <dependency>
        <groupId>com.seata.study</groupId>
        <artifactId>seata-common-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
        <exclusions>
            <exclusion>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-spring-boot-starter</artifactId>
        <version>${seata.spring.boot.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
    </dependency>
</dependencies>
```
6.2 Create bootstrap.yml

```yaml
nacos:
  server-addr: 127.0.0.1:8848
  namespace: ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d
  group: seata_demo
  seata:
    application: seata-server
    tx-service-group: seata-storage-service-tx-group

server:
  port: 2000

spring:
  application:
    name: seata-storage-service
  cloud:
    nacos:
      discovery:
        server-addr: ${nacos.server-addr}
        namespace: ${nacos.namespace}
        group: ${nacos.group}
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/seata_storage?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false
    username: root
    password: 123456

mybatis:
  mapperLocations: classpath*:mapper/*.xml
  type-aliases-package: com.seata.study.domain

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: ${nacos.seata.tx-service-group}
  enable-auto-data-source-proxy: false
  registry:
    type: nacos
    nacos:
      application: ${nacos.seata.application}
      server-addr: ${nacos.server-addr}
      namespace: ${nacos.namespace}
      group: ${nacos.group}
      username: ""
      password: ""
  config:
    type: file

feign:
  hystrix:
    enabled: false

logging:
  level:
    io:
      seata: info
```
6.3 Inject the proxy datasource

```java
@Configuration
public class DataSourceProxyConfig {

    @Value("${mybatis.mapperLocations}")
    private String mapperLocations;

    @Value("${mybatis.type-aliases-package}")
    private String typeAliasesPackage;

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        sqlSessionFactoryBean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        sqlSessionFactoryBean.setTypeAliasesPackage(typeAliasesPackage);
        sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory());
        return sqlSessionFactoryBean.getObject();
    }
}
```
6.4 Create the business service class

```java
@Service
public class StorageServiceImpl implements StorageService {

    @Resource
    private StorageMapper storageMapper;

    @Override
    public CommonResult decrease(Long productId, Long count) {
        Storage storage = storageMapper.findByProduct(productId);
        Long used = storage.getUsed();
        Long residue = storage.getResidue();
        // validate the requested quantity
        if (count == null || count <= 0) {
            return new CommonResult(SystemCode.ERROR_PARAMETER);
        }
        if (count > residue) {
            return new CommonResult(SystemCode.STORAGE_NOT_ENOUGH);
        }
        // deduct the stock
        storage.setUsed(used + count);
        storage.setResidue(residue - count);
        storageMapper.update(storage);
        return new CommonResult();
    }
}
```
6.5 Create the main application class

```java
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class StorageApplication {

    public static void main(String[] args) {
        SpringApplication.run(StorageApplication.class, args);
    }
}
```
7. Create the account service project

7.1 Create pom.xml

The account project's Maven configuration is:

```xml
<parent>
    <groupId>com.seata.study</groupId>
    <artifactId>seata-transaction-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>

<dependencies>
    <dependency>
        <groupId>com.seata.study</groupId>
        <artifactId>seata-common-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
        <exclusions>
            <exclusion>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-spring-boot-starter</artifactId>
        <version>${seata.spring.boot.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
    </dependency>
</dependencies>
```
7.2 Create bootstrap.yml

```yaml
nacos:
  server-addr: 127.0.0.1:8848
  namespace: ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d
  group: seata_demo
  seata:
    application: seata-server
    tx-service-group: seata-account-service-tx-group

server:
  port: 2002

spring:
  application:
    name: seata-account-service
  cloud:
    nacos:
      discovery:
        server-addr: ${nacos.server-addr}
        namespace: ${nacos.namespace}
        group: ${nacos.group}
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/seata_account?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false
    username: root
    password: 123456

mybatis:
  mapperLocations: classpath*:mapper/*.xml
  type-aliases-package: com.seata.study.domain

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: ${nacos.seata.tx-service-group}
  enable-auto-data-source-proxy: false
  registry:
    type: nacos
    nacos:
      application: ${nacos.seata.application}
      server-addr: ${nacos.server-addr}
      namespace: ${nacos.namespace}
      group: ${nacos.group}
      username: ""
      password: ""
  config:
    type: file

feign:
  hystrix:
    enabled: false

logging:
  level:
    io:
      seata: info
```
7.3 Inject the proxy datasource

```java
@Configuration
public class DataSourceProxyConfig {

    @Value("${mybatis.mapperLocations}")
    private String mapperLocations;

    @Value("${mybatis.type-aliases-package}")
    private String typeAliasesPackage;

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        sqlSessionFactoryBean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        sqlSessionFactoryBean.setTypeAliasesPackage(typeAliasesPackage);
        sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory());
        return sqlSessionFactoryBean.getObject();
    }
}
```
7.4 Create the business service class

The code below simulates a slow account operation with a 10-second delay. Because OpenFeign's default timeout is 1 second, the order service's remote call to deduct the account balance will throw a read-timeout exception, which makes it possible to test whether the global transaction annotation @GlobalTransactional takes effect. If @GlobalTransactional works, the account balance in the account database remains unchanged after the order service's remote call times out; if the balance was modified, @GlobalTransactional did not take effect.

```java
@Service
public class AccountServiceImpl implements AccountService {

    @Resource
    private AccountMapper accountMapper;

    @Override
    public CommonResult decrease(Long userId, BigDecimal money) {
        Account account = accountMapper.findByUser(userId);
        BigDecimal residue = account.getResidue();
        // simulate a slow business operation (10 seconds), which
        // exceeds OpenFeign's default 1-second timeout
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // validate the requested amount
        if (money == null || money.compareTo(BigDecimal.ZERO) < 1) {
            return new CommonResult(SystemCode.ERROR_PARAMETER);
        }
        if (money.compareTo(residue) == 1) {
            return new CommonResult(SystemCode.ACCOUNT_NOT_ENOUGH);
        }
        // deduct the balance
        account.setUsed(account.getUsed().add(money));
        account.setResidue(account.getResidue().subtract(money));
        accountMapper.update(account);
        return new CommonResult();
    }
}
```
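Conversely, if the whole flow should succeed despite the simulated 10-second delay, the client-side timeout can be raised above it. A minimal sketch, assuming the default Ribbon-backed OpenFeign setup of Spring Cloud Hoxton (added to the calling service's bootstrap.yml; values are illustrative):

```yaml
# Sketch: raise the client-side timeouts above the simulated 10 s delay
ribbon:
  ConnectTimeout: 5000   # milliseconds
  ReadTimeout: 15000     # milliseconds
```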
7.5 Create the main application class

```java
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class AccountApplication {

    public static void main(String[] args) {
        SpringApplication.run(AccountApplication.class, args);
    }
}
```
8. Testing the case

1) First start MySQL Server, Nacos Server, and Seata Server, and run the initialization described in the preparation section above
2) Start the seata-account-service, seata-storage-service, and seata-order-service services
3) Open the Nacos console at http://127.0.0.1:8848/nacos; once all services have started successfully, the console shows the registered services (as below)
4) Inspect the data in the seata_account.t_account and seata_storage.t_storage business tables in their respective databases, as shown below:

5) Call the order-creation endpoint at http://127.0.0.1:2001/order/create?userId=1&count=3&money=20&productId=1. Because the order service's remote call to the account service to deduct the balance throws a read-timeout exception, a 500 error page is returned:
```
################### seata_order service log #####################
java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method) ~[na:na]
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115) ~[na:na]
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168) ~[na:na]
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) ~[na:na]
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[na:na]
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292) ~[na:na]
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351) ~[na:na]
    at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:746) ~[na:na]
    at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689) ~[na:na]
```
```
################### seata_storage service log #####################
[_RMROLE_1_2_144] i.s.c.r.p.c.RmBranchRollbackProcessor : rm handle branch rollback process:xid=192.168.1.130:8091:86489181212647424,branchId=86489188837892097,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seata_storage,applicationData=null
[_RMROLE_1_2_144] io.seata.rm.AbstractRMHandler : Branch Rollbacking: 192.168.1.130:8091:86489181212647424 86489188837892097 jdbc:mysql://127.0.0.1:3306/seata_storage
[_RMROLE_1_2_144] i.s.r.d.undo.AbstractUndoLogManager : xid 192.168.1.130:8091:86489181212647424 branch 86489188837892097, undo_log deleted with GlobalFinished
[_RMROLE_1_2_144] io.seata.rm.AbstractRMHandler : Branch Rollbacked result: PhaseTwo_Rollbacked
```
```
################### seata_account service log #####################
io.seata.core.exception.RmTransactionException: Response[ TransactionException[Could not found global transaction xid = 192.168.1.130:8091:86489181212647424, may be has finished.] ]
    at io.seata.rm.AbstractResourceManager.branchRegister(AbstractResourceManager.java:69) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.DefaultResourceManager.branchRegister(DefaultResourceManager.java:96) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy.register(ConnectionProxy.java:241) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy.processGlobalTransactionCommit(ConnectionProxy.java:219) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy.doCommit(ConnectionProxy.java:199) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy.lambda$commit$0(ConnectionProxy.java:184) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy$LockRetryPolicy.execute(ConnectionProxy.java:292) ~[seata-all-1.4.0.jar:1.4.0]
    at io.seata.rm.datasource.ConnectionProxy.commit(ConnectionProxy.java:183) ~[seata-all-1.4.0.jar:1.4.0]
```
Practical Case 2

1.1. Case Code Description

This case builds on Case 1 above and mainly demonstrates how the TC (Seata Server) can store its configuration in Nacos. Note that Case 1 did not use the Nacos configuration center to store the TC (Seata Server) configuration; it used the file.conf configuration file directly. That approach is rarely used in production, where it is recommended to store all configuration centrally in a distributed configuration center such as Nacos.

Special Note

Once Seata Server uses Nacos as its configuration center, Seata Server only needs registry.conf at startup and no longer needs file.conf. Likewise, the Spring Cloud applications no longer need any file.conf or registry.conf files; all Seata configuration can be completed directly in bootstrap.yml.

1.2. Configuring Seata Server

In Seata Server's registry.conf, specify Nacos as the configuration center for storing the TC configuration (as follows):
```
registry {
  type = "nacos"
  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "seata_demo"
    namespace = "ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d"
    cluster = "default"
    username = ""
    password = ""
  }
}

config {
  type = "nacos"
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d"
    group = "seata_demo"
    username = ""
    password = ""
  }
}
```
1.3. Importing the Configuration into Nacos

Seata officially provides shell scripts for batch-importing the configuration (file.conf) into the mainstream configuration centers. They live in the script/config-center directory of the Seata source code (as follows):
```
script/config-center
├── apollo
│   └── apollo-config.sh
├── config.txt
├── consul
│   └── consul-config.sh
├── etcd3
│   └── etcd3-config.sh
├── nacos
│   ├── nacos-config.py
│   └── nacos-config.sh
├── README.md
└── zk
    └── zk-config.sh
```
Here config.txt is the common parameter file containing all the configuration that the Seata Server (TC) needs. Adjust the following entries in the file to match your environment:
```
service.vgroupMapping.seata-order-service-tx-group=default
service.vgroupMapping.seata-storage-service-tx-group=default
service.vgroupMapping.seata-account-service-tx-group=default

store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false
store.db.user=root
store.db.password=123456
```
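To make the layout of config.txt concrete: the import scripts treat every non-comment key=value line as one configuration item, with the full key as the Nacos dataId and the value as the content. The following is a minimal illustrative sketch (it is not one of the official Seata scripts) of that parsing step:

```python
def parse_config_txt(text):
    """Parse a Seata config.txt-style file into (dataId, content) pairs.

    Each non-empty, non-comment 'key=value' line becomes one config item:
    the full key is the dataId, the value is the content. Splitting on the
    FIRST '=' keeps values such as JDBC URLs (which contain '=') intact.
    """
    items = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        items.append((key.strip(), value.strip()))
    return items


sample = """
service.vgroupMapping.seata-order-service-tx-group=default
store.mode=db
store.db.user=root
"""
for data_id, content in parse_config_txt(sample):
    print(data_id, "->", content)
```

One consequence of this one-item-per-key layout is that the Nacos console ends up with dozens of small entries (79 in this case) rather than a single file-style configuration.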
After updating the common parameter file config.txt, run the corresponding shell script to write the configuration into the configuration center. Note that config.txt must be located in the directory one level above xxxx.sh, and the shell script can safely be run multiple times. When using Nacos as the configuration center, the script accepts startup parameters such as the Nacos IP, port, namespace, and configuration group; see the official documentation for the full usage of the shell script.
```
$ sh nacos-config.sh -h 127.0.0.1 -p 8848 -t ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d -g seata_demo
```
After the configuration is successfully batch-imported into Nacos, the console prints the following:
```
=========================================================================
Complete initialization parameters, total-count:79 , failure-count:0
=========================================================================
Init nacos config finished, please start seata-server.
```
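Under the hood, nacos-config.sh publishes each key through Nacos's v1 Open API (POST /nacos/v1/cs/configs), passing the namespace as the tenant parameter and the group from the -g flag. As a rough sketch only (the parameter names are taken from the Nacos v1 HTTP API; the real script also handles retries and failure counting), the request for a single key could be built like this:

```python
from urllib.parse import urlencode


def build_publish_request(host, port, tenant, group, data_id, content):
    """Build the URL and form body for publishing one config item
    via the Nacos v1 Open API (POST /nacos/v1/cs/configs)."""
    url = f"http://{host}:{port}/nacos/v1/cs/configs"
    params = {
        "dataId": data_id,   # the full Seata key, e.g. store.mode
        "group": group,      # e.g. seata_demo
        "tenant": tenant,    # the Nacos namespace ID
        "content": content,  # the value for this key
    }
    return url, urlencode(params)


url, body = build_publish_request(
    "127.0.0.1", 8848,
    "ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d", "seata_demo",
    "store.mode", "db",
)
print(url)
print(body)
```

Because each key is an independent publish call, a partial failure only affects individual entries, which is why the script reports both a total-count and a failure-count.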
Open the Nacos console, and you can see the corresponding configuration entries (as shown below):
1.4. Configuring the Spring Cloud Project

Taking the order module as an example, the complete bootstrap.yml is shown below. Note that the module's src/main/resources directory no longer needs the file.conf or registry.conf configuration files.
```
nacos:
  server-addr: 127.0.0.1:8848
  namespace: ee08c2b7-2b41-4e9d-aeae-aae35a8dbd1d
  group: seata_demo
  seata:
    application: seata-server
    tx-service-group: seata-order-service-tx-group

server:
  port: 2001

spring:
  application:
    name: seata-order-service
  cloud:
    nacos:
      discovery:
        server-addr: ${nacos.server-addr}
        namespace: ${nacos.namespace}
        group: ${nacos.group}
      config:
        server-addr: ${nacos.server-addr}
        prefix: ${spring.application.name}
        file-extension: yaml
        namespace: ${nacos.namespace}
        group: ${nacos.group}
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/seata_order?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&useSSL=false
    username: root
    password: 123456

mybatis:
  mapperLocations: classpath*:mapper/*.xml
  type-aliases-package: com.seata.study.domain

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: ${nacos.seata.tx-service-group}
  enable-auto-data-source-proxy: false
  registry:
    type: nacos
    nacos:
      application: ${nacos.seata.application}
      server-addr: ${nacos.server-addr}
      namespace: ${nacos.namespace}
      group: ${nacos.group}
      username: ""
      password: ""
  config:
    type: nacos
    nacos:
      server-addr: ${nacos.server-addr}
      namespace: ${nacos.namespace}
      group: ${nacos.group}
      username: ""
      password: ""

feign:
  hystrix:
    enabled: false

logging:
  level:
    io:
      seata: info
```
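With this configuration in place, the Seata client locates its TC in two steps: the application's tx-service-group is mapped to a cluster name through the service.vgroupMapping.<group> entry stored in Nacos, and that cluster name is then looked up in the registry to find the registered seata-server instances. A simplified sketch of that lookup chain (illustrative only; the real resolution happens inside the Seata client):

```python
def resolve_tc_cluster(tx_service_group, config):
    """Map a transaction service group to a TC cluster name,
    mirroring the client's lookup of service.vgroupMapping.<group>."""
    key = f"service.vgroupMapping.{tx_service_group}"
    cluster = config.get(key)
    if cluster is None:
        raise KeyError(f"no vgroupMapping entry for {tx_service_group}")
    return cluster


# Configuration entries as imported into Nacos from config.txt
nacos_config = {
    "service.vgroupMapping.seata-order-service-tx-group": "default",
    "service.vgroupMapping.seata-storage-service-tx-group": "default",
}
# Registry view: cluster name -> registered seata-server instances
registry = {"default": ["192.168.1.130:8091"]}

cluster = resolve_tc_cluster("seata-order-service-tx-group", nacos_config)
print(registry[cluster])  # the TC endpoints the client will connect to
```

This indirection is why every module's tx-service-group must have a matching service.vgroupMapping entry in the configuration center: a missing entry means the client cannot find any TC cluster at all.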
1.5. Code Download (configuration center version)