Spring Cloud Alibaba: Seata distributed transactions

Spring Cloud study notes, Chapter 14: the Seata distributed transaction solution.

Introduction:

Seata is an open-source distributed transaction solution that aims to provide high-performance, easy-to-use distributed transaction services. Seata offers the AT, TCC, SAGA, and XA transaction modes, giving users a one-stop distributed transaction solution.

Once a monolithic application is split into microservices, what used to be three modules becomes three independent applications, each with its own data source, and a single business operation has to call all three services. Local transactions still guarantee data consistency inside each service, but global data consistency across services can no longer be guaranteed.

Example: the business logic for a user purchasing goods is backed by three microservices.

  • Storage service: deduct the stock count for a given product.
  • Order service: create an order according to the purchase request.
  • Account service: deduct the amount from the user's account balance.

In one sentence: whenever a single business operation spans multiple data sources or requires remote calls across multiple systems, a distributed transaction problem arises.

The "one ID + three components" model of distributed transaction processing:

  • XID: a globally unique transaction ID that ties all participants of one global transaction together.
  • TC (Transaction Coordinator): maintains the state of global and branch transactions and drives global commit or rollback.
  • TM (Transaction Manager): defines the scope of a global transaction; it begins, commits, or rolls back the global transaction.
  • RM (Resource Manager): manages the resources that branch transactions work on, registers branches with the TC, reports branch status, and drives branch commit or rollback.

Processing flow: the TM asks the TC to begin a new global transaction and receives an XID; the XID is propagated along the microservice call chain; each RM registers its branch transaction with the TC under that XID; finally the TM asks the TC to commit or roll back the transaction identified by the XID, and the TC drives all branches registered under that XID accordingly.

Installing seata-server:

  1. Download the seata-server distribution package.

  2. Edit file.conf (back the file up before making any changes).

    service block:

    vgroup_mapping.my_test_tx_group = "fsp_tx_group"

    store block:

    mode = "db"

    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "your own password"
  3. Create the database and tables in MySQL:

    1. The configuration above points at a database named seata, so create a database called seata.

    2. Create the tables: the seata installation directory contains a db_store.sql; just run it against the seata database.

  4. Continue with the configuration and edit seata\conf\registry.conf:

    This registers seata-server as a microservice by pointing it at the registry center.

  5. Start everything:

    Start Nacos first, then start seata-server (run seata-server.bat in the installation directory).

Seata business example:

Place order —> deduct stock —> deduct account balance.

Preparing the business databases:

  1. Create three databases:

    • seata_order: stores order data
    • seata_storage: stores stock data
    • seata_account: stores account data
  2. Create the corresponding business table in each database:

    • t_order in the seata_order database
    • t_storage in the seata_storage database
    • t_account in the seata_account database
  3. Create the rollback log tables:

    • Each of the three databases (order, storage, account) needs its own rollback log table. The DDL is in db_undo_log.sql under seata\conf.
    • Note: run this SQL once per database so that every database gets its own undo_log table.
  4. Final result:

Business requirement: place an order —> deduct stock —> deduct balance —> update the (order) status. Order status 0 means the business flow has not finished yet; 1 means the whole flow has completed.

Seata order-module setup:

  1. Create one microservice per business function, i.e. three microservices: order, storage, and account. The order module is seata-order-2001.

  2. pom

    <!--nacos-->
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
    <!--seata-->
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
        <exclusions>
            <exclusion>
                <artifactId>seata-all</artifactId>
                <groupId>io.seata</groupId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-all</artifactId>
        <version>0.9.0</version>
    </dependency>
    <!--feign-->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

  3. YML configuration file

    server:
      port: 2001

    spring:
      application:
        name: seata-order-service
      cloud:
        alibaba:
          seata:
            # custom transaction group name; it must match the group we configured earlier in seata-server's file.conf
            tx-service-group: fsp_tx_group
        nacos:
          discovery:
            server-addr: 127.0.0.1:8848
      datasource:
        # data source type
        type: com.alibaba.druid.pool.DruidDataSource
        # mysql driver class
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://localhost:3306/seata_order?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
        username: root
        password: root

    feign:
      hystrix:
        enabled: false

    logging:
      level:
        io:
          seata: info

    mybatis:
      mapperLocations: classpath*:mapper/*.xml
  4. Two extra configuration files are also needed. First create a file.conf:

    transport {
      # tcp udt unix-domain-socket
      type = "TCP"
      #NIO NATIVE
      server = "NIO"
      #enable heartbeat
      heartbeat = true
      #thread factory for netty
      thread-factory {
        boss-thread-prefix = "NettyBoss"
        worker-thread-prefix = "NettyServerNIOWorker"
        server-executor-thread-prefix = "NettyServerBizHandler"
        share-boss-worker = false
        client-selector-thread-prefix = "NettyClientSelector"
        client-selector-thread-size = 1
        client-worker-thread-prefix = "NettyClientWorkerThread"
        # netty boss thread size,will not be used for UDT
        boss-thread-size = 1
        #auto default pin or 8
        worker-thread-size = 8
      }
      shutdown {
        # when destroy server, wait seconds
        wait = 3
      }
      serialization = "seata"
      compressor = "none"
    }

    service {
      #vgroup->rgroup
      # transaction group name
      vgroup_mapping.fsp_tx_group = "default"
      #only support single node
      default.grouplist = "127.0.0.1:8091"
      #degrade current not support
      enableDegrade = false
      #disable
      disable = false
      #unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
      max.commit.retry.timeout = "-1"
      max.rollback.retry.timeout = "-1"
    }

    client {
      async.commit.buffer.limit = 10000
      lock {
        retry.internal = 10
        retry.times = 30
      }
      report.retry.count = 5
      tm.commit.retry.count = 1
      tm.rollback.retry.count = 1
    }

    ## transaction log store
    store {
      ## store mode: file, db
      #mode = "file"
      mode = "db"

      ## file store
      file {
        dir = "sessionStore"

        # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
        max-branch-session-size = 16384
        # globe session size, if exceeded throws exceptions
        max-global-session-size = 512
        # file buffer size, if exceeded allocate new buffer
        file-write-buffer-cache-size = 16384
        # when recover batch read size
        session.reload.read_size = 100
        # async, sync
        flush-disk-mode = async
      }

      ## database store
      db {
        ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
        datasource = "dbcp"
        ## mysql/oracle/h2/oceanbase etc.
        db-type = "mysql"
        driver-class-name = "com.mysql.jdbc.Driver"
        url = "jdbc:mysql://127.0.0.1:3306/seata"
        user = "root"
        password = "root"
        min-conn = 1
        max-conn = 3
        global.table = "global_table"
        branch.table = "branch_table"
        lock-table = "lock_table"
        query-limit = 100
      }
    }

    lock {
      ## the lock store mode: local, remote
      mode = "remote"

      local {
        ## store locks in user's database
      }

      remote {
        ## store locks in the seata's server
      }
    }

    recovery {
      #schedule committing retry period in milliseconds
      committing-retry-period = 1000
      #schedule asyn committing retry period in milliseconds
      asyn-committing-retry-period = 1000
      #schedule rollbacking retry period in milliseconds
      rollbacking-retry-period = 1000
      #schedule timeout retry period in milliseconds
      timeout-retry-period = 1000
    }

    transaction {
      undo.data.validation = true
      undo.log.serialization = "jackson"
      undo.log.save.days = 7
      #schedule delete expired undo_log in milliseconds
      undo.log.delete.period = 86400000
      undo.log.table = "undo_log"
    }

    ## metrics settings
    metrics {
      enabled = false
      registry-type = "compact"
      # multi exporters use comma divided
      exporter-list = "prometheus"
      exporter-prometheus-port = 9898
    }

    support {
      ## spring
      spring {
        # auto proxy the DataSource bean
        datasource.autoproxy = false
      }
    }

    Then create a registry.conf:

    registry {
      # file, nacos, eureka, redis, zk, consul, etcd3, sofa
      type = "nacos"

      nacos {
        #serverAddr = "localhost"
        serverAddr = "localhost:8848"
        namespace = ""
        cluster = "default"
      }
      eureka {
        serviceUrl = "http://localhost:8761/eureka"
        application = "default"
        weight = "1"
      }
      redis {
        serverAddr = "localhost:6379"
        db = "0"
      }
      zk {
        cluster = "default"
        serverAddr = "127.0.0.1:2181"
        session.timeout = 6000
        connect.timeout = 2000
      }
      consul {
        cluster = "default"
        serverAddr = "127.0.0.1:8500"
      }
      etcd3 {
        cluster = "default"
        serverAddr = "http://localhost:2379"
      }
      sofa {
        serverAddr = "127.0.0.1:9603"
        application = "default"
        region = "DEFAULT_ZONE"
        datacenter = "DefaultDataCenter"
        cluster = "default"
        group = "SEATA_GROUP"
        addressWaitTime = "3000"
      }
      file {
        name = "file.conf"
      }
    }

    config {
      # file, nacos, apollo, zk, consul, etcd3
      type = "file"

      nacos {
        serverAddr = "localhost"
        namespace = ""
      }
      consul {
        serverAddr = "127.0.0.1:8500"
      }
      apollo {
        app.id = "seata-server"
        apollo.meta = "http://192.168.1.204:8801"
      }
      zk {
        serverAddr = "127.0.0.1:2181"
        session.timeout = 6000
        connect.timeout = 2000
      }
      etcd3 {
        serverAddr = "http://localhost:2379"
      }
      file {
        name = "file.conf"
      }
    }

    In practice, this just means copying the two configuration files we modified earlier for seata-server into this project (so they end up on the module's classpath).

  5. Main application class

    @SpringBootApplication(exclude = DataSourceAutoConfiguration.class) // disable automatic DataSource creation; Seata proxies the data source instead
    @EnableDiscoveryClient
    @EnableFeignClients
    public class SeataOrderMain2001 {

        public static void main(String[] args) {
            SpringApplication.run(SeataOrderMain2001.class, args);
        }
    }
  6. Service layer

    public interface OrderService {

        /**
         * Create an order
         * @param order
         */
        void create(Order order);
    }
    @FeignClient(value = "seata-storage-service")
    public interface StorageService {

        /**
         * Deduct stock
         * @param productId
         * @param count
         * @return
         */
        @PostMapping(value = "/storage/decrease")
        CommonResult decrease(@RequestParam("productId") Long productId, @RequestParam("count") Integer count);
    }
    @FeignClient(value = "seata-account-service")
    public interface AccountService {

        /**
         * Deduct balance
         * @param userId
         * @param money
         * @return
         */
        @PostMapping(value = "/account/decrease")
        CommonResult decrease(@RequestParam("userId") Long userId, @RequestParam("money") BigDecimal money);
    }

    @Service
    @Slf4j
    public class OrderServiceImpl implements OrderService {

        @Resource
        private OrderDao orderDao;
        @Resource
        private AccountService accountService;
        @Resource
        private StorageService storageService;

        /**
         * Create the order -> call the storage service to deduct stock -> call the account service to deduct balance -> update the order status.
         * In short: place order -> deduct stock -> deduct balance -> change status.
         * GlobalTransactional starts the Seata distributed transaction and rolls back on exceptions; the name only has to be unique (still commented out at this point).
         * @param order the order object
         */
        @Override
        ///@GlobalTransactional(name = "fsp-create-order", rollbackFor = Exception.class)
        public void create(Order order) {
            // 1 create a new order
            log.info("-----> start creating the order");
            orderDao.create(order);

            // 2 deduct stock
            log.info("-----> order service calls the storage service to deduct the count: start");
            storageService.decrease(order.getProductId(), order.getCount());
            log.info("-----> order service calls the storage service to deduct the count: end");

            // 3 deduct account balance
            log.info("-----> order service calls the account service to deduct the money: start");
            accountService.decrease(order.getUserId(), order.getMoney());
            log.info("-----> order service calls the account service to deduct the money: end");

            // 4 update the order status from 0 to 1; 1 means finished
            log.info("-----> start updating the order status");
            orderDao.update(order.getUserId(), 0);

            log.info("-----> order placed, done O(∩_∩)O");
        }
    }
  7. DAO layer (the mapper interface)

    @Mapper
    public interface OrderDao {

        /**
         * 1 Create a new order
         * @param order
         * @return
         */
        int create(Order order);

        /**
         * 2 Update the order status from 0 to 1
         * @param userId
         * @param status
         * @return
         */
        int update(@Param("userId") Long userId, @Param("status") Integer status);
    }

    ==Create a mapper folder under resources and write the mapper XML there==

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
            "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
    <mapper namespace="com.eiletxie.springcloud.alibaba.dao.OrderDao">

        <!-- id marks the primary key column; all other columns use result.
             column is the database column name (e.g. user_id), property the Java field name (e.g. userId). -->
        <resultMap id="BaseResultMap" type="com.eiletxie.springcloud.alibaba.domain.Order">
            <id column="id" property="id" jdbcType="BIGINT"/>
            <result column="user_id" property="userId" jdbcType="BIGINT"/>
            <result column="product_id" property="productId" jdbcType="BIGINT"/>
            <result column="count" property="count" jdbcType="INTEGER"/>
            <result column="money" property="money" jdbcType="DECIMAL"/>
            <result column="status" property="status" jdbcType="INTEGER"/>
        </resultMap>

        <insert id="create" parameterType="com.eiletxie.springcloud.alibaba.domain.Order" useGeneratedKeys="true"
                keyProperty="id">
            insert into t_order(user_id,product_id,count,money,status) values (#{userId},#{productId},#{count},#{money},0);
        </insert>

        <update id="update">
            update t_order set status = 1 where user_id = #{userId} and status = #{status};
        </update>
    </mapper>

  8. Controller layer

    @RestController
    public class OrderController {

        @Resource
        private OrderService orderService;

        /**
         * Create an order
         * @param order
         * @return
         */
        @GetMapping("/order/create")
        public CommonResult create(Order order) {
            orderService.create(order);
            return new CommonResult(200, "Order created successfully");
        }
    }
  9. Entity classes (also called domain classes). Only CommonResult is shown here; a sketch of the Order entity appears after this list.

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class CommonResult<T> {
        private Integer code;
        private String message;
        private T data;

        public CommonResult(Integer code, String message) {
            this(code, message, null);
        }
    }

  10. Configuration classes

    @Configuration
    @MapperScan({"com.eiletxie.springcloud.alibaba.dao"}) // points MyBatis at the package that holds our mapper interfaces
    public class MyBatisConfig {
    }

    /**
     * @Author EiletXie
     * @Since 2020/3/18 21:51
     * Proxy the data source with Seata's DataSourceProxy.
     */
    @Configuration
    public class DataSourceProxyConfig {

        @Value("${mybatis.mapperLocations}")
        private String mapperLocations;

        @Bean
        @ConfigurationProperties(prefix = "spring.datasource")
        public DataSource druidDataSource() {
            return new DruidDataSource();
        }

        @Bean
        public DataSourceProxy dataSourceProxy(DataSource druidDataSource) {
            return new DataSourceProxy(druidDataSource);
        }

        @Bean
        public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception {
            SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
            bean.setDataSource(dataSourceProxy);
            ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
            bean.setMapperLocations(resolver.getResources(mapperLocations));
            return bean.getObject();
        }
    }
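
The Order entity referenced by the service, DAO, and mapper above is not listed in these notes. A minimal sketch, assuming Lombok and taking the field names and types from the resultMap in the mapper XML (the package and details may differ in the real project):

    import java.math.BigDecimal;

    import lombok.AllArgsConstructor;
    import lombok.Data;
    import lombok.NoArgsConstructor;

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class Order {
        private Long id;          // primary key, generated by the database
        private Long userId;      // column user_id
        private Long productId;   // column product_id
        private Integer count;    // quantity to order / deduct from stock
        private BigDecimal money; // amount to deduct from the account
        private Integer status;   // 0 = flow not finished yet, 1 = finished
    }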

Seata storage-module notes:

Storage module: seata-storage-2002.

  1. pom

    <dependencies>
        <!--nacos-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
        <!--seata-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
            <exclusions>
                <exclusion>
                    <artifactId>seata-all</artifactId>
                    <groupId>io.seata</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
            <version>0.9.0</version>
        </dependency>
        <!--feign-->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.37</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.1.10</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
    </dependencies>
  2. Configuration file (YML)

    server:
      port: 2002

    spring:
      application:
        name: seata-storage-service
      cloud:
        alibaba:
          seata:
            tx-service-group: fsp_tx_group
        nacos:
          discovery:
            server-addr: localhost:8848
      datasource:
        driver-class-name: com.mysql.jdbc.Driver
        url: jdbc:mysql://localhost:3306/seata_storage
        username: root
        password: 111111

    logging:
      level:
        io:
          seata: info

    mybatis:
      mapperLocations: classpath:mapper/*.xml
  3. file.conf

    transport {
      # tcp udt unix-domain-socket
      type = "TCP"
      #NIO NATIVE
      server = "NIO"
      #enable heartbeat
      heartbeat = true
      #thread factory for netty
      thread-factory {
        boss-thread-prefix = "NettyBoss"
        worker-thread-prefix = "NettyServerNIOWorker"
        server-executor-thread-prefix = "NettyServerBizHandler"
        share-boss-worker = false
        client-selector-thread-prefix = "NettyClientSelector"
        client-selector-thread-size = 1
        client-worker-thread-prefix = "NettyClientWorkerThread"
        # netty boss thread size,will not be used for UDT
        boss-thread-size = 1
        #auto default pin or 8
        worker-thread-size = 8
      }
      shutdown {
        # when destroy server, wait seconds
        wait = 3
      }
      serialization = "seata"
      compressor = "none"
    }

    service {
      #vgroup->rgroup
      vgroup_mapping.fsp_tx_group = "default"
      #only support single node
      default.grouplist = "127.0.0.1:8091"
      #degrade current not support
      enableDegrade = false
      #disable
      disable = false
      #unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
      max.commit.retry.timeout = "-1"
      max.rollback.retry.timeout = "-1"
      disableGlobalTransaction = false
    }

    client {
      async.commit.buffer.limit = 10000
      lock {
        retry.internal = 10
        retry.times = 30
      }
      report.retry.count = 5
      tm.commit.retry.count = 1
      tm.rollback.retry.count = 1
    }

    transaction {
      undo.data.validation = true
      undo.log.serialization = "jackson"
      undo.log.save.days = 7
      #schedule delete expired undo_log in milliseconds
      undo.log.delete.period = 86400000
      undo.log.table = "undo_log"
    }

    support {
      ## spring
      spring {
        # auto proxy the DataSource bean
        datasource.autoproxy = false
      }
    }
  4. registry.conf

    registry {
      # file, nacos, eureka, redis, zk
      type = "nacos"

      nacos {
        serverAddr = "localhost:8848"
        namespace = ""
        cluster = "default"
      }
      eureka {
        serviceUrl = "http://localhost:8761/eureka"
        application = "default"
        weight = "1"
      }
      redis {
        serverAddr = "localhost:6381"
        db = "0"
      }
      zk {
        cluster = "default"
        serverAddr = "127.0.0.1:2181"
        session.timeout = 6000
        connect.timeout = 2000
      }
      file {
        name = "file.conf"
      }
    }

    config {
      # file, nacos, apollo, zk
      type = "file"

      nacos {
        serverAddr = "localhost"
        namespace = ""
        cluster = "default"
      }
      apollo {
        app.id = "fescar-server"
        apollo.meta = "http://192.168.1.204:8801"
      }
      zk {
        serverAddr = "127.0.0.1:2181"
        session.timeout = 6000
        connect.timeout = 2000
      }
      file {
        name = "file.conf"
      }
    }
  5. Main application class

    @SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
    @EnableDiscoveryClient
    @EnableFeignClients
  6. Service layer; remember to also write the service interface (a minimal sketch of it appears after this list)

    private static final Logger LOGGER = LoggerFactory.getLogger(StorageServiceImpl.class);

    @Resource
    private StorageDao storageDao;

    // deduct stock
    @Override
    public void decrease(Long productId, Integer count) {
        LOGGER.info("-------> storage-service: start deducting stock");
        storageDao.decrease(productId, count);
        LOGGER.info("-------> storage-service: finished deducting stock");
    }
  7. DAO layer

    @Mapper
    public interface StorageDao {

        // deduct stock
        void decrease(@Param("productId") Long productId, @Param("count") Integer count);
    }
  8. Controller layer

    @Autowired
    private StorageService storageService;

    // deduct stock
    @RequestMapping("/storage/decrease")
    public CommonResult decrease(Long productId, Integer count) {
        storageService.decrease(productId, count);
        return new CommonResult(200, "Stock deducted successfully!");
    }
  9. Config classes:

    @Configuration
    @MapperScan({"com.atguigu.springcloud.alibaba.dao"})
    public class MyBatisConfig {
    }
    @Value("${mybatis.mapperLocations}")
    private String mapperLocations;

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource druidDataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory());
        return sqlSessionFactoryBean.getObject();
    }
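
Step 6 above shows only the implementation class; a minimal sketch of the StorageService interface it implements (derived directly from the overridden decrease method, so only its exact location in the project is an assumption):

    public interface StorageService {

        /**
         * Deduct stock for a product
         * @param productId
         * @param count
         */
        void decrease(Long productId, Integer count);
    }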

Seata account-module notes:

Account module: seata-account-2003 (its structure follows the storage module above).

  1. pom
  2. Configuration file (YML)
  3. Main application class
  4. Service layer
  5. DAO layer
  6. Controller layer
  7. Once all three modules are created, first test the whole flow without Seata enabled (a request example follows this list).
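
A quick way to drive the whole chain, both for this no-Seata test and later with @GlobalTransactional enabled, is to call the order module's create endpoint. A minimal sketch, assuming all three services and seata-server are registered in Nacos and that t_storage/t_account already hold a row to deduct from (the parameter values and the class name CreateOrderSmokeTest are just illustrative):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CreateOrderSmokeTest {

        public static void main(String[] args) throws Exception {
            // seata-order-service listens on port 2001 as configured above
            String url = "http://localhost:2001/order/create"
                    + "?userId=1&productId=1&count=10&money=100";

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                          HttpResponse.BodyHandlers.ofString());

            // Expect the CommonResult JSON returned by OrderController
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Without Seata, an exception thrown partway through this chain leaves the earlier inserts and updates committed, which is exactly the inconsistency the next section fixes.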

Using Seata's @GlobalTransactional annotation:

Add the annotation that starts the distributed transaction to the ==create method== of the order module's service implementation class:

/**
 * Starts the Seata distributed transaction here.
 * name: the name of the global transaction (it only has to be unique).
 * rollbackFor: which exceptions trigger a rollback.
 * noRollbackFor: which exceptions should not trigger a rollback.
 */
@GlobalTransactional(name = "fsp-create-order", rollbackFor = Exception.class)
public void create(Order order) {
    // 1 create a new order
    log.info("-----> start creating the order");
    orderDao.create(order);

    // 2 deduct stock
    log.info("-----> order service calls the storage service to deduct the count: start");
    storageService.decrease(order.getProductId(), order.getCount());
    log.info("-----> order service calls the storage service to deduct the count: end");

    // 3 deduct account balance
    log.info("-----> order service calls the account service to deduct the money: start");
    accountService.decrease(order.getUserId(), order.getMoney());
    log.info("-----> order service calls the account service to deduct the money: end");

    // 4 update the order status from 0 to 1; 1 means finished
    log.info("-----> start updating the order status");
    orderDao.update(order.getUserId(), 0);

    log.info("-----> order placed, done O(∩_∩)O");
}

  1. Test again now:

    This time, as soon as the exception occurs everything is rolled back; all the writes made earlier in the flow are undone.
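
These notes do not show how the exception is triggered for this test. One common way to provoke it (an assumption here, not something shown above) is to add an artificial delay in the account module's decrease method so that the order service's Feign call times out; a sketch, with AccountServiceImpl and AccountDao assumed to mirror the storage module:

    import java.math.BigDecimal;
    import java.util.concurrent.TimeUnit;

    import javax.annotation.Resource;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Service;

    @Service
    public class AccountServiceImpl implements AccountService {

        private static final Logger LOGGER = LoggerFactory.getLogger(AccountServiceImpl.class);

        @Resource
        private AccountDao accountDao; // hypothetical mapper, analogous to StorageDao

        @Override
        public void decrease(Long userId, BigDecimal money) {
            LOGGER.info("-------> account-service: start deducting balance");
            // Simulated timeout: sleeping longer than Feign/Ribbon's default read timeout (roughly 1s)
            // makes the order service's call fail, so the global transaction is rolled back.
            try {
                TimeUnit.SECONDS.sleep(20);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            accountDao.decrease(userId, money);
            LOGGER.info("-------> account-service: finished deducting balance");
        }
    }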
