Contents
- Sharding Audit
- Data Encryption
- Read/Write Splitting
- Broadcast Tables
- Binding Tables
Sharding Audit
Developer manual
Sharding audit intercepts and audits the SQL statements executed in a sharded-database scenario, checking each statement before it runs.
ShardingSphere currently ships with only one built-in sharding audit algorithm, DML_SHARDING_CONDITIONS. It requires that every DML query against a logical table carry the sharding key.
# Sharding audit rule: SQL queries must include the sharding key
spring.shardingsphere.rules.sharding.tables.sys_user.audit-strategy.auditor-names[0]=course_auditor
spring.shardingsphere.rules.sharding.auditors.course_auditor.type=DML_SHARDING_CONDITIONS
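Conceptually, the check DML_SHARDING_CONDITIONS performs is simple. The sketch below is my own illustration (not ShardingSphere's actual implementation): given the set of columns a statement constrains, the audit passes only if the sharding key is among them.

```java
import java.util.Set;

// Hypothetical sketch of a DML_SHARDING_CONDITIONS-style check:
// a DML statement against a sharded logical table passes the audit
// only if its WHERE clause constrains the sharding key.
public class ShardingAuditSketch {
    public static boolean passesAudit(Set<String> whereColumns, String shardingKey) {
        // Audit passes only when the sharding key is referenced.
        return whereColumns.contains(shardingKey);
    }

    public static void main(String[] args) {
        System.out.println(passesAudit(Set.of("uid", "status"), "uid")); // true
        System.out.println(passesAudit(Set.of("username"), "uid"));      // false
    }
}
```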
Full YAML configuration:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0,hsdb1
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
hsdb1:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere2?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and sharding
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys.
# alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
# Sharding algorithms
sharding-algorithms:
# Database sharding strategy: modulo 2. sys_user_db_alg is a user-defined name and can be changed
sys_user_db_alg:
type: MOD
props:
sharding-count: 2
# Table sharding strategy. sys_user_tbl_alg is a user-defined name and can be changed
sys_user_tbl_alg:
type: INLINE
props:
algorithm-expression: sys_user$->{((uid+1)%4).intdiv(2)+1}
# Sharding audit
auditors:
# sys_user_audit is a user-defined name and can be changed
sys_user_audit:
type: DML_SHARDING_CONDITIONS
# Per-table rules, including the distributed primary key strategy
tables:
# sys_user is the logical table name; it can be changed but must match the table name used by the entity class
sys_user:
# Actual data nodes. $->{1..2} expands to [1,2]: hsdb0.sys_user1, hsdb0.sys_user2, hsdb1.sys_user1, hsdb1.sys_user2
actual-data-nodes: hsdb$->{0..1}.sys_user$->{1..2}
# Sharding key of the logical table and how its values are generated
key-generate-strategy:
# The key column is uid, matching the column in the physical table
column: uid
# Key generator to use: alg_snowflake, the snowflake generator defined above
key-generator-name: alg_snowflake
# Database sharding strategy: modulo on the uid column
database-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_db_alg
# Table sharding strategy. standard = exact or range sharding on a single sharding column
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_tbl_alg
# Enable sharding audit
audit-strategy:
# Use the sys_user_audit auditor defined above
auditorNames[0]: sys_user_audit
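It may help to verify by hand where these rules route each uid. The following standalone sketch (my own, not ShardingSphere code) mirrors the MOD database algorithm and the Groovy INLINE table expression, assuming `intdiv` behaves as integer division:

```java
// Reproduces the routing of the rules above for a few uid values:
// database index = uid % 2                         (MOD, sharding-count: 2)
// table suffix   = ((uid + 1) % 4).intdiv(2) + 1   (INLINE expression)
public class RouteCheck {
    public static String route(long uid) {
        long db = uid % 2;                  // MOD database algorithm
        long tbl = ((uid + 1) % 4) / 2 + 1; // Java integer division == Groovy intdiv
        return "hsdb" + db + ".sys_user" + tbl;
    }

    public static void main(String[] args) {
        for (long uid = 1; uid <= 4; uid++) {
            System.out.println(uid + " -> " + route(uid));
        }
        // 1 -> hsdb1.sys_user2
        // 2 -> hsdb0.sys_user2
        // 3 -> hsdb1.sys_user1
        // 4 -> hsdb0.sys_user1
    }
}
```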
Because the configuration above shards on uid, a query that does not include uid will now fail:
@Test
public void testInline(){
    QueryWrapper<User> queryWrapper = new QueryWrapper<>();
    // With the uid condition commented out, DML_SHARDING_CONDITIONS rejects the query
    // queryWrapper.eq("uid", 1819587788580864004L);
    List<User> users = userMapper.selectList(queryWrapper);
    System.out.println(users);
}
Data Encryption
Official developer manual
We can encrypt a column of a physical table before it is stored.
SPI fully qualified class name: org.apache.shardingsphere.encrypt.spi.EncryptAlgorithm
| Type | Description | Fully qualified class name |
|---|---|---|
| MD5 | MD5-based data encryption algorithm | org.apache.shardingsphere.encrypt.algorithm.MD5EncryptAlgorithm |
| AES | AES-based data encryption algorithm | org.apache.shardingsphere.encrypt.algorithm.AESEncryptAlgorithm |
| RC4 | RC4-based data encryption algorithm | org.apache.shardingsphere.encrypt.algorithm.RC4EncryptAlgorithm |
| SM3 | SM3-based data encryption algorithm | org.apache.shardingsphere.encrypt.sm.algorithm.SM3EncryptAlgorithm |
| SM4 | SM4-based data encryption algorithm | org.apache.shardingsphere.encrypt.sm.algorithm.SM4EncryptAlgorithm |
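A property these algorithms share is that they are deterministic: the same plaintext always yields the same stored value, which is what lets ShardingSphere rewrite an equality condition on the logical column into one on the cipher column. The sketch below illustrates this with the JDK's MessageDigest MD5 (SM3/SM4 are not in the JDK); it is an illustration of the idea, not ShardingSphere's encryptor code.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Deterministic hashing: equal plaintexts produce equal digests, so an
// equality query can be answered against the stored ciphertext.
public class DeterministicDigest {
    public static String md5Hex(String plaintext) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(plaintext.getBytes(StandardCharsets.UTF_8));
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        // WHERE password = '123456' can be rewritten to
        // WHERE password_cipher = md5Hex('123456')
        System.out.println(md5Hex("123456"));
        System.out.println(md5Hex("123456").equals(md5Hex("123456"))); // true
    }
}
```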
# Data encryption: encrypt the password column
# Column storing the plaintext. The logical column name must match the column used by the Java entity; the value is the column name in the physical table
spring.shardingsphere.rules.encrypt.tables.<logical-table>.columns.<logical-column>.plainColumn=password
# Column storing the ciphertext
spring.shardingsphere.rules.encrypt.tables.<logical-table>.columns.<logical-column>.cipherColumn=password_cipher
# Encryptor to use. sys_user_password_encry is a user-defined name
spring.shardingsphere.rules.encrypt.tables.<logical-table>.columns.<logical-column>.encryptorName=sys_user_password_encry
# AES encryptor
#spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.type=AES
#spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.props.aes-key-value=123456
# MD5 encryptor
#spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.type=MD5
# SM3 encryptor
spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.type=SM3
spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.props.sm3-salt=12345678
# SM4 encryptor
#spring.shardingsphere.rules.encrypt.encryptors.sys_user_password_encry.type=SM4
Full YAML configuration:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0,hsdb1
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
hsdb1:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere2?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and sharding
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys.
# alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
# Sharding algorithms
sharding-algorithms:
# Database sharding strategy: modulo 2. sys_user_db_alg is a user-defined name and can be changed
sys_user_db_alg:
type: MOD
props:
sharding-count: 2
# Table sharding strategy. sys_user_tbl_alg is a user-defined name and can be changed
sys_user_tbl_alg:
type: INLINE
props:
algorithm-expression: sys_user$->{((uid+1)%4).intdiv(2)+1}
# Per-table rules, including the distributed primary key strategy
tables:
# sys_user is the logical table name; it can be changed but must match the table name used by the entity class
sys_user:
# Actual data nodes. $->{1..2} expands to [1,2]: hsdb0.sys_user1, hsdb0.sys_user2, hsdb1.sys_user1, hsdb1.sys_user2
actual-data-nodes: hsdb$->{0..1}.sys_user$->{1..2}
# Sharding key of the logical table and how its values are generated
key-generate-strategy:
# The key column is uid, matching the column in the physical table
column: uid
# Key generator to use: alg_snowflake, the snowflake generator defined above
key-generator-name: alg_snowflake
# Database sharding strategy: modulo on the uid column
database-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_db_alg
# Table sharding strategy. standard = exact or range sharding on a single sharding column
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_tbl_alg
# Define encryptors
encrypt:
encryptors:
# sys_user_password_encry is a user-defined name and can be changed
sys_user_password_encry:
type: SM3
props:
sm3-salt: 12345678
tables:
# Logical table name
sys_user:
columns:
# Logical column name
password:
# The value below (password) is the physical column that stores the ciphertext
cipher-column: password
encryptor-name: sys_user_password_encry
Insert test
@Test
public void testAdd(){
for (int i = 0; i < 10; i++) {
User user = new User();
user.setUsername("husahng" + i);
user.setPassword("123456");
user.setName("name");
user.setDescription("描述信息");
user.setStatus(1);
userMapper.insert(user);
}
}
The log output shows that the value was encrypted before being stored.
Query test
@Test
public void testInline(){
QueryWrapper<User> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("uid", 1026796515198238724L);
queryWrapper.eq("password", "123456");
List<User> users = userMapper.selectList(queryWrapper);
System.out.println(users);
}
Log output:
# Logical SQL
Logic SQL: SELECT uid,username,password,name,description,status,create_time,update_time FROM sys_user WHERE (uid = ? AND password = ?)
# Actual sharded SQL after routing
Actual SQL: hsdb0 ::: SELECT uid,username,password AS password,name,description,status,create_time,update_time FROM sys_user1
WHERE (uid = ? AND password = ?) ::: [1026796515198238724, 950d9aac597a86c9df5f35e5055ea5e7e92364a9983fd37a057e39a8c9894239]
# Result
[User(uid=1026796515198238724, username=husahng3, password=950d9aac597a86c9df5f35e5055ea5e7e92364a9983fd37a057e39a8c9894239, name=name, description=描述信息, status=1, createTime=Sun Aug 04 10:02:46 CST 2024, updateTime=Sun Aug 04 10:02:46 CST 2024)]
Read/Write Splitting
Developer manual
Implementing read/write splitting in ShardingSphere is also straightforward: just create a rule of type readwrite-splitting.
The example below uses the sys_user1 table in the sharding_sphere1 and sharding_sphere2 databases to test read/write splitting.
Remove the data sharding strategies and add the read/write-splitting configuration:
#----------------------- Read/write splitting
# Point the logical table at the read/write-splitting data source
spring.shardingsphere.rules.sharding.tables.<logical-table>.actual-data-nodes=<readwrite-splitting-data-source>.<actual-table>
# Configure the read/write-splitting data source: one primary (write) source and multiple read replicas. The name below must match the one used in actual-data-nodes; hsdb0/hsdb1 are the plain logical data sources configured earlier
spring.shardingsphere.rules.readwrite-splitting.data-sources.<data-source-name>.static-strategy.write-data-source-name=hsdb0
spring.shardingsphere.rules.readwrite-splitting.data-sources.<data-source-name>.static-strategy.read-data-source-names[0]=hsdb1
# Load balancer to use
spring.shardingsphere.rules.readwrite-splitting.data-sources.<data-source-name>.load-balancer-name=<load-balancer-name>
# Load balancer configuration
# Round robin per statement
spring.shardingsphere.rules.readwrite-splitting.load-balancers.<load-balancer-name>.type=ROUND_ROBIN
# Round robin per transaction
#spring.shardingsphere.rules.readwrite-splitting.load-balancers.user_lb.type=TRANSACTION_ROUND_ROBIN
# Random per statement
#spring.shardingsphere.rules.readwrite-splitting.load-balancers.user_lb.type=RANDOM
# Random per transaction
#spring.shardingsphere.rules.readwrite-splitting.load-balancers.user_lb.type=TRANSACTION_RANDOM
# Force all reads to the primary
#spring.shardingsphere.rules.readwrite-splitting.load-balancers.user_lb.type=FIXED_PRIMARY
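The ROUND_ROBIN type simply cycles reads through the replica list. Here is a minimal sketch of that behavior (my own illustration, not ShardingSphere's implementation):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin read load balancer: successive read statements
// cycle through the configured read replicas.
public class RoundRobinSketch {
    private final List<String> readDataSources;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinSketch(List<String> readDataSources) {
        this.readDataSources = readDataSources;
    }

    public String nextReadDataSource() {
        // floorMod keeps the index valid even after integer overflow
        int i = Math.floorMod(counter.getAndIncrement(), readDataSources.size());
        return readDataSources.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(List.of("hsdb1", "hsdb2"));
        System.out.println(lb.nextReadDataSource()); // hsdb1
        System.out.println(lb.nextReadDataSource()); // hsdb2
        System.out.println(lb.nextReadDataSource()); // hsdb1
    }
}
```

With only one replica configured, as in this article, the balancer always returns hsdb1, which matches the logs shown later.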
The full configuration:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0,hsdb1
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
hsdb1:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere2?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and read/write splitting
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys.
# alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
# Per-table rules, including the distributed primary key strategy
tables:
# sys_user is the logical table name; it can be changed but must match the table name used by the entity class
sys_user:
# Actual data nodes, now pointing at the read/write-splitting data source: hsuserdb is that data source, sys_user1 the real table
# actual-data-nodes: hsdb$->{0..1}.sys_user$->{1..2}
actual-data-nodes: hsuserdb.sys_user1
# Sharding key of the logical table and how its values are generated
key-generate-strategy:
# The key column is uid, matching the column in the physical table
column: uid
# Key generator to use: alg_snowflake, the snowflake generator defined above
key-generator-name: alg_snowflake
readwrite-splitting:
# Load balancer configuration
load-balancers:
# sys_user_lb is a user-defined name and can be changed
sys_user_lb:
type: ROUND_ROBIN
# Read/write-splitting data sources
data-sources:
hsuserdb:
# Load balancer to use
load-balancer-name: sys_user_lb
# Use the static strategy
static-strategy:
# One write (primary) data source; several read replicas may be listed, in which case the load balancer above picks among them
write-data-source-name: hsdb0
read-data-source-names[0]: hsdb1
Test writes
@Test
public void testAdd(){
for (int i = 0; i < 10; i++) {
User user = new User();
user.setUsername("husahng" + i);
user.setPassword("123456");
user.setName("name");
user.setDescription("描述信息");
user.setStatus(1);
userMapper.insert(user);
}
}
All rows were written to the sys_user1 table in the sharding_sphere1 database.
Test reads
@Test
public void testInline(){
QueryWrapper<User> queryWrapper = new QueryWrapper<>();
List<User> users = userMapper.selectList(queryWrapper);
System.out.println(users);
}
The log output shows that reads only go to the hsdb1 database:
Logic SQL: SELECT uid,username,password,name,description,status,create_time,update_time FROM sys_user
Actual SQL: hsdb1 ::: SELECT uid,username,password,name,description,status,create_time,update_time FROM sys_user1
Broadcast Tables
A broadcast table exists in every sharded data source, and its structure and data are identical in every database. It suits small tables that need to join with large sharded tables, such as dictionary tables. Example:
CREATE TABLE `sharding_sphere1`.`dict` (
`dictId` bigint(0) NOT NULL,
`dictkey` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`dictVal` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
PRIMARY KEY (`dictId`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
CREATE TABLE `sharding_sphere2`.`dict` (
`dictId` bigint(0) NOT NULL,
`dictkey` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`dictVal` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
PRIMARY KEY (`dictId`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
Create the entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@TableName("dict")
public class Dict {
private Long dictid;
private String dictkey;
private String dictval;
}
Create the Mapper
@Mapper
public interface DictMapper extends BaseMapper<Dict> {
}
The key configuration is the last line:
# Distributed key generator type
spring.shardingsphere.rules.sharding.key-generators.<key-generator-name>.type=COSID_SNOWFLAKE
# Distributed primary key strategy for the logical table
spring.shardingsphere.rules.sharding.tables.<logical-table>.key-generate-strategy.column=<primary-key-or-sharding-column>
spring.shardingsphere.rules.sharding.tables.<logical-table>.key-generate-strategy.key-generator-name=<key-generator-name>
#----------------------- Broadcast table configuration
# Data nodes the logical table maps to
spring.shardingsphere.rules.sharding.tables.dict.actual-data-nodes=hsdb$->{0..1}.dict
# Declare the broadcast table. A broadcast table ignores table sharding and writes the same row into the same table in every database. This must match the logical table name above
spring.shardingsphere.rules.sharding.broadcast-tables=dict
The full YAML configuration:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0,hsdb1
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
hsdb1:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere2?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and sharding
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys. alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
# Per-table rules, including the distributed primary key strategy
tables:
# dict is the logical table name; it can be changed but must match the table name used by the entity class
dict:
# Actual data nodes
actual-data-nodes: hsdb$->{0..1}.dict
# Sharding key of the logical table and how its values are generated
key-generate-strategy:
column: dictid
# Key generator to use: alg_snowflake, the snowflake generator defined above
key-generator-name: alg_snowflake
# Broadcast tables. dict must match the logical table name above
broadcast-tables[0]: dict
Insert test
@Test
public void testAdd(){
Dict dict = new Dict();
dict.setDictkey("name");
dict.setDictval("hushang");
dictMapper.insert(dict);
}
The log output shows that the insert is executed against both databases:
Logic SQL: INSERT INTO dict ( dictkey,dictval ) VALUES ( ?,? )
Actual SQL: hsdb0 ::: INSERT INTO dict ( dictkey,dictval , dictid) VALUES (?, ?, ?) ::: [name, hushang, 1026819603256311809]
Actual SQL: hsdb1 ::: INSERT INTO dict ( dictkey,dictval , dictid) VALUES (?, ?, ?) ::: [name, hushang, 1026819603256311809]
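The routing difference behind these logs can be sketched in a few lines (assumed behavior, my own illustration rather than ShardingSphere's router code): a write to a broadcast table targets every data source, while a write to a sharded table targets exactly one.

```java
import java.util.List;

// Broadcast vs. sharded insert routing over two data sources.
public class BroadcastRouting {
    static final List<String> DATA_SOURCES = List.of("hsdb0", "hsdb1");

    public static List<String> routeInsert(boolean broadcast, long shardingValue) {
        if (broadcast) {
            return DATA_SOURCES; // one INSERT per database, same row everywhere
        }
        int idx = (int) (shardingValue % DATA_SOURCES.size()); // MOD-style routing
        return List.of(DATA_SOURCES.get(idx));                 // single target
    }

    public static void main(String[] args) {
        System.out.println(routeInsert(true, 0));  // [hsdb0, hsdb1]
        System.out.println(routeInsert(false, 3)); // [hsdb1]
    }
}
```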
Binding Tables
Binding tables are a group of sharded tables whose sharding rules are identical, for example two tables sharded on the same join column.
When joining binding tables, the join must use the sharding key; otherwise the query degenerates into a cartesian-product or cross-database join and query performance suffers.
We already have the two user tables; now create a user-info table associated with sys_user.
Create the entity class
@Data
@AllArgsConstructor
@NoArgsConstructor
@TableName("user_info")
public class UserInfo {
private Long ifid;
private Long uid;
private String userinfo;
}
The Mapper interface
@Mapper
public interface UserInfoMapper extends BaseMapper<UserInfo> {
}
The YAML configuration:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and sharding
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys. alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
sharding-algorithms:
# Table sharding algorithm for sys_user
sys_user_tab_alg:
type: INLINE
props:
algorithm-expression: sys_user$->{((uid+1)%4).intdiv(2)+1}
# Table sharding algorithm for user_info
user_info_tab_alg:
type: INLINE
props:
algorithm-expression: user_info$->{((uid+1)%4).intdiv(2)+1}
tables:
# Rules for the sys_user logical table
sys_user:
# Actual data nodes
actual-data-nodes: hsdb0.sys_user$->{1..2}
# Table sharding strategy
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_tab_alg
# Rules for the user_info logical table
user_info:
# Actual data nodes
actual-data-nodes: hsdb0.user_info$->{1..2}
# Primary key generation strategy
key-generate-strategy:
column: ifid
key-generator-name: alg_snowflake
# Table sharding strategy
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: user_info_tab_alg
Insert some data
@Test
public void testAdd(){
for (int i = 1; i <= 10; i++) {
User user = new User();
// uid is the join column shared by both tables
Long uid = (long)i;
user.setUid(uid);
user.setUsername("husahng" + i);
user.setPassword("123456");
user.setName("name");
user.setDescription("描述信息");
user.setStatus(1);
userMapper.insert(user);
for (int j = 0; j < 5; j++) {
UserInfo userInfo = new UserInfo();
userInfo.setUid(uid);
userInfo.setUserinfo("test user info");
userInfoMapper.insert(userInfo);
}
}
}
Now run a join query:
@Mapper
public interface UserInfoMapper extends BaseMapper<UserInfo> {
@Select("select info.* from sys_user as u, user_info as info where u.uid = info.uid")
List<UserInfo> queryUserInfo();
}
The log shows that four SQL statements were executed. Both tables are sharded on uid, so for a given uid the rows are either in sys_user1 and user_info1 together, or in sys_user2 and user_info2 together; four statements should never be necessary.
Logic SQL: select info.* from sys_user as u, user_info as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info2 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid
This is clearly inefficient, and with more shards per table even more SQL statements would be issued. Yet the user table and the user-info table are both sharded on uid: their sharding rules are identical.
Once the binding relationship is added, the query only joins shards belonging to the same uid values:
# Declare the binding tables, using logical table names
spring.shardingsphere.rules.sharding.binding-tables[0]=sys_user,user_info
# Now only two SQL statements are executed
Logic SQL: select info.* from sys_user as u, user_info as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid
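The reduction above can be counted directly. This sketch (my own illustration of the routing idea, not ShardingSphere code) pairs the shards of both tables: without a binding relationship every shard of sys_user is joined with every shard of user_info, a cartesian product; with it, only same-suffix shards are paired.

```java
import java.util.ArrayList;
import java.util.List;

// Counts the per-database join routes for two tables each split into
// `shards` shards, with and without a binding relationship.
public class BindingJoinRoutes {
    public static List<String> joinRoutes(int shards, boolean bound) {
        List<String> routes = new ArrayList<>();
        for (int u = 1; u <= shards; u++) {
            for (int i = 1; i <= shards; i++) {
                // Bound tables only join shards with the same suffix
                if (!bound || u == i) {
                    routes.add("sys_user" + u + " x user_info" + i);
                }
            }
        }
        return routes;
    }

    public static void main(String[] args) {
        System.out.println(joinRoutes(2, false).size()); // 4 statements
        System.out.println(joinRoutes(2, true).size());  // 2 statements
    }
}
```

With n shards per table, an unbound join issues n^2 statements per database while a bound join issues only n.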
With database sharding plus table sharding, as configured below:
server:
port: 8084
spring:
shardingsphere:
props:
# Print the SQL executed by ShardingSphere. Before version 5 the property was spring.shardingsphere.props.sql.show; here it is sql-show
sql-show: true
# Data source configuration
datasource:
# Logical data source names, each mapped to a real database below
names: hsdb0,hsdb1
hsdb0:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
hsdb1:
type: com.alibaba.druid.pool.DruidDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://localhost:3306/sharding_sphere2?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
username: root
password: 1234
# Rule configuration: distributed sequences and sharding
rules:
sharding:
# Sharding key generation strategies
key-generators:
# Snowflake algorithm generating Long primary keys. alg_snowflake is a user-defined name and can be changed
alg_snowflake:
type: COSID_SNOWFLAKE
props:
worker:
id: 1
sharding-algorithms:
# Database sharding algorithm. db_alg is a user-defined name and can be changed
db_alg:
type: MOD
props:
sharding-count: 2
# Table sharding algorithm for sys_user
sys_user_tab_alg:
type: INLINE
props:
algorithm-expression: sys_user$->{((uid+1)%4).intdiv(2)+1}
# Table sharding algorithm for user_info
user_info_tab_alg:
type: INLINE
props:
algorithm-expression: user_info$->{((uid+1)%4).intdiv(2)+1}
tables:
# Rules for the sys_user logical table
sys_user:
# Actual data nodes
actual-data-nodes: hsdb$->{0..1}.sys_user$->{1..2}
# Database sharding strategy
database-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: db_alg
# Table sharding strategy
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: sys_user_tab_alg
# Rules for the user_info logical table
user_info:
# Actual data nodes
actual-data-nodes: hsdb$->{0..1}.user_info$->{1..2}
# Primary key generation strategy
key-generate-strategy:
column: ifid
key-generator-name: alg_snowflake
# Database sharding strategy
database-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: db_alg
# Table sharding strategy
table-strategy:
standard:
sharding-column: uid
sharding-algorithm-name: user_info_tab_alg
Join query without the binding relationship:
Logic SQL: select info.* from sys_user as u, user_info as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info2 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user1 as u, user_info2 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user2 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid
Join query with the binding relationship:
Logic SQL: select info.* from sys_user as u, user_info as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb0 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user1 as u, user_info1 as info where u.uid = info.uid
Actual SQL: hsdb1 ::: select info.* from sys_user2 as u, user_info2 as info where u.uid = info.uid