4.4 Batch Messages
4.4.1 Sending Restrictions
When sending messages, a producer can send several messages in a single call; batching significantly improves performance when delivering small messages.
Note, however, the following points:
- Messages in a batch must have the same Topic.
- Messages in a batch must have the same flush-disk policy.
- A batch must not contain delayed messages or transactional messages.
- In addition, the total size of the batch should not exceed 4MB.
Batch sending packs multiple messages of the same topic together and ships them to the broker in one call, reducing the number of network round trips and improving transmission efficiency. That said, more messages per batch is not always better; the deciding factor is the size of each message. If individual messages are large, packing many of them into one batch delays the response time of other sending threads, and the total size of a single batch must not exceed DefaultMQProducer#maxMessageSize, which defaults to 4MB.
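These constraints are easy to trip over, so a simple client-side pre-check before calling send() can fail fast with a clear error. The following is a minimal sketch, not part of the RocketMQ API: the class name BatchPreCheck, the helper checkBatch, and the hard-coded 4MB limit are assumptions for illustration, the size estimate counts only the message bodies, and the flush-disk policy (a Broker/Topic-level setting) is not checked here.

```java
import java.util.Collection;

import org.apache.rocketmq.common.message.Message;

public final class BatchPreCheck {

    // mirrors the default DefaultMQProducer#maxMessageSize of 4MB (assumption for illustration)
    private static final int MAX_BATCH_BYTES = 4 * 1024 * 1024;

    /** Fail fast if the candidate batch violates the constraints listed above. */
    public static void checkBatch(Collection<Message> msgs) {
        String topic = null;
        int totalBodyBytes = 0;
        for (Message m : msgs) {
            if (topic == null) {
                topic = m.getTopic();
            } else if (!topic.equals(m.getTopic())) {
                throw new IllegalArgumentException("all messages in a batch must share the same topic");
            }
            if (m.getDelayTimeLevel() > 0) {
                throw new IllegalArgumentException("delayed messages cannot be part of a batch");
            }
            // rough estimate: bodies only; topic and properties add a little more (see 4.4.8)
            totalBodyBytes += m.getBody().length;
        }
        if (totalBodyBytes > MAX_BATCH_BYTES) {
            throw new IllegalArgumentException("batch exceeds 4MB, split it before sending");
        }
    }
}
```

Calling checkBatch(msgs) right before producer.send(msgs) surfaces these errors on the client instead of as a rejected send; for a more precise per-message size estimate see the ListSplitter in section 4.4.8.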
4.4.2 Batch Size Limit
By default, the total size of one batch must not exceed 4MB. To go beyond that limit, there are two options:
Option 1: split the batch into several sub-batches, each no larger than 4MB, and send them in multiple batch calls.
Option 2: raise the limit on both the Producer and the Broker. On the Producer side, set the producer's maxMessageSize property before sending; on the Broker side, increase the maxMessageSize property in the configuration file the Broker loads.
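A minimal sketch of option 2 follows. setMaxMessageSize is a real DefaultMQProducer setter and maxMessageSize is a real Broker configuration property; the group name, address, and the 8MB value are examples only, and the Broker must accept at least the same size as the client.

```java
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.client.producer.DefaultMQProducer;

public class LargeBatchProducerConfig {
    public static void main(String[] args) throws MQClientException {
        DefaultMQProducer producer = new DefaultMQProducer("batch-producer-group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        // raise the client-side limit to 8MB (example value only)
        producer.setMaxMessageSize(8 * 1024 * 1024);
        producer.start();

        // Broker side: raise the matching property in the Broker's configuration file
        // (e.g. conf/broker.conf) and restart the Broker:
        // maxMessageSize=8388608
    }
}
```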
4.4.3 Message Composition
The Message a producer passes to send() is not serialized and sent over the network as-is; instead, a string is generated from it and that string is sent. The string consists of four parts: the Topic, the message body, the message log (which takes up 20 bytes), and a set of key-value properties describing the message, such as the producer address, the production timestamp, and the target QueueId. The data ultimately written into the message unit on the Broker all comes from these parts.
The problem batch sending has to solve is how to encode these messages so that the server can correctly decode each individual message.
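For intuition, a rough per-message size estimate that follows the four parts listed above (topic, body, a 20-byte log overhead, and the properties) might look like the sketch below. This is only an approximation for sizing batches, not RocketMQ's internal wire encoding; the class and method names are made up for illustration, and the official splitting example in section 4.4.8 uses the same formula.

```java
import java.util.Map;

import org.apache.rocketmq.common.message.Message;

public final class MessageSizeEstimator {

    /** Approximate the bytes one message contributes to a batch. */
    public static int estimate(Message message) {
        int size = message.getTopic().length() + message.getBody().length;
        for (Map.Entry<String, String> entry : message.getProperties().entrySet()) {
            size += entry.getKey().length() + entry.getValue().length();
        }
        return size + 20; // fixed overhead for the log data
    }
}
```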
4.4.4 Batch Consumption: Tuning the Batch Consumption Properties
Just as batch sending raises the producer's throughput, we can raise the consumer's throughput by adjusting its batch consumption properties. The Spring-based listener below does exactly that in its prepareStart callback.
```java
/**
 * @author: claude-彭方亮
 * @package: com.yl.tmp.whitecloud.consumer.core.consumer.DataSyncConsumer
 * @date: 2022/11/9 7:43
 * @description:
 * @version: 1.0
 */
@Slf4j
@Component("dataSyncConsumer")
@RocketMQMessageListener(topic = RocketConstant.SYNCALLTOPIC,
        consumerGroup = RocketConstant.SYNCALLTOPIC + "-consumer-group")
public class DataSyncConsumer implements RocketMQPushConsumerLifecycleListener, RocketMQListener<MessageExt> {

    @Autowired
    private RocketMqConsumerConfig rocketMqConsumerConfig;
    @Autowired
    DataSyncService dataSyncService;
    @Autowired
    RocketMQTemplate template;
    @Autowired
    @Qualifier("logJdbcTemplate")
    JdbcTemplate logJdbcTemplate;
    @Autowired
    @Qualifier("scanJdbcTemplate")
    JdbcTemplate scanJdbcTemplate;
    @Autowired
    @Qualifier("logTransactionTemplate")
    TransactionTemplate logTransactionTemplate;
    @Autowired
    @Qualifier("scanTransactionTemplate")
    TransactionTemplate scanTransactionTemplate;
    @Autowired
    @Qualifier("compareJdbcTemplate")
    JdbcTemplate compareJdbcTemplate;

    public DataSyncConsumer(RocketMqConsumerConfig rocketMqConsumerConfig) {
        this.rocketMqConsumerConfig = rocketMqConsumerConfig;
    }

    @Override
    public void onMessage(MessageExt messageExt) {
        String message = new String(messageExt.getBody(), Charset.forName("utf-8"));
        DataSyncDTO dataSyncDto = this.convertMsg(message);
        String table = dataSyncDto.getTable();
        // choose the JdbcTemplate/TransactionTemplate pair that matches the table
        if (DataSyncConstant.TABLE_SCAN_NAME.equals(table) || DataSyncConstant.TABLE_SCAN_UPLOAD_NAME.equals(table)) {
            dataSyncService.write(dataSyncDto.getList(), dataSyncDto.getDataSourceKey(), dataSyncDto.getTableName(),
                    scanJdbcTemplate, scanTransactionTemplate, dataSyncDto.getLimitStart(), dataSyncDto.getPageSize());
        } else {
            dataSyncService.write(dataSyncDto.getList(), dataSyncDto.getDataSourceKey(), dataSyncDto.getTableName(),
                    logJdbcTemplate, logTransactionTemplate, dataSyncDto.getLimitStart(), dataSyncDto.getPageSize());
        }
    }

    /***
     * Convert the raw message into a DataSyncDTO
     * @param msg
     * @return
     */
    private DataSyncDTO convertMsg(String msg) {
        DataSyncDTO dataSyncDto = JSON.parseObject(msg, DataSyncDTO.class);
        return dataSyncDto;
    }

    /***
     * Set the batch consumption properties
     * @param defaultMQPushConsumer
     */
    @Override
    public void prepareStart(DefaultMQPushConsumer defaultMQPushConsumer) {
        Map<String, ConsumerProperties> consumers = this.rocketMqConsumerConfig.getConsumers();
        ConsumerProperties consumerProperties = consumers.get(defaultMQPushConsumer.getConsumerGroup());
        if (null != consumerProperties) {
            if (Boolean.TRUE.equals(consumerProperties.getDefaultFlag())) {
                consumerProperties = this.rocketMqConsumerConfig.getDefaultGroup();
            }
            defaultMQPushConsumer.setPullBatchSize(consumerProperties.getPullBatchSize());
            defaultMQPushConsumer.setConsumeThreadMin(consumerProperties.getConsumeThread());
            defaultMQPushConsumer.setConsumeThreadMax(consumerProperties.getConsumeThread());
            defaultMQPushConsumer.setConsumeMessageBatchMaxSize(consumerProperties.getConsumeMessageBatchMaxSize());
        }
    }
}
```
The first parameter of consumeMessage() in the Consumer's MessageListenerConcurrently listener is a list of messages, but by default only one message is delivered per call. To have one call consume several messages, raise the Consumer's consumeMessageBatchMaxSize property. Values above 32 bring no benefit on their own, because by default a consumer pulls at most 32 messages per request; to raise that pull limit, set the Consumer's pullBatchSize property as well.
consumeThreadMin and consumeThreadMax control the size of the consumption thread pool, and consumeMessageBatchMaxSize is the maximum number of messages each consumption thread processes per call.
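Outside of Spring, the same tuning is just a few setters on DefaultMQPushConsumer. The setters below are real consumer properties; the concrete values (64, 10, 20), the group name, and the topic "test" are examples only.

```java
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;

public class BatchTunedConsumer {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("group1");
        consumer.setNamesrvAddr("127.0.0.1:9876");
        consumer.subscribe("test", "*");
        consumer.setPullBatchSize(64);              // pull up to 64 messages per request (default 32)
        consumer.setConsumeMessageBatchMaxSize(10); // hand up to 10 messages to one listener call (default 1)
        consumer.setConsumeThreadMin(20);           // consumption thread pool size
        consumer.setConsumeThreadMax(20);
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            System.out.println("received " + msgs.size() + " messages in one call");
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}
```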
4.4.5 Pitfalls of Batch Consumption
Should the Consumer's pullBatchSize and consumeMessageBatchMaxSize simply be set as large as possible? Certainly not.
The larger pullBatchSize is, the longer each pull takes and the more likely a network problem occurs during it. If anything goes wrong mid-pull, the entire batch has to be pulled again.
The larger consumeMessageBatchMaxSize is, the lower the Consumer's consumption concurrency, and the whole batch shares a single consumption result: the messages grouped by consumeMessageBatchMaxSize are processed by one thread, and if even one of them fails, the entire batch has to be consumed again. So frequent consumption failures lead to large-scale retries and a sharp drop in throughput, as the sketch below illustrates.
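A minimal listener sketch of why this happens: the listener can only return one status for the whole list, so a single failing message sends every message in the list back for redelivery, including the ones already handled. The handle method here is a hypothetical placeholder for business logic.

```java
import java.util.List;

import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;

public class BatchAwareListener implements MessageListenerConcurrently {

    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        try {
            for (MessageExt msg : msgs) {
                handle(msg); // any exception aborts the whole batch
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        } catch (Exception e) {
            // the WHOLE list is redelivered later, including messages already processed successfully
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;
        }
    }

    private void handle(MessageExt msg) {
        // placeholder for real business logic (hypothetical)
    }
}
```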
4.4.6 Batch Send Source Code
Source: DefaultMQProducer#send
The send method delegates to the batch method, which packs the message collection into a single MessageBatch and encodes it as one message body.
```java
public SendResult send(Collection<Message> msgs)
        throws MQClientException, RemotingException, MQBrokerException, InterruptedException {
    // pack the message collection into a single message, then send it
    return this.defaultMQProducerImpl.send(batch(msgs));
}
```
Source: DefaultMQProducer#batch
```java
private MessageBatch batch(Collection<Message> msgs) throws MQClientException {
    MessageBatch msgBatch;
    try {
        // wrap the message collection in a MessageBatch
        msgBatch = MessageBatch.generateFromList(msgs);
        // iterate over the messages: validate each one, set its unique ID and its topic
        for (Message message : msgBatch) {
            Validators.checkMessage(message, this);
            MessageClientIDSetter.setUniqID(message);
            message.setTopic(withNamespace(message.getTopic()));
        }
        // encode the batch and use the result as the message body
        msgBatch.setBody(msgBatch.encode());
    } catch (Exception e) {
        throw new MQClientException("Failed to initiate the MessageBatch", e);
    }
    // set the topic of the MessageBatch itself
    msgBatch.setTopic(withNamespace(msgBatch.getTopic()));
    return msgBatch;
}
```
4.4.7 Batch Send Example
The example calls DefaultMQProducer#send.
```java
package com.itheima.mq.rocketmq.batch;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.client.producer.SendStatus;
import org.apache.rocketmq.common.message.Message;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class Producer {

    public static void main(String[] args) throws Exception {
        // 1. create the producer and specify the producer group name
        DefaultMQProducer producer = new DefaultMQProducer("group1");
        // 2. specify the NameServer address
        producer.setNamesrvAddr("127.0.0.1:9876");
        // 3. start the producer
        producer.start();

        List<Message> msgs = new ArrayList<>();

        // 4. create the message objects, specifying Topic, Tag and body
        /**
         * arg 1: message Topic
         * arg 2: message Tag
         * arg 3: message body
         */
        Message msg1 = new Message("test", "batch", ("Hello World" + 1).getBytes());
        Message msg2 = new Message("test", "batch", ("Hello World" + 2).getBytes());
        Message msg3 = new Message("test", "batch", ("Hello World" + 3).getBytes());
        msgs.add(msg1);
        msgs.add(msg2);
        msgs.add(msg3);

        // 5. send the batch
        SendResult result = producer.send(msgs);
        // send status
        SendStatus status = result.getSendStatus();
        System.out.println("send result: " + result);
        // sleep for one second
        TimeUnit.SECONDS.sleep(1);

        // 6. shut down the producer
        producer.shutdown();
    }
}
```
The consumer that consumes these messages:
```java
package com.itheima.mq.rocketmq.batch;

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;

import java.util.List;

public class Consumer {

    public static void main(String[] args) throws Exception {
        // 1. create the consumer and specify the consumer group name
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("group1");
        // 2. specify the NameServer address
        consumer.setNamesrvAddr("127.0.0.1:9876");
        // 3. subscribe to the Topic and Tag
        consumer.subscribe("test", "batch");
        // 4. register the callback that processes the messages
        consumer.registerMessageListener(new MessageListenerConcurrently() {
            // receive the message content
            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
                for (MessageExt msg : msgs) {
                    System.out.println("consumeThread=" + Thread.currentThread().getName() + "," + new String(msg.getBody()));
                }
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });
        // 5. start the consumer
        consumer.start();
        System.out.println("consumer started");
    }
}
```
4.4.8 Splitting the Message List
If the total size of the messages may exceed 4MB, the list should be split before sending:
```java
public class ListSplitter implements Iterator<List<Message>> {
    private final int SIZE_LIMIT = 1024 * 1024 * 4;
    private final List<Message> messages;
    private int currIndex;

    public ListSplitter(List<Message> messages) {
        this.messages = messages;
    }

    @Override
    public boolean hasNext() {
        return currIndex < messages.size();
    }

    @Override
    public List<Message> next() {
        int nextIndex = currIndex;
        int totalSize = 0;
        for (; nextIndex < messages.size(); nextIndex++) {
            Message message = messages.get(nextIndex);
            int tmpSize = message.getTopic().length() + message.getBody().length;
            Map<String, String> properties = message.getProperties();
            for (Map.Entry<String, String> entry : properties.entrySet()) {
                tmpSize += entry.getKey().length() + entry.getValue().length();
            }
            tmpSize = tmpSize + 20; // add 20 bytes for the log overhead
            if (tmpSize > SIZE_LIMIT) {
                // a single message exceeds the limit;
                // let it through on its own, otherwise the splitting would block forever
                if (nextIndex - currIndex == 0) {
                    // if the current sublist is still empty, include this one message and stop;
                    // otherwise just stop
                    nextIndex++;
                }
                break;
            }
            if (tmpSize + totalSize > SIZE_LIMIT) {
                break;
            } else {
                totalSize += tmpSize;
            }
        }
        List<Message> subList = messages.subList(currIndex, nextIndex);
        currIndex = nextIndex;
        return subList;
    }
}

// split the large list into several smaller ones
ListSplitter splitter = new ListSplitter(messages);
while (splitter.hasNext()) {
    try {
        List<Message> listItem = splitter.next();
        producer.send(listItem);
    } catch (Exception e) {
        e.printStackTrace(); // handle the error
    }
}
```
4.4.9 Data-Sync Example: Splitting Batch Messages
```java
/***
 * Send a message
 * @param dataSourceKey
 * @param table
 * @param limitStart
 * @param tableIndex
 * @param tableName
 * @param list
 */
private void sendMessage(String dataSourceKey, String table, Integer limitStart, int tableIndex,
                         String tableName, List<LinkedHashMap<String, Object>> list) {
    // the DTO to be sent
    DataSyncDTO dataSyncDto = new DataSyncDTO();
    dataSyncDto.setDataSourceKey(dataSourceKey);
    dataSyncDto.setTable(table);
    dataSyncDto.setLimitStart(limitStart);
    dataSyncDto.setList(list);
    dataSyncDto.setTableName(tableName);
    dataSyncDto.setPageSize(DataSyncConstant.MAX_PAGE_SIZE);
    // serialize the DTO to JSON
    String json = JSON.toJSONStringWithDateFormat(dataSyncDto, DataSyncConstant.dateFormat, SerializerFeature.WriteMapNullValue);
    // size of the JSON payload in bytes
    byte[] bytes = json.getBytes();
    // convert to MB
    int sizeOfJson = bytes.length / 1024 / 1024;
    // work out the number of batches; by default at least MIN_SEND_PAGE_SIZE = 20 rows per batch
    int batchNum = calculateBatchNum(list.size(), DataSyncConstant.MIN_SEND_PAGE_SIZE);
    // if the payload exceeds the threshold (set to 1MB here rather than 4MB), split and resend recursively
    if (sizeOfJson >= DataSyncConstant.MQ_MESSAGE_SIZE) {
        // send one message per batch
        for (int i = 0; i < batchNum; i++) {
            // slice the list
            List<LinkedHashMap<String, Object>> subList = startPage(list, i + 1, DataSyncConstant.MIN_SEND_PAGE_SIZE);
            // send the slice
            sendMessage(dataSourceKey, table, limitStart, tableIndex, tableName, subList);
        }
        return;
    }
    // below the threshold: send the message directly
    DefaultMQProducer producer = mqTemplate.getProducer();
    Message message = new Message();
    message.setBody(json.getBytes());
    message.setTopic(RocketConstant.SYNC_ALL_TOPIC);
    message.setTags(table);
    Long sendStart = System.currentTimeMillis();
    SendResult send = null;
    try {
        send = producer.send(message, DataSyncConstant.sendMessageMaxTimeOut);
    } catch (Exception e) {
        // sending failed: record the failure so it can be retried/compensated later
        try {
            if (DataSyncConstant.TABLE_SCAN_NAME.equals(table) || DataSyncConstant.TABLE_SCAN_UPLOAD_NAME.equals(table)) {
                addSyncAllRecord(dataSourceKey, tableName, list, e.getMessage(), scanJdbcTemplate, scanTransactionTemplate);
            } else {
                addSyncAllRecord(dataSourceKey, tableName, list, e.getMessage(), logJdbcTemplate, logTransactionTemplate);
            }
        } catch (Exception ex) {
            log.info("数据同步-发送到mq-插入同步记录异常$$$$ :{} $$$!dataSourceKey:{},tableName:{},size:{},json:{}",
                    ex.getMessage(), dataSourceKey, tableName, list.size(),
                    JSON.toJSONStringWithDateFormat(list, DataSyncConstant.dateFormat, SerializerFeature.WriteMapNullValue));
        }
    }
    // the broker responded, but not with SEND_OK: record the failure as well
    if (EmptyUtil.isNotEmpty(send) && !SendStatus.SEND_OK.equals(send.getSendStatus())) {
        try {
            if (DataSyncConstant.TABLE_SCAN_NAME.equals(table) || DataSyncConstant.TABLE_SCAN_UPLOAD_NAME.equals(table)) {
                addSyncAllRecord(dataSourceKey, tableName, list, "发送mq失败", scanJdbcTemplate, scanTransactionTemplate);
            } else {
                addSyncAllRecord(dataSourceKey, tableName, list, "发送mq失败", logJdbcTemplate, logTransactionTemplate);
            }
        } catch (Exception ex) {
            log.info("数据同步-发送到mq-插入同步记录异常$$$$ :{} $$$!dataSourceKey:{},tableName:{},size:{},json:{}",
                    ex.getMessage(), dataSourceKey, tableName, list.size(),
                    JSON.toJSONStringWithDateFormat(list, DataSyncConstant.dateFormat, SerializerFeature.WriteMapNullValue));
        }
    }
}
```
The hard limit for a batch is 4MB, but bigger is not always better: as the batch size approaches 4MB, sending performance drops sharply. The official documentation recommends keeping each batch under 1MB.