This article covers the Spring Kafka request-reply partitioned pattern and a problem where offsets cannot be committed after a message is processed. It should be a useful reference for anyone solving the same issue; follow along below.

Problem Description

I am implementing a synchronous request-reply pattern with Spring Kafka.
Stack:

org.springframework.cloud:spring-cloud-dependencies:2020.0.2

org.springframework.kafka:spring-kafka

io.confluent:kafka-avro-serializer:6.2.0

Java 11


I have a request topic with 5 partitions and a reply topic with 8 partitions.

My reply-consumer configuration is shown below; the producer configuration is omitted for brevity:

    @Bean
    public ReplyingKafkaTemplate<String, PhmsPatientSearchRequest, PhmsPatientSearchResponse> replyKafkaTemplate(ProducerFactory<String, PhmsPatientSearchRequest> pf,
                                                                                                                 KafkaMessageListenerContainer<String, PhmsPatientSearchResponse> container) {
        final ReplyingKafkaTemplate<String, PhmsPatientSearchRequest, PhmsPatientSearchResponse> repl = new ReplyingKafkaTemplate<>(pf, container);
        repl.setMessageConverter(new StringJsonMessageConverter());
        return repl;
    }

    @Bean
    public KafkaMessageListenerContainer replyContainer(ConsumerFactory<String, PhmsPatientSearchResponse> replyConsumerFactory) {
        TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset(replyTopic, replyPartition);
        ContainerProperties containerProperties = new ContainerProperties(topicPartitionOffset);
        final KafkaMessageListenerContainer<String, PhmsPatientSearchResponse> msgListenerContainer = new KafkaMessageListenerContainer<>(replyConsumerFactory, containerProperties);
        return msgListenerContainer;
    }

    @Bean
    public ConsumerFactory<String, PhmsPatientSearchResponse> replyConsumerFactory() {
        final DefaultKafkaConsumerFactory<String, PhmsPatientSearchResponse> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerConfigs());
        return consumerFactory;
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ResponseConsumer");
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 40000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 30000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(JsonDeserializer.TRUSTED_PACKAGES, replyDeserializerTrustedPkg);
        props.put(SCHEMA_REGISTRY_URL, schemRegistryUrl);
        props.put(SPECIFIC_AVRO_READER, true);
        return props;
    }
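The statically imported SCHEMA_REGISTRY_URL and SPECIFIC_AVRO_READER constants are not shown in the question. Assuming they resolve to the usual Confluent deserializer configuration keys, the explicit equivalent of those two lines would be:

```java
// Hypothetical resolution of the statically imported constants, assuming they
// come from the Confluent serializer configs (io.confluent.kafka.serializers):
props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemRegistryUrl); // "schema.registry.url"
props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);             // "specific.avro.reader"
```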

My request-reply code is:


        ProducerRecord<String, PhmsPatientSearchRequest> patientSearchRequestRecord = new ProducerRecord<>(requestTopic, phmsPatientSearchRequest);
        // set reply topic in header
       
        patientSearchRequestRecord.headers().add(new RecordHeader(KafkaHeaders.MESSAGE_KEY, messageKey.getBytes()));
        //patientSearchRequestRecord.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, replyTopic.getBytes()));
        //patientSearchRequestRecord.headers().add(new RecordHeader(KafkaHeaders.REPLY_PARTITION, intToBytesBigEndian(replyPartition)));
       
        // post in kafka topic
        RequestReplyFuture<String, PhmsPatientSearchRequest, PhmsPatientSearchResponse> sendAndReceive = replyingKafkaTemplate.sendAndReceive(patientSearchRequestRecord);

        // get consumer record

        ConsumerRecord<String, PhmsPatientSearchResponse> consumerRecord = sendAndReceive.get();
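The commented-out REPLY_PARTITION header line refers to an intToBytesBigEndian helper that is not shown in the question. A minimal sketch of such a helper, assuming it only needs to encode the partition number as the 4-byte big-endian integer the header expects:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ReplyPartitionBytes {

    // The KafkaHeaders.REPLY_PARTITION header value is a 4-byte big-endian
    // integer; ByteBuffer writes big-endian by default, so putInt() suffices.
    static byte[] intToBytesBigEndian(int value) {
        return ByteBuffer.allocate(Integer.BYTES).putInt(value).array();
    }

    public static void main(String[] args) {
        // Partition 3 encodes to [0, 0, 0, 3].
        System.out.println(Arrays.toString(intToBytesBigEndian(3)));
    }
}
```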

I receive the reply message on the correct partition, but the offset is never committed.
The stack trace below appears every time my reply consumer reads a message. I don't believe this is caused by a slow poll loop.


org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.springframework.kafka.listener.SeekToCurrentBatchErrorHandler.handle(SeekToCurrentBatchErrorHandler.java:79) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.RecoveringBatchErrorHandler.handle(RecoveringBatchErrorHandler.java:124) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1606) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1210) ~[spring-kafka-2.7.4.jar:2.7.4]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
    at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
Caused by: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:1256) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:1163) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1173) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1148) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:1005) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1495) ~[kafka-clients-2.7.1.jar:na]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doCommitSync(KafkaMessageListenerContainer.java:2656) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.commitSync(KafkaMessageListenerContainer.java:2651) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.commitIfNecessary(KafkaMessageListenerContainer.java:2637) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.processCommits(KafkaMessageListenerContainer.java:2451) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1235) ~[spring-kafka-2.7.4.jar:2.7.4]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161) ~[spring-kafka-2.7.4.jar:2.7.4]
    ... 3 common frames omitted


If I don't use a TopicPartitionOffset, my consumer listens on all partitions of the reply topic and there is no problem.
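For comparison, the variant without a TopicPartitionOffset simply subscribes to the reply topic by name, letting the consumer group assign all of its partitions. A sketch, reusing the bean names from the question:

```java
@Bean
public KafkaMessageListenerContainer<String, PhmsPatientSearchResponse> replyContainer(
        ConsumerFactory<String, PhmsPatientSearchResponse> replyConsumerFactory) {
    // Subscribing by topic name (instead of a manual TopicPartitionOffset
    // assignment) lets the consumer group manage partition assignment.
    ContainerProperties containerProperties = new ContainerProperties(replyTopic);
    return new KafkaMessageListenerContainer<>(replyConsumerFactory, containerProperties);
}
```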

Any help with this issue would be appreciated.

Recommended Answer

I just copied your code (but with String payloads instead of Avro) and it works as expected…

@SpringBootApplication
public class So68461640Application {

    public static void main(String[] args) {
        SpringApplication.run(So68461640Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so68461640").partitions(5).replicas(1).build();
    }

    @Bean
    public NewTopic reply() {
        return TopicBuilder.name("so68461640.replies").partitions(8).replicas(1).build();
    }

    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyKafkaTemplate(ProducerFactory<String, String> pf,
            KafkaMessageListenerContainer<String, String> container) {
        final ReplyingKafkaTemplate<String, String, String> repl = new ReplyingKafkaTemplate<>(
                pf, container);
//      repl.setMessageConverter(new StringJsonMessageConverter());
        return repl;
    }

    @Bean
    public KafkaMessageListenerContainer replyContainer(
            ConsumerFactory<String, String> replyConsumerFactory) {

        TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset("so68461640.replies", 3);
        ContainerProperties containerProperties = new ContainerProperties(topicPartitionOffset);
        final KafkaMessageListenerContainer<String, String> msgListenerContainer = new KafkaMessageListenerContainer<>(
                replyConsumerFactory, containerProperties);
        return msgListenerContainer;
    }

    @Bean
    public ConsumerFactory<String, String> replyConsumerFactory() {
        final DefaultKafkaConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(
                consumerConfigs());
        return consumerFactory;
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ResponseConsumer");
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 40000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 30000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,  "earliest");
        return props;
    }


    @KafkaListener(id = "so68461640", topics = "so68461640")
    @SendTo
    public String listen(String in) {
        System.out.println(in);
        return in.toUpperCase();
    }

    @Bean
    KafkaTemplate<String, String> replyTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }

    @Bean
    public ApplicationRunner runner(ReplyingKafkaTemplate<String, String, String> replyKafkaTemplate,
            KafkaTemplate<String, String> replyTemplate,
            ConcurrentKafkaListenerContainerFactory<?, ?> factory) {

        factory.setReplyTemplate(replyTemplate);

        return args -> {
            RequestReplyFuture<String, String, String> future =
                    replyKafkaTemplate.sendAndReceive(new ProducerRecord("so68461640", 0, null, "test"));
            future.getSendFuture().get(10, TimeUnit.SECONDS);
            ConsumerRecord<String, String> reply = future.get(10, TimeUnit.SECONDS);
            System.out.println(reply.value());
        };
    }

}
test
TEST
% kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group ResponseConsumer

Consumer group 'ResponseConsumer' has no active members.

GROUP            TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID     HOST            CLIENT-ID
ResponseConsumer so68461640.replies 3          1               1               0               -               -               -

That concludes this article on the Spring Kafka request-reply partitioned pattern and offsets that cannot be committed after message processing. Hopefully the recommended answer above helps.
