Kafka producer TimeoutException: failed to update metadata after 60000 ms

Keywords: Kafka - AWS - Technical issue - Other
bnsupport ID: 1399006e-00e5-f8a4-18a8-13ef4b410f6b
Description:
I am using the following code, with the Bitnami settings that come out of the box for the AWS AMI, to send signals remotely from a different EC2 instance:

from kafka import KafkaProducer
topic_for_consuming_review_signal = 'topic_for_consuming_review_signal'

producer = KafkaProducer(
    bootstrap_servers='172.31.0.209:9092',
    api_version=(0, 9),
    security_protocol='SASL_PLAINTEXT',
    sasl_mechanism='PLAIN',
)
print('producer created')

reviews = ['good', 'great', 'disgusting', 'bad', 'poor']

for msg in reviews:
    # producer.send(topic_for_consuming_review_signal, msg)
    print('started sending signal - ' + msg)
    producer.send('test', bytes(msg,'utf8'))
    print('signal sent successfully - ' + msg)

producer.flush()

Each time I am getting:

raise Errors.KafkaTimeoutError
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.
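Because producer.send() is asynchronous, connection and authentication failures often surface only as this generic metadata timeout. One way to see the underlying cause (and the source of the DEBUG lines quoted later in this thread) is to turn on kafka-python's debug logging before creating the producer. A minimal, stdlib-only sketch:

```python
import logging

# kafka-python emits its diagnostics under the "kafka" logger hierarchy
# (kafka.client, kafka.conn, kafka.producer, ...). Enabling DEBUG before
# constructing the KafkaProducer shows which broker address the client is
# actually trying to reach and why the metadata update fails.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("kafka").setLevel(logging.DEBUG)
```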

A few important points:

  1. Both EC2 instances are on the same private subnet
  2. Kafka's EC2 instance has port 9092 open
  3. I am able to use the CLI to produce and consume signals

Hi @umangj2,

Can you use the CLI to produce and consume signals from remote machines? That way we can verify there isn't any connectivity issue.

I see that you are using bootstrap_servers='172.31.0.209:9092' in your code. Is that the private IP of the machine where you are running the code, or is it the IP of the remote instance? Can you try localhost:9092 to see if the code works properly when you do not use a remote connection?

Thanks

Thanks @jota for the help. I was able to debug this issue further.

Earlier, there were some configuration issues which I was able to resolve…

But now there is one I am unable to resolve:

DEBUG:kafka.client:Give up sending metadata request since no node is available
DEBUG:kafka.producer.sender:Node 1001 not ready; delaying produce of accumulated batch

I am also seeing the following, which I suspect is due to a config mismatch on the Kafka machine:

INFO:kafka.conn:<BrokerConnection node_id=1001 host=localhost:9092 <connecting> [IPv6 ('::1', 9092, 0, 0)]>: connecting to localhost:9092 [('::1', 9092, 0, 0) IPv6]

I tried to change the server properties with this command:

sudo kafka-server-start.sh /opt/bitnami/kafka/config/server.properties --override advertised.listeners=PLAINTEXT://172.31.0.209:9092

but this also gives the error below:

[2020-06-18 07:25:47,712] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-06-18 07:25:48,250] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-06-18 07:25:48,269] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: inter.broker.listener.name must be a listener name defined in advertised.listeners. The valid options based on currently configured listeners are PLAINTEXT
        at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1699)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1674)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1238)
        at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1218)
        at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:34)
        at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:29)
        at kafka.Kafka$.main(Kafka.scala:68)
        at kafka.Kafka.main(Kafka.scala)
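For context on that stack trace: the Bitnami image configures the broker's listeners under the SASL_PLAINTEXT listener name, so overriding only advertised.listeners with a PLAINTEXT:// address leaves inter.broker.listener.name pointing at a name that no longer appears in advertised.listeners, which is exactly what the exception reports. A consistent override would keep the same listener name on both settings, e.g. (a sketch using the broker IP from this thread):

```
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://172.31.0.209:9092
```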

Also, here is the updated code:

producer = KafkaProducer(
    bootstrap_servers=['172.31.0.209:9092'],
    api_version=(0, 10),
    security_protocol='SASL_PLAINTEXT',
    sasl_mechanism='PLAIN',
    sasl_plain_username='user',
    sasl_plain_password='L9WxeTljNIxG',
)

Thanks @jota for the help here.
I checked, and there is no remote connection issue…

Hi @umangj2,

So there was something wrong in the code, no?

Was that the problem? If the code still doesn't work, I suggest you contact the Kafka community to get more information about what's happening.

https://kafka.apache.org/contact

Thanks

Hi @jota,

I was able to resolve this issue. The issue was with the generated server.properties file.

The file contains two settings:
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://:9092

which needed to be replaced with:

listeners=SASL_PLAINTEXT://172.31.0.209:9092
advertised.listeners=SASL_PLAINTEXT://172.31.0.209:9092

The reason was that with the default setting, the broker advertised no hostname, so the metadata handed to clients fell back to localhost, which on a remote connection refers to the machine where the client code runs, not the broker.
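For anyone automating the same fix, here is a small sketch (the file path and IP are the ones from this thread; take a backup before writing anything back) that rewrites the two listener lines while keeping the existing listener name and port:

```python
import re

def set_listener_address(properties_text: str, address: str) -> str:
    """Rewrite the listeners/advertised.listeners lines to use the given
    address, preserving the listener name (e.g. SASL_PLAINTEXT) and port."""
    def repl(match: re.Match) -> str:
        key, name, port = match.group(1), match.group(2), match.group(3)
        return f"{key}={name}://{address}:{port}"
    return re.sub(
        r"^(listeners|advertised\.listeners)=(\w+)://[^:]*:(\d+)$",
        repl,
        properties_text,
        flags=re.MULTILINE,
    )

# Usage against the Bitnami config from this thread:
# with open("/opt/bitnami/kafka/config/server.properties") as f:
#     text = f.read()
# print(set_listener_address(text, "172.31.0.209"))
```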

Hopefully, this can help other developers as well.

Best,
Umang

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.