I've seen many places that show enabling Kafka client authentication using the same example code as here:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html#deploying_ssl_for_kafka__d18295e284
Namely:
ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
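For reference, here is roughly how I am feeding those properties to a client (the broker address, serializers, and topic are placeholders of mine; the SSL settings are the ones above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslClientExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder broker
        props.put("security.protocol", "SSL");
        // Trust store used to verify the broker's certificate
        props.put("ssl.truststore.location", "/var/private/ssl/kafka.client.truststore.jks");
        props.put("ssl.truststore.password", "test1234");
        // Key store used for client authentication -- note there is no alias setting anywhere
        props.put("ssl.keystore.location", "/var/private/ssl/kafka.client.keystore.jks");
        props.put("ssl.keystore.password", "test1234");
        props.put("ssl.key.password", "test1234");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value")); // placeholder topic
        }
    }
}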
My question is: how does the client specify which particular key within the keystore to use? Everywhere else I see JKS keystores discussed, keys are selected by an alias, but none of these examples mention one.
There is no client setting that names a key alias. If you don't specify ssl.keymanager.algorithm (see SslConfigs:96), Kafka uses the JVM default (see SslEngineBuilder:138), which is probably going to be SunX509 (the only standard name is PKIX, but there's no indication of what that does differently; see Standard Algorithm Names § KeyManagerFactory algorithms). Despite the description of the standard algorithm, RFC 3280 does not specify a key-selection process per se. The actual implementation simply selects some key of one of the desired types whose corresponding certificate's certification path contains one of the desired issuers (see the call chain starting at SunX509KeyManagerImpl.chooseClientAlias).
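To make that concrete, here is a small sketch using only the standard JSSE API (nothing Kafka-specific); the keystore path and password are just the placeholders from the question, and the factory built at the end is essentially the one Kafka ends up with when ssl.keymanager.algorithm is left unset:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;

public class DefaultKeyManagerDemo {
    public static void main(String[] args) throws Exception {
        // On Oracle/OpenJDK JVMs this typically prints "SunX509"
        System.out.println(KeyManagerFactory.getDefaultAlgorithm());

        // Load the client keystore from the configured location/password
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/var/private/ssl/kafka.client.keystore.jks")) {
            ks.load(in, "test1234".toCharArray());
        }

        // With no ssl.keymanager.algorithm configured, a factory like this is what
        // decides which key/certificate the client presents
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, "test1234".toCharArray());
    }
}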
So the client's choice of key alias is going to be dictated by the certificate issuers that the server says it trusts and the types of keys that the server says it accepts (this is almost always RSA today, but may be different in the future or in specific scenarios). If you have exactly one RSA key issued by a CA that the server trusts, that's the key it will pick. If you have none, the connection will fail, and if you have two or more, you don't know which one will be picked. In particular, having an expired and an unexpired certificate that both match the criteria is a recipe for trouble.
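You can watch that selection happen outside of Kafka by asking the key manager directly. In the sketch below, the key type and issuer are made-up stand-ins for what the server would advertise in its CertificateRequest during the handshake:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.Principal;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.X509KeyManager;
import javax.security.auth.x500.X500Principal;

public class ChooseClientAliasDemo {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("/var/private/ssl/kafka.client.keystore.jks")) {
            ks.load(in, "test1234".toCharArray());
        }

        KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
        kmf.init(ks, "test1234".toCharArray());
        X509KeyManager km = (X509KeyManager) kmf.getKeyManagers()[0];

        // Hypothetical values: in a real handshake these come from the server
        String[] keyTypes = { "RSA" };
        Principal[] issuers = { new X500Principal("CN=Example Root CA, O=Example Corp") };

        // Prints whichever alias the key manager picks, or null if nothing matches
        String alias = km.chooseClientAlias(keyTypes, issuers, null);
        System.out.println("Selected alias: " + alias);
    }
}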
I found some interesting details on KeyManagers and KeyStores in a Terse Systems blog post, but some of the customization it describes won't be possible without patching Kafka itself. If you need to control key selection with more precision, you'll probably have to implement your own KeyManager or use a third-party one that meets your needs.
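As a sketch of what such a custom KeyManager could look like, here is a delegating X509ExtendedKeyManager that always offers one fixed alias to the server. The alias name is hypothetical, this is a generic JSSE pattern rather than anything Kafka ships, and, as noted above, getting Kafka to actually load it would still mean changing Kafka's SSL setup code:

import java.net.Socket;
import java.security.Principal;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedKeyManager;

// Wraps an existing key manager and pins client-side key selection to a single alias
public class FixedAliasKeyManager extends X509ExtendedKeyManager {
    private final X509ExtendedKeyManager delegate;
    private final String alias; // e.g. "my-client-cert" (hypothetical)

    public FixedAliasKeyManager(X509ExtendedKeyManager delegate, String alias) {
        this.delegate = delegate;
        this.alias = alias;
    }

    @Override
    public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
        return alias; // ignore the server's issuer/key-type hints and always use our alias
    }

    @Override
    public String chooseEngineClientAlias(String[] keyType, Principal[] issuers, SSLEngine engine) {
        return alias;
    }

    // Everything else is passed through unchanged
    @Override
    public String[] getClientAliases(String keyType, Principal[] issuers) {
        return delegate.getClientAliases(keyType, issuers);
    }

    @Override
    public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) {
        return delegate.chooseServerAlias(keyType, issuers, socket);
    }

    @Override
    public String chooseEngineServerAlias(String keyType, Principal[] issuers, SSLEngine engine) {
        return delegate.chooseEngineServerAlias(keyType, issuers, engine);
    }

    @Override
    public String[] getServerAliases(String keyType, Principal[] issuers) {
        return delegate.getServerAliases(keyType, issuers);
    }

    @Override
    public X509Certificate[] getCertificateChain(String alias) {
        return delegate.getCertificateChain(alias);
    }

    @Override
    public PrivateKey getPrivateKey(String alias) {
        return delegate.getPrivateKey(alias);
    }
}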