一、Zookeeper Overview and Installation
For a Zookeeper overview and installation guide, see my earlier article: 分布式開源協調服務——Zookeeper.
Zookeeper can be run in two ways (standalone, or the instance bundled with Kafka); both are covered below. The configuration is largely the same in both cases, with only a few small differences. For Kafka authentication, see my earlier article: 大數據Hadoop之——Kafka鑒權認證(kerberos認證+賬號密碼認證).
二、Zookeeper Kerberos Authentication
1) Kerberos Installation
For Kerberos installation, see my earlier article: Kerberos認證原理與環境部署.
2) Create Principals and Generate keytab Files (Preparation)
# Server side
kadmin.local -q "addprinc -randkey zookeeper/hadoop-node1@HADOOP.COM"
kadmin.local -q "addprinc -randkey zookeeper/hadoop-node2@HADOOP.COM"
kadmin.local -q "addprinc -randkey zookeeper/hadoop-node3@HADOOP.COM"
# Export the keytab files
kadmin.local -q "xst -k /root/zookeeper.keytab zookeeper/hadoop-node1@HADOOP.COM"
# Export to distinct file names for now; rename each back to zookeeper.keytab (the name referenced in jaas.conf) on its node before use
kadmin.local -q "xst -k /root/zookeeper-node2.keytab zookeeper/hadoop-node2@HADOOP.COM"
kadmin.local -q "xst -k /root/zookeeper-node3.keytab zookeeper/hadoop-node3@HADOOP.COM"
# Client side
kadmin.local -q "addprinc -randkey zkcli@HADOOP.COM"
# Export the keytab file
kadmin.local -q "xst -k /root/zkcli.keytab zkcli@HADOOP.COM"
3) Standalone Zookeeper Configuration
1、Configure zoo.cfg
$ cd $ZOOKEEPER_HOME
# Move the keytabs generated above into the zk directory
$ mkdir conf/kerberos
$ mv /root/zookeeper.keytab /root/zookeeper-node2.keytab /root/zookeeper-node3.keytab /root/zkcli.keytab conf/kerberos/
$ vi conf/zoo.cfg
# Add the following to conf/zoo.cfg:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
# Strip the hostname from the principal; otherwise services such as HBase may fail when accessing Zookeeper (a GSS initiate failed error often means this is missing)
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
2、Configure jaas.conf
The server and client sections are kept together in one file.
$ cat > $ZOOKEEPER_HOME/conf/kerberos/jaas.conf <<EOF
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/opt/bigdata/hadoop/server/apache-zookeeper-3.8.0-bin/conf/kerberos/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/hadoop-node1@HADOOP.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/opt/bigdata/hadoop/server/apache-zookeeper-3.8.0-bin/conf/kerberos/zkcli.keytab"
storeKey=true
useTicketCache=false
principal="zkcli@HADOOP.COM";
};
EOF
The JAAS configuration file defines the properties used for authentication, such as the service principal and the location of the keytab file. The properties mean the following:
- useKeyTab: boolean; whether a keytab file should be used (true here).
- keyTab: the location and name of the keytab file for the principal. The path should be wrapped in double quotes.
- storeKey: boolean; allows the key to be stored in the subject's private credentials.
- useTicketCache: boolean; allows tickets to be obtained from the ticket cache.
- debug: boolean; prints debug messages to help with troubleshooting (see the snippet after this list).
- principal: the name of the service principal to use.
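debug is not enabled in the jaas.conf above; if Kerberos login fails, it can be switched on per section. A minimal sketch of the Server section with debugging enabled (everything else identical to the file above):
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/opt/bigdata/hadoop/server/apache-zookeeper-3.8.0-bin/conf/kerberos/zookeeper.keytab"
storeKey=true
useTicketCache=false
debug=true
principal="zookeeper/hadoop-node1@HADOOP.COM";
};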
3、Configure java.env
$ cat > $ZOOKEEPER_HOME/conf/java.env <<EOF
export JVMFLAGS="-Djava.security.auth.login.config=/opt/bigdata/hadoop/server/apache-zookeeper-3.8.0-bin/conf/kerberos/jaas.conf"
EOF
4、Copy the Configuration to the Other Nodes
# Copy the Kerberos authentication files
$ scp -r $ZOOKEEPER_HOME/conf/kerberos hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp -r $ZOOKEEPER_HOME/conf/kerberos hadoop-node3:$ZOOKEEPER_HOME/conf/
# Copy zoo.cfg
$ scp $ZOOKEEPER_HOME/conf/zoo.cfg hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp $ZOOKEEPER_HOME/conf/zoo.cfg hadoop-node3:$ZOOKEEPER_HOME/conf/
# Copy java.env
$ scp $ZOOKEEPER_HOME/conf/java.env hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp $ZOOKEEPER_HOME/conf/java.env hadoop-node3:$ZOOKEEPER_HOME/conf/
[Tip] On each node, remember to rename zookeeper-node2.keytab / zookeeper-node3.keytab back to zookeeper.keytab, and change the hostname in the principal in jaas.conf to that node's own hostname.
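On hadoop-node2, for example, those two adjustments could be done like this (a sketch; the sed call assumes the principal in jaas.conf still names hadoop-node1; do the same on hadoop-node3 with its own keytab and hostname):
$ mv $ZOOKEEPER_HOME/conf/kerberos/zookeeper-node2.keytab $ZOOKEEPER_HOME/conf/kerberos/zookeeper.keytab
$ sed -i 's/hadoop-node1/hadoop-node2/g' $ZOOKEEPER_HOME/conf/kerberos/jaas.conf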
5、Start the Service
$ cd $ZOOKEEPER_HOME
$ ./bin/zkServer.sh start
# Check the status
$ ./bin/zkServer.sh status
6、Verify with the Client
$ cd $ZOOKEEPER_HOME
$ ./bin/zkCli.sh -server hadoop-node1:2181
ls /
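To confirm the SASL identity really took effect (not just that the connection succeeded), you can protect a test znode with a SASL ACL. A sketch using the zkcli principal created earlier (with removeHostFromPrincipal/removeRealmFromPrincipal the authenticated id is simply zkcli):
create /kerberos-test "hello"
setAcl /kerberos-test sasl:zkcli:cdrwa
getAcl /kerberos-test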
4) Kafka's Built-in Zookeeper Configuration
1、Move the Kerberos Files into Place
$ mkdir $KAFKA_HOME/config/zk-kerberos/
$ mv /root/zkcli.keytab /root/zookeeper.keytab /root/zookeeper-node2.keytab /root/zookeeper-node3.keytab $KAFKA_HOME/config/zk-kerberos/
$ ll $KAFKA_HOME/config/zk-kerberos/
# Copy krb5.conf into $KAFKA_HOME/config/zk-kerberos/
$ cp /etc/krb5.conf $KAFKA_HOME/config/zk-kerberos/
2、Create the JAAS Configuration File (Server Side)
Create jaas.conf in the $KAFKA_HOME/config/zk-kerberos/ directory with the following content:
$ cat > $KAFKA_HOME/config/zk-kerberos/jaas.conf<<EOF
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/kerberos/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/hadoop-node1@HADOOP.COM";
};
EOF
3、Modify the Service Startup Configuration
Copy the existing config and modify the copy, which makes it easy to switch back and forth:
$ cp $KAFKA_HOME/config/zookeeper.properties $KAFKA_HOME/config/zookeeper-kerberos.properties
Add or modify the following:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
sessionRequireClientSASLAuth=true
4、Modify the Service Startup Script
$ cp $KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/bin/zookeeper-server-start-kerberos.sh
Add the following on the second-to-last line of bin/zookeeper-server-start-kerberos.sh:
export KAFKA_OPTS="-Djava.security.krb5.conf=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/krb5.conf -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/jaas.conf"
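"Second-to-last line" here means just above the final exec. After the edit, the tail of bin/zookeeper-server-start-kerberos.sh should look roughly like this (the exec line is the stock one from Kafka's script and may differ slightly between versions):
export KAFKA_OPTS="-Djava.security.krb5.conf=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/krb5.conf -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/jaas.conf"
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"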
5、Create the JAAS Configuration File (Client Side)
$ cat > $KAFKA_HOME/config/zk-kerberos/zookeeper-client-jaas.conf<<EOF
// Client configuration
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/zkcli.keytab"
storeKey=true
useTicketCache=false
principal="zkcli@HADOOP.COM";
};
EOF
6、Modify the Client Script
$ cp $KAFKA_HOME/bin/zookeeper-shell.sh $KAFKA_HOME/bin/zookeeper-shell-kerberos.sh
Add the following on the second-to-last line of bin/zookeeper-shell-kerberos.sh:
export KAFKA_OPTS="-Djava.security.krb5.conf=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/krb5.conf -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/zookeeper-client-jaas.conf"
7、Copy the Configuration to the Other Nodes and Modify It
# Kerberos authentication files
$ scp -r $KAFKA_HOME/config/zk-kerberos hadoop-node2:$KAFKA_HOME/config/
$ scp -r $KAFKA_HOME/config/zk-kerberos hadoop-node3:$KAFKA_HOME/config/
# Service startup configuration
$ scp $KAFKA_HOME/config/zookeeper-kerberos.properties hadoop-node2:$KAFKA_HOME/config/
$ scp $KAFKA_HOME/config/zookeeper-kerberos.properties hadoop-node3:$KAFKA_HOME/config/
# Service startup script
$ scp $KAFKA_HOME/bin/zookeeper-server-start-kerberos.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/zookeeper-server-start-kerberos.sh hadoop-node3:$KAFKA_HOME/bin/
[Tip] Two changes are needed on each node:
- Rename that node's keytab back to zookeeper.keytab (the name referenced in jaas.conf).
- Change the hostname in the principal in jaas.conf to the node's own hostname.
Run on hadoop-node2:
$ mv $KAFKA_HOME/config/zk-kerberos/zookeeper-node2.keytab $KAFKA_HOME/config/zk-kerberos/zookeeper.keytab
$ vi $KAFKA_HOME/config/zk-kerberos/jaas.conf
Run on hadoop-node3:
$ mv $KAFKA_HOME/config/zk-kerberos/zookeeper-node3.keytab $KAFKA_HOME/config/zk-kerberos/zookeeper.keytab
$ vi $KAFKA_HOME/config/zk-kerberos/jaas.conf
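If you prefer not to edit the file by hand, the hostname in the principal can also be swapped with sed (a sketch; it assumes jaas.conf still references hadoop-node1):
# on hadoop-node2
$ sed -i 's/hadoop-node1/hadoop-node2/g' $KAFKA_HOME/config/zk-kerberos/jaas.conf
# on hadoop-node3
$ sed -i 's/hadoop-node1/hadoop-node3/g' $KAFKA_HOME/config/zk-kerberos/jaas.conf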
8、Start the Service
$ cd $KAFKA_HOME
$ ./bin/zookeeper-server-start-kerberos.sh -daemon ./config/zookeeper-kerberos.properties
9、Test and Verify
$ cd $KAFKA_HOME
$ ./bin/zookeeper-shell-kerberos.sh hadoop-node1:2181
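Once the shell connects, the same quick checks as with zkCli apply, for example:
ls /
create /kafka-zk-test "ok"
get /kafka-zk-test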
三、Zookeeper Username/Password Authentication
1) Standalone Zookeeper Configuration
1、Create a Directory for the Auth Config
$ mkdir $ZOOKEEPER_HOME/conf/userpwd
2、Configure zoo.cfg
$ vi $ZOOKEEPER_HOME/conf/zoo.cfg
# Add the following:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
sessionRequireClientSASLAuth=true
3、Configure jaas
The server and client sections are kept in one file for convenience.
$ cat >$ZOOKEEPER_HOME/conf/userpwd/jaas.conf <<EOF
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="123456"
user_kafka="123456";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="123456";
};
EOF
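In the Server section, each user_<name>="<password>" entry defines one account that clients may log in as; the Client section then supplies one of those username/password pairs. Adding another account is just one more line, for example (a sketch; user_reader is a hypothetical account):
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="123456"
user_kafka="123456"
user_reader="123456";
};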
4、Configure the java.env Environment File
$ cat >$ZOOKEEPER_HOME/conf/java.env<<EOF
JVMFLAGS="-Djava.security.auth.login.config=/opt/bigdata/hadoop/server/apache-zookeeper-3.8.0-bin/conf/userpwd/jaas.conf"
EOF
5、Copy the Configuration to the Other Nodes
# jaas files
$ scp -r $ZOOKEEPER_HOME/conf/userpwd hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp -r $ZOOKEEPER_HOME/conf/userpwd hadoop-node3:$ZOOKEEPER_HOME/conf/
# zoo.cfg
$ scp $ZOOKEEPER_HOME/conf/zoo.cfg hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp $ZOOKEEPER_HOME/conf/zoo.cfg hadoop-node3:$ZOOKEEPER_HOME/conf/
# java.env
$ scp $ZOOKEEPER_HOME/conf/java.env hadoop-node2:$ZOOKEEPER_HOME/conf/
$ scp $ZOOKEEPER_HOME/conf/java.env hadoop-node3:$ZOOKEEPER_HOME/conf/
6、Start the Zookeeper Service
$ cd $ZOOKEEPER_HOME
$ ./bin/zkServer.sh start
$ ./bin/zkServer.sh status
7、Verify with the Zookeeper Client
[Tip] Note that I changed the client port to 12181 here.
$ cd $ZOOKEEPER_HOME
$ ./bin/zkCli.sh -server hadoop-node1:12181
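As in the Kerberos setup, you can verify that the SASL identity took effect by putting a SASL ACL on a test znode; with DigestLoginModule the authenticated id is the Client username (kafka here). A sketch:
create /userpwd-test "hello"
setAcl /userpwd-test sasl:kafka:cdrwa
getAcl /userpwd-test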
2) Kafka's Built-in Zookeeper Configuration
1、Create a Directory for the Auth Config
$ mkdir $KAFKA_HOME/config/zk-userpwd
2、Configure zookeeper.properties
$ cp $KAFKA_HOME/config/zookeeper.properties $KAFKA_HOME/config/zookeeper-userpwd.properties
# Add the following:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
sessionRequireClientSASLAuth=true
3、Configure jaas
[Tip] Here the server and client configs must live in separate files, because there is no java.env-style file; each startup script loads its own JAAS file.
Server configuration:
$ cat >$KAFKA_HOME/config/zk-userpwd/zk-server-jaas.conf <<EOF
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="123456"
user_kafka="123456";
};
EOF
Client configuration:
$ cat >$KAFKA_HOME/config/zk-userpwd/zk-client-jaas.conf <<EOF
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="123456";
};
EOF
4、Set the Server-Side Environment Variable (zookeeper-server-start.sh)
$ cp $KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/bin/zookeeper-server-start-userpwd.sh
$ vi $KAFKA_HOME/bin/zookeeper-server-start-userpwd.sh
# Add the following:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-userpwd/zk-server-jaas.conf"
5、Set the Client-Side Environment Variable (zookeeper-shell.sh)
$ cp $KAFKA_HOME/bin/zookeeper-shell.sh $KAFKA_HOME/bin/zookeeper-shell-userpwd.sh
$ vi $KAFKA_HOME/bin/zookeeper-shell-userpwd.sh
# Add the following:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-userpwd/zk-client-jaas.conf"
6、Copy the Configuration to the Other Nodes
# jaas configuration files
$ scp -r $KAFKA_HOME/config/zk-userpwd hadoop-node2:$KAFKA_HOME/config/
$ scp -r $KAFKA_HOME/config/zk-userpwd hadoop-node3:$KAFKA_HOME/config/
# zookeeper-userpwd.properties
$ scp $KAFKA_HOME/config/zookeeper-userpwd.properties hadoop-node2:$KAFKA_HOME/config/
$ scp $KAFKA_HOME/config/zookeeper-userpwd.properties hadoop-node3:$KAFKA_HOME/config/
# zookeeper-server-start-userpwd.sh
$ scp $KAFKA_HOME/bin/zookeeper-server-start-userpwd.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/zookeeper-server-start-userpwd.sh hadoop-node3:$KAFKA_HOME/bin/
# zookeeper-shell-userpwd.sh
$ scp $KAFKA_HOME/bin/zookeeper-shell-userpwd.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/zookeeper-shell-userpwd.sh hadoop-node3:$KAFKA_HOME/bin/
7、Start the Zookeeper Service
$ cd $KAFKA_HOME
$ ./bin/zookeeper-server-start-userpwd.sh ./config/zookeeper-userpwd.properties
# Run in the background
$ ./bin/zookeeper-server-start-userpwd.sh -daemon ./config/zookeeper-userpwd.properties
8、Verify with the Zookeeper Client
$ cd $KAFKA_HOME
$ ./bin/zookeeper-shell-userpwd.sh hadoop-node1:12181
ls /
四、Zookeeper + Kafka Authentication
For Kafka Kerberos authentication, see my earlier article: 大數據Hadoop之——Kafka鑒權認證(kerberos認證+賬號密碼認證).
1) Enable Kerberos Authentication on Both Kafka and Zookeeper
1、Enable Zookeeper Kerberos Authentication
The configuration was covered in detail above, so only the startup commands are given here.
$ cd $KAFKA_HOME
$ ./bin/zookeeper-server-start-kerberos.sh ./config/zookeeper-kerberos.properties
# Run in the background
$ ./bin/zookeeper-server-start-kerberos.sh -daemon ./config/zookeeper-kerberos.properties
2、Configure server.properties
$ cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server-zkcli-kerberos.properties
$ vi $KAFKA_HOME/config/server-zkcli-kerberos.properties
# Modify the configuration as follows:
listeners=SASL_PLAINTEXT://0.0.0.0:19092
# advertised.listeners is what gets published to Zookeeper and exposed to clients; if it is not set, listeners is used. On the other nodes, change it to that node's own hostname or IP; 0.0.0.0 is not supported here.
advertised.listeners=SASL_PLAINTEXT://hadoop-node1:19092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka-server
3、Configure jaas
$ cp $KAFKA_HOME/config/kerberos/kafka-server-jaas.conf $KAFKA_HOME/config/kerberos/kafka-server-zkcli-jaas.conf
$ vi $KAFKA_HOME/config/kerberos/kafka-server-zkcli-jaas.conf
# Add the following configuration:
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/zk-kerberos/zkcli.keytab"
principal="zkcli@HADOOP.COM";
};
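For context, the kafka-server-jaas.conf copied above comes from the earlier Kafka Kerberos article and already contains the broker's own KafkaServer section; it looks roughly like the sketch below (the keytab path and principal are assumptions inferred from sasl.kerberos.service.name=kafka-server, adjust to your environment):
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/kerberos/kafka-server.keytab"
principal="kafka-server/hadoop-node1@HADOOP.COM";
};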
4、Modify the Kafka Environment Variable
$ cp $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/bin/kafka-server-start-zkcli-kerberos.sh
# Add or modify the following:
export KAFKA_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/kerberos/kafka-server-zkcli-jaas.conf"
5、Copy the Configuration to the Other Nodes
# server-zkcli-kerberos.properties
$ scp $KAFKA_HOME/config/server-zkcli-kerberos.properties hadoop-node2:$KAFKA_HOME/config/
$ scp $KAFKA_HOME/config/server-zkcli-kerberos.properties hadoop-node3:$KAFKA_HOME/config/
# kafka-server-zkcli-jaas.conf
$ scp $KAFKA_HOME/config/kerberos/kafka-server-zkcli-jaas.conf hadoop-node2:$KAFKA_HOME/config/kerberos/
$ scp $KAFKA_HOME/config/kerberos/kafka-server-zkcli-jaas.conf hadoop-node3:$KAFKA_HOME/config/kerberos/
# kafka-server-start-zkcli-kerberos.sh
$ scp $KAFKA_HOME/bin/kafka-server-start-zkcli-kerberos.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/kafka-server-start-zkcli-kerberos.sh hadoop-node3:$KAFKA_HOME/bin/
6、Modify the Configuration on the Other Nodes
# Change broker.id and the hostname in advertised.listeners
$ vi $KAFKA_HOME/config/server-zkcli-kerberos.properties
# Change the hostname in the principal
$ vi $KAFKA_HOME/config/kerberos/kafka-server-zkcli-jaas.conf
7、Start the Kafka Service
$ cd $KAFKA_HOME
$ ./bin/kafka-server-start-zkcli-kerberos.sh ./config/server-zkcli-kerberos.properties
# Run in the background
$ ./bin/kafka-server-start-zkcli-kerberos.sh -daemon ./config/server-zkcli-kerberos.properties
8、Verify with the Kafka Client
$ cd $KAFKA_HOME
# List topics
$ ./bin/kafka-topics-sasl.sh --list --bootstrap-server hadoop-node1:19092 --command-config config/kerberos/client.properties
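config/kerberos/client.properties is the client-side SASL config from the earlier Kafka Kerberos article; for reference it contains roughly the following (a sketch matching the broker settings above, verify against your own file):
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka-server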
2) Enable Username/Password Authentication on Both Kafka and Zookeeper
1、Enable Zookeeper Username/Password Authentication
$ cd $KAFKA_HOME
$ ./bin/zookeeper-server-start-userpwd.sh ./config/zookeeper-userpwd.properties
# Run in the background
$ ./bin/zookeeper-server-start-userpwd.sh -daemon ./config/zookeeper-userpwd.properties
2、Configure server.properties
$ cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server-zkcli-userpwd.properties
$ vi $KAFKA_HOME/config/server-zkcli-userpwd.properties
# Configure as follows:
listeners=SASL_PLAINTEXT://0.0.0.0:19092
# advertised.listeners is what gets published to Zookeeper and exposed to clients; if it is not set, listeners is used. On the other nodes, change it to that node's own hostname or IP; 0.0.0.0 is not supported here.
advertised.listeners=SASL_PLAINTEXT://hadoop-node1:19092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
3、Configure jaas
$ cp $KAFKA_HOME/config/userpwd/kafka_server_jaas.conf $KAFKA_HOME/config/userpwd/kafka-server-zkcli-jaas.conf
$ vi $KAFKA_HOME/config/userpwd/kafka-server-zkcli-jaas.conf
# Add the following:
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="123456";
};
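Note that the copied kafka_server_jaas.conf (from the earlier article) already carries the broker's KafkaServer section for SASL/PLAIN, so after the edit the file holds both that section and the new Client section. The KafkaServer part looks roughly like this sketch (usernames and passwords are assumptions):
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="123456"
user_admin="123456"
user_kafka="123456";
};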
4、Modify the Kafka Environment Variable
$ cp $KAFKA_HOME/bin/kafka-server-start-pwd.sh $KAFKA_HOME/bin/kafka-server-start-zkcli-userpwd.sh
# Configure as follows:
export KAFKA_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/userpwd/kafka-server-zkcli-jaas.conf"
5、Copy the Configuration to the Other Nodes
# server-zkcli-userpwd.properties
$ scp $KAFKA_HOME/config/server-zkcli-userpwd.properties hadoop-node2:$KAFKA_HOME/config/
$ scp $KAFKA_HOME/config/server-zkcli-userpwd.properties hadoop-node3:$KAFKA_HOME/config/
# kafka-server-zkcli-jaas.conf
$ scp $KAFKA_HOME/config/userpwd/kafka-server-zkcli-jaas.conf hadoop-node2:$KAFKA_HOME/config/userpwd
$ scp $KAFKA_HOME/config/userpwd/kafka-server-zkcli-jaas.conf hadoop-node3:$KAFKA_HOME/config/userpwd
# kafka-server-start-zkcli-userpwd.sh
$ scp $KAFKA_HOME/bin/kafka-server-start-zkcli-userpwd.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/kafka-server-start-zkcli-userpwd.sh hadoop-node3:$KAFKA_HOME/bin/
6、Modify the Configuration on the Other Nodes
# Change broker.id and the hostname in advertised.listeners
$ vi $KAFKA_HOME/config/server-zkcli-userpwd.properties
7、Start the Kafka Service
$ cd $KAFKA_HOME
$ ./bin/kafka-server-start-zkcli-userpwd.sh ./config/server-zkcli-userpwd.properties
# Run in the background
$ ./bin/kafka-server-start-zkcli-userpwd.sh -daemon ./config/server-zkcli-userpwd.properties
8、Verify with the Kafka Client
$ cd $KAFKA_HOME
# List topics
$ ./bin/kafka-topics-pwd.sh --list --bootstrap-server hadoop-node1:19092 --command-config config/userpwd/client.properties
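config/userpwd/client.properties is the SASL/PLAIN client config from the earlier article; it contains roughly the following (a sketch; the credentials are assumptions and must match a user_<name> entry on the broker):
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="123456";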
3) Zookeeper Username/Password Authentication + Kafka Kerberos Authentication
1、Enable Zookeeper Username/Password Authentication
$ cd $KAFKA_HOME
$ ./bin/zookeeper-server-start-userpwd.sh ./config/zookeeper-userpwd.properties
# Run in the background
$ ./bin/zookeeper-server-start-userpwd.sh -daemon ./config/zookeeper-userpwd.properties
2、Configure server.properties
$ cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server-kerberos-zkcli-userpwd.properties
$ vi $KAFKA_HOME/config/server-kerberos-zkcli-userpwd.properties
# Modify the configuration as follows:
listeners=SASL_PLAINTEXT://0.0.0.0:19092
# advertised.listeners is what gets published to Zookeeper and exposed to clients; if it is not set, listeners is used. On the other nodes, change it to that node's own hostname or IP; 0.0.0.0 is not supported here.
advertised.listeners=SASL_PLAINTEXT://hadoop-node1:19092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka-server
3、Configure jaas
$ cp $KAFKA_HOME/config/kerberos/kafka-server-jaas.conf $KAFKA_HOME/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf
$ vi $KAFKA_HOME/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf
# Add the following configuration:
// Zookeeper client authentication
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="123456";
};
4、Modify the Kafka Environment Variable
$ cp $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/bin/kafka-server-kerberos-zkcli-userpwd-start.sh
$ vi $KAFKA_HOME/bin/kafka-server-kerberos-zkcli-userpwd-start.sh
# Add or modify the following:
export KAFKA_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/bigdata/hadoop/server/kafka_2.13-3.1.1/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf"
5、Copy the Configuration to the Other Nodes
# server-kerberos-zkcli-userpwd.properties
$ scp $KAFKA_HOME/config/server-kerberos-zkcli-userpwd.properties hadoop-node2:$KAFKA_HOME/config/
$ scp $KAFKA_HOME/config/server-kerberos-zkcli-userpwd.properties hadoop-node3:$KAFKA_HOME/config/
# kafka-server-kerberos-zkcli-userpwd-jaas.conf
$ scp $KAFKA_HOME/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf hadoop-node2:$KAFKA_HOME/config/kerberos/
$ scp $KAFKA_HOME/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf hadoop-node3:$KAFKA_HOME/config/kerberos/
# kafka-server-kerberos-zkcli-userpwd-start.sh
$ scp $KAFKA_HOME/bin/kafka-server-kerberos-zkcli-userpwd-start.sh hadoop-node2:$KAFKA_HOME/bin/
$ scp $KAFKA_HOME/bin/kafka-server-kerberos-zkcli-userpwd-start.sh hadoop-node3:$KAFKA_HOME/bin/
6、Modify the Configuration on the Other Nodes
# Change broker.id and the hostname in advertised.listeners
$ vi $KAFKA_HOME/config/server-kerberos-zkcli-userpwd.properties
# Change the hostname in the KafkaServer principal
$ vi $KAFKA_HOME/config/kerberos/kafka-server-kerberos-zkcli-userpwd-jaas.conf
7、Start the Kafka Service
$ cd $KAFKA_HOME
$ ./bin/kafka-server-kerberos-zkcli-userpwd-start.sh ./config/server-kerberos-zkcli-userpwd.properties
# Run in the background
$ ./bin/kafka-server-kerberos-zkcli-userpwd-start.sh -daemon ./config/server-kerberos-zkcli-userpwd.properties
8、Verify with the Kafka Client
$ cd $KAFKA_HOME
# List topics
$ ./bin/kafka-topics-sasl.sh --list --bootstrap-server hadoop-node1:19092 --command-config config/kerberos/client.properties
That wraps up Zookeeper and Kafka authentication. If you have questions, feel free to leave me a comment. More big-data articles are on the way, so please stay tuned.