Building a k3s Cluster (1 Master / 2 Workers)

Hello, this is Y from the OSS support team at SIOS Technology.

This time, I tried building a three-node cluster (1 master / 2 workers) with k3s, which has been getting a lot of attention recently. (The content below was verified on CentOS 7.6 with k3s v0.2.0.)

■Introduction

In this post, I use k3s, developed by Rancher Labs, to build the following cluster with one master and two workers.

[Host names and IP addresses]
Master   : k3s-master.example.com (10.1.1.21)
Worker 1 : k3s-node-1.example.com (10.1.3.134)
Worker 2 : k3s-node-2.example.com (10.1.1.192)

Note that an earlier article covered building a single-node k3s server, so please have a look at that as well if you are interested.

■Installing k3s

First, installation on each node: k3s is distributed as a single binary, so all you have to do is download that binary on each node.

[Master]
[root@k3s-master ~]# cd /usr/local/bin/
[root@k3s-master bin]# 
[root@k3s-master bin]# wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s

~(中略)~

[root@k3s-master bin]# 
[root@k3s-master bin]# ls -l ./k3s     
-rw-r--r--. 1 root root 37735552 Mar  9 11:48 ./k3s
[root@k3s-master bin]# 
[root@k3s-master bin]# chmod 755 ./k3s 
[root@k3s-master bin]# 
[root@k3s-master bin]# ls -l ./k3s 
-rwxr-xr-x. 1 root root 37735552 Mar  9 11:48 ./k3s
[Worker 1]
[root@k3s-node-1 ~]# cd /usr/local/bin/
[root@k3s-node-1 bin]# 
[root@k3s-node-1 bin]# wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s

~(中略)~

[root@k3s-node-1 bin]# 
[root@k3s-node-1 bin]# ls -l ./k3s
-rw-r--r--. 1 root root 37735552 Mar  9 11:48 ./k3s
[root@k3s-node-1 bin]# 
[root@k3s-node-1 bin]# chmod 755 ./k3s
[root@k3s-node-1 bin]# 
[root@k3s-node-1 bin]# ls -l ./k3s
-rwxr-xr-x. 1 root root 37735552 Mar  9 11:48 ./k3s
[Worker 2]
[root@k3s-node-2 ~]# cd /usr/local/bin/
[root@k3s-node-2 bin]# 
[root@k3s-node-2 bin]# wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s

~(中略)~

[root@k3s-node-2 bin]# 
[root@k3s-node-2 bin]# ls -l ./k3s
-rw-r--r--. 1 root root 37735552 Mar  9 11:48 ./k3s
[root@k3s-node-2 bin]# 
[root@k3s-node-2 bin]# chmod 755 ./k3s
[root@k3s-node-2 bin]# 
[root@k3s-node-2 bin]# ls -l ./k3s
-rwxr-xr-x. 1 root root 37735552 Mar  9 11:48 ./k3s
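
For reference, the same per-node steps can be condensed into a short shell snippet. This is only a sketch of exactly what was done in the transcripts above (same download URL and install path); run it as root on each of the three nodes.

# Sketch: download the k3s v0.2.0 single binary and make it executable
cd /usr/local/bin/
wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s
chmod 755 ./k3s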

■Starting the Master

First, start the k3s master using the downloaded binary.

Before doing so, to avoid the error that came up in the earlier article, add the host's own host name to /etc/hosts.

In addition, configure /etc/hosts so that every node in the cluster can be resolved by name.

[Master]
[root@k3s-master ~]# tail -n 3 /etc/hosts
10.1.1.21   k3s-master.example.com
10.1.3.134  k3s-node-1.example.com
10.1.1.192  k3s-node-2.example.com

After applying the settings above, run the following command to start the master.

[Master]
[root@k3s-master ~]# k3s server &
[1] 6966
[root@k3s-master ~]# INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4 
INFO[2019-04-03T15:42:48.934015535+09:00] Starting k3s v0.2.0 (2771ae1)                
INFO[2019-04-03T15:42:52.008140927+09:00] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key 
INFO[2019-04-03T15:42:52.962654144+09:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 10251 --address 127.0.0.1 --secure-port 0 --leader-elect=false 
INFO[2019-04-03T15:42:52.965153276+09:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 10252 --address 127.0.0.1 --secure-port 0 --leader-elect=false 
Flag --address has been deprecated, see --bind-address instead.
INFO[2019-04-03T15:42:53.255925984+09:00] Creating CRD listenerconfigs.k3s.cattle.io   
INFO[2019-04-03T15:42:53.298588459+09:00] Creating CRD addons.k3s.cattle.io            
INFO[2019-04-03T15:42:53.318134007+09:00] Creating CRD helmcharts.k3s.cattle.io        
INFO[2019-04-03T15:42:53.363540654+09:00] Waiting for CRD addons.k3s.cattle.io to become available 
INFO[2019-04-03T15:42:53.865552610+09:00] Done waiting for CRD addons.k3s.cattle.io to become available 
INFO[2019-04-03T15:42:53.865593139+09:00] Waiting for CRD helmcharts.k3s.cattle.io to become available 
INFO[2019-04-03T15:42:54.366634024+09:00] Done waiting for CRD helmcharts.k3s.cattle.io to become available 
INFO[2019-04-03T15:42:54.369551732+09:00] Listening on :6443                           
INFO[2019-04-03T15:42:55.359293798+09:00] Node token is available at /var/lib/rancher/k3s/server/node-token 
INFO[2019-04-03T15:42:55.359322551+09:00] To join node to cluster: k3s agent -s https://10.1.1.21:6443 -t ${NODE_TOKEN} 
INFO[2019-04-03T15:42:55.361607895+09:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[2019-04-03T15:42:55.361737466+09:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml 
INFO[2019-04-03T15:42:55.673465909+09:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[2019-04-03T15:42:55.673495552+09:00] Run: k3s kubectl                             
INFO[2019-04-03T15:42:55.673505492+09:00] k3s is up and running                        
INFO[2019-04-03T15:42:55.821734706+09:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-04-03T15:42:55.823335649+09:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-04-03T15:42:55.842991117+09:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
WARN[2019-04-03T15:42:56.910374422+09:00] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory 
INFO[2019-04-03T15:42:56.911405033+09:00] Connecting to wss://localhost:6443/v1-k3s/connect 
INFO[2019-04-03T15:42:56.911447077+09:00] Connecting to proxy                           url="wss://localhost:6443/v1-k3s/connect"
INFO[2019-04-03T15:42:56.917257812+09:00] Handling backend connection request [k3s-master.example.com] 
WARN[2019-04-03T15:42:56.919020751+09:00] Disabling CPU quotas due to missing cpu.cfs_period_us 
INFO[2019-04-03T15:42:56.919903605+09:00] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override k3s-master.example.com --cpu-cfs-quota=false --runtime-cgroups /systemd/user.slice/user-0.slice --kubelet-cgroups /systemd/user.slice/user-0.slice 
Flag --allow-privileged has been deprecated, will be removed in a future version
INFO[2019-04-03T15:42:57.354916653+09:00] waiting for node k3s-master.example.com: nodes "k3s-master.example.com" not found 
INFO[2019-04-03T15:42:59.356930173+09:00] waiting for node k3s-master.example.com CIDR not assigned yet 
INFO[2019-04-03T15:43:01.358965906+09:00] waiting for node k3s-master.example.com CIDR not assigned yet 
INFO[2019-04-03T15:43:03.360869352+09:00] waiting for node k3s-master.example.com CIDR not assigned yet 
INFO[2019-04-03T15:43:05.362885653+09:00] waiting for node k3s-master.example.com CIDR not assigned yet 

[root@k3s-master ~]# 

After running the command above, use the k3s kubectl get node command to confirm that the master node's STATUS is Ready.

[Master]
[root@k3s-master ~]# k3s kubectl get node
NAME                     STATUS   ROLES    AGE   VERSION
k3s-master.example.com   Ready    <none>   74s   v1.13.4-k3s.1

That completes startup of the master.
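
Incidentally, the startup log above shows that a kubeconfig was written to /etc/rancher/k3s/k3s.yaml. This article simply uses the bundled k3s kubectl, but if a standalone kubectl happens to be installed on the master, it should be able to talk to the cluster through that file as well. A minimal sketch, assuming kubectl is already on the PATH:

# Sketch: point an existing kubectl at the kubeconfig generated by k3s
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get node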

■Starting the Workers (Joining the Master)

Next, start the k3s workers (and connect them to the master).

First, on the worker nodes as well, to avoid the error from the earlier article, add each node's own host name to /etc/hosts and make sure every node can be resolved by name.

[Worker 1]
[root@k3s-node-1 ~]# tail -n 3 /etc/hosts
10.1.1.21   k3s-master.example.com
10.1.3.134  k3s-node-1.example.com
10.1.1.192  k3s-node-2.example.com
[Worker 2]
[root@k3s-node-2 ~]# tail -n 3 /etc/hosts
10.1.1.21   k3s-master.example.com
10.1.3.134  k3s-node-1.example.com
10.1.1.192  k3s-node-2.example.com

Next, check the token needed to join the master. It can be found on the master in /var/lib/rancher/k3s/server/node-token.

[Master]
[root@k3s-master ~]# cat /var/lib/rancher/k3s/server/node-token 
K10d1ff4b0ba432258abc164b1bade3ac4cbf3be376893f372784630b4b51f67df4::node:3d12c8efa7a40c8a5fe60ad55eb8f30f
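
Incidentally, the master's startup log above suggests the form k3s agent -s https://10.1.1.21:6443 -t ${NODE_TOKEN}, so keeping the token in a shell variable on the worker is another option. A minimal sketch using the token value just shown:

# Sketch: store the node token in a variable instead of pasting it inline
NODE_TOKEN='K10d1ff4b0ba432258abc164b1bade3ac4cbf3be376893f372784630b4b51f67df4::node:3d12c8efa7a40c8a5fe60ad55eb8f30f'
k3s agent --server https://k3s-master.example.com:6443 --token "${NODE_TOKEN}" &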

Then, following the Quick start instructions, run a command like the following, specifying the token above.

[Worker 1]
[root@k3s-node-1 ~]# k3s agent --server https://k3s-master.example.com:6443 --token K10d1ff4b0ba432258abc164b1bade3ac4cbf3be376893f372784630b4b51f67df4::node:3d12c8efa7a40c8a5fe60ad55eb8f30f &
[1] 7252
[root@k3s-node-1 ~]# INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4 
INFO[2019-04-03T16:00:32.062755533+09:00] Starting k3s agent v0.2.0 (2771ae1)          
ERRO[2019-04-03T16:00:32.067804444+09:00] failed to get CA certs at https://k3s-master.example.com:6443/cacerts: Get https://k3s-master.example.com:6443/cacerts: dial tcp 10.1.1.21:6443: connect: no route to host 
ERRO[2019-04-03T16:00:34.069639665+09:00] failed to get CA certs at https://k3s-master.example.com:6443/cacerts: Get https://k3s-master.example.com:6443/cacerts: dial tcp 10.1.1.21:6443: connect: no route to host 
ERRO[2019-04-03T16:00:36.071041970+09:00] failed to get CA certs at https://k3s-master.example.com:6443/cacerts: Get https://k3s-master.example.com:6443/cacerts: dial tcp 10.1.1.21:6443: connect: no route to host 

However, it failed with the errors shown above.

The message says "failed to get CA certs", but since it ends with "no route to host", it looks like communication with the master cannot be established in the first place.

Looking through the documentation, the "Open ports / Network security" section lists the ports that k3s uses. Most likely, firewalld on the OS side is blocking the traffic between the master and the workers.

So, following that documentation, configure firewalld on the master to allow packets to TCP/6443 and UDP/8472. (Since this is just a quick verification, I am simply opening TCP/6443 and UDP/8472; in practice, be sure to configure this according to the security requirements of your own environment.)

[Master]
[root@k3s-master ~]# firewall-cmd --get-active-zones
public
  interfaces: ens192
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --list-ports --zone=public

[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --list-ports --zone=public --permanent

[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --add-port=6443/tcp --zone=public --permanent
success
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --add-port=8472/udp --zone=public --permanent
success
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --list-ports --zone=public

[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --list-ports --zone=public --permanent
6443/tcp 8472/udp
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --reload
success
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --list-ports --zone=public
6443/tcp 8472/udp
[root@k3s-master ~]# 
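
For reference, the master-side configuration above can be compressed into two commands. This is just a sketch of what the transcript already does; firewall-cmd accepts multiple --add-port options in a single invocation.

# Sketch: open the k3s ports on the master in one shot
firewall-cmd --add-port=6443/tcp --add-port=8472/udp --zone=public --permanent
firewall-cmd --reload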

The documentation also states that UDP/8472 is used for communication between nodes, so allow packets to UDP/8472 on each worker as well.

[Worker 1]
[root@k3s-node-1 ~]# firewall-cmd --get-active-zones
public
  interfaces: ens192
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --list-ports --zone=public

[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --list-ports --zone=public --permanent

[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --add-port=8472/udp --zone=public --permanent
success
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --list-ports --zone=public

[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --list-ports --zone=public --permanent
8472/udp
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --reload
success
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --list-ports --zone=public
8472/udp
[root@k3s-node-1 ~]# 
[Worker 2]
[root@k3s-node-2 ~]# firewall-cmd --get-active-zones
public
  interfaces: ens192
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --list-ports --zone=public

[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --list-ports --zone=public --permanent

[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --add-port=8472/udp --zone=public --permanent
success
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --list-ports --zone=public

[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --list-ports --zone=public --permanent
8472/udp
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --reload
success
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --list-ports --zone=public
8472/udp
[root@k3s-node-2 ~]# 

Now, let's try starting k3s on Worker 1 again.

[Worker 1]
[root@k3s-node-1 ~]# k3s agent --server https://k3s-master.example.com:6443 --token K10d1ff4b0ba432258abc164b1bade3ac4cbf3be376893f372784630b4b51f67df4::node:3d12c8efa7a40c8a5fe60ad55eb8f30f &
[1] 7559
[root@k3s-node-1 ~]# INFO[2019-04-03T16:09:57.254451659+09:00] Starting k3s agent v0.2.0 (2771ae1)          
INFO[2019-04-03T16:09:58.040554320+09:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-04-03T16:09:58.041148754+09:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-04-03T16:09:58.058061917+09:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
WARN[2019-04-03T16:09:59.094016869+09:00] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory 
INFO[2019-04-03T16:09:59.095980478+09:00] Connecting to wss://k3s-master.example.com:6443/v1-k3s/connect 
INFO[2019-04-03T16:09:59.096040157+09:00] Connecting to proxy                           url="wss://k3s-master.example.com:6443/v1-k3s/connect"
WARN[2019-04-03T16:09:59.105132623+09:00] Disabling CPU quotas due to missing cpu.cfs_period_us 
INFO[2019-04-03T16:09:59.107015059+09:00] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override k3s-node-1.example.com --cpu-cfs-quota=false --runtime-cgroups /systemd/user.slice/user-0.slice --kubelet-cgroups /systemd/user.slice/user-0.slice 
Flag --allow-privileged has been deprecated, will be removed in a future version
W0403 16:09:59.124090    7559 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0403 16:09:59.156519    7559 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:09:59.159732    7559 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:09:59.162875    7559 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:09:59.166017    7559 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
INFO[2019-04-03T16:09:59.434436397+09:00] waiting for node k3s-node-1.example.com: nodes "k3s-node-1.example.com" not found 
W0403 16:09:59.477704    7559 node.go:103] Failed to retrieve node info: nodes "k3s-node-1.example.com" not found
I0403 16:09:59.477734    7559 server_others.go:148] Using iptables Proxier.
W0403 16:09:59.477878    7559 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I0403 16:09:59.499010    7559 server_others.go:178] Tearing down inactive rules.
E0403 16:09:59.545074    7559 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:09:59.546127    7559 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:09:59.593303    7559 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:09:59.611875    7559 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:09:59.626087    7559 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCAL'

Try `iptables -h' or 'iptables --help' for more information.
I0403 16:09:59.764064    7559 server.go:483] Version: v1.13.4-k3s.1
I0403 16:09:59.772894    7559 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0403 16:09:59.772966    7559 conntrack.go:52] Setting nf_conntrack_max to 131072
I0403 16:09:59.822542    7559 conntrack.go:83] Setting conntrack hashsize to 32768
I0403 16:09:59.822794    7559 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0403 16:09:59.822870    7559 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0403 16:09:59.836990    7559 config.go:202] Starting service config controller
I0403 16:09:59.837037    7559 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0403 16:09:59.837073    7559 config.go:102] Starting endpoints config controller
I0403 16:09:59.837082    7559 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0403 16:09:59.946355    7559 controller_utils.go:1034] Caches are synced for endpoints config controller
I0403 16:10:00.062013    7559 controller_utils.go:1034] Caches are synced for service config controller
I0403 16:10:00.485188    7559 server.go:393] Version: v1.13.4-k3s.1
I0403 16:10:00.492400    7559 server.go:630] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0403 16:10:00.493099    7559 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
I0403 16:10:00.493127    7559 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-0.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-0.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/rancher/k3s/agent/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms}
I0403 16:10:00.493247    7559 container_manager_linux.go:271] Creating device plugin manager: true
I0403 16:10:00.493455    7559 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0403 16:10:00.496734    7559 kubelet.go:297] Watching apiserver
I0403 16:10:00.522624    7559 kuberuntime_manager.go:192] Container runtime containerd initialized, version: 1.2.4+unknown, apiVersion: v1alpha2
W0403 16:10:00.523449    7559 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0403 16:10:00.524362    7559 server.go:946] Started kubelet
I0403 16:10:00.543913    7559 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0403 16:10:00.543977    7559 status_manager.go:152] Starting to sync pod status with apiserver
I0403 16:10:00.544003    7559 kubelet.go:1735] Starting kubelet main sync loop.
I0403 16:10:00.544024    7559 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0403 16:10:00.544162    7559 server.go:133] Starting to listen on 127.0.0.1:10250
I0403 16:10:00.545191    7559 server.go:318] Adding debug handlers to kubelet server.
I0403 16:10:00.547194    7559 volume_manager.go:248] Starting Kubelet Volume Manager
I0403 16:10:00.549667    7559 desired_state_of_world_populator.go:130] Desired state populator starts to run
E0403 16:10:00.578172    7559 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0403 16:10:00.578206    7559 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
W0403 16:10:00.590913    7559 container.go:409] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
W0403 16:10:00.591175    7559 container.go:409] Failed to create summary reader for "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice": none of the resources are being tracked.
W0403 16:10:00.591385    7559 container.go:409] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
W0403 16:10:00.591589    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-dm\\x2d1.swap": none of the resources are being tracked.
W0403 16:10:00.591794    7559 container.go:409] Failed to create summary reader for "/system.slice/vmtoolsd.service": none of the resources are being tracked.
W0403 16:10:00.592003    7559 container.go:409] Failed to create summary reader for "/system.slice/gssproxy.service": none of the resources are being tracked.
W0403 16:10:00.592212    7559 container.go:409] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
W0403 16:10:00.592413    7559 container.go:409] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
W0403 16:10:00.592624    7559 container.go:409] Failed to create summary reader for "/system.slice/wpa_supplicant.service": none of the resources are being tracked.
W0403 16:10:00.592814    7559 container.go:409] Failed to create summary reader for "/system.slice/blk-availability.service": none of the resources are being tracked.
W0403 16:10:00.593068    7559 container.go:409] Failed to create summary reader for "/system.slice/cups.service": none of the resources are being tracked.
W0403 16:10:00.593268    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
W0403 16:10:00.593475    7559 container.go:409] Failed to create summary reader for "/system.slice/vdo.service": none of the resources are being tracked.
W0403 16:10:00.593683    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-udev-settle.service": none of the resources are being tracked.
W0403 16:10:00.593889    7559 container.go:409] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
W0403 16:10:00.594258    7559 container.go:409] Failed to create summary reader for "/system.slice/libvirtd.service": none of the resources are being tracked.
W0403 16:10:00.594495    7559 container.go:409] Failed to create summary reader for "/system.slice/ksm.service": none of the resources are being tracked.
W0403 16:10:00.594724    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
W0403 16:10:00.594928    7559 container.go:409] Failed to create summary reader for "/system.slice/mcelog.service": none of the resources are being tracked.
W0403 16:10:00.595145    7559 container.go:409] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
W0403 16:10:00.595350    7559 container.go:409] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
W0403 16:10:00.595536    7559 container.go:409] Failed to create summary reader for "/system.slice/bolt.service": none of the resources are being tracked.
W0403 16:10:00.595738    7559 container.go:409] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
W0403 16:10:00.595974    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dcentos\\x2dswap.swap": none of the resources are being tracked.
W0403 16:10:00.596174    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
W0403 16:10:00.596357    7559 container.go:409] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
W0403 16:10:00.596557    7559 container.go:409] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
W0403 16:10:00.596742    7559 container.go:409] Failed to create summary reader for "/system.slice/abrt-ccpp.service": none of the resources are being tracked.
W0403 16:10:00.597106    7559 container.go:409] Failed to create summary reader for "/system.slice/vgauthd.service": none of the resources are being tracked.
W0403 16:10:00.597325    7559 container.go:409] Failed to create summary reader for "/system.slice/chronyd.service": none of the resources are being tracked.
W0403 16:10:00.597584    7559 container.go:409] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
W0403 16:10:00.597777    7559 container.go:409] Failed to create summary reader for "/system.slice/gdm.service": none of the resources are being tracked.
W0403 16:10:00.598000    7559 container.go:409] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
W0403 16:10:00.598188    7559 container.go:409] Failed to create summary reader for "/system.slice/plymouth-start.service": none of the resources are being tracked.
W0403 16:10:00.598394    7559 container.go:409] Failed to create summary reader for "/system.slice/accounts-daemon.service": none of the resources are being tracked.
W0403 16:10:00.598583    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
W0403 16:10:00.598792    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
W0403 16:10:00.599250    7559 container.go:409] Failed to create summary reader for "/system.slice/rpcbind.service": none of the resources are being tracked.
W0403 16:10:00.599493    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-mapper-centos\\x2dswap.swap": none of the resources are being tracked.
W0403 16:10:00.599701    7559 container.go:409] Failed to create summary reader for "/system.slice/boot.mount": none of the resources are being tracked.
W0403 16:10:00.599900    7559 container.go:409] Failed to create summary reader for "/system.slice/-.mount": none of the resources are being tracked.
W0403 16:10:00.600123    7559 container.go:409] Failed to create summary reader for "/system.slice/realmd.service": none of the resources are being tracked.
W0403 16:10:00.600310    7559 container.go:409] Failed to create summary reader for "/system.slice/smartd.service": none of the resources are being tracked.
W0403 16:10:00.600505    7559 container.go:409] Failed to create summary reader for "/system.slice/ModemManager.service": none of the resources are being tracked.
W0403 16:10:00.600684    7559 container.go:409] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
W0403 16:10:00.600867    7559 container.go:409] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
W0403 16:10:00.601070    7559 container.go:409] Failed to create summary reader for "/machine.slice": none of the resources are being tracked.
W0403 16:10:00.601253    7559 container.go:409] Failed to create summary reader for "/system.slice/packagekit.service": none of the resources are being tracked.
W0403 16:10:00.601449    7559 container.go:409] Failed to create summary reader for "/system.slice/upower.service": none of the resources are being tracked.
W0403 16:10:00.601638    7559 container.go:409] Failed to create summary reader for "/system.slice/abrtd.service": none of the resources are being tracked.
W0403 16:10:00.601824    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
W0403 16:10:00.610902    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-fsck-root.service": none of the resources are being tracked.
W0403 16:10:00.611516    7559 container.go:409] Failed to create summary reader for "/system.slice/colord.service": none of the resources are being tracked.
W0403 16:10:00.611774    7559 container.go:409] Failed to create summary reader for "/system.slice/firewalld.service": none of the resources are being tracked.
W0403 16:10:00.612001    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
W0403 16:10:00.612197    7559 container.go:409] Failed to create summary reader for "/system.slice/postfix.service": none of the resources are being tracked.
W0403 16:10:00.612380    7559 container.go:409] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
W0403 16:10:00.612550    7559 container.go:409] Failed to create summary reader for "/system.slice/rtkit-daemon.service": none of the resources are being tracked.
W0403 16:10:00.612745    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
W0403 16:10:00.612916    7559 container.go:409] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
W0403 16:10:00.613118    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
W0403 16:10:00.613297    7559 container.go:409] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
W0403 16:10:00.613484    7559 container.go:409] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
W0403 16:10:00.613666    7559 container.go:409] Failed to create summary reader for "/system.slice/system-lvm2\\x2dpvscan.slice": none of the resources are being tracked.
W0403 16:10:00.613838    7559 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-config.mount": none of the resources are being tracked.
W0403 16:10:00.614030    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-mqueue.mount": none of the resources are being tracked.
W0403 16:10:00.614216    7559 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-debug.mount": none of the resources are being tracked.
W0403 16:10:00.614405    7559 container.go:409] Failed to create summary reader for "/system.slice/rngd.service": none of the resources are being tracked.
W0403 16:10:00.614593    7559 container.go:409] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
W0403 16:10:00.614768    7559 container.go:409] Failed to create summary reader for "/system.slice/avahi-daemon.service": none of the resources are being tracked.
W0403 16:10:00.615136    7559 container.go:409] Failed to create summary reader for "/system.slice/sysstat.service": none of the resources are being tracked.
W0403 16:10:00.615335    7559 container.go:409] Failed to create summary reader for "/system.slice/abrt-oops.service": none of the resources are being tracked.
W0403 16:10:00.615571    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
W0403 16:10:00.616263    7559 container.go:409] Failed to create summary reader for "/system.slice/iscsi-shutdown.service": none of the resources are being tracked.
W0403 16:10:00.616440    7559 container.go:409] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
W0403 16:10:00.616629    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2duuid-e69edb1c\\x2db98e\\x2d4826\\x2d99a1\\x2dbac921a6cddb.swap": none of the resources are being tracked.
W0403 16:10:00.616811    7559 container.go:409] Failed to create summary reader for "/system.slice/atd.service": none of the resources are being tracked.
W0403 16:10:00.617002    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
W0403 16:10:00.617203    7559 container.go:409] Failed to create summary reader for "/system.slice/var-lib-nfs-rpc_pipefs.mount": none of the resources are being tracked.
W0403 16:10:00.617378    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2dTUD5p60e46iAiGafwnaHSe3eADQ8QzlZdn46aPs4MIDbRUsKc8rSeYN9RwF78jiw.swap": none of the resources are being tracked.
W0403 16:10:00.617971    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
W0403 16:10:00.618156    7559 container.go:409] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
W0403 16:10:00.618335    7559 container.go:409] Failed to create summary reader for "/system.slice/run-user-989.mount": none of the resources are being tracked.
W0403 16:10:00.618647    7559 container.go:409] Failed to create summary reader for "/system.slice/abrt-xorg.service": none of the resources are being tracked.
W0403 16:10:00.618831    7559 container.go:409] Failed to create summary reader for "/system.slice/udisks2.service": none of the resources are being tracked.
W0403 16:10:00.619167    7559 container.go:409] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
W0403 16:10:00.619367    7559 container.go:409] Failed to create summary reader for "/system.slice/geoclue.service": none of the resources are being tracked.
W0403 16:10:00.619601    7559 container.go:409] Failed to create summary reader for "/system.slice/libstoragemgmt.service": none of the resources are being tracked.
W0403 16:10:00.619822    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-hugepages.mount": none of the resources are being tracked.
W0403 16:10:00.620033    7559 container.go:409] Failed to create summary reader for "/system.slice/run-user-0.mount": none of the resources are being tracked.
W0403 16:10:00.620205    7559 container.go:409] Failed to create summary reader for "/system.slice/ksmtuned.service": none of the resources are being tracked.
W0403 16:10:00.620395    7559 container.go:409] Failed to create summary reader for "/system.slice/dev-centos-swap.swap": none of the resources are being tracked.
I0403 16:10:00.637637    7559 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
E0403 16:10:00.648444    7559 kubelet.go:2167] node "k3s-node-1.example.com" not found
I0403 16:10:00.648680    7559 cpu_manager.go:155] [cpumanager] starting with none policy
I0403 16:10:00.648696    7559 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0403 16:10:00.648721    7559 policy_none.go:42] [cpumanager] none policy: Start
W0403 16:10:00.666835    7559 manager.go:527] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0403 16:10:00.667834    7559 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
I0403 16:10:00.696886    7559 kubelet_node_status.go:70] Attempting to register node k3s-node-1.example.com
E0403 16:10:00.697869    7559 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "k3s-node-1.example.com" not found
I0403 16:10:00.718555    7559 kubelet_node_status.go:73] Successfully registered node k3s-node-1.example.com
E0403 16:10:00.748666    7559 kubelet.go:2167] node "k3s-node-1.example.com" not found
I0403 16:10:00.757696    7559 kuberuntime_manager.go:930] updating runtime config through cri with podcidr 10.42.1.0/24
I0403 16:10:00.758686    7559 kubelet_network.go:69] Setting Pod CIDR:  -> 10.42.1.0/24
I0403 16:10:00.793516    7559 reconciler.go:154] Reconciler: start to sync state
I0403 16:10:01.438522    7559 flannel.go:89] Determining IP address of default interface
I0403 16:10:01.438965    7559 flannel.go:99] Using interface with name ens192 and address 10.1.3.134
I0403 16:10:01.441026    7559 kube.go:127] Waiting 10m0s for node controller to sync
I0403 16:10:01.441080    7559 kube.go:306] Starting kube subnet manager
I0403 16:10:02.441232    7559 kube.go:134] Node controller sync successful
I0403 16:10:02.441348    7559 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0403 16:10:02.550834    7559 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
I0403 16:10:02.550854    7559 flannel.go:79] Running backend.
I0403 16:10:02.550864    7559 vxlan_network.go:60] watching for new subnet leases
I0403 16:10:02.553800    7559 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0403 16:10:02.553818    7559 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0403 16:10:02.554592    7559 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0403 16:10:02.555404    7559 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0403 16:10:02.556195    7559 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0403 16:10:02.557008    7559 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0403 16:10:02.558872    7559 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0403 16:10:02.563835    7559 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0403 16:10:02.563861    7559 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0403 16:10:02.564682    7559 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0403 16:10:02.565488    7559 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0403 16:10:02.567785    7559 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0403 16:10:02.569670    7559 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0403 16:10:02.571450    7559 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully

[root@k3s-node-1 ~]# 

After running the k3s command on Worker 1, running k3s kubectl get node on the master shows that the newly started Worker 1 (k3s-node-1.example.com) has joined the cluster.

[Master]
[root@k3s-master ~]# k3s kubectl get node
NAME                     STATUS   ROLES    AGE     VERSION
k3s-master.example.com   Ready    <none>   36m     v1.13.4-k3s.1
k3s-node-1.example.com   Ready    <none>   8m59s   v1.13.4-k3s.1

In the same way, start k3s on Worker 2 (connecting it to the master) as well.

[Worker 2]
[root@k3s-node-2 ~]# k3s agent --server https://k3s-master.example.com:6443 --token K10d1ff4b0ba432258abc164b1bade3ac4cbf3be376893f372784630b4b51f67df4::node:3d12c8efa7a40c8a5fe60ad55eb8f30f &
[1] 7585
[root@k3s-node-2 ~]# INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4 
INFO[2019-04-03T16:18:03.357012616+09:00] Starting k3s agent v0.2.0 (2771ae1)          
INFO[2019-04-03T16:18:03.505975431+09:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-04-03T16:18:03.509490675+09:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-04-03T16:18:03.528648099+09:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
WARN[2019-04-03T16:18:04.557467112+09:00] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory 
INFO[2019-04-03T16:18:04.559173905+09:00] Connecting to wss://k3s-master.example.com:6443/v1-k3s/connect 
INFO[2019-04-03T16:18:04.559230606+09:00] Connecting to proxy                           url="wss://k3s-master.example.com:6443/v1-k3s/connect"
WARN[2019-04-03T16:18:04.574243650+09:00] Disabling CPU quotas due to missing cpu.cfs_period_us 
INFO[2019-04-03T16:18:04.575810962+09:00] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/e44f7a46cadac4cec9a759756f2a27fdb25e705a83d8d563207c6a6c5fa368b4/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override k3s-node-2.example.com --cpu-cfs-quota=false --runtime-cgroups /systemd/user.slice/user-0.slice --kubelet-cgroups /systemd/user.slice/user-0.slice 
Flag --allow-privileged has been deprecated, will be removed in a future version
W0403 16:18:04.593998    7585 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0403 16:18:04.654006    7585 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:18:04.657349    7585 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:18:04.660451    7585 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0403 16:18:04.663601    7585 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
INFO[2019-04-03T16:18:04.909481391+09:00] waiting for node k3s-node-2.example.com: nodes "k3s-node-2.example.com" not found 
W0403 16:18:04.953055    7585 node.go:103] Failed to retrieve node info: nodes "k3s-node-2.example.com" not found
I0403 16:18:04.953087    7585 server_others.go:148] Using iptables Proxier.
W0403 16:18:04.953209    7585 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I0403 16:18:04.981060    7585 server_others.go:178] Tearing down inactive rules.
E0403 16:18:05.004947    7585 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:18:05.005763    7585 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:18:05.043090    7585 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:18:05.083802    7585 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0403 16:18:05.085734    7585 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCAL'

Try `iptables -h' or 'iptables --help' for more information.
I0403 16:18:05.227000    7585 server.go:483] Version: v1.13.4-k3s.1
I0403 16:18:05.236036    7585 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0403 16:18:05.236079    7585 conntrack.go:52] Setting nf_conntrack_max to 131072
I0403 16:18:05.297551    7585 conntrack.go:83] Setting conntrack hashsize to 32768
I0403 16:18:05.298422    7585 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0403 16:18:05.298506    7585 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0403 16:18:05.311905    7585 config.go:202] Starting service config controller
I0403 16:18:05.311930    7585 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0403 16:18:05.311965    7585 config.go:102] Starting endpoints config controller
I0403 16:18:05.311975    7585 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0403 16:18:05.335265    7585 server.go:393] Version: v1.13.4-k3s.1
I0403 16:18:05.344191    7585 server.go:630] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0403 16:18:05.344495    7585 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
I0403 16:18:05.344522    7585 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-0.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-0.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/rancher/k3s/agent/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms}
I0403 16:18:05.344637    7585 container_manager_linux.go:271] Creating device plugin manager: true
I0403 16:18:05.344800    7585 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0403 16:18:05.355086    7585 kubelet.go:297] Watching apiserver
I0403 16:18:05.367926    7585 kuberuntime_manager.go:192] Container runtime containerd initialized, version: 1.2.4+unknown, apiVersion: v1alpha2
W0403 16:18:05.369248    7585 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0403 16:18:05.370241    7585 server.go:946] Started kubelet
I0403 16:18:05.372772    7585 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0403 16:18:05.372828    7585 status_manager.go:152] Starting to sync pod status with apiserver
I0403 16:18:05.372847    7585 kubelet.go:1735] Starting kubelet main sync loop.
I0403 16:18:05.372881    7585 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0403 16:18:05.372979    7585 server.go:133] Starting to listen on 127.0.0.1:10250
I0403 16:18:05.373851    7585 server.go:318] Adding debug handlers to kubelet server.
I0403 16:18:05.384359    7585 volume_manager.go:248] Starting Kubelet Volume Manager
I0403 16:18:05.391297    7585 desired_state_of_world_populator.go:130] Desired state populator starts to run
E0403 16:18:05.401319    7585 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0403 16:18:05.401346    7585 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
W0403 16:18:05.418832    7585 container.go:409] Failed to create summary reader for "/system.slice/packagekit.service": none of the resources are being tracked.
W0403 16:18:05.419189    7585 container.go:409] Failed to create summary reader for "/system.slice/udisks2.service": none of the resources are being tracked.
W0403 16:18:05.419436    7585 container.go:409] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
W0403 16:18:05.419631    7585 container.go:409] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
W0403 16:18:05.419816    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2did-dm\\x2dname\\x2dcentos\\x2dswap.swap": none of the resources are being tracked.
W0403 16:18:05.420022    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2duuid-e69edb1c\\x2db98e\\x2d4826\\x2d99a1\\x2dbac921a6cddb.swap": none of the resources are being tracked.
W0403 16:18:05.420206    7585 container.go:409] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
W0403 16:18:05.420388    7585 container.go:409] Failed to create summary reader for "/system.slice/boot.mount": none of the resources are being tracked.
W0403 16:18:05.420559    7585 container.go:409] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
W0403 16:18:05.420749    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-fsck-root.service": none of the resources are being tracked.
W0403 16:18:05.420950    7585 container.go:409] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
W0403 16:18:05.421144    7585 container.go:409] Failed to create summary reader for "/system.slice/vgauthd.service": none of the resources are being tracked.
W0403 16:18:05.421324    7585 container.go:409] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
W0403 16:18:05.421498    7585 container.go:409] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
W0403 16:18:05.421688    7585 container.go:409] Failed to create summary reader for "/system.slice/wpa_supplicant.service": none of the resources are being tracked.
W0403 16:18:05.421981    7585 container.go:409] Failed to create summary reader for "/system.slice/atd.service": none of the resources are being tracked.
W0403 16:18:05.422208    7585 container.go:409] Failed to create summary reader for "/system.slice/rtkit-daemon.service": none of the resources are being tracked.
W0403 16:18:05.422388    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
W0403 16:18:05.422613    7585 container.go:409] Failed to create summary reader for "/system.slice/bolt.service": none of the resources are being tracked.
W0403 16:18:05.422791    7585 container.go:409] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
W0403 16:18:05.423003    7585 container.go:409] Failed to create summary reader for "/system.slice/iscsi-shutdown.service": none of the resources are being tracked.
W0403 16:18:05.423182    7585 container.go:409] Failed to create summary reader for "/system.slice/firewalld.service": none of the resources are being tracked.
W0403 16:18:05.423393    7585 container.go:409] Failed to create summary reader for "/system.slice/ksmtuned.service": none of the resources are being tracked.
W0403 16:18:05.423568    7585 container.go:409] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
W0403 16:18:05.423747    7585 container.go:409] Failed to create summary reader for "/system.slice/abrt-xorg.service": none of the resources are being tracked.
W0403 16:18:05.423945    7585 container.go:409] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
W0403 16:18:05.424131    7585 container.go:409] Failed to create summary reader for "/system.slice/avahi-daemon.service": none of the resources are being tracked.
W0403 16:18:05.424300    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
W0403 16:18:05.424480    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
W0403 16:18:05.424652    7585 container.go:409] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
W0403 16:18:05.424829    7585 container.go:409] Failed to create summary reader for "/system.slice/-.mount": none of the resources are being tracked.
W0403 16:18:05.425225    7585 container.go:409] Failed to create summary reader for "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice": none of the resources are being tracked.
W0403 16:18:05.425457    7585 container.go:409] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
W0403 16:18:05.425642    7585 container.go:409] Failed to create summary reader for "/system.slice/ModemManager.service": none of the resources are being tracked.
W0403 16:18:05.425850    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
W0403 16:18:05.426065    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-udev-settle.service": none of the resources are being tracked.
W0403 16:18:05.426238    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
W0403 16:18:05.426403    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
W0403 16:18:05.426574    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
W0403 16:18:05.426746    7585 container.go:409] Failed to create summary reader for "/system.slice/geoclue.service": none of the resources are being tracked.
W0403 16:18:05.426952    7585 container.go:409] Failed to create summary reader for "/system.slice/upower.service": none of the resources are being tracked.
W0403 16:18:05.427141    7585 container.go:409] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
W0403 16:18:05.427322    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
W0403 16:18:05.427496    7585 container.go:409] Failed to create summary reader for "/system.slice/postfix.service": none of the resources are being tracked.
W0403 16:18:05.427677    7585 container.go:409] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
W0403 16:18:05.427845    7585 container.go:409] Failed to create summary reader for "/system.slice/mcelog.service": none of the resources are being tracked.
W0403 16:18:05.442179    7585 container.go:409] Failed to create summary reader for "/system.slice/run-user-0.mount": none of the resources are being tracked.
W0403 16:18:05.442358    7585 container.go:409] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
W0403 16:18:05.442536    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-centos-swap.swap": none of the resources are being tracked.
W0403 16:18:05.442713    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-disk-by\\x2did-dm\\x2duuid\\x2dLVM\\x2dTUD5p60e46iAiGafwnaHSe3eADQ8QzlZdn46aPs4MIDbRUsKc8rSeYN9RwF78jiw.swap": none of the resources are being tracked.
W0403 16:18:05.442913    7585 container.go:409] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
W0403 16:18:05.443109    7585 container.go:409] Failed to create summary reader for "/system.slice/plymouth-start.service": none of the resources are being tracked.
W0403 16:18:05.443276    7585 container.go:409] Failed to create summary reader for "/machine.slice": none of the resources are being tracked.
W0403 16:18:05.443444    7585 container.go:409] Failed to create summary reader for "/system.slice/realmd.service": none of the resources are being tracked.
W0403 16:18:05.443608    7585 container.go:409] Failed to create summary reader for "/system.slice/vdo.service": none of the resources are being tracked.
W0403 16:18:05.443777    7585 container.go:409] Failed to create summary reader for "/system.slice/ksm.service": none of the resources are being tracked.
W0403 16:18:05.443954    7585 container.go:409] Failed to create summary reader for "/system.slice/smartd.service": none of the resources are being tracked.
W0403 16:18:05.444116    7585 container.go:409] Failed to create summary reader for "/system.slice/rpcbind.service": none of the resources are being tracked.
W0403 16:18:05.444296    7585 container.go:409] Failed to create summary reader for "/system.slice/run-user-989.mount": none of the resources are being tracked.
W0403 16:18:05.444461    7585 container.go:409] Failed to create summary reader for "/system.slice/gdm.service": none of the resources are being tracked.
W0403 16:18:05.444633    7585 container.go:409] Failed to create summary reader for "/system.slice/cups.service": none of the resources are being tracked.
W0403 16:18:05.444798    7585 container.go:409] Failed to create summary reader for "/system.slice/abrt-oops.service": none of the resources are being tracked.
W0403 16:18:05.445181    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
W0403 16:18:05.445354    7585 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-debug.mount": none of the resources are being tracked.
W0403 16:18:05.445527    7585 container.go:409] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
W0403 16:18:05.445738    7585 container.go:409] Failed to create summary reader for "/system.slice/sysstat.service": none of the resources are being tracked.
W0403 16:18:05.445953    7585 container.go:409] Failed to create summary reader for "/system.slice/abrtd.service": none of the resources are being tracked.
W0403 16:18:05.446121    7585 container.go:409] Failed to create summary reader for "/system.slice/vmtoolsd.service": none of the resources are being tracked.
W0403 16:18:05.446283    7585 container.go:409] Failed to create summary reader for "/system.slice/rngd.service": none of the resources are being tracked.
W0403 16:18:05.446449    7585 container.go:409] Failed to create summary reader for "/system.slice/accounts-daemon.service": none of the resources are being tracked.
W0403 16:18:05.446612    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-dm\\x2d1.swap": none of the resources are being tracked.
W0403 16:18:05.447249    7585 container.go:409] Failed to create summary reader for "/system.slice/colord.service": none of the resources are being tracked.
W0403 16:18:05.447409    7585 container.go:409] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
W0403 16:18:05.447592    7585 container.go:409] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
W0403 16:18:05.447757    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
W0403 16:18:05.448092    7585 container.go:409] Failed to create summary reader for "/system.slice/chronyd.service": none of the resources are being tracked.
W0403 16:18:05.448304    7585 container.go:409] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
W0403 16:18:05.448545    7585 container.go:409] Failed to create summary reader for "/system.slice/sys-kernel-config.mount": none of the resources are being tracked.
W0403 16:18:05.448715    7585 container.go:409] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
W0403 16:18:05.448907    7585 container.go:409] Failed to create summary reader for "/system.slice/gssproxy.service": none of the resources are being tracked.
W0403 16:18:05.449083    7585 container.go:409] Failed to create summary reader for "/system.slice/var-lib-nfs-rpc_pipefs.mount": none of the resources are being tracked.
W0403 16:18:05.449263    7585 container.go:409] Failed to create summary reader for "/system.slice/system-lvm2\\x2dpvscan.slice": none of the resources are being tracked.
W0403 16:18:05.449429    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
W0403 16:18:05.449594    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
W0403 16:18:05.449757    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-mqueue.mount": none of the resources are being tracked.
W0403 16:18:05.450208    7585 container.go:409] Failed to create summary reader for "/system.slice/libvirtd.service": none of the resources are being tracked.
W0403 16:18:05.450435    7585 container.go:409] Failed to create summary reader for "/system.slice/blk-availability.service": none of the resources are being tracked.
W0403 16:18:05.450625    7585 container.go:409] Failed to create summary reader for "/system.slice/abrt-ccpp.service": none of the resources are being tracked.
W0403 16:18:05.450785    7585 container.go:409] Failed to create summary reader for "/system.slice/libstoragemgmt.service": none of the resources are being tracked.
W0403 16:18:05.451006    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-mapper-centos\\x2dswap.swap": none of the resources are being tracked.
W0403 16:18:05.451175    7585 container.go:409] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
W0403 16:18:05.451339    7585 container.go:409] Failed to create summary reader for "/system.slice/dev-hugepages.mount": none of the resources are being tracked.
I0403 16:18:05.474981    7585 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
I0403 16:18:05.484679    7585 controller_utils.go:1034] Caches are synced for service config controller
I0403 16:18:05.484797    7585 controller_utils.go:1034] Caches are synced for endpoints config controller
I0403 16:18:05.487041    7585 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
E0403 16:18:05.490366    7585 kubelet.go:2167] node "k3s-node-2.example.com" not found
I0403 16:18:05.504346    7585 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
I0403 16:18:05.508458    7585 cpu_manager.go:155] [cpumanager] starting with none policy
I0403 16:18:05.508473    7585 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0403 16:18:05.508486    7585 policy_none.go:42] [cpumanager] none policy: Start
I0403 16:18:05.530998    7585 kubelet_node_status.go:70] Attempting to register node k3s-node-2.example.com
W0403 16:18:05.545905    7585 manager.go:527] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
E0403 16:18:05.578417    7585 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "k3s-node-2.example.com" not found
E0403 16:18:05.591296    7585 kubelet.go:2167] node "k3s-node-2.example.com" not found
I0403 16:18:05.593151    7585 kubelet_node_status.go:73] Successfully registered node k3s-node-2.example.com
I0403 16:18:05.707798    7585 reconciler.go:154] Reconciler: start to sync state
I0403 16:18:05.891635    7585 kuberuntime_manager.go:930] updating runtime config through cri with podcidr 10.42.3.0/24
I0403 16:18:05.892306    7585 kubelet_network.go:69] Setting Pod CIDR:  -> 10.42.3.0/24
I0403 16:18:06.913885    7585 flannel.go:89] Determining IP address of default interface
I0403 16:18:06.914330    7585 flannel.go:99] Using interface with name ens192 and address 10.1.1.192
I0403 16:18:06.915623    7585 kube.go:127] Waiting 10m0s for node controller to sync
I0403 16:18:06.915654    7585 kube.go:306] Starting kube subnet manager
I0403 16:18:07.915831    7585 kube.go:134] Node controller sync successful
I0403 16:18:07.915966    7585 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0403 16:18:08.022456    7585 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
I0403 16:18:08.022476    7585 flannel.go:79] Running backend.
I0403 16:18:08.022485    7585 vxlan_network.go:60] watching for new subnet leases
I0403 16:18:08.028853    7585 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0403 16:18:08.028882    7585 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0403 16:18:08.029687    7585 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0403 16:18:08.030473    7585 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.3.0/24 -j RETURN
I0403 16:18:08.031286    7585 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0403 16:18:08.033123    7585 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0403 16:18:08.036053    7585 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0403 16:18:08.041006    7585 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.3.0/24 -j RETURN
I0403 16:18:08.043932    7585 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0403 16:18:08.048583    7585 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0403 16:18:08.048600    7585 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0403 16:18:08.049496    7585 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0403 16:18:08.070297    7585 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0403 16:18:08.072364    7585 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT

[root@k3s-node-2 ~]# 

After starting Worker 2, running k3s kubectl get node on the Master confirmed that Worker 2 (k3s-node-2.example.com) had also joined the cluster.

[root@k3s-master ~]# k3s kubectl get node
NAME                     STATUS   ROLES    AGE     VERSION
k3s-master.example.com   Ready    <none>   37m     v1.13.4-k3s.1
k3s-node-1.example.com   Ready    <none>   10m     v1.13.4-k3s.1
k3s-node-2.example.com   Ready    <none>   2m22s   v1.13.4-k3s.1

To be honest, a number of Error and Failed messages in the output are a little concerning, but since each Worker seems to have connected to the Master (i.e. the Master recognizes each Worker), we will carry on with the functional verification as-is.

■Verification (deploying Pods)

Finally, let's deploy containers (Pods) onto the k3s cluster we have just built.

We create the same manifest (YAML file) that was used in the previous article and apply it with the k3s kubectl apply command.

[Master]
[root@k3s-master ~]# cat << EOF > ~/k3s-yaml/nginx-test.yaml 
> apiVersion: v1
> kind: Service
> metadata:
>   name: nginx-test-service
>   labels:
>     app: nginx-test
> spec:
>   selector:
>     app: nginx-test
>   type: NodePort
>   ports:
>   - protocol: TCP 
>     port: 80
>     targetPort: 80
>     nodePort: 30080
> ---
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   name: nginx-test-deployment
> spec:
>   replicas: 3
>   selector:
>     matchLabels:
>       app: nginx-test
>   template:
>     metadata:
>       labels:
>         app: nginx-test
>     spec:
>       containers:
>       - name: nginx-container
>         image: nginx:1.15.9
>         ports:
>         - containerPort: 80
>         lifecycle:
>           postStart:
>             exec:
>               command:
>                 - /bin/sh
>                 - -c
>                 - echo "NGINX container!" > /usr/share/nginx/html/index.html && echo "Pod name is \$(hostname) ." >> /usr/share/nginx/html/index.html
> EOF
[Master]
[root@k3s-master ~]# k3s kubectl apply -f ~/k3s-yaml/nginx-test.yaml 
service/nginx-test-service created
deployment.apps/nginx-test-deployment created
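
Although we did not do so in this walkthrough, you could also wait until all replicas become available with a command along these lines (a suggestion, not part of the original procedure):

k3s kubectl rollout status deployment/nginx-test-deployment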

We will skip the details of the manifest, but roughly speaking it sets up the following:

・Deploy three NGINX containers (Pods).
・Forward traffic arriving at port 30080 on the virtual servers where k3s is installed to port 80 of one of the NGINX containers.

In addition, so that we can tell which NGINX container (Pod) a response came from, the Pod name is included in the response (index.html).

Once the deployment is complete, the resources should look roughly like this:

[Master]
[root@k3s-master ~]# k3s kubectl get service
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.43.0.1     <none>        443/TCP        47m
nginx-test-service   NodePort    10.43.17.47   <none>        80:30080/TCP   4m3s
[root@k3s-master ~]# 
[root@k3s-master ~]# k3s kubectl get deployment
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test-deployment   3/3     3            3           4m10s
[root@k3s-master ~]# 
[root@k3s-master ~]# k3s kubectl get pod       
NAME                                     READY   STATUS    RESTARTS   AGE
nginx-test-deployment-56fdfc565c-5mwk2   1/1     Running   0          4m14s
nginx-test-deployment-56fdfc565c-n7wmk   1/1     Running   0          4m14s
nginx-test-deployment-56fdfc565c-rk9tp   1/1     Running   0          4m14s
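
Incidentally, if you want to see what the postStart hook wrote into index.html, a command like the following should work (using one of the Pod names above as an example):

k3s kubectl exec nginx-test-deployment-56fdfc565c-5mwk2 -- cat /usr/share/nginx/html/index.html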

Now, let's actually send an HTTP request to port 30080 on a virtual server where k3s is installed.

[Master]
[root@k3s-master ~]# curl -XGET http://k3s-master.example.com:30080/
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host

However, the request failed with the error shown above.

Running the request several times, we see two kinds of behavior: sometimes it fails with this error, and sometimes a normal response comes back.

[Requests to the Master]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-master.example.com:30080/; done
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
curl: (7) Failed connect to k3s-master.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .

Sending requests to each of the other nodes shows the same mix of errors and successful responses, but the Pod name appearing in the successful responses seems to differ depending on the node.

[Requests to Worker 1]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-node-1.example.com:30080/; done
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
curl: (7) Failed connect to k3s-node-1.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
curl: (7) Failed connect to k3s-node-1.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-1.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-1.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-1.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
[Requests to Worker 2]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-node-2.example.com:30080/; done
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
curl: (7) Failed connect to k3s-node-2.example.com:30080; No route to host
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .

Putting this together, the combinations of destination node and the Pod that returns a successful response are as follows:

k3s-master.example.com : nginx-test-deployment-56fdfc565c-5mwk2
k3s-node-1.example.com : nginx-test-deployment-56fdfc565c-n7wmk
k3s-node-2.example.com : nginx-test-deployment-56fdfc565c-rk9tp

Checking which node each Pod is deployed on gives the following:

[Master]
[root@k3s-master ~]# k3s kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP          NODE                     NOMINATED NODE   READINESS GATES
nginx-test-deployment-56fdfc565c-5mwk2   1/1     Running   0          10m   10.42.0.4   k3s-master.example.com   <none>           <none>
nginx-test-deployment-56fdfc565c-n7wmk   1/1     Running   0          10m   10.42.1.2   k3s-node-1.example.com   <none>           <none>
nginx-test-deployment-56fdfc565c-rk9tp   1/1     Running   0          10m   10.42.3.2   k3s-node-2.example.com   <none>           <none>

It appears that requests to the NodePort (port 30080) succeed when they are routed to a Pod running on the node that received the request, and fail when they are routed to a Pod running on a different node.

Looking into this, as described in this document, with NodePort-based traffic in Kubernetes, when a request is routed to a Pod on a node (e.g. k3s-node-2.example.com) other than the one that received it (e.g. k3s-node-1.example.com), the default behavior is for the nodes to forward the packets between themselves.
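
As an aside, if you only wanted each node to answer for the Pods running locally on it (and avoid the inter-node hop entirely), the Service's external traffic policy could presumably be switched to Local, for example with a command like the one below; in this article, however, we keep the default behavior and fix the packet forwarding between the nodes instead.

k3s kubectl patch service nginx-test-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'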

Taking all of this into account, it seems likely that something is wrong with the communication (packet forwarding) between the nodes.

So, while running the requests in a loop, we watched the output of iptables -nvL and saw that the packet count for the REJECT rule in the FORWARD chain (the third rule from the bottom) kept increasing, as shown below.
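
(For reference, the counters can be watched continuously with something like watch -n 1 'iptables -nvL FORWARD' while the request loop is running.)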

[Master]
[root@k3s-master ~]# iptables -nvL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      virbr0  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  virbr0 *       192.168.122.0/24     0.0.0.0/0           
    0     0 ACCEPT     all  --  virbr0 virbr0  0.0.0.0/0            0.0.0.0/0           
    0     0 REJECT     all  --  *      virbr0  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
  341 32223 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
  341 32223 FORWARD_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  341 32223 FORWARD_IN_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  341 32223 FORWARD_IN_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  341 32223 FORWARD_OUT_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  341 32223 FORWARD_OUT_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
  341 32223 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
    0     0 ACCEPT     all  --  *      *       10.42.0.0/16         0.0.0.0/0           
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.42.0.0/16        

From the above, it looks like the inter-node communication (packet forwarding) is being REJECTed in the FORWARD chain.

We therefore ran the following command on each node to allow packet forwarding within the LAN. (Since this is only a simple test, we broadly allow all packet forwarding within the LAN; in a real environment you should of course configure this according to your own security requirements.)

[Master]
[root@k3s-master ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
success
[root@k3s-master ~]# 
[root@k3s-master ~]# firewall-cmd --direct --get-all-rules
ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
[root@k3s-master ~]# 
[root@k3s-master ~]# iptables -nvL FORWARD_direct
Chain FORWARD_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  *      *       10.0.0.0/8           10.0.0.0/8      
[Worker 1]
[root@k3s-node-1 ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
success
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# firewall-cmd --direct --get-all-rules
ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
[root@k3s-node-1 ~]# 
[root@k3s-node-1 ~]# iptables -nvL FORWARD_direct
Chain FORWARD_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  *      *       10.0.0.0/8           10.0.0.0/8        
[Worker 2]
[root@k3s-node-2 ~]# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
success
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# firewall-cmd --direct --get-all-rules
ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
[root@k3s-node-2 ~]# 
[root@k3s-node-2 ~]# iptables -nvL FORWARD_direct
Chain FORWARD_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  *      *       10.0.0.0/8           10.0.0.0/8          
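
Note that firewall-cmd --direct --add-rule without --permanent only changes the runtime configuration, so these rules will not survive a reboot. If you want to keep them, you would presumably also register them permanently, for example:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -p tcp -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
firewall-cmd --reload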

With this in place, requests to each node (Master/Worker 1/Worker 2) are now spread across the three Pods, and every Pod returns a response correctly.
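
As a quick way to see the distribution at a glance, a one-liner like the following (illustrative, not part of the original procedure) tallies which Pod answered over a larger number of requests:

for i in `seq 1 100` ; do curl -s -XGET http://k3s-master.example.com:30080/ ; done | grep "Pod name" | sort | uniq -c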

[Requests to the Master]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-master.example.com:30080/; done
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
[Requests to Worker 1]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-node-1.example.com:30080/; done
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
[Requests to Worker 2]
[root@k3s-master ~]# for i in `seq 1 10` ; do curl -XGET http://k3s-node-2.example.com:30080/; done
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-rk9tp .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-n7wmk .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .
NGINX container!
Pod name is nginx-test-deployment-56fdfc565c-5mwk2 .

■Summary

This time, we built a cluster with k3s, which has been getting a lot of attention lately.

We stumbled a few times along the way, but in the end we were able to build a three-node cluster and verify its basic operation (deploying Pods and serving requests).

It is still a very new OSS project, but as a lightweight Kubernetes it is an interesting product, and we plan to keep evaluating it as opportunities arise.
