R-Car/k8s-draft

= Introduction =

This document describes how to build a highly available Kubernetes cluster using multiple R-Car boards.

= 2. Environment Setup =

This chapter describes how to set up an environment in which Kubernetes runs on a single R-Car board.

The environment is built in the following steps:
 * 1) Build Yocto-Gen3.
 * 2) Set up Ubuntu on R-Car using the image built in 1).
 * 3) Install Kubernetes on the environment from 2).
 * 4) Copy the SD card containing the environment from 1)-3), one for each R-Car board.

(The procedure for building a k8s cluster from multiple R-Car boards is described in Chapter 3.)

== Overview ==

The environment built by this procedure is summarized below.


 * OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.72-yocto-standard aarch64)
 * Docker: version 19.03.12
 * Kubeadm: version 1.19.0
 * Kubelet: version 1.19.0
 * Kubectl: version 1.19.0
 * HAProxy: version 1.8.8
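
For reference, these versions can be confirmed on a running board with standard commands like the following (HAProxy is only present on the load-balancer board):

docker --version
kubeadm version -o short
kubelet --version
kubectl version --client --short
haproxy -v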

The following development environment is used.


 * Host PC: Ubuntu 18.04.5 LTS (recommended)
 * Evaluation board: R-Car Starter Kit Premier (H3) v3.0 × 5 boards
 * microSD card: 8 GB × 5
 * Yocto: Yocto-Gen3/v4.7.0, Yocto Project 3.1.3

== 2.3. Yocto Build Procedure ==

This section describes the Yocto build procedure.

First, follow "Building the BSP for Renesas H3 Starter Kit, M3 Starter Kit" in Yocto-Gen3/v4.7.0 up to just before the bitbake core-image-weston step.

Add the kernel options Docker needs to run. Create a new file, docker-config.cfg, with the following contents:

cat << EOF > meta-renesas/meta-rcar-gen3/recipes-kernel/linux/linux-renesas/docker-config.cfg
CONFIG_CGROUP_FREEZER=y
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_IP_VS=m
CONFIG_IP_VS_TAB_BITS=12
CONFIG_IP_VS_SH_TAB_BITS=8
CONFIG_BLK_DEV_THROTTLING=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_NET_SCHED=y
CONFIG_NET_CLS=y
CONFIG_NET_CLS_CGROUP=m
CONFIG_NET_SCH_FIFO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_RR=m
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_XFRM_USER=m
CONFIG_XFRM_ALGO=m
CONFIG_INET_ESP=m
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_IPVLAN=m
CONFIG_DUMMY=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_DAX=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_SQUASHFS_XZ=y
EOF

Add the kernel options Kubernetes needs to run. Create a new file, kubernetes-config.cfg, with the following contents:

cat << EOF > meta-renesas/meta-rcar-gen3/recipes-kernel/linux/linux-renesas/kubernetes-config.cfg
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_VS=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_SH=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CONNTRACK_IPV4=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_IP_SET=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
EOF

To apply the above cfg files, modify linux-renesas_5.4.bb as follows:

diff --git a/meta-rcar-gen3/recipes-kernel/linux/linux-renesas_5.4.bb b/meta-rcar-gen3/recipes-kernel/linux/linux-renesas_5.4.bb
index 91a3a3e..06abfb9 100644
--- a/meta-rcar-gen3/recipes-kernel/linux/linux-renesas_5.4.bb
+++ b/meta-rcar-gen3/recipes-kernel/linux/linux-renesas_5.4.bb
@@ -27,6 +27,8 @@ KBUILD_DEFCONFIG = "defconfig"
 SRC_URI_append = " \
     file://touch.cfg \
     ${@oe.utils.conditional("USE_AVB", "1", " file://usb-video-class.cfg", "", d)} \
+    file://docker-config.cfg \
+    file://kubernetes-config.cfg \
 "
 
 # Enable RPMSG_VIRTIO depend on ICCOM

Run the build:

bitbake core-image-minimal
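
After the build finishes, it may be worth confirming that the two cfg fragments were merged into the kernel configuration. A sketch, assuming the default Yocto build directory layout (the exact work path varies with the machine name and recipe version):

# Locate the generated kernel .config under the build tree and spot-check
# a few of the options added above.
KCONF=$(find tmp/work -path '*linux-renesas*' -name .config | head -n 1)
for opt in CONFIG_CGROUP_FREEZER CONFIG_IP_VS CONFIG_BRIDGE_NF_EBTABLES; do
    grep "^${opt}=" "${KCONF}" || echo "missing: ${opt}"
done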

Follow the "Flashing firmware / In case of DDR 8GiB board" procedure to write the built firmware. (Write it to every R-Car board you will use.)

== 3.1. k8s Cluster Setup ==

This section describes how to set up the k8s cluster: one R-Car for HAProxy, two for master nodes, and two for worker nodes. Boot all R-Car boards in advance and connect them to the same network. (Depending on the environment, this procedure takes 20-30 minutes or more.)

=== HAProxy Setup ===

This section describes how to configure the R-Car used for HAProxy. Perform these steps on the R-Car (HAProxy).

Open /etc/haproxy/haproxy.cfg and append the following settings at the end. Change the IP addresses of master1 and master2 to match your environment.

...

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    bind *:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.179.48:6443 check
    server master2 192.168.179.49:6443 check
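
Before starting HAProxy, the edited configuration can be validated (a quick sketch; the -c flag makes haproxy parse the config and exit without serving traffic):

haproxy -c -f /etc/haproxy/haproxy.cfg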

Start HAProxy:

systemctl start haproxy
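
To confirm it is running and listening on port 6443, and to have it start automatically on boot, something like the following can be used (sketch):

systemctl status haproxy
ss -tlnp | grep 6443        # should show haproxy bound to *:6443
systemctl enable haproxy    # optional: start on every boot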

=== master1 Setup ===

This section describes how to configure the R-Car used for master1. Perform these steps on the R-Car (master1).

WARNING! By default the etcd datastore is created on a block device (/var/lib/etcd), but in this setup access to it frequently times out and the cluster is unstable. To speed up access, this procedure places the datastore in RAM instead (mounted as tmpfs). The datastore path could also be changed via kubeadm's --config option, but --config cannot be combined with the other command-line options used here, so we take the following approach instead.

Mount the Kubernetes datastore (etcd) on RAM:

mkdir /var/lib/etcd
mount -t tmpfs tmpfs /var/lib/etcd
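
Note that this mount does not survive a reboot, and any etcd data held in RAM is lost when the board powers off. If the mount itself should be recreated automatically, an /etc/fstab entry along these lines is one option (sketch; the size value is an assumption to bound RAM usage):

echo 'tmpfs /var/lib/etcd tmpfs defaults,size=512m 0 0' >> /etc/fstab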

Set the HAProxy server's IP address. (Change the IP address to match your environment.)

LOAD_BALANCER_DNS=192.168.179.52
LOAD_BALANCER_PORT=6443

Initialize the first control-plane node:

kubeadm init --control-plane-endpoint "${LOAD_BALANCER_DNS}:${LOAD_BALANCER_PORT}" --upload-certs --node-name master1

[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "master1" could not be reached
        [WARNING Hostname]: hostname "master1": lookup master1 on 192.168.179.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.179.48 192.168.179.52]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.179.48 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.179.48 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 92.558093 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: a7dbea8c50522416fc30be35a8cfd2b72c60d2540c74e6bad5832e3dcf3ff9c9
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 21sx8f.t536gdy7uzhk5o2o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.179.52:6443 --token 21sx8f.t536gdy7uzhk5o2o \
        --discovery-token-ca-cert-hash sha256:232e02ecc69e4ba4bf5806d6ae7cba591be6b67e4de3973597c069c0a9fc1be1 \
        --control-plane --certificate-key a7dbea8c50522416fc30be35a8cfd2b72c60d2540c74e6bad5832e3dcf3ff9c9

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.179.52:6443 --token 21sx8f.t536gdy7uzhk5o2o \
        --discovery-token-ca-cert-hash sha256:232e02ecc69e4ba4bf5806d6ae7cba591be6b67e4de3973597c069c0a9fc1be1

As the output above explains, apply the following settings. kubectl can then be used on master1.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
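
At this point kubectl should respond; a quick sanity check (sketch) is:

kubectl get nodes

master1 will report NotReady until the Pod network add-on is installed in the next step.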

Install a Pod network add-on. This procedure uses Calico:

kubectl apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
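
The Calico and CoreDNS pods take a while to start; their progress, and master1 turning Ready, can be watched with (sketch):

kubectl -n kube-system get pods -w
kubectl get nodes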

=== master2 Setup ===

This section describes how to configure the R-Car used for master2. Perform these steps on the R-Car (master2).

Mount the Kubernetes datastore (etcd) on RAM, as on master1:

mkdir /var/lib/etcd
mount -t tmpfs tmpfs /var/lib/etcd

Initialize the second control-plane node. Run the control-plane join command printed when master1 was initialized (the token, hash, and certificate key differ per environment, so use the values from your own output), and append --node-name master2:

kubeadm join 192.168.179.52:6443 --token 21sx8f.t536gdy7uzhk5o2o \
    --discovery-token-ca-cert-hash sha256:232e02ecc69e4ba4bf5806d6ae7cba591be6b67e4de3973597c069c0a9fc1be1 \
    --control-plane --certificate-key a7dbea8c50522416fc30be35a8cfd2b72c60d2540c74e6bad5832e3dcf3ff9c9 \
    --node-name master2

Apply the following settings. kubectl can then be used on master2.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
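
Both control-plane nodes should now be visible from either master (sketch):

kubectl get nodes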

=== Checking the k8s Cluster Status ===

After also joining the worker nodes (work1, work2) with the worker-node kubeadm join command printed during master1's initialization, check that the cluster pods are running:

kubectl get pods -A -o wide

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE
kube-system   calico-kube-controllers-7d66c56c96-hh5tx   1/1     Running   0          40m   172.16.137.66    master1
kube-system   calico-node-55ndb                          0/1     Running   0          25m   192.168.179.49   master2
kube-system   calico-node-8lrcm                          1/1     Running   0          14m   192.168.179.50   work1
kube-system   calico-node-9nsv5                          0/1     Running   0          40m   192.168.179.48   master1
kube-system   calico-node-s2tnv                          0/1     Running   0          13m   192.168.179.51   work2
kube-system   coredns-f9fd979d6-5f4pj                    1/1     Running   0          40m   172.16.137.65    master1
kube-system   coredns-f9fd979d6-pd7lm                    1/1     Running   0          40m   172.16.137.67    master1
kube-system   etcd-master1                               1/1     Running   1          40m   192.168.179.48   master1
kube-system   etcd-master2                               1/1     Running   0          22m   192.168.179.49   master2
kube-system   kube-apiserver-master1                     1/1     Running   4          40m   192.168.179.48   master1
kube-system   kube-apiserver-master2                     1/1     Running   0          22m   192.168.179.49   master2
kube-system   kube-controller-manager-master1            1/1     Running   1          40m   192.168.179.48   master1
kube-system   kube-controller-manager-master2            1/1     Running   0          22m   192.168.179.49   master2
kube-system   kube-proxy-d8gm5                           1/1     Running   0          14m   192.168.179.50   work1
kube-system   kube-proxy-gq6l4                           1/1     Running   0          40m   192.168.179.48   master1
kube-system   kube-proxy-klhmd                           1/1     Running   1          13m   192.168.179.51   work2
kube-system   kube-proxy-l8w5k                           1/1     Running   0          25m   192.168.179.49   master2
kube-system   kube-scheduler-master1                     1/1     Running   1          40m   192.168.179.48   master1
kube-system   kube-scheduler-master2                     1/1     Running   0          22m   192.168.179.49   master2
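
To confirm that etcd itself is running as a two-member cluster, etcdctl can be invoked inside the etcd pod. A sketch, assuming etcdctl is bundled in the etcd image and speaks the v3 API (the default in etcd 3.4+):

kubectl -n kube-system exec etcd-master1 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    member list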

=== Pod Deployment ===

Deploy the application with app.yml:

kubectl apply -f app.yml

deployment.apps/app created
service/web-service created
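
The contents of app.yml are not included in this draft. A minimal sketch consistent with the output above and the pod listing below (a 10-replica Deployment named app plus a Service named web-service) might look like this; the container image and ports are hypothetical placeholders:

cat << EOF > app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx:1.21   # placeholder image; replace with the real one
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 80
EOF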

Check that the Pods are running and spread across work1 and work2:

kubectl get pods -o wide

NAME                   READY   STATUS    RESTARTS   AGE   IP             NODE
app-849858b7fd-2cd6d   1/1     Running   0          23m   172.16.215.2   work1
app-849858b7fd-hc82t   1/1     Running   0          23m   172.16.123.5   work2
app-849858b7fd-k9twv   1/1     Running   0          23m   172.16.215.5   work1
app-849858b7fd-kpwq8   1/1     Running   0          23m   172.16.215.4   work1
app-849858b7fd-mt77h   1/1     Running   0          23m   172.16.123.1   work2
app-849858b7fd-q4pz7   1/1     Running   0          23m   172.16.123.2   work2
app-849858b7fd-q7w6m   1/1     Running   0          23m   172.16.123.3   work2
app-849858b7fd-qv4t6   1/1     Running   0          23m   172.16.123.4   work2
app-849858b7fd-sncpg   1/1     Running   0          23m   172.16.215.1   work1
app-849858b7fd-xvhkz   1/1     Running   0          23m   172.16.215.3   work1
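
Finally, the Service created by app.yml can be checked as well (sketch):

kubectl get svc web-service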