kun432's blog

Mostly about smart speakers such as Alexa, sharing Voiceflow information in Japanese. Occasionally AWS, Kubernetes, and the like.


Setting up a complete Kubernetes monitoring environment with kube-prometheus (Part 1)


When monitoring Kubernetes, a typical setup combines:

  • Prometheus
  • AlertManager
  • Grafana

Setting each of these up one by one is tedious, though. kube-prometheus saves you that trouble by providing the whole stack in one go.

https://github.com/prometheus-operator/kube-prometheus

Let's give it a try.


Quickstart

First, let's follow the Quickstart.

Clone the repository. Note that each release is only compatible with specific Kubernetes versions, so check the compatibility matrix in the README first. My cluster runs v1.18, so I'm using the release-0.6 branch.

$ git clone https://github.com/prometheus-operator/kube-prometheus  --branch release-0.6
$ cd kube-prometheus

Normally you would generate the manifests from jsonnet templates, but pre-generated ones are already included under the manifests directory, so let's use those for now. Apply manifests/setup first; it creates the monitoring namespace, the CRDs, and the Prometheus Operator. The until loop below then waits until the ServiceMonitor CRD is registered with the API server, and only after that do we apply the remaining manifests.

$ kubectl create -f manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created

$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
No resources found

$ kubectl create -f manifests/
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

Now let's check the pods. kube-prometheus deploys everything into the monitoring namespace.

$ kubectl -n monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          78s
alertmanager-main-1                    2/2     Running   0          78s
alertmanager-main-2                    2/2     Running   0          78s
grafana-67dfc5f687-nxrbj               1/1     Running   0          78s
kube-state-metrics-69d4c7c69d-b5mfg    3/3     Running   0          78s
node-exporter-9642b                    2/2     Running   0          77s
node-exporter-f8cxg                    2/2     Running   0          77s
node-exporter-lznln                    2/2     Running   0          77s
node-exporter-mlj79                    2/2     Running   0          78s
prometheus-adapter-66b855f564-9b862    1/1     Running   0          77s
prometheus-k8s-0                       3/3     Running   1          77s
prometheus-k8s-1                       3/3     Running   1          77s
prometheus-operator-75c98bcfd7-vwjln   2/2     Running   0          102s

Quite a few things are running. Even this baseline setup clearly needs a fair amount of resources.

Accessing the dashboards

Next, let's look at the dashboards. The Quickstart has you use kubectl port-forward like below and access them via localhost, but you'll probably want to reach them from outside the cluster.

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
$ kubectl --namespace monitoring port-forward svc/grafana 3000
$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093

So let's just patch the Services directly; a quick-and-dirty change is fine for a Quickstart trial. For each Service, the first patch switches its type to NodePort (which assigns a random node port), and the second pins the nodePort to a fixed value.

$ kubectl -n monitoring patch service prometheus-k8s -p '{"spec":{"type": "NodePort"}}'
$ kubectl -n monitoring patch service prometheus-k8s --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 30080}]'
$ kubectl -n monitoring patch service alertmanager-main -p '{"spec":{"type": "NodePort"}}'
$ kubectl -n monitoring patch service alertmanager-main --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 30081}]'
$ kubectl -n monitoring patch service grafana -p '{"spec":{"type": "NodePort"}}'
$ kubectl -n monitoring patch service grafana --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 30082}]'
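As an aside, the two patches per Service could also be expressed as a single strategic-merge patch file and applied with `kubectl patch --patch-file` (available in recent kubectl versions). This is just a sketch for prometheus-k8s; the file name is hypothetical, and port 9090 is taken from the port-forward example above. Strategic merge matches the entry against the existing ports list by its `port` value, so only `nodePort` and the Service type change:

```yaml
# prometheus-nodeport-patch.yaml (hypothetical name)
# Strategic-merge patch: switch the Service to NodePort and pin the
# nodePort in one step. The ports entry is merged into the existing
# list by its "port" value (9090 for prometheus-k8s).
spec:
  type: NodePort
  ports:
  - port: 9090
    nodePort: 30080
```

Applied with something like `kubectl -n monitoring patch service prometheus-k8s --patch-file prometheus-nodeport-patch.yaml`.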

Now access each dashboard on its NodePort (30080 to 30082 as set above, on any node's IP).

(Screenshots: the Prometheus, Alertmanager, and Grafana UIs, each reachable on its NodePort.)

Everything is showing up!

Removing kube-prometheus

Now that we've confirmed it works, let's tear it down for now.

$ kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
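Note that namespace deletion is asynchronous, so kubectl returns before everything is actually gone. If you want to script the teardown, a small sketch like this (assuming kubectl is on your PATH and pointed at the right cluster) waits for the monitoring namespace to disappear:

```shell
# Wait until the monitoring namespace has actually been deleted; the loop
# exits as soon as "kubectl get namespace monitoring" no longer succeeds.
until ! kubectl get namespace monitoring >/dev/null 2>&1; do
  echo "waiting for namespace monitoring to terminate..."
  sleep 2
done
echo "cleanup complete"
```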

Summary

Getting all the components you need for monitoring installed in one shot is really convenient. That said, you'll likely want to adjust things to fit your own environment.

So next time, I'd like to try customizing kube-prometheus with jsonnet.