Kuma
Installing Kuma
curl -L https://kuma.io/installer.sh | VERSION=2.10.1 sh -
export PATH=$(pwd)/kuma-2.10.1/bin:$PATH
kumactl install control-plane | kubectl apply -f -
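If the install succeeded, the control plane should come up in the kuma-system namespace created by the installer — a quick, optional sanity check:

```shell
# The control-plane pod should reach Running state before proceeding
kubectl get pods -n kuma-system
kumactl version
```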
Set the sidecar-injection label on the namespace
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kuma.io/sidecar-injection: enabled
  name: test-cloud
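Kuma injects the sidecar through a mutating admission webhook, so the label only takes effect when pods are (re)created. Workloads that already exist in the namespace need a restart (shown here for the gateway deployment used in this post):

```shell
# Recreate pods so the sidecar injector can add kuma-dp to each of them
kubectl -n test-cloud rollout restart deployment gateway
```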
Add the labels and annotations below to the gateway Deployment, then apply it
labels:
  app: gateway
  kuma.io/sidecar-injection: enabled
annotations:
  kuma.io/gateway: enabled

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  namespace: test-cloud
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 1
  template:
    metadata:
      name: gateway
      labels:
        app: gateway
        kuma.io/sidecar-injection: enabled
      annotations:
        kuma.io/gateway: enabled
    spec:
      imagePullSecrets:
        - name: apim-secret
      serviceAccountName: cruzapim
Allow traffic from the gateway to services
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
metadata:
  name: gateway-to-services
mesh: default
spec:
  sources:
    - match:
        kuma.io/service: gateway_test-cloud_svc_8731
  destinations:
    - match:
        kuma.io/service: "*"
Expose the Kuma dashboard (reachable at :30681/gui/)
apiVersion: v1
kind: Service
metadata:
  name: kuma-dashboard-external
  namespace: kuma-system
spec:
  type: NodePort # or LoadBalancer
  ports:
    - port: 5681
      targetPort: 5681
      nodePort: 30681 # pick from the 30000-32767 range (for NodePort type)
  selector:
    app: kuma-control-plane
Integrating Kuma with Grafana
kumactl install observability | kubectl apply -f -
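The observability components are installed into the mesh-observability namespace; wait until they are ready:

```shell
# kumactl install observability deploys Prometheus and Grafana
# into the mesh-observability namespace
kubectl get pods -n mesh-observability
```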
Before visualizing Kuma service information in Grafana, configure the appropriate policies in Kuma:
- MeshMetric: policy for metrics collection
- MeshTrace: policy for tracing
- MeshAccessLog: policy for log collection
Kuma ships with several pre-built Grafana dashboards:
- Kuma Dataplane: analyzes the state of a single dataplane
- Kuma Mesh: mesh-wide statistics, including the service map, request counts, and error rates
- Kuma Service to Service: statistics from a source service to a destination service
- Kuma CP: control plane statistics
- Kuma Service: aggregated statistics per service
- Kuma MeshGateway: statistics for the built-in gateway
Create mesh.yaml and apply it
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  metrics:
    enabledBackend: prometheus
    backends:
      - name: prometheus
        type: prometheus
        conf:
          port: 9090
          path: /metrics
Create meshMetric.yaml and apply it
apiVersion: kuma.io/v1alpha1
kind: MeshMetric
metadata:
  name: metrics-default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  default:
    sidecar:
      includeUnused: true
      profiles:
        appendProfiles:
          - name: All
        include:
          - type: Exact
            match: envoy_cluster_default_total_match_count
    backends:
      - type: Prometheus
        prometheus:
          port: 9090
          path: "/metrics"
Create grafana-svc.yaml and apply it
apiVersion: v1
kind: Service
metadata:
  name: grafana-nodeport
  namespace: mesh-observability
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      nodePort: 30300
  selector:
    app: grafana
References
https://kuma.io/docs/2.10.x/introduction/install/
Linkerd
Installing the Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
Installing Linkerd
linkerd install --crds | kubectl apply -f -
linkerd install --set proxyInit.runAsRoot=true | kubectl apply -f -
# this option makes Linkerd's proxy-init container run as root
linkerd check
$ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
√ cluster networks contains all pods
√ cluster networks contains all services
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match
linkerd-extension-checks
------------------------
√ namespace configuration for extensions
Status check results are √
Apply the linkerd.io/inject: enabled annotation; apply it in the same place in each service's Deployment as well
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  namespace: test-cloud
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 1
  template:
    metadata:
      name: gateway
      labels:
        app: gateway
      annotations:
        linkerd.io/inject: enabled
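Like Kuma's label, the annotation only applies when pods are created, so a restart is needed. Afterwards, each meshed pod should list a linkerd-proxy container — a hedged check, assuming the app=gateway label from the manifest above:

```shell
# Restart so the proxy injector can mutate the pod spec, then list
# container names; meshed pods include "linkerd-proxy"
kubectl -n test-cloud rollout restart deployment gateway
kubectl -n test-cloud get pods -l app=gateway \
  -o jsonpath='{.items[*].spec.containers[*].name}'
```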
Create the dashboard service.yaml and apply it
apiVersion: v1
kind: Service
metadata:
  name: linkerd-dashboard-external
  namespace: linkerd-viz # the viz web pod runs in linkerd-viz, and a Service can only select pods in its own namespace
spec:
  type: NodePort
  ports:
    - name: http
      port: 8084
      targetPort: 8084
      nodePort: 30084 # pick from the 30000-32767 range
  selector:
    app.kubernetes.io/name: web
    app.kubernetes.io/part-of: Linkerd
Problem: the dashboard cannot be reached via the NodePort IP
It appears that you are trying to reach this service with a host of '192.168.3.147:30084'.
This does not match /^(localhost|127\.0\.0\.1|web\.linkerd-viz\.svc\.cluster\.local|web\.linkerd-viz\.svc|\[::1\])(:\d+)?$/ and has been denied for security reasons.
Please see https://linkerd.io/dns-rebinding for an explanation of what is happening and how to fix it.
This is caused by the Linkerd dashboard's DNS-rebinding protection.
By default, Linkerd only accepts requests whose Host header matches an allow-list of hostnames.
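The allow-list is the regular expression quoted in the error message above; the check can be reproduced locally with grep (a sketch, substituting [0-9] for \d since POSIX ERE has no \d):

```shell
# The dashboard's default host allow-list, as shown in the error message
regex='^(localhost|127\.0\.0\.1|web\.linkerd-viz\.svc\.cluster\.local|web\.linkerd-viz\.svc|\[::1\])(:[0-9]+)?$'
# An allowed host matches the regex; the NodePort IP does not
echo 'localhost:8084'      | grep -qE "$regex" && echo allowed || echo denied  # allowed
echo '192.168.3.147:30084' | grep -qE "$regex" && echo allowed || echo denied  # denied
```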
Solution: change the enforcedHostRegexp setting
Modify the Linkerd viz web component to allow access via the node's IP address.
# first, check whether the linkerd-viz add-on is installed
kubectl get ns linkerd-viz
# if it is not installed, install it
linkerd viz install | kubectl apply -f -
# edit the web deployment
kubectl edit deployment web -n linkerd-viz
Add `- -enforced-host=.*` to the container args:
...
spec:
  automountServiceAccountToken: false
  containers:
    - args:
        - -linkerd-metrics-api-addr=metrics-api.linkerd-viz.svc.cluster.local:8085
        - -cluster-domain=cluster.local
        - -controller-namespace=linkerd
        - -log-level=info
        - -log-format=plain
        - -enforced-host=.*
        - -enable-pprof=false
      image: cr.l5d.io/linkerd/web:edge-25.4.1
      imagePullPolicy: IfNotPresent
...
# restart the deployment so the change takes effect
kubectl rollout restart deployment web -n linkerd-viz