Chapter 18: Using Ceph with Kubernetes
Deployment environment dependency: when Kubernetes is deployed with kubeadm, the apiserver and the rest of the control plane run as Docker containers, and their default images do not include the ceph-common client driver. This is worked around by deploying rbd-provisioner, which ships the driver itself.
* * * * *
Create the driver Deployment, rbd-provisioner.yaml:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: monitoring
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
        args: ["-master=http://10.18.19.98:8080", "-id=rbd-provisioner"]
```
### Generate the Ceph secret key
Use the ceph.client.admin.keyring file provided by your Ceph administrator (here placed under /etc/ceph) to generate the secret value:
```bash
grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
```
### Create the Ceph secret
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: monitoring
type: "kubernetes.io/rbd"
data:
  key: QVFCZU54dFlkMVNvRUJBQUlMTUVXMldSS29mdWhlamNKaC8yRXc9PQ==
```
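Equivalently, a sketch of creating the same secret with kubectl instead of a manifest. Note that kubectl base64-encodes the value itself, so pass the raw key from the keyring (the `grep | awk` output without the trailing `base64`); the placeholder below is illustrative:

```bash
kubectl -n monitoring create secret generic ceph-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<raw-admin-key>'   # raw key, not base64-encoded
```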
### Create the StorageClass
For a binary-deployed cluster, as a reference, the ceph-class.yaml file looks like this (StorageClass is cluster-scoped, so it takes no namespace):
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus-ceph
provisioner: ceph.com/rbd
parameters:
  monitors: 10.18.19.91:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: monitoring
  userSecretName: ceph-secret
  pool: prometheus        # use your own RBD pool here
  userId: admin
```
To dispatch provisioning to the rbd-provisioner deployed above, reference the following:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kong-cassandra-fast
provisioner: ceph.com/rbd    # handled by the rbd-provisioner deployed above
parameters:
  monitors: 10.18.19.91:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: monitoring
  userSecretName: ceph-secret
  pool: prometheus
  userId: admin
```
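For reference, a sketch of a standalone PVC that would be dynamically provisioned through one of these classes; the claim name and size are illustrative only, and the annotation form matches the one used in the StatefulSet below:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim                   # illustrative name
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: "kong-cassandra-fast"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                        # illustrative size
```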
### List all pools
```bash
ceph osd lspools
```
### List all images in a pool
```bash
rbd ls prometheus
```
### Create a pool
```bash
ceph osd pool create prometheus 128 128
```
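On Ceph Luminous and newer, a freshly created pool may also need to be tagged and initialized for RBD use before the provisioner can create images in it; a minimal sketch:

```bash
ceph osd pool application enable prometheus rbd
rbd pool init prometheus
```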
### Configure Prometheus to use the Ceph StorageClass
The StatefulSet manifest is as follows:
```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: prometheus-core
  namespace: monitoring
  labels:
    app: prometheus
    component: core
    version: v1
spec:
  serviceName: prometheus-core
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
        component: core
    spec:
      serviceAccountName: prometheus-k8s
      containers:
      - name: prometheus
        image: prom/prometheus:v1.7.0
        args:
          - '-storage.local.retention=336h'
          - '-storage.local.memory-chunks=1048576'
          - '-config.file=/etc/prometheus/prometheus.yaml'
          - '-alertmanager.url=http://alertmanager:9093/'
        ports:
        - name: webui
          containerPort: 9090
        resources:
          requests:
            cpu: 2
            memory: 2Gi
          limits:
            cpu: 2
            memory: 2Gi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
        - name: rules-volume
          mountPath: /etc/prometheus-rules
        - name: data
          mountPath: /prometheus/data
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-core
      - name: rules-volume
        configMap:
          name: prometheus-rules
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "prometheus-ceph"   # the StorageClass defined above
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
```
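Once the StatefulSet is applied, the PVC created from the volumeClaimTemplate should bind and a corresponding RBD image should appear in the pool; one way to check (the PVC name follows the StatefulSet naming convention):

```bash
kubectl -n monitoring get pvc    # data-prometheus-core-0 should show status Bound
rbd ls prometheus                # the dynamically provisioned image should be listed
```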