# Deploying the APISIX API Gateway in an Offline Kubernetes Environment
## Introduction
Deploying the APISIX API gateway in an offline (air-gapped) Kubernetes environment requires special handling of image distribution, configuration management, and storage persistence. This guide walks through a complete deployment of an APISIX gateway stack in an offline K8s cluster.
## Prerequisites
### System Requirements
- Kubernetes 1.20+
- kubectl 1.20+
- Helm 3.0+ (optional)
- An offline image registry, or images loaded locally on each node
- Persistent storage support (PV/PVC)
### Offline-Specific Considerations
- **Image management**: all required images must be pulled in advance and transferred into the offline environment
- **Configuration management**: use ConfigMaps and Secrets for configuration
- **Storage persistence**: etcd data needs persistent storage
- **Networking**: configure Services and Ingress resources
- **Resource limits**: set sensible resource requests and limits
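To keep the kubelet from ever attempting a remote pull, it also helps to pin `imagePullPolicy` explicitly in every container spec; a minimal fragment (standard Kubernetes fields, using an image from the list below):

```yaml
# In each container spec: prefer the locally loaded image and
# never silently fall back to a remote registry.
containers:
  - name: apisix
    image: apache/apisix:3.14.1-debian
    imagePullPolicy: IfNotPresent   # or "Never" to fail fast when the image is missing locally
```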
## Image Preparation
### 1. Required Images
```bash
# APISIX images
apache/apisix:3.14.1-debian
apache/apisix-dashboard:3.0.1-alpine
# etcd image
bitnami/etcd:3.5.11
# sample upstream service image
nginx:1.19.0-alpine
```
### 2. Pulling and Pushing Images
```bash
# Pull the images on a machine with internet access
docker pull apache/apisix:3.14.1-debian
docker pull apache/apisix-dashboard:3.0.1-alpine
docker pull bitnami/etcd:3.5.11
docker pull nginx:1.19.0-alpine
# Save the images as tar files
docker save apache/apisix:3.14.1-debian -o apisix-3.14.1-debian.tar
docker save apache/apisix-dashboard:3.0.1-alpine -o apisix-dashboard-3.0.1-alpine.tar
docker save bitnami/etcd:3.5.11 -o etcd-3.5.11.tar
docker save nginx:1.19.0-alpine -o nginx-1.19.0-alpine.tar
# Load the images in the offline environment
docker load -i apisix-3.14.1-debian.tar
docker load -i apisix-dashboard-3.0.1-alpine.tar
docker load -i etcd-3.5.11.tar
docker load -i nginx-1.19.0-alpine.tar
# Tag and push to the offline registry (if one is available)
docker tag apache/apisix:3.14.1-debian your-registry/apisix:3.14.1-debian
docker push your-registry/apisix:3.14.1-debian
```
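For more than a handful of images, the pull/save/load steps are worth scripting. A minimal sketch, assuming the image list above; the `tar_name` naming convention (strip the organization prefix, replace `:` with `-`) is our own, chosen to match the filenames used in this guide:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Images required by this guide.
IMAGES=(
  "apache/apisix:3.14.1-debian"
  "apache/apisix-dashboard:3.0.1-alpine"
  "bitnami/etcd:3.5.11"
  "nginx:1.19.0-alpine"
)

# Derive a flat tar filename from an image reference,
# e.g. "apache/apisix:3.14.1-debian" -> "apisix-3.14.1-debian.tar".
tar_name() {
  local ref="${1##*/}"     # drop the registry/organization prefix
  echo "${ref//:/-}.tar"   # replace ':' so the name is filesystem-safe
}

# Run on the connected machine: pull and save every image.
save_all() {
  for img in "${IMAGES[@]}"; do
    docker pull "$img"
    docker save "$img" -o "$(tar_name "$img")"
  done
}

# Run in the offline environment: load every saved tar.
load_all() {
  for tar in *.tar; do
    docker load -i "$tar"
  done
}
```

Calling `save_all` on the connected machine and `load_all` in the offline environment reproduces the manual commands above.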
## Kubernetes Deployment Manifests
### 1. Namespace
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: apisix-gateway
labels:
name: apisix-gateway
```
### 2. etcd Configuration
```yaml
# etcd-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: etcd-config
namespace: apisix-gateway
data:
etcd.conf.yml: |
name: etcd
data-dir: /bitnami/etcd
listen-client-urls: http://0.0.0.0:2379
advertise-client-urls: http://etcd-service:2379
listen-peer-urls: http://0.0.0.0:2380
initial-advertise-peer-urls: http://etcd-service:2380
initial-cluster: etcd=http://etcd-service:2380
initial-cluster-token: etcd-cluster-1
initial-cluster-state: new
```
```yaml
# etcd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: etcd
namespace: apisix-gateway
labels:
app: etcd
spec:
replicas: 1
selector:
matchLabels:
app: etcd
template:
metadata:
labels:
app: etcd
spec:
containers:
- name: etcd
image: bitnami/etcd:3.5.11
ports:
- containerPort: 2379
name: client
- containerPort: 2380
name: peer
env:
- name: ETCD_ENABLE_V2
value: "true"
- name: ALLOW_NONE_AUTHENTICATION
value: "yes"
- name: ETCD_ADVERTISE_CLIENT_URLS
value: "http://etcd-service:2379"
- name: ETCD_LISTEN_CLIENT_URLS
value: "http://0.0.0.0:2379"
- name: ETCD_DATA_DIR
value: "/bitnami/etcd"
volumeMounts:
- name: etcd-data
mountPath: /bitnami/etcd
- name: etcd-config
mountPath: /opt/bitnami/etcd/conf/etcd.conf.yml
subPath: etcd.conf.yml
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: etcd-data
persistentVolumeClaim:
claimName: etcd-pvc
- name: etcd-config
configMap:
name: etcd-config
```
```yaml
# etcd-service.yaml
apiVersion: v1
kind: Service
metadata:
name: etcd-service
namespace: apisix-gateway
spec:
selector:
app: etcd
ports:
- name: client
port: 2379
targetPort: 2379
- name: peer
port: 2380
targetPort: 2380
```
```yaml
# etcd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: etcd-pvc
namespace: apisix-gateway
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
  storageClassName: your-storage-class # replace with the actual StorageClass in your cluster
```
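Offline clusters often have no dynamic storage provisioner. In that case the claim can be backed by a statically provisioned local PersistentVolume; a hedged sketch where the storage class name, node name, and host path are placeholders you must adapt:

```yaml
# etcd-pv.yaml -- static local volume for clusters without dynamic provisioning
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: your-storage-class   # must match the PVC's storageClassName
  local:
    path: /data/etcd                     # placeholder: directory pre-created on the node
  nodeAffinity:                          # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - your-node-name         # placeholder
```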
### 3. APISIX Configuration
```yaml
# apisix-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: apisix-config
namespace: apisix-gateway
data:
config.yaml: |
apisix:
node_listen: 9080
enable_ipv6: false
enable_admin: true
enable_admin_cors: true
enable_debug: false
enable_dev_mode: false
enable_reuseport: true
nginx_config:
error_log: "/dev/stderr"
error_log_level: "warn"
deployment:
role: traditional
role_traditional:
config_provider: etcd
admin:
admin_key:
- name: "admin"
key: edd1c9f034335f136f87ad84b625c8f1
role: admin
allow_admin:
- 0.0.0.0/0
etcd:
host:
- "http://etcd-service:2379"
prefix: "/apisix"
timeout: 30
plugin_attr:
prometheus:
export_addr:
ip: "0.0.0.0"
port: 9091
```
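The admin key above is APISIX's well-known documentation example and should not reach production. One hedged alternative, assuming APISIX's documented `${{VAR}}` environment-variable interpolation in `config.yaml` (the Secret and variable names here are our own):

```yaml
# apisix-admin-secret.yaml -- keep the real key out of the ConfigMap
apiVersion: v1
kind: Secret
metadata:
  name: apisix-admin-secret
  namespace: apisix-gateway
stringData:
  APISIX_ADMIN_KEY: "replace-with-a-long-random-key"
# Then, in config.yaml:
#     admin_key:
#       - name: "admin"
#         key: ${{APISIX_ADMIN_KEY}}
#         role: admin
# and in the APISIX container spec:
#     envFrom:
#       - secretRef:
#           name: apisix-admin-secret
```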
```yaml
# apisix-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: apisix
namespace: apisix-gateway
labels:
app: apisix
spec:
replicas: 2
selector:
matchLabels:
app: apisix
template:
metadata:
labels:
app: apisix
spec:
containers:
- name: apisix
image: apache/apisix:3.14.1-debian
ports:
- containerPort: 9080
name: http
- containerPort: 9443
name: https
- containerPort: 9180
name: admin
- containerPort: 9091
name: metrics
env:
- name: APISIX_STAND_ALONE
value: "false"
volumeMounts:
- name: apisix-config
mountPath: /usr/local/apisix/conf/config.yaml
subPath: config.yaml
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
livenessProbe:
httpGet:
path: /apisix/admin/services/
port: 9180
httpHeaders:
- name: X-API-KEY
value: edd1c9f034335f136f87ad84b625c8f1
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /apisix/admin/services/
port: 9180
httpHeaders:
- name: X-API-KEY
value: edd1c9f034335f136f87ad84b625c8f1
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: apisix-config
configMap:
name: apisix-config
```
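With `replicas: 2`, losing a single node can still take down the whole gateway if both pods are scheduled together. A hedged podAntiAffinity fragment for the Deployment's pod template (standard Kubernetes scheduling fields):

```yaml
# add under spec.template.spec in apisix-deployment.yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # prefer spreading replicas across nodes
          labelSelector:
            matchLabels:
              app: apisix
```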
```yaml
# apisix-service.yaml
apiVersion: v1
kind: Service
metadata:
name: apisix-service
namespace: apisix-gateway
spec:
selector:
app: apisix
ports:
- name: http
port: 9080
targetPort: 9080
- name: https
port: 9443
targetPort: 9443
- name: admin
port: 9180
targetPort: 9180
- name: metrics
port: 9091
targetPort: 9091
type: ClusterIP
```
### 4. APISIX Dashboard Configuration
```yaml
# dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: dashboard-config
namespace: apisix-gateway
data:
conf.yaml: |
conf:
listen:
host: 0.0.0.0
port: 9000
etcd:
endpoints:
- http://etcd-service:2379
log:
error_log:
level: warn
file_path: logs/error.log
authentication:
secret: secret
expire_time: 3600
users:
- username: admin
password: admin
- username: user
password: user
```
```yaml
# dashboard-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: apisix-dashboard
namespace: apisix-gateway
labels:
app: apisix-dashboard
spec:
replicas: 1
selector:
matchLabels:
app: apisix-dashboard
template:
metadata:
labels:
app: apisix-dashboard
spec:
containers:
- name: dashboard
image: apache/apisix-dashboard:3.0.1-alpine
ports:
- containerPort: 9000
name: http
volumeMounts:
- name: dashboard-config
mountPath: /usr/local/apisix-dashboard/conf/conf.yaml
subPath: conf.yaml
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /
port: 9000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 9000
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: dashboard-config
configMap:
name: dashboard-config
```
```yaml
# dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: apisix-dashboard-service
namespace: apisix-gateway
spec:
selector:
app: apisix-dashboard
ports:
- name: http
port: 9000
targetPort: 9000
type: ClusterIP
```
### 5. Sample Upstream Service
```yaml
# web1-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: web1-config
namespace: apisix-gateway
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name localhost;
location / {
return 200 'Hello from Web Service 1!\n';
add_header Content-Type text/plain;
}
location /health {
return 200 'OK';
add_header Content-Type text/plain;
}
}
}
```
```yaml
# web1-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web1
namespace: apisix-gateway
labels:
app: web1
spec:
replicas: 2
selector:
matchLabels:
app: web1
template:
metadata:
labels:
app: web1
spec:
containers:
- name: nginx
image: nginx:1.19.0-alpine
ports:
- containerPort: 80
name: http
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 10
periodSeconds: 5
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: nginx-config
configMap:
name: web1-config
```
```yaml
# web1-service.yaml
apiVersion: v1
kind: Service
metadata:
name: web1-service
namespace: apisix-gateway
spec:
selector:
app: web1
ports:
- name: http
port: 80
targetPort: 80
type: ClusterIP
```
### 6. Ingress Configuration
Exposing the gateway and dashboard through an Ingress assumes the cluster already runs an Ingress controller (whose image must also be available offline):
```yaml
# apisix-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: apisix-ingress
namespace: apisix-gateway
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: apisix.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: apisix-service
port:
number: 9080
- host: dashboard.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: apisix-dashboard-service
port:
number: 9000
```
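Offline clusters frequently have no Ingress controller installed at all. As an alternative, the gateway can be exposed directly with a NodePort Service; the port numbers below are arbitrary picks from Kubernetes' default 30000-32767 NodePort range:

```yaml
# apisix-nodeport.yaml -- expose the gateway without an Ingress controller
apiVersion: v1
kind: Service
metadata:
  name: apisix-nodeport
  namespace: apisix-gateway
spec:
  selector:
    app: apisix
  type: NodePort
  ports:
    - name: http
      port: 9080
      targetPort: 9080
      nodePort: 30080   # reachable at http://<any-node-ip>:30080
    - name: https
      port: 9443
      targetPort: 9443
      nodePort: 30443
```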
## Deployment Steps
### 1. Create the Namespace
```bash
kubectl apply -f namespace.yaml
```
### 2. Deploy etcd
```bash
kubectl apply -f etcd-configmap.yaml
kubectl apply -f etcd-pvc.yaml
kubectl apply -f etcd-deployment.yaml
kubectl apply -f etcd-service.yaml
```
### 3. Deploy APISIX
```bash
kubectl apply -f apisix-configmap.yaml
kubectl apply -f apisix-deployment.yaml
kubectl apply -f apisix-service.yaml
```
### 4. Deploy the Dashboard
```bash
kubectl apply -f dashboard-configmap.yaml
kubectl apply -f dashboard-deployment.yaml
kubectl apply -f dashboard-service.yaml
```
### 5. Deploy the Sample Service
```bash
kubectl apply -f web1-configmap.yaml
kubectl apply -f web1-deployment.yaml
kubectl apply -f web1-service.yaml
```
### 6. Apply the Ingress
```bash
kubectl apply -f apisix-ingress.yaml
```
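The six apply steps above can be collected into one script. A sketch, reusing the manifest filenames from this guide; the `DRY_RUN` guard is our own convention so the command list can be reviewed before touching a cluster:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Manifests in dependency order: namespace first, then etcd, then everything on top.
MANIFESTS=(
  namespace.yaml
  etcd-configmap.yaml etcd-pvc.yaml etcd-deployment.yaml etcd-service.yaml
  apisix-configmap.yaml apisix-deployment.yaml apisix-service.yaml
  dashboard-configmap.yaml dashboard-deployment.yaml dashboard-service.yaml
  web1-configmap.yaml web1-deployment.yaml web1-service.yaml
  apisix-ingress.yaml
)

apply_all() {
  for m in "${MANIFESTS[@]}"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "kubectl apply -f $m"   # DRY_RUN=1 (default): only print the commands
    else
      kubectl apply -f "$m"
    fi
  done
}

apply_all
```

Run with `DRY_RUN=0` once the printed command order looks right.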
## Verifying the Deployment
### 1. Check Pod Status
```bash
kubectl get pods -n apisix-gateway
```
### 2. Check Service Status
```bash
kubectl get svc -n apisix-gateway
```
### 3. Test the APISIX Admin API
```bash
# Get the APISIX service ClusterIP
APISIX_IP=$(kubectl get svc apisix-service -n apisix-gateway -o jsonpath='{.spec.clusterIP}')
# Call the Admin API from a temporary pod
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
curl "http://${APISIX_IP}:9180/apisix/admin/services/" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
```
### 4. Test the Sample Service
```bash
# Get the web1 service ClusterIP
WEB1_IP=$(kubectl get svc web1-service -n apisix-gateway -o jsonpath='{.spec.clusterIP}')
# Call the web1 service directly
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
curl "http://${WEB1_IP}/"
```
## Creating an Example Route
### 1. Create a Route via the Admin API
```bash
# Create a route to the web1 service (PUT, because we choose the route ID ourselves)
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
curl -X PUT "http://${APISIX_IP}:9180/apisix/admin/routes/1" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
-H 'Content-Type: application/json' \
-d '{
"uri": "/web1/*",
"upstream": {
"type": "roundrobin",
"nodes": {
"web1-service.apisix-gateway.svc.cluster.local:80": 1
}
}
}'
```
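Plugins are attached in the same request body. For example, rate limiting the route with the bundled limit-count plugin; this is a sketch of the request body only, sent the same way as above, and the thresholds are illustrative:

```json
{
  "uri": "/web1/*",
  "plugins": {
    "limit-count": {
      "count": 100,
      "time_window": 60,
      "rejected_code": 429
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "web1-service.apisix-gateway.svc.cluster.local:80": 1
    }
  }
}
```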
### 2. Test the Route
```bash
# Exercise the new route through the gateway
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
curl "http://${APISIX_IP}:9080/web1/"
```
## Monitoring and Operations
### 1. View Logs
```bash
# APISIX logs
kubectl logs -f deployment/apisix -n apisix-gateway
# Dashboard logs
kubectl logs -f deployment/apisix-dashboard -n apisix-gateway
# etcd logs
kubectl logs -f deployment/etcd -n apisix-gateway
```
### 2. Scaling
```bash
# Scale out the APISIX gateway
kubectl scale deployment apisix --replicas=3 -n apisix-gateway
# Scale out the web1 service
kubectl scale deployment web1 --replicas=3 -n apisix-gateway
```
### 3. Update Configuration
```bash
# Edit the APISIX ConfigMap
kubectl edit configmap apisix-config -n apisix-gateway
# Restart APISIX so the new configuration takes effect
kubectl rollout restart deployment/apisix -n apisix-gateway
```
## High Availability
### 1. etcd Cluster Deployment
```yaml
# etcd-cluster.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: apisix-gateway
spec:
  serviceName: etcd-headless
  replicas: 3
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
        - name: etcd
          image: bitnami/etcd:3.5.11
          ports:
            - containerPort: 2379
              name: client
            - containerPort: 2380
              name: peer
          env:
            # POD_NAME must be declared before the variables that reference it:
            # Kubernetes only expands $(VAR) from variables defined earlier in the list.
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ETCD_NAME
              value: "$(POD_NAME)"   # member name must match the entries in ETCD_INITIAL_CLUSTER
            - name: ETCD_ENABLE_V2
              value: "true"
            - name: ALLOW_NONE_AUTHENTICATION
              value: "yes"
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: "http://$(POD_NAME).etcd-headless.apisix-gateway.svc.cluster.local:2379"
            - name: ETCD_LISTEN_CLIENT_URLS
              value: "http://0.0.0.0:2379"
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: "http://$(POD_NAME).etcd-headless.apisix-gateway.svc.cluster.local:2380"
            - name: ETCD_LISTEN_PEER_URLS
              value: "http://0.0.0.0:2380"
            - name: ETCD_INITIAL_CLUSTER
              value: "etcd-0=http://etcd-0.etcd-headless.apisix-gateway.svc.cluster.local:2380,etcd-1=http://etcd-1.etcd-headless.apisix-gateway.svc.cluster.local:2380,etcd-2=http://etcd-2.etcd-headless.apisix-gateway.svc.cluster.local:2380"
            - name: ETCD_INITIAL_CLUSTER_STATE
              value: "new"
          volumeMounts:
            - name: etcd-data
              mountPath: /bitnami/etcd
  volumeClaimTemplates:
    - metadata:
        name: etcd-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
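The StatefulSet's `serviceName: etcd-headless` refers to a headless Service that must also exist; without it, the per-pod DNS names (`etcd-0.etcd-headless...`) used in the environment variables will not resolve. A minimal definition:

```yaml
# etcd-headless.yaml -- headless Service providing stable per-pod DNS names
apiVersion: v1
kind: Service
metadata:
  name: etcd-headless
  namespace: apisix-gateway
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: etcd
  ports:
    - name: client
      port: 2379
    - name: peer
      port: 2380
```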
### 2. Updating the APISIX Configuration for the etcd Cluster
```yaml
# update the APISIX config.yaml to list all etcd cluster endpoints
etcd:
host:
- "http://etcd-0.etcd-headless.apisix-gateway.svc.cluster.local:2379"
- "http://etcd-1.etcd-headless.apisix-gateway.svc.cluster.local:2379"
- "http://etcd-2.etcd-headless.apisix-gateway.svc.cluster.local:2379"
prefix: "/apisix"
timeout: 30
```
## Security Hardening
### 1. Network Policies
```yaml
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apisix-network-policy
  namespace: apisix-gateway
spec:
  podSelector:
    matchLabels:
      app: apisix
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx   # allow the Ingress controller to reach the gateway
      ports:
        - protocol: TCP
          port: 9080
        - protocol: TCP
          port: 9443
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: etcd
      ports:
        - protocol: TCP
          port: 2379
    - to: []   # allow egress to external services
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
    - to: []   # allow DNS lookups, which an Egress policy would otherwise block
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
### 2. RBAC Configuration
```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: apisix-sa
namespace: apisix-gateway
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: apisix-role
namespace: apisix-gateway
rules:
- apiGroups: [""]
resources: ["configmaps", "secrets"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: apisix-rolebinding
namespace: apisix-gateway
subjects:
- kind: ServiceAccount
name: apisix-sa
namespace: apisix-gateway
roleRef:
kind: Role
name: apisix-role
apiGroup: rbac.authorization.k8s.io
```
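Note that the ServiceAccount only takes effect if the Deployment's pod template references it; otherwise the APISIX pods keep running as the namespace's `default` account:

```yaml
# apisix-deployment.yaml, inside spec.template.spec:
serviceAccountName: apisix-sa
```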
## Troubleshooting
### 1. Common Issues
**Pod fails to start**
```bash
# Inspect the Pod's events and status
kubectl describe pod <pod-name> -n apisix-gateway
# Check the logs
kubectl logs <pod-name> -n apisix-gateway
```
**etcd connection failures**
```bash
# Check the etcd service
kubectl get svc etcd-service -n apisix-gateway
# Test the etcd health endpoint
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never -- \
curl "http://etcd-service.apisix-gateway.svc.cluster.local:2379/health"
```
**Storage problems**
```bash
# Check PVC status
kubectl get pvc -n apisix-gateway
# Check PV status
kubectl get pv
```
### 2. Performance Tuning
**Adjusting resource limits**
```yaml
resources:
requests:
memory: "1Gi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "2000m"
```
**HPA configuration** (requires metrics-server, whose image must also be available in the offline environment)
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: apisix-hpa
namespace: apisix-gateway
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: apisix
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
```
## Summary
With the configuration above we have a complete APISIX API gateway deployment in an offline Kubernetes environment, covering:
- **etcd**: reliable configuration storage (single node, or a cluster for high availability)
- **APISIX gateway**: a horizontally scalable API gateway
- **Dashboard**: a web management UI
- **Sample service**: a simple upstream for testing and validation
- **Persistent storage**: durable etcd data
- **Networking**: Service and Ingress configuration
- **Security**: network policies and RBAC
- **Monitoring**: health checks and metrics export
Before relying on this setup in production, harden it further: replace the well-known example admin key, restrict `allow_admin` to trusted networks, enable etcd authentication and TLS, and change the default Dashboard passwords.
## Related Resources
- [Kubernetes documentation](https://kubernetes.io/docs/)
- [Apache APISIX documentation](https://apisix.apache.org/docs/)
- [etcd documentation](https://etcd.io/docs/)
- [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
---
*This guide was written against Kubernetes 1.20+ and Apache APISIX 3.14.1; configuration options and behavior may change across versions.*