Extra services#

Ingress controller#

Attention

Using Kubernetes versions earlier than 1.25.x is risky due to a vulnerability in the ingress controller. If you need the Ingress Controller, we recommend choosing Kubernetes version 1.25 or later.

If you choose the Ingress Controller option when creating a cluster, an extra worker node will be deployed in the cluster. It allows you to configure access to applications running inside the cluster through a single entry point. Create an Ingress resource to make a service available via the Ingress Controller.

The example below shows a cluster configuration where the Ingress Controller has the Elastic IP 185.12.31.211. The goal is to make an application deployed in the cluster available at http://185.12.31.211/example.

Example of the service configuration in a cluster
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    k8s-app: example
spec:
  containers:
  - name: example-app
    image: quay.io/coreos/example-app:v1.0
  imagePullSecrets:
  - name: regcred

---
kind: Service
apiVersion: v1
metadata:
  name: example-service
  namespace: default
spec:
  selector:
    k8s-app: example
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer

To make your application available at http://185.12.31.211/example, open port 80 and create an Ingress configuration:

Example of the Ingress configuration (networking.k8s.io/v1beta1, for Kubernetes versions 1.20.9 and 1.18.2)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          serviceName: example-service
          servicePort: 80
Example of the Ingress configuration (networking.k8s.io/v1, for Kubernetes versions 1.27.3, 1.26.6, 1.25.11, and 1.22.3)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

HTTPS setup on Ingress Controller#

These instructions help you secure services that process sensitive data: HTTPS connections are an essential part of a secure web service and guarantee data confidentiality and integrity.

In order to use HTTPS on your Ingress Controller, you need:

  • A domain name for the Elastic IP associated with the Ingress Controller.

  • A TLS private key and certificate.

In this example, we extend the Ingress configuration from the previous section.

To protect the Ingress, create a Kubernetes Secret that contains the TLS private key tls.key and the certificate tls.crt.

Example of the Kubernetes Secret configuration
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
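
As an alternative to writing the manifest by hand, the same Secret can be created directly from the certificate and key files with kubectl (the file names here are assumed):

kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key -n ingress-nginx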

TLS will not work on the default rule, so the following changes should be made to the Ingress configuration:

Required changes in the Ingress configuration (networking.k8s.io/v1beta1, for Kubernetes versions 1.20.9 and 1.18.2)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
      - "Your Domain"
      secretName: tls-secret
  rules:
  - host: "Your Domain"
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          serviceName: example-service
          servicePort: 80
Required changes in the Ingress configuration (networking.k8s.io/v1, for Kubernetes versions 1.27.3, 1.26.6, 1.25.11, and 1.22.3)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
      - "Your Domain"
      secretName: tls-secret
  rules:
  - host: "Your Domain"
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

After applying these configurations, you will be able to access the Ingress Controller over the secure HTTPS protocol.
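
To verify the setup, request the application over HTTPS; the domain below is a placeholder for your actual domain name:

curl https://your-domain.example/example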

EBS-provider#

If you select the EBS-provider option when creating a cluster, a dedicated service will be deployed in the cluster. It allows Kubernetes to manage volumes in CROC Cloud and use them as Persistent Volumes. The service can work with existing volumes or create new ones itself.

The created volumes will be available in the Volumes subsection of the Storage section.

The EBS provider supports the following Kubernetes versions: 1.27.3, 1.26.6, 1.25.11, 1.22.3, 1.20.9, and 1.18.2.

To use volumes as Persistent Volumes in Kubernetes, you need to describe the following configurations:

  1. Storage Class is a description of the class of storage. More information on storage classes can be found in the official documentation.

  2. Persistent Volume is a description of the specific volume to be attached.

  3. Persistent Volume Claim is a request for a Persistent Volume that describes the required volume parameters. If a Persistent Volume with the same or better parameters is found, Kubernetes will use it.

Scenario of using an existing CROC Cloud volume#

To use an existing volume in Kubernetes as a Persistent Volume, specify ebs.csi.aws.com in the driver field and the volume ID in the volumeHandle field:

Example of using an existing volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 48Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-static
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-9991C120
    fsType: xfs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone
          operator: In
          values:
          - ru-msk-vol51

It is important that the nodeAffinity section (the last line of the example above) specifies the availability zone in which the volume was created. The volume and the instance must be located in the same availability zone; otherwise, the volume cannot be attached to the instance.

To use this volume later, simply create a Persistent Volume Claim that matches the volume parameters and reference it in the required resource. The storageClassName of the claim must match the one specified in the Persistent Volume.

Configuration example for creating a pod with a volume of at least 20 GiB
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-static
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: static-claim
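
After applying the manifests, you can make sure that the claim has been bound to the prepared volume; both objects should report the Bound status:

kubectl get pv static-pv
kubectl get pvc static-claim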

Scenario with creating new volumes#

To create new volumes, define a Storage Class with ebs.csi.aws.com in the provisioner field. In the parameters field, you can specify parameters for the created volumes:

Parameter                  Valid values           Default value  Description
csi.storage.k8s.io/fsType  xfs, ext2, ext3, ext4  ext4           File system in which the new volume will be formatted.
type                       io2, gp2, st2          gp2            Volume type.
iopsPerGB                                                        IOPS per gibibyte. Required for io2 volumes.

If a parameter is not specified, the default value is used.

A configuration example
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-dynamic
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io2
  iopsPerGB: "50"
  encrypted: "true"

In this scenario, a Persistent Volume is created automatically in response to a Persistent Volume Claim.

For EBS volumes, Persistent Volumes in CROC Cloud support accessModes only with the ReadWriteOnce value (see the Kubernetes documentation).

Request example for creating a 4 GiB volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 4Gi

When creating a pod that uses this claim, Kubernetes will automatically create a 4 GiB volume in the cloud with the parameters specified in the Storage Class and attach it to the pod.

Pod configuration example
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-dynamic-claim
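
Since the Storage Class above uses volumeBindingMode: WaitForFirstConsumer, the claim remains in the Pending state until the pod is scheduled. Once the pod is running, you can check that the volume has been created and bound:

kubectl get pvc ebs-dynamic-claim
kubectl get pv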

Scenario with snapshots#

To take volume snapshots, first create a pod with a volume and a Storage Class, as well as a Volume Snapshot Class.

Example of Volume Snapshot Class configuration for Kubernetes versions 1.20.9 and 1.18.2
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: aws-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
Example of Volume Snapshot Class configuration for Kubernetes versions 1.27.3, 1.26.6, 1.25.11, and 1.22.3
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: aws-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system

When you apply this configuration, a VolumeSnapshotClass object will be created automatically. The same credentials will be used for authorization in the cloud as for the EBS provider. In addition, you will need a Volume Snapshot.

Example of Volume Snapshot configuration for Kubernetes versions 1.20.9 and 1.18.2
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot-2
  namespace: default
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-dynamic-claim
Example of Volume Snapshot configuration for Kubernetes versions 1.27.3, 1.26.6, 1.25.11, and 1.22.3
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot-2
  namespace: default
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-dynamic-claim

If you apply this request, a VolumeSnapshot object will be created, and a volume snapshot will be automatically created in the cloud from the current state of the Persistent Volume Claim in the Kubernetes cluster. You can then use this Volume Snapshot as a data source (dataSource) for a Persistent Volume Claim.
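
Snapshot creation is asynchronous. Before restoring from the snapshot, you can check that it is ready to use (the READYTOUSE column should show true):

kubectl get volumesnapshot ebs-volume-snapshot-2 -n default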

An example of Persistent Volume Claim configuration
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-restore-claim
spec:
  dataSource:
    name: ebs-volume-snapshot-2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 32Gi
An example of using the restored Persistent Volume Claim in a pod configuration
apiVersion: v1
kind: Pod
metadata:
  name: app-restored
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage-restored
      mountPath: /data
  volumes:
  - name: persistent-storage-restored
    persistentVolumeClaim:
      claimName: ebs-restore-claim

Scenario with increasing disk size#

To make it possible to increase the volume size, you must set the allowVolumeExpansion field to true in the Storage Class configuration (https://kubernetes.io/docs/concepts/storage/storage-classes/).

The file system size can only be changed for the xfs, ext3, and ext4 file systems.

An example of a pod configuration with a dynamically created volume of 8 GiB, which can be increased in size
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-dynamic
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-dynamic-claim

To request an increase in the size of a previously created volume, edit the spec.resources.requests.storage field in the Persistent Volume Claim configuration. The new value must be larger than the current volume size and a multiple of 8 GiB.

The Persistent Volume Claim configuration can be edited with the command:

kubectl edit pvc ebs-dynamic-claim
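
Alternatively, the same change can be made non-interactively with kubectl patch; the 16Gi value here is only an example of a new size:

kubectl patch pvc ebs-dynamic-claim -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'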

It takes some time to change the volume size. You can find out the result by querying the current Persistent Volume Claim configuration:

kubectl get pvc ebs-dynamic-claim -o yaml

After the operation is complete, the status.capacity.storage field will contain the new volume size.

Installation of EBS-provider in your Kubernetes Cluster#

You can install the EBS provider separately from the cloud service.

To do this, create a Secret with the credentials of the user on whose behalf operations with the cloud will be performed:

A configuration example for a secret
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kube-system
stringData:
  key_id: "<AWS_ACCESS_KEY_ID>"
  access_key: "<AWS_SECRET_ACCESS_KEY>"

For correct operation, the user with the Kubernetes EBS Provider user role, whose credentials appear in the key_id and access_key fields, must have privileges in the infrastructure service for the following actions:

  • attach_volume

  • detach_volume

  • describe_instances

  • describe_volumes

Optionally, to be able to create and delete volumes, as well as to increase their size:

  • create_volume

  • delete_volume

  • modify_volume

Optionally, to enable the creation and deletion of snapshots:

  • create_snapshot

  • delete_snapshot

  • describe_snapshots

To check or update the set of actions available to a user with the Kubernetes EBS Provider user role, go to the IAM section, open the Projects tab on the user page, and click Configure next to the corresponding project. If the infrastructure service lacks actions such as create_snapshot, delete_snapshot, or describe_snapshots (required for volume snapshots) or modify_volume (required for increasing the volume size), add them.

You can also delete the user from the project and re-add them as a Kubernetes EBS Provider user to expand the available grants. However, existing EBS providers in deployed clusters will stop working.

After setting the required privileges, apply the configuration (version 1.22.3 as an example):

kubectl apply -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/ebs.yaml

If the installation is successful (pods with the ebs-csi-* prefix in their names are launched), CROC Cloud volumes will become available for use in Kubernetes.
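
You can list the provider pods to check this; kube-system here is an assumption based on the namespace used by the other manifests in this section:

kubectl get pods -n kube-system | grep ebs-csi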

To use snapshots, follow these steps:

  1. Run the commands on any master node in the Kubernetes cluster (only for clusters that were deployed prior to 05.10.2021 and use the EBS provider; version 1.22.3 as an example):

    kubectl delete -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/ebs.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/ebs.yaml
    
  2. Apply the configuration (version 1.22.3 as an example):

    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/snapshot-controller/rbac-snapshot-controller.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/latest/deployment/1.22.3/ebs/snapshot-controller/setup-snapshot-controller.yaml
    

If the installation is successful (a pod with the snapshot-controller* prefix in its name is launched), you will be able to create snapshots in CROC Cloud for volumes used as Persistent Volume Claims in Kubernetes.
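
Similarly, you can confirm that the controller pod is running (assuming it was deployed to the kube-system namespace):

kubectl get pods -n kube-system | grep snapshot-controller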

Docker Registry#

Docker Registry is a scalable server application that stores Docker images and allows you to distribute and use them. If you selected the Docker Registry service when creating a cluster, it will be installed on a master node.

To upload images from your local computer to the Docker Registry, install Docker.

After installation, run the command and enter your password:

docker login <docker-registry IP address>

Then push images after tagging them with a tag starting with <docker-registry IP address>:5000/. For example, for the existing image quay.io/coreos/example-app:v1.0, the tag will be:

docker tag quay.io/coreos/example-app:v1.0 185.12.31.211:5000/example-app:v1.0
docker push 185.12.31.211:5000/example-app:v1.0

You can later use the private IP address of the Docker Registry instead of the public one, and vice versa.

Use the regcred credentials configured in the cluster to create a pod from the uploaded image.

A configuration example
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example-app
    image: '172.31.0.4:5000/example-app:v1.0'
  imagePullSecrets:
  - name: regcred

Kubernetes Dashboard#

Kubernetes Dashboard is a web-based Kubernetes user interface. You can use the dashboard to deploy containerized applications in a Kubernetes cluster, troubleshoot containerized applications, and manage cluster resources. It gives you visibility into the applications running in your cluster and lets you create or modify individual Kubernetes resources; for example, you can scale a deployment, roll out the latest update, restart a pod, or deploy new applications.

By default, your Kubernetes cluster already has the Kubernetes Dashboard service running. To access it, do the following:

  1. Set up kubectl according to the instructions.

  2. Run kubectl proxy, then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in a browser.

  3. Get a token for authorization. The token to access the Kubernetes Dashboard can be found in CROC Cloud: go to the Kubernetes Clusters section and open the Information tab.

    If you have limited or no access to CROC Cloud, you can also get a token using the command:

    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
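
    On clusters running Kubernetes 1.24 or later, token secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case, request a short-lived token explicitly:

    kubectl -n kubernetes-dashboard create token admin-user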