Kubernetes Clusters

General information

Kubernetes is an open-source container orchestration platform originally designed by Google for deploying, scaling, and managing containerized applications. The platform has become a standard for container orchestration and the flagship project of the Cloud Native Computing Foundation, supported by Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.

For more information about Kubernetes, see our blog and the official website.

In CROC Cloud, Kubernetes creates an abstraction layer above a group of instances and lets you easily deploy and run applications with a microservice architecture.

Glossary

Cluster is the basic Kubernetes element. It consists of multiple virtual or physical machines, each performing a specific function: a worker node or a master node.

Master node is a control plane node of the cluster; it hosts the service applications that the cluster needs to operate.

Worker node is the compute node of the cluster where user tasks are performed.

Pod is a group of one or more containers that share the network, IP address, and other resources (storage, labels).

Kubernetes Clusters in CROC Cloud

You can quickly launch containerized applications integrated with CROC Cloud services by distributing traffic and scaling clusters in a secure and stable cloud infrastructure. You can manage security groups, link Kubernetes clusters to existing instances, use object storage, and configure VPN connections between your infrastructure and Kubernetes clusters.

You can manage the service via the web interface or API.

In the web interface of the Kubernetes Clusters service, you can create clusters, change the number of worker nodes, and delete clusters.

The following additional services can be installed in the Kubernetes Clusters service:

  • Ingress controller, which routes requests coming from outside to applications deployed in Kubernetes.

  • EBS-provider, which allows Kubernetes to manage volumes in CROC Cloud and use them as Persistent Volumes.

  • Docker Registry configured for use in Kubernetes. You can safely store your images in the registry before deploying them in Kubernetes.

See the official Kubernetes documentation for more information on Kubernetes.

When do you need the Kubernetes Clusters service?

  • If you need to quickly deploy scalable development and test environments.

  • If your infrastructure undergoes a large number of changes and releases.

  • If your workload fluctuates depending on the number of users.

  • If time to market is critical for you.

  • If you have applications with a microservice architecture.

Before you begin

To start working with the Kubernetes Clusters service, you need to:

  1. Create a project, if you don’t have one.

  2. Create a user with the Kubernetes Administrator or Cloud Administrator role in the IAM section and add them to the project.

  3. Make sure that your project has subnets, SSH keys, and security groups. Create them if you don’t have them.

  4. If you need the EBS-provider, add a dedicated user with the Kubernetes EBS Provider User role in the IAM section and add them to the project.

Important

The Kubernetes Administrator role grants the full set of rights to work with the Kubernetes Clusters service. However, it does not include many actions on other cloud resources, so other sections of the web interface may be unavailable.

Creating a Kubernetes cluster

The process of creating a cluster can be divided into two parts: infrastructure creation and cluster installation. Infrastructure creation currently consists of creating the required number of instances of a given configuration from a prepared template. When the instances start successfully, the cluster installation process begins. Upon successful completion of the installation, the cluster enters the Ready state.

Any other cluster state indicates that the cluster creation process has not yet been completed. A cluster in the Ready state is considered ready for operation. The process of creating a cluster involves creating instances, installing Kubernetes components on them, and (optionally) installing additional services: the Ingress controller, Docker Registry, and EBS-provider. If you create a cluster with additional services, the Ready state indicates that they have also been installed successfully.

CROC Cloud makes using Kubernetes easier and allows you to deploy the infrastructure for a cluster with a single Create button.

At the first step of creating a new cluster, set the required parameters:

  • Name.

  • Version of Kubernetes that will be installed on all nodes.

  • VPC where the cluster will be created.

  • The High Availability cluster option. If you select this option, a high-availability cluster with three master nodes will be deployed. These nodes may be placed in three availability zones or in a placement group within one availability zone. If any of these nodes fails, the cluster will continue to run on the remaining nodes.

  • Pod subnet address. You can specify an IP address block in CIDR notation (X.X.X.X/Y), which will be allocated to the pod subnet. If you do not specify this parameter, a default range of IP addresses will be allocated.

  • Service subnet address. You can specify an IP address block in CIDR notation (X.X.X.X/Y), which will be allocated to the service subnet. If you do not specify this parameter, a default range of IP addresses will be allocated.

At the second step, set the network parameters required for the cluster operation:

  • Subnets in which the cluster will be created.

  • SSH key for connecting to the cluster.

  • Security groups, which control traffic on the network interfaces of instances.

  • The Allocate Elastic IP for API Server option. If you select this option, an Elastic IP will be allocated to a master node. This enables external access to the cluster’s API server.

At the third step, select a configuration of the master node, which will host service applications required for the cluster operation. This configuration will be applied to all master nodes if you select the High Availability cluster option. Specify the instance type and the volume type, size and IOPS (if available for the type you choose).

Note

Master node components are performance-sensitive. We recommend using high-performance volumes: gp2: Universal (SSD) or io2: Ultimate (SSD).

At the fourth step, specify the configuration of worker nodes, which will run user tasks. Specify the required number of worker nodes, the instance type, and the volume parameters: type, size, and IOPS (if available for the chosen type). If you select the Use placement groups option, placement groups will be created, and the instances hosting cluster worker nodes will be started in them.

At the fifth step, you can select additional services to be installed in the cluster:

  • Ingress controller to route requests. You can select the Allocate Elastic IP for Ingress controller option.

  • Docker Registry, for which you should set the volume configuration (type, size, IOPS) to store your container images.

  • EBS-provider, for which you specify the user that will manage volumes.

At the last step, you can specify user data to describe operations that will be performed automatically when the cluster nodes are created. User data is useful when you need, for example, to install packages, create and modify files, or execute shell scripts. To add user data, specify the following information in the form:

  • User data type. Two user data types are currently supported: x-shellscript and cloud-config.

  • User data. If you have selected the x-shellscript type, enter your shell script in this field. If you have selected the cloud-config type, enter a configuration for cloud-config in YAML format in this field. For examples of operations that cloud-config allows and the corresponding configurations, see the official cloud-init documentation.

The specified user data will be applied to all cluster nodes.
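
For example, a minimal cloud-config that installs a package and writes a marker file could look like the sketch below (the package name, file path, and log location are arbitrary illustrations, not required values):

#cloud-config
# Illustrative example: install a package, create a file, and run a command on first boot
packages:
  - htop
write_files:
  - path: /etc/motd
    content: |
      This node was provisioned with user data.
runcmd:
  - echo "node initialized at $(date -u)" >> /var/log/user-data-marker.log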

After completing the previous steps, click Create.

Note

The process of creating a new Kubernetes cluster can take from 5 to 15 minutes.

The Cluster-manager application will be installed in the cluster to ensure proper monitoring and to change the number of worker nodes in the cluster. Deleting it may cause the Kubernetes Clusters service to work incorrectly with the cluster.

To ensure correct cluster operation, a new security group is automatically created when a cluster is created. The following rules will be added to the group:

  • the rule to permit inbound traffic from interfaces that are in the same security group;

  • the rule to enable all outbound IPv4 traffic.

If the cluster is deleted, the security group will also be deleted.

Managing the Kubernetes Cluster

Important

The service does not modify existing security groups in the cloud, so to enable access to the API server of the cluster, you have to open port 6443.

Wait until the cluster status changes to Ready, then download the cluster configuration file to your computer by clicking Get settings.

To work with Kubernetes Dashboard, install kubectl on the computer and run the following commands in the OS console:

  1. Set the environment variable KUBECONFIG:

    for MacOS and Linux:

    export KUBECONFIG=<configuration file path>
    

    for Windows:

    set KUBECONFIG=<configuration file path>
    
  2. Activate a proxy server using the command:

    kubectl proxy
    

    Important

The proxy server must keep running; do not close the console.

  3. Open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/# in the browser

  4. To log in to the Dashboard, select Token and enter the Kubernetes Dashboard access token copied from the cluster page.
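
Once KUBECONFIG is set, you can also verify access to the cluster from the console, for example:

# All master and worker nodes should be listed and report the Ready status
kubectl get nodes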

You can use any suitable Kubernetes management tools for clusters (Draft, Helm, Terraform, etc.).

Changing the cluster worker node quantity

You can change the number of worker nodes. When you increase the number of worker nodes, new instances are created, all necessary components are installed on them, and the reconfiguration process for adding the new nodes starts inside the cluster. When you decrease the number of worker nodes, the nodes are first put into maintenance mode and then deleted from the cluster via the Kubernetes API; after that, the freed instances are deleted from the cloud. In both cases, the cluster state is Unhealthy while the process is running and becomes Healthy after it completes successfully.

To change the number of worker nodes, go to the Kubernetes cluster page, click the edit icon next to the Nodes count parameter, enter a new value in the field, and save the changes.

If the attempt to change the number of worker nodes fails, the cluster will continue to operate. A record with the failure details will be displayed on the Warnings tab.

Deleting a Kubernetes cluster

Deleting a cluster deletes all instances created for it, including instances created for additional services. Volumes created by the EBS-provider are not deleted automatically; they remain available for deletion in the Volumes section of the management console.

To delete a Kubernetes cluster and related services (Container Registry, EBS-provider), click Delete.

Attention

When you delete a cluster, the volume with the Docker Registry images will also be deleted!

The “High-Availability cluster” mode

In this mode, a cluster starts in a configuration with three master nodes. Kubernetes cluster master nodes can be deployed in either three availability zones or a placement group within an availability zone. Distribution across multiple physical computing nodes allows the cluster to remain operational if one master node fails.

If an Elastic IP has been assigned to the failed node, it will be reassigned to a healthy master node.

Cluster node placement options

  • Placement in three availability zones was selected along with the High Availability cluster option, and the Use placement groups flag was set at the worker node configuration step. This combination of options guarantees that the nodes will be placed on different computing nodes in each availability zone.

  • Placement in three availability zones was selected along with the High Availability cluster option, but the Use placement groups flag was not set at the worker node configuration step. In this combination, master nodes and worker nodes will be distributed across three availability zones. Still, it is not guaranteed that worker nodes will be distributed across different computing nodes within an availability zone.

  • Placement in one availability zone was selected along with the High Availability cluster option, and the Use placement groups flag was set at the worker node configuration step. This combination guarantees that nodes will be placed on different computing nodes in an availability zone, though they will be distributed independently of each other. Therefore, a master node may be placed on the same computing node as a worker node.

  • Placement in one Availability Zone was selected along with the High Availability cluster option, but the Use placement groups flag was not set at the worker node configuration step. In this combination, master nodes will be placed on different computing nodes in an availability zone. Still, it is not guaranteed that worker nodes will be distributed across different computing nodes within the availability zone.

Ingress controller

If you choose the Ingress Controller option when creating a cluster, an extra worker node will be deployed in the cluster. It allows you to configure access to applications running inside the cluster through a single entry point. Create an Ingress resource to make a service available via the Ingress Controller.

The example below shows a cluster configuration with the Ingress Controller Elastic IP 185.12.31.211. The goal is to grant access to the application deployed in the cluster via http://185.12.31.211/example.

The application is running in the cluster with the following configuration:

apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    k8s-app: example
spec:
  containers:
  - name: example-app
    image: quay.io/coreos/example-app:v1.0
  imagePullSecrets:
  - name: regcred

---
kind: Service
apiVersion: v1
metadata:
  name: example-service
  namespace: default
spec:
  selector:
    k8s-app: example
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer

To make your application available via http://185.12.31.211/example, you have to open port 80 and create the following Ingress configuration:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          serviceName: example-service
          servicePort: 80
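
Assuming the manifests above are saved to files (the file names below are arbitrary examples), they can be applied and checked as follows:

# Deploy the application, the service, and the Ingress resource
kubectl apply -f example-pod.yaml -f example-service.yaml -f example-ingress.yaml

# The application should now respond at the Ingress controller's Elastic IP
curl http://185.12.31.211/example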

EBS-provider

If you select the EBS-provider option when creating a cluster, a dedicated service will be deployed in the cluster. It allows Kubernetes to manage volumes in CROC Cloud and use them as Persistent Volumes. The service can work with existing volumes or create new ones itself.

The created volumes will be available in the Volumes section of the management console.

To use volumes in Kubernetes as Persistent Volumes, describe the following configuration objects:

  1. Storage class is a description of the class of storage to use. More information on Storage classes can be found in the official documentation.

  2. Persistent Volume is a description of the directly attached volume.

  3. Persistent Volume Claim is a request for a Persistent Volume that describes the required volume parameters. If a Persistent Volume with the same or better parameters is found, Kubernetes will use it.

Scenario with using an existing CROC Cloud volume

To use an existing volume in Kubernetes as a Persistent Volume, specify ebs.csi.aws.com in the driver field and the volume ID in the volumeHandle field:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 48Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-static
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-9991C120
    fsType: xfs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone
          operator: In
          values:
          - ru-msk-vol51

In the nodeAffinity section (the last line in the example above), you must specify the availability zone in which the volume was created. The volume and the instance must also be located in the same availability zone; otherwise, the volume cannot be attached to the instance.

To use this volume, you just need to create a Persistent Volume Claim whose parameters match those of the volume and reference it in the required resource. The storageClassName of this claim must match the one specified in the Persistent Volume.

Configuration example for creating a pod with a volume of at least 20 GiB:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-static
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: static-claim
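
After applying the manifests above, you can verify that the claim was bound to the pre-created Persistent Volume (an optional check):

# The claim should show the Bound status and reference the static-pv volume
kubectl get pvc static-claim
kubectl get pv static-pv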

Scenario with creating new volumes

To create new volumes, describe a Storage class with ebs.csi.aws.com specified in the provisioner field. In the parameters field, you can specify parameters for the created volumes:

Parameter                 | Valid values          | Default value | Description
csi.storage.k8s.io/fsType | xfs, ext2, ext3, ext4 | ext4          | File system in which the new volume will be formatted.
type                      | io2, gp2, st2         | gp2           | Volume type.
iopsPerGB                 | —                     | —             | IOPS per gibibyte. Required for io2 volumes.

If a parameter is not specified, the default value is used.

Configuration example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-dynamic
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io2
  iopsPerGB: "50"
  encrypted: "true"

When new volumes are created, a Persistent Volume is provisioned according to the request in the Persistent Volume Claim.

Persistent Volumes in CROC Cloud support only the ReadWriteOnce access mode for EBS volumes (see the official Kubernetes documentation).

Request example for creating a pod with a 4 GiB volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 4Gi

When a pod that uses this claim is created, Kubernetes will automatically create a 4 GiB volume in the cloud with the parameters specified in the Storage class and attach it to the pod.

Pod configuration example:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-dynamic-claim
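
To confirm that the dynamically provisioned volume is mounted and writable, you can read the file that the container keeps appending to (an optional check):

# Show the timestamps written by the container to the mounted volume
kubectl exec app -- tail /data/out.txt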

Scenario with volume snapshots

To take volume snapshots, you must first create a pod with a volume and a Storage Class, as well as a Volume Snapshot Class.

An example of a Volume Snapshot Class configuration:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: aws-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system

If you use this request to create a Volume Snapshot Class, a VolumeSnapshotClass object will be created automatically. Note that the same credentials are used for cloud authorization as in the EBS-provider. You will also need a Volume Snapshot.

An example of Volume Snapshot configuration:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot-2
  namespace: kube-system
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-dynamic-claim

If you use this request to create a Volume Snapshot, a VolumeSnapshot object will be created and a volume snapshot will automatically be taken in the cloud according to the current state of the Persistent Volume Claim in the Kubernetes cluster. Now you can use this Volume Snapshot as a data source (dataSource) for a Persistent Volume Claim.
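
Before referencing the snapshot, you can check that it has been created and is ready to use (an optional check):

# READYTOUSE should become true once the snapshot is complete in the cloud
kubectl get volumesnapshot ebs-volume-snapshot-2 -n kube-system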

An example configuration for such a Persistent Volume Claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-restore-claim
spec:
  dataSource:
    name: ebs-volume-snapshot-2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 32Gi

An example of using such a Persistent Volume Claim in a pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: app-restored
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage-restored
      mountPath: /data
  volumes:
  - name: persistent-storage-restored
    persistentVolumeClaim:
      claimName: ebs-restore-claim

Installation of EBS-provider in your Kubernetes Cluster

You can install the EBS-provider separately from the cloud service.

To do this, you need to create a Secret with credentials of the user on whose behalf operations with the cloud will be performed:

apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kube-system
stringData:
  key_id: "<AWS_ACCESS_KEY_ID>"
  access_key: "<AWS_SECRET_ACCESS_KEY>"
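
Save the manifest to a file and apply it before installing the provider (the file name is an arbitrary example):

# Create the Secret that the EBS-provider will use for cloud authorization
kubectl apply -f aws-secret.yaml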

For correct operation, the user whose credentials are entered in the key_id and access_key fields must have the following grants in the infrastructure service:

  • attach_volume

  • detach_volume

  • describe_instances

  • describe_volumes

Optionally, to enable the creation and deletion of volumes:

  • create_volume

  • delete_volume

Optionally, to enable the creation and deletion of volume snapshots:

  • create_snapshot

  • delete_snapshot

  • describe_snapshots

To check or update the available grants for the Kubernetes EBS Provider user, go to the IAM section.

If the infrastructure service lacks some actions, for example, create_snapshot, delete_snapshot and describe_snapshots for using volume snapshots, add them on the setup page for user grants.

You can also delete the user from the project and re-add them as Kubernetes EBS Provider user to expand available grants. However, the existing EBS providers in deployed clusters will stop working.

After that, apply this configuration:

kubectl apply -f https://storage.cloud.croc.ru/kaas/v10/deployment/ebs/ebs.yaml

If the installation is successful (pods with the ebs-csi-* prefix in their names are running), CROC Cloud volumes will become available for use in Kubernetes.
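
You can check that the provider pods have started, for example:

# All ebs-csi-* pods should be in the Running state
kubectl get pods --all-namespaces | grep ebs-csi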

To use volume snapshots, you should perform the following actions:

  1. Run the following commands on any master node in the Kubernetes cluster (only for clusters that were deployed prior to 10/05/2021 and used the EBS-provider):

    kubectl delete -f https://storage.cloud.croc.ru/kaas/v10/deployment/ebs/ebs.yaml
    kubectl create -f https://storage.cloud.croc.ru/kaas/v10/deployment/ebs/ebs.yaml
    
  2. Download the external-snapshotter files and apply the configuration:

    wget https://github.com/kubernetes-csi/external-snapshotter/archive/refs/tags/v3.0.3.tar.gz
    tar -xvf v3.0.3.tar.gz
    cd external-snapshotter-3.0.3
    kubectl create -f client/config/crd
    kubectl create -f deploy/kubernetes/snapshot-controller
    

If the installation is successful (a pod with the snapshot-controller* prefix in its name is launched), you will be able to create volume snapshots in CROC Cloud for volumes used as Persistent Volume Claims in Kubernetes.

Docker Registry

Docker Registry is a scalable server application that stores Docker images and allows you to distribute and use them. If you have selected the Docker Registry service when creating a cluster, it will be installed on a master node.

To upload images from your local computer to the Docker Registry, install Docker.

After installing Docker, run the following command and enter your password:

docker login <docker-registry IP address>

Then upload images by tagging them with a tag that starts with <docker-registry IP address>:5000/. For example, for the existing image quay.io/coreos/example-app:v1.0 the tag would be:

docker tag quay.io/coreos/example-app:v1.0 185.12.31.211:5000/example-app:v1.0
docker push 185.12.31.211:5000/example-app:v1.0

Later, you can use the private IP address of the Docker Registry instead of the public one, and vice versa.

Use the regcred credentials configured in the cluster to create a pod from an uploaded image:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example-app
    image: '172.31.0.4:5000/example-app:v1.0'
  imagePullSecrets:
  - name: regcred

Cluster-manager

Cluster-manager features

Cluster-manager is an application specific to CROC Cloud Kubernetes installations. It is responsible for changing the number of worker nodes in the cluster. The application uses the Instance Metadata API and the Kubernetes API to provide the necessary integration between these components. Cluster-manager is responsible for:

  • cluster state monitoring (Healthy / Unhealthy);

  • secure deletion of worker nodes.

Cluster state monitoring

Cluster-manager continuously requests information about the number of worker nodes registered in the Kubernetes cluster. If this number does not match the actual number of instances in the cluster, the cluster state becomes Unhealthy. You may observe this behaviour when adding or deleting worker nodes, because instances start first and are registered as cluster worker nodes only some time later. When the number of worker nodes registered in the cluster matches the number of running instances, the cluster state becomes Healthy.

Secure node deletion

When cluster-manager receives a notification that a worker node should be deleted, it removes the node from the cluster using the Kubernetes API. After the node has been successfully removed from the cluster, the instance termination process starts.

Maintenance mode

To maintain a master node (for example, to update its OS kernel), you need to switch it to the SchedulingDisabled state. Cluster-manager synchronizes the SchedulingDisabled state between the cluster and the cloud.
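
One way to switch a node to this state from inside the cluster is with the standard kubectl commands (the node name is a placeholder):

# Mark the master node as unschedulable; it will be shown as SchedulingDisabled
kubectl cordon <master-node-name>

# Return the node to normal scheduling after maintenance
kubectl uncordon <master-node-name>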

Recovering unavailable master nodes

Cluster-manager periodically checks the availability of master nodes and, if unavailable, starts the recovery process. Recovery means deleting an unavailable instance and creating a new one while preserving the network interface. Master nodes in the SchedulingDisabled state are not recovered.

The consequences of cluster-manager deletion

We do not recommend changing the settings of this application or the configuration of its deployment in the cluster. You will not be able to reduce the number of cluster worker nodes via the CROC Cloud web interface:

  • if cluster-manager is deleted from the cluster;

  • if its running process is moved from a master node to one of the worker nodes.

Cluster state monitoring will also stop.

Installation in cluster

The Kubernetes Clusters service of CROC Cloud automatically installs cluster-manager in each cluster when it is created. The cluster-manager process is launched on a master node of the cluster. The following settings are used during the installation:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cluster-manager
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - get
      - delete
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cluster-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-manager
subjects:
- kind: ServiceAccount
  name: cluster-manager
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-manager
  namespace: kube-system

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: cluster-manager
  name: cluster-manager
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cluster-manager
  template:
    metadata:
      labels:
        k8s-app: cluster-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
        - name: cluster-manager
          image: dhub.c2.croc.ru/kaas/kubernetes-manager:v0.1.0
          imagePullPolicy: Always
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
      serviceAccountName: cluster-manager
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule