[ { "data": "During the upgrade process, provisioning of volumes and attach/detach operations might not work. Existing volumes and volumes already in use by a pod will continue to work without interruption. To upgrade, apply the resource of the latest release. Use the same method that was used to create the initial deployment (`kubectl` vs `helm`). There is no need to change existing `LinstorCluster`, `LinstorSatelliteConfiguration` or `LinstorNodeConnection` resources. To upgrade to the latest release using `kubectl`, run the following commands: ``` $ kubectl apply --server-side -k \"https://github.com/piraeusdatastore/piraeus-operator//config/default?ref=v2.5.1\" $ kubectl wait pod --for=condition=Ready -n piraeus-datastore --all ``` Generally, no special steps required. LINSTOR Satellites are now managed via DaemonSet resources. Any patch targeting a `satellite` Pod resources is automatically converted to the equivalent DaemonSet resource patch. In the Pod list, you will see these Pods using a new `linstor-satellite` prefix. Removed the `NetworkPolicy` resource from default deployment. It can be reapplied as a . Removed the dependency on cert-manager for the initial deployment. To clean up an existing `Certificate` resource, run the following commands: ``` $ kubectl delete certificate -n piraeus-datastore piraeus-operator-serving-cert ``` No special steps required. Please follow the specialized . If you want to share DRBD configuration directories with the host, please update the CRDs before upgrading using helm: ``` $ kubectl replace -f ./charts/piraeus/crds ``` If you want to protect metrics endpoints, take a look on . We've also disabled the `haController` component in our chart. The replacement is available from , containing much needed improvements in robustness and fail-over speed. If you want to still use the old version, set `haController.enabled=true`. If you need to set the number of worker threads, or you need to set the log level of LINSTOR components, please update the CRDs before upgrading using helm: ``` $ kubectl replace -f ./charts/piraeus/crds ``` In case if you have SSL enabled installation, you need to regenerate your certificates in PEM format. Read the and repeat the described steps. Node labels are now automatically applied to LINSTOR satellites as \"Auxiliary Properties\". That means you can reuse your existing Kubernetes Topology information (for example `topology.kubernetes.io/zone` labels) when scheduling volumes using the `replicasOnSame` and `replicasOnDifferent` settings. However, this also means that the Operator will delete any existing auxiliary properties that were already applied. To apply auxiliary properties to satellites, you have to apply a label to the Kubernetes node object. The `csi-snapshotter` subchart was removed from this repository. Users who relied on it for their snapshot support should switch to the seperate charts provided by the Piraeus team on The new charts also include notes on how to upgrade to newer CRD versions. To support the new" }, { "data": "option an update to the LinstorCSIDriver CRD is required: ``` $ kubectl replace -f ./charts/piraeus/crds ``` To make use of the new `monitoringImage` value in the LinstorSatelliteSet CRD, you need to replace the existing CRDs before running the upgrade. If the CRDs are not upgraded, the operator will not deploy the monitoring container alongside the satellites. 
``` $ kubectl replace -f ./charts/piraeus/crds ``` Then run the upgrade: ``` helm upgrade piraeus-op ./charts/piraeus -f <overrides> ``` No special steps required, unless using the newly introduced `additionalEnv` and `additionalProperties` overrides. If you plan to use the new overrides, replace the CustomResourceDefinitions before running the upgrade ``` $ kubectl replace -f ./charts/piraeus/crds ``` Then run the upgrade: ``` helm upgrade piraeus-op ./charts/piraeus -f <overrides> ``` No special steps required. After checking out the new repo, run a simple helm upgrade: ``` helm upgrade piraeus-op ./charts/piraeus -f <overrides> ``` Piraeus v1.2 is supported on Kubernetes 1.17+. If you are using an older Kubernetes distribution, you may need to change the default settings (for example ) To start the upgrade process, ensure you have a backup of the LINSTOR Controller database. If you are using the etcd deployment included in Piraeus, you can create a backup using: ``` kubectl exec piraeus-op-etcd-0 -- etcdctl snapshot save /tmp/save.db kubectl cp piraeus-op-etcd-0:/tmp/save.db save.db ``` Now you can start the upgrade process. Simply run `helm upgrade piraeus-op ./charts/piraeus`. If you installed Piraeus with customization, pass the same options you used for `helm install` to `helm upgrade`. This will cause the operator pod to be re-created and shortly after all other Piraeus pods. There is a known issue when updating the CSI components: the pods will not be updated to the newest image and the `errors` section of the LinstorCSIDrivers resource shows an error updating the DaemonSet. In this case, manually delete `deployment/piraeus-op-csi-controller` and `daemonset/piraeus-op-csi-node`. They will be re-created by the operator. After a short wait, all pods should be running and ready. Check that no errors are listed in the status section of LinstorControllers, LinstorSatelliteSets and LinstorCSIDrivers. The LINSTOR controller image given in `operator.controller.controllerImage` has to have its entrypoint set to or newer. All images starting with `piraeus-server:v1.8.0` meet this requirement. Older images will not work, as the `Service` will not automatically pick up on the active pod. To upgrade, first update the deployed LINSTOR image to a compatible version, then upgrade the operator. Upgrades from v0.* versions to v1.0 are best-effort only. The following guide assumes an upgrade from v0.5.0. CRDs have been updated and set to version `v1`. You need to replace any existing CRDs with these new ones: ``` kubectl replace -f charts/piraeus/crds/piraeus.linbit.comlinstorcontrollerscrd.yaml kubectl replace -f charts/piraeus/crds/piraeus.linbit.comlinstorcsidriverscrd.yaml kubectl replace -f" }, { "data": "``` Renamed `LinstorNodeSet` to `LinstorSatelliteSet`. This brings the operator in line with other LINSTOR resources. Existing `LinstorNodeSet` resources will automatically be migrated to `LinstorSatelliteSet`. The old resources will not be deleted. You can verify that migration was successful by running the following command: ``` $ kubectl get linstornodesets.piraeus.linbit.com -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.ResourceMigrated}{\"\\t\"}{.status.DependantsMigrated}{\"\\n\"}{end}' piraeus-op-ns true true ``` If both values are `true`, migration was successful. The old resource can be removed after migration. Renamed `LinstorControllerSet` to `LinstorController`. The old name implied the existence of multiple (separate) controllers. 
Existing `LinstorControllerSet` resources will automatically be migrated to `LinstorController`. The old resources will not be deleted. You can verify that migration was successful by running the following command: ``` $ kubectl get linstorcontrollersets.piraeus.linbit.com -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.ResourceMigrated}{\"\\t\"}{.status.DependantsMigrated}{\"\\n\"}{end}' piraeus-op-cs true true ``` If both values are `true`, migration was successful. The old resource can be removed after migration. Along with the CRDs, Helm settings changed too: `operator.controllerSet` to `operator.controller` `operator.nodeSet` to `operator.satelliteSet` If old settings are used, Helm will return an error. Node scheduling no longer relies on `linstor.linbit.com/piraeus-node` labels. Instead, all CRDs support setting pod [affinity] and [tolerations]. In detail: `linstorcsidrivers` gained 4 new resource keys, with no change in default behaviour: `nodeAffinity` affinity passed to the csi nodes `nodeTolerations` tolerations passed to the csi nodes `controllerAffinity` affinity passed to the csi controller `controllerTolerations` tolerations passed to the csi controller `linstorcontrollerset` gained 2 new resource keys, with no change in default behaviour: `affinity` affinity passed to the linstor controller pod `tolerations` tolerations passed to the linstor controller pod `linstornodeset` gained 2 new resource keys, with change in default behaviour*: `affinity` affinity passed to the linstor controller pod `tolerations` tolerations passed to the linstor controller pod Previously, linstor satellite pods required nodes marked with `linstor.linbit.com/piraeus-node=true`. The new default value does not place this restriction on nodes. To restore the old behaviour, set the following affinity: ```yaml affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: linstor.linbit.com/piraeus-node operator: In values: \"true\" ``` Other changed helm settings: `drbdKernelModuleInjectionMode`: renamed to `kernelModuleInjectionMode` `kernelModImage`: renamed to `kernelModuleInjectionImage` If old settings are used, Helm will return an error. While the API version was `v1alpha1`, this project did not maintain stability of or provide conversion of the Custom Resource Definitions. If you are using the Helm deployment, you may find that upgrades fail with errors similar to the following: ``` UPGRADE FAILED: cannot patch \"piraeus-op-cs\" with kind LinstorController: LinstorController.piraeus.linbit.com \"piraeus-op-cs\" is invalid: spec.etcdURL: Required value ``` The simplest solution in this case is to manually replace the CRD: ``` kubectl replace -f charts/piraeus/crds/piraeus.linbit.comlinstorcsidriverscrd.yaml kubectl replace -f charts/piraeus/crds/piraeus.linbit.comlinstorsatellitesetscrd.yaml kubectl replace -f charts/piraeus/crds/piraeus.linbit.comlinstorcontrollerscrd.yaml ``` Then continue with the Helm upgrade. Values that are lost during the replacement will be set again by Helm." } ]
{ "category": "Runtime", "file_name": "UPGRADE.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 1 sidebar_label: \"LVM Storage Node\" LVM storage nodes pool the disks on the node and provide LVM-type data volumes for applications. Add the node into the Kubernetes cluster or select a Kubernetes node. The node should have all the required items described in . For example, suppose you have a new node with the following information: name: k8s-worker-4 devPath: /dev/sdb diskType: SSD disk After the new node is already added into the Kubernetes cluster, make sure the following HwameiStor pods are already running on this node. ```bash $ kubectl get node NAME STATUS ROLES AGE VERSION k8s-master-1 Ready master 96d v1.24.3-2+63243a96d1c393 k8s-worker-1 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-2 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-3 Ready worker 96d v1.24.3-2+63243a96d1c393 k8s-worker-4 Ready worker 1h v1.24.3-2+63243a96d1c393 $ kubectl -n hwameistor get pod -o wide | grep k8s-worker-4 hwameistor-local-disk-manager-c86g5 2/2 Running 0 19h 10.6.182.105 k8s-worker-4 <none> <none> hwameistor-local-storage-s4zbw 2/2 Running 0 19h 192.168.140.82 k8s-worker-4 <none> <none> $ kubectl get localstoragenode k8s-worker-4 NAME IP ZONE REGION STATUS AGE k8s-worker-4 10.6.182.103 default default Ready 8d ``` Construct the storage pool of the node by adding a LocalStorageClaim CR as below: ```console $ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-4 spec: nodeName: k8s-worker-4 owner: local-storage description: diskType: SSD EOF ``` Finally, check if the node has constructed the storage pool by checking the LocalStorageNode CR. ```bash kubectl get localstoragenode k8s-worker-4 -o yaml ``` The output may look like: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: k8s-worker-4 spec: hostname: k8s-worker-4 storageIP: 10.6.182.103 topogoly: region: default zone: default status: pools: LocalStorage_PoolSSD: class: SSD disks: capacityBytes: 214744170496 devPath: /dev/sdb state: InUse type: SSD freeCapacityBytes: 214744170496 freeVolumeCount: 1000 name: LocalStorage_PoolSSD totalCapacityBytes: 214744170496 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 214744170496 volumes: state: Ready ```" } ]
{ "category": "Runtime", "file_name": "lvm_nodes.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "Snapshot is the read-only copy of the entire data of a cloud disk at a certain moment, and is an convenient and efficient way for data disaster tolerance, commonly used in data backup, creating images and application disaster tolerance. Snapshot system provides users an snapshot service interface, by which users can create, delete or cancel a snapshot, or recover data from snapshot, and even create their own image. The snapshot service works as an independent component from core services of Curve, and it supports multi-level snapshot, which means a full backup for the first snapshot and incremental snapshot for the following ones. In the first snapshot, data will be stored entirely on S3, and only the modified data will be stored in the following tasks for saving spaces. The storing of snapshot data to S3 is asynchronous. Leader election and a high availability is implemented by Etcd, and unfinished tasks will be resume automatically when the service restart. Figure 1 shows the architecture of Curve snapshot service. <p align=\"center\"> <img src=\"../images/snap.png\"><br> <font size=3> Figure 1: Architecture of Curve snapshot</font> </p> After receiving requests from users, the system will create temporary snapshot by calling the interfaces of curvefs, then it will dump the temporary data to object storage system and persist metadata of the snapshot to database. The functions of snapshot system can be concluded into two parts: Provides snapshot interface for users to create, delete and query snapshot information. Managing the snapshot data, and control the snapshot data flow in the system using the interface from curvefs and object storage system. curvefs provides RPC interface used by snapshot system for creating, deleting and reading snapshot. When the snapshot is created, curvefs is the provider of the snapshot data for snapshot system to store in object storage system, and in the recovering process from snapshot, curvefs is the receiver of the snapshot fetched from object storage system. Object storage is the underlying system for storing snapshot data, and provides only interfaces for creating, deleting, uploading and downloading objects. Object storage provides storage capabilities for snapshot data, and only provides interfaces for creating, deleting, uploading and downloading object files. It is called by the snapshot module to store the snapshot data read from curvefs or download the snapshot data from the object storage system and write it into curvefs to restore the volume file. Curve uses NOS as the object storage system. Here's what would happen when the user sends a snapshot request to the snapshot system. The request will goes to the HTTP service layer, in which a specific snapshot task will be generated and handed over to the snapshot task manager for scheduling. The process of taking a snapshot is that a temporary snapshot in curvefs will first be generated, then dumped to the object storage system, and finally deleted. The following steps are for creating a snapshot: Generate snapshot records and persist them to Etcd. Several judgments are required for this step: For the same volume, there can only be one snapshot being taken at the same time, so it is necessary to judge whether the volume has a snapshot request being processed. Theoretically, unlimited snapshots are supported by the system, and in the implementation we put a limit to the depth of snapshot, which can be modified in the configuration file. 
For this reason, we also need to determine whether the number of snapshot exceeds the limit. If every thing works well, the metadata of the original volume will be read from MDS and a snapshot record will be generated and persisted to" }, { "data": "The status of the snapshot generated in this step is 'pending'. Create temporary snapshot on curvefs, return seqNum of the snapshot created, then update the seqNum of the snapshot record. Create snapshot mapping table (see 1.4.1), and save it to object storage S3. Dump snapshot data from curvefs to object storage S3. When storing, the snapshot system first reads the snapshot data from curvefs, then uploads the data to the S3. Delete the temporary snapshot from curvefs. Update snapshot status to 'done'. Each snapshot has a corresponding record in the snapshot system which persisted in Etcd and use a uuid as unique identifier. Snapshot data is stored in S3 in chunks, and each chunk corresponds to an object on S3. The object that stores data is called a data object, and each snapshot also has an meta object that records the metadata. The meta object records the file information of the snapshot and the mapping table between the snapshot chunk and the object. The meta object consists of 2 parts: Snapshot file information, including original data volume name, volume size and chunk size, etc. Snapshot chunk mapping table, recording the list of snapshot data objects. Figure 2 is a sample of the sanpshot data. <p align=\"center\"> <img src=\"../images/snap-s3-format.png\"><br> <font size=3> Figure 3: Snapshot data format</font> </p> Please refer to the of snapshot and clone. In section 1 we introduced the snapshot system, which is for creating snapshots of files of curvefs and incrementally dump the data to S3. But for a complete snapshot system, supporting only the snapshot creation is not enough, file recovery and cloning is the reason why take snapshot. In this section we introduce Curve cloning system(recovering can be considered as cloning to some extent). According to the data source, there are two kinds of cloning, including cloning from snapshot and cloning from image, and if dividing by whether the data is entirely cloned before the service is provided, it can be divided into Lazy Clone and Not-Lazy Clone. Figure 3 shows the architecture of Curve cloning. <p align=\"center\"> <img src=\"../images/clone.png\"><br> <font size=3> Figure 3: Architecture of Curve cloning</font> </p> In figure 3, we can see components and their responsibility below. MDS: Responsible for managing file information and file status, and provides interface for managing and querying files. SnapshotCloneServer: Take charge of the management of snapshot and clone task information, and processes their logic. S3 Object Storage: For storing snapshot data. Chunk Server: The actual place that stores the file data, and supports \"Lazy Clone\" by using copy-on-read mechanism. When the Client initiates a read request to data on a cloned volume, if the area of target data has not been written, the system will copy data from the image (stored in curvefs) or snapshot (stored in S3), return the data to the Client and write it to the chunk file asynchronously. For cloning of curvefs, it's basically copying the data from the source location (the source location is the location of the object being copied. 
If cloned from snapshot, the source location is the location of the object on S3, and if cloned from a file on curvefs, the source location will be the object location on chunk server) to the target location (which is the location of the chunk generated). In actual scenarios, waiting for all data to be copied synchronously is not necessary in many cases, and we can copy only when the data is actually used. Therefore, a 'lazy' flag indicating whether to clone in a lazy way can be" }, { "data": "If this flag is marked true, it means coping when it is actually used, and if marked false, waiting for all data to be copied to chunk server is required. Since the data at the source location is only needed when reading the data, a copy-on-read strategy is adopted. Here are steps for creating a clone of a volume The user specifies the file or snapshot to be cloned, and sends a request to create a clone volume to SnapshotCloneServer. After receiving the request, the SnapshotCloneServer obtains data from locally saved snapshot information of the snapshot clone system (currently persisted to Etcd) or from MDS according to whether the cloned source is a snapshot or a file. The SnapshotCloneServer will then send a 'CreateCloneFile' request to MDS. For the newly created file, the initial version should be set to 1. After receiving the request, MDS will create a new file and set the status of the file to 'Cloning' (the meaning of different file status will be explained in the following chapters). It should be noted that the file created at the beginning will be placed in a temporary directory. For example, if the file that the user request to clone is named 'des', then this step will create the file with the name '/clone/des'. After the creation of the file, SnapshotCloneServer will queries the location info of each chunk: If the clone source is an image, only the file name is needed, and as for snapshot, the server will first obtain the meta object on S3 for analysis to obtain the information of the chunk, then allocates copyset for chunks on each segment through MDS. After that, the server will calls the 'CreateCloneChunk' interface of ChunkServer to create a new chunk file with the version 1 for each acquired chunk. Source location information is recorded in the chunk, if the clone source is an image, format '/filename/offset@cs' will be used as the location ('filename' means the name of the data source file, and @cs indicates that the file data is on the chunk server), and if the cloned source is a snapshot, the server use 'url@s3' as the location ('url' means the source data url on S3, @s3 indicates that the source data is on S3) After creating all chunks on the chunk server and recording the source data information, the SnapshotCloneServer will modifies the status of the file to 'CloneMetaInstalled' through 'CompleteCloneMeta' interface. Up till this step, the process of the lazy and non-lazy method are the same, and their difference will be reflected in the following steps. If the user specifies a lazy clone, the SnapshotCloneServer will first rename the previously created file from '/clone/des' to \"des\", in this way the user can access the newly created file and mount it for reading and writing. Lazy clone will not continue the following data copy unless the 'flatten' interface of the snapshot system is called explicitly. Then, the SnapshotCloneServer will continue to call 'RecoverChunk' circularly to asynchronously trigger the copy of the chunk data on the chunk server. 
After all chunks are successfully copied, interface 'CompleteCloneFile' will be called with the specified file name 'des' (the file has been renamed earlier) to change the status of the file to 'Cloned'. If lazy clone is not specified, that means the service will be provided only after the synchronization of all the" }, { "data": "The SnapshotCloneServer first circularly calls 'RecoverChunk' to trigger the copy of the chunk data on the chunk server, and when all the chunk data is copied, it will calls 'CompleteCloneFile' with specified file name '/clone/des' to change the status of the file to 'Cloned'. Then, file '/clone/des' will be renamed to 'des' for using. Essentially, the recovery operation is based on cloning. To explain cloning in a simplified way, it is to create a temporary file and treat it as a clone, then delete the original file and rename the new file to the original name. The attributes of the new file are the same as the one (including file name, file size, file version, etc.). During the recovery process, the main differences between recovery and cloning are: Before the recovery, the information of the original file will be obtained, and then the file name, version number and file size of the original file will be used as parameters to create a new file. The new file will be placed under the '/clone' directory just like the clone. The version number specified when calling 'CreateCloneChunk' is the actual version number of the chunk in the snapshot. After the recovery, the original file will be overwritten when the new file is renamed to the original file name. The writing process of a cloned volume is just like writing a normal volume. The difference is that the writing of the cloned volume will be recorded in a bitmap. When data is written, the bit in the corresponding position will be set to 1 in order to distinguish the unwritten area that need to be read from the clone source. In few steps: The client initiates the request. After receiving the request, the chunk server will determine the flag that represents the position of the source object recorded on the written chunk, and then determine whether the corresponding bit in the bitmap of the read area is 1, and will trigger the copy if it is. The chunk server then copies data from the source object through the recorded position in slice (with 1MB size in default and configurable). After the successful copy, return the data to the user. In the procedure above, we've mentioned that the cloned file has many status. What do these different status mean and what are their functions? <p align=\"center\"> <img src=\"../images/clone-file-status.png\"><br> <font size=3> Figure 4: Status transformation of a cloned file</font> </p> Figure 4 shows the status transformation of a cloned file. Cloning Cloning is the initial status of a cloned file. At this stage, it means that the SnapshotCloneServer is creating a chunk and loading its data source location information to itself. During this process, the file is unavailable, and the user can only see the task information. At this stage for the cloning, there will be a file under the '/clone' directory, but for recovery, the original file also exists under the directory. CloneMetaInstalled The file is in this status means that the metadata of chunks has been loaded successfully. If the user specifies the lazy method, the file in this status can be provided for external mounting. 
During this status, the SnapshotCloneServer will trigger data coping from the data source for each chunk, and snapshot is not allowed. In this status, if the clone is lazy, the file will be moved to the actual directory, and the origin file will be removed first if it is file recovering. For non-lazy, the location of the file will be the same as the 'Cloning' status. Cloned This status means that all chunk data has been copied," } ]
{ "category": "Runtime", "file_name": "snapshotcloneserver_en.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "English | This page showcases the utilization of `Spiderpool`, a comprehensive Underlay network solution, in a cluster where serves as the default CNI. `Spiderpool` leverages Multus to attach an additional NIC created with `Macvlan` to Pods and coordinates routes among multiple NICs using `coordinator`. The advantages offered by Spiderpool's solution are: External clients outside the cluster can directly access Pods through their Underlay IP without the need for exposing Pods using the NodePort approach. Pods can individually connect to an Underlay network interface, allowing them to access dedicated networks such as storage with guaranteed independent bandwidth. When Pods have multiple network interfaces attached, such as Cilium and Macvlan, route tuning can be applied to address issues related to Underlay IP access to ClusterIP based on the Cilium setup. When Pods have multiple network interfaces attached, such as Cilium and Macvlan, subnet route tuning is performed to ensure consistent round-trip paths for Pod data packet access, avoiding routing issues that may lead to packet loss. It is possible to flexibly specify the network interface for the default route of Pods based on the Pod's annotation: ipam.spidernet.io/default-route-nic. This document will use the abbreviation NAD to refer to the Multus CRD NetworkAttachmentDefinition, with NAD being its acronym. A ready Kubernetes cluster. Cilium has been already installed as the default CNI for your cluster. If it is not installed, please refer to or follow the commands below for installation: ```shell ~# helm repo add cilium https://helm.cilium.io/ ~# helm install cilium cilium/cilium -namespace kube-system ~# kubectl wait --for=condition=ready -l k8s-app=cilium pod -n kube-system ``` Helm binary Follow the command below to install Spiderpool: ```shell ~# helm repo add spiderpool https://spidernet-io.github.io/spiderpool ~# helm repo update spiderpool ~# helm install spiderpool spiderpool/spiderpool --namespace kube-system --set coordinator.mode=overlay --wait ``` If Macvlan CNI is not installed in your cluster, you can install it on each node by using the Helm parameter `--set plugins.installCNI=true`. Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. 
Check the status of Spiderpool after the installation is complete: ```shell ~# kubectl get po -n kube-system | grep spiderpool spiderpool-agent-bcwqk 1/1 Running 0 1m spiderpool-agent-udgi4 1/1 Running 0 1m spiderpool-controller-bgnh3rkcb-k7sc9 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m ``` Please check if `Spidercoordinator.status.phase` is `Synced`, and if the overlayPodCIDR is consistent with the pod subnet configured by Cilium in the cluster: ```shell ~# kubectl get configmaps -n kube-system cilium-config -o yaml | grep cluster-pool cluster-pool-ipv4-cidr: 10.244.64.0/18 cluster-pool-ipv4-mask-size: \"24\" ipam: cluster-pool ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderCoordinator metadata: finalizers: spiderpool.spidernet.io name: default spec: detectGateway: false detectIPConflict: false hijackCIDR: 169.254.0.0/16 hostRPFilter: 0 hostRuleTable: 500 mode: auto podCIDRType: calico podDefaultRouteNIC: \"\" podMACPrefix: \"\" tunePodRoutes: true status: overlayPodCIDR: 10.244.64.0/18 phase: Synced serviceCIDR: 10.233.0.0/18 ``` At present, Spiderpool prioritizes obtaining the cluster's Pod and Service subnets by querying the kube-system/kubeadm-config" }, { "data": "If the kubeadm-config does not exist, causing the failure to obtain the cluster subnet, Spiderpool will attempt to retrieve the cluster Pod and Service subnets from the kube-controller-manager Pod. If the kube-controller-manager component in your cluster runs in systemd mode instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information. If both of the above methods fail, Spiderpool will synchronize the status.phase as NotReady, preventing Pod creation. To address such abnormal situations, we can take either of the following approaches: Manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information: ```shell export PODSUBNET=<YOURPOD_SUBNET> export SERVICESUBNET=<YOURSERVICE_SUBNET> cat << EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: kubeadm-config namespace: kube-system data: ClusterConfiguration: | networking: podSubnet: ${POD_SUBNET} serviceSubnet: ${SERVICE_SUBNET} EOF ``` Once created, Spiderpool will automatically synchronize its status. The subnet for the interface `ens192` on the cluster nodes here is `10.6.0.0/16`. Create a SpiderIPPool using this subnet: ```shell cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: 10-6-v4 spec: disable: false gateway: 10.6.0.1 ips: 10.6.212.200-10.6.212.240 subnet: 10.6.0.0/16 EOF ``` The subnet should be consistent with the subnet of `ens192` on the nodes, and ensure that the IP addresses do not conflict with any existing ones. The Multus NAD instance is created using Spidermultusconfig: ```shell cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-ens192 spec: cniType: macvlan macvlan: master: ens192 ippools: ipv4: 10-6-v4 vlanID: 0 EOF ``` Set `spec.macvlan.master` to `ens192` which must be present on the host. The subnet specified in `spec.macvlan.spiderpoolConfigPools.IPv4IPPool` should match that of `ens192`. 
Check if the Multus NAD has been created successfully: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io macvlan-ens192 -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"spiderpool.spidernet.io/v2beta1\",\"kind\":\"SpiderMultusConfig\",\"metadata\":{\"annotations\":{},\"name\":\"macvlan-ens192\",\"namespace\":\"default\"},\"spec\":{\"cniType\":\"macvlan\",\"coordinator\":{\"podCIDRType\":\"cluster\",\"tuneMode\":\"overlay\"},\"enableCoordinator\":true,\"macvlan\":{\"master\":[\"ens192\"],\"spiderpoolConfigPools\":{\"IPv4IPPool\":[\"10-6-v4\"]},\"vlanID\":0}}} creationTimestamp: \"2023-06-30T07:12:21Z\" generation: 1 name: macvlan-ens192 namespace: default ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: macvlan-ens192 uid: 3f902f46-d9d4-4c62-a7c3-98d4a9aa26e4 resourceVersion: \"24713635\" uid: 712d1e58-ab57-49a7-9189-0fffc64aa9c3 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"macvlan-ens192\",\"plugins\":[{\"type\":\"macvlan\",\"ipam\":{\"type\":\"spiderpool\",\"defaultipv4ippool\":[\"10-6-v4\"]},\"master\":\"ens192\",\"mode\":\"bridge\"},{\"type\":\"coordinattor\",\"ipam\":{},\"dns\":{},\"detectGateway\":false,\"tunePodRoutes\":true,\"mode\":\"overlay\",\"hostRuleTable\":500,\"detectIPConflict\":false}]}' ``` Run the following command to create the demo application nginx: ```shell ~# cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-ens192 labels: app: nginx spec: containers: name: nginx image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP EOF ``` `k8s.v1.cni.cncf.io/networks`: specifies that Multus uses `macvlan-ens192` to attach an additional interface to the Pod. 
Check the Pod's IP allocation after it is ready: ```shell ~# kubectl get po -l app=nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-x34abcsf74-xngkm 1/1 Running 0 2m 10.233.120.101 controller <none> <none> nginx-x34abcsf74-ougjk 1/1 Running 0 2m 10.233.84.230 worker01 <none> <none> ``` ```shell ~# kubectl get se NAME INTERFACE IPV4POOL IPV4 IPV6POOL IPV6 NODE nginx-4653bc4f24-xngkm net1 10-6-v4 10.6.212.202/16 worker01 nginx-4653bc4f24-ougjk net1 10-6-v4 10.6.212.230/16 controller ``` Use the command `ip` to view the Pod's information such as routes: ```shell [root@controller1 ~]# kubectl exec it nginx-4653bc4f24-xngkm sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever 4: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UP group default link/ether a2:99:9d:04:01:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet" }, { "data": "scope global eth0 validlft forever preferredlft forever inet6 fd85:ee78:d8a6:8607::1:f2d5/128 scope global validlft forever preferredlft forever inet6 fe80::a099:9dff:fe04:131/64 scope link validlft forever preferredlft forever 5: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 2a:1e:a1:db:2a:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.6.212.202/16 brd 10.6.255.255 scope global net1 validlft forever preferredlft forever inet6 fd00:10:6::df3/64 scope global validlft forever preferredlft forever inet6 fe80::281e:a1ff:fedb:2a9a/64 scope link validlft forever preferredlft forever / # ip rule 0: from all lookup local 32760: from 10.6.212.131 lookup 100 32766: from all lookup main 32767: from all lookup default / # ip route default via 10.233.65.96 dev eth0 10.233.65.96 dev eth0 scope link 10.6.212.131 dev eth0 scope link 10.233.0.0/18 via 10.6.212.132 dev eth0 10.233.64.0/18 via 10.6.212.132 dev eth0 10.6.0.0/16 dev net1 scope link src 10.6.212.202 / # ip route show table 100 default via 10.6.0.1 dev net1 10.6.0.0/16 dev net1 scope link src 10.6.212.202 10.233.65.96 dev eth0 scope link 10.6.212.131 dev eth0 scope link 10.233.0.0/18 via 10.6.212.132 dev eth0 10.233.64.0/18 via 10.6.212.132 dev eth0 ``` Explanation of the above: The Pod is allocated two interfaces: eth0 (cilium) and net1 (macvlan), having IPv4 addresses of 10.233.120.101 and 10.6.212.202, respectively. 10.233.0.0/18 and 10.233.64.0/18 represent the cluster's CIDR. When the Pod accesses this subnet, traffic will be forwarded through eth0. Each route table will include this route. 10.6.212.132 is the IP address of the node where the Pod has been scheduled. This route ensures that when the Pod accesses the host, traffic will be forwarded through eth0. This series of routing rules guarantees that the Pod will forward traffic through eth0 when accessing targets within the cluster and through net1 for external targets. By default, the Pod's default route is reserved in eth0. To reserve it in net1, add the following annotation to the Pod's metadata: \"ipam.spidernet.io/default-route-nic: net1\". 
To test the east-west connectivity of the Pod, we will use the example of accessing the CoreDNS Pod and Service: ```shell ~# kubectl get all -n kube-system -l k8s-app=kube-dns -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/coredns-57fbf68cf6-2z65h 1/1 Running 1 (91d ago) 91d 10.233.105.131 worker1 <none> <none> pod/coredns-57fbf68cf6-kvcwl 1/1 Running 3 (91d ago) 91d 10.233.73.195 controller <none> <none> NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 91d k8s-app=kube-dns ~# Access the CoreDNS Pod across nodes ~# kubectl exec nginx-4653bc4f24-rswak -- ping 10.233.73.195 -c 2 PING 10.233.73.195 (10.233.73.195): 56 data bytes 64 bytes from 10.233.73.195: seq=0 ttl=62 time=2.348 ms 64 bytes from 10.233.73.195: seq=1 ttl=62 time=0.586 ms 10.233.73.195 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.586/1.467/2.348 ms ~# Access the CoreDNS Service ~# kubectl exec nginx-4653bc4f24-rswak -- curl 10.233.0.3:53 -I % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 curl: (52) Empty reply from server ``` Test the Pod's connectivity for north-south traffic, specifically accessing targets in another subnet (10.7.212.101): ```shell [root@controller1 cyclinder]# kubectl exec nginx-4653bc4f24-rswak -- ping 10.7.212.101 -c 2 PING 10.7.212.101 (10.7.212.101): 56 data bytes 64 bytes from 10.7.212.101: seq=0 ttl=61 time=4.349 ms 64 bytes from 10.7.212.101: seq=1 ttl=61 time=0.877 ms 10.7.212.101 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.877/2.613/4.349 ms ```" } ]
{ "category": "Runtime", "file_name": "get-started-cilium.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "| Author | | | | | | Date | 2020-08-28 | | Email | | The Driver module needs to support overlay2 and devicemapper drivers to achieve the following functions: Initialize the driver; return the driver status; return the metadata information of the driver; cleanup the driver; create a read-only layer; create a read-write layer; delete the layer, obtain the rootfs path of the layer; release the layer; The quota function also needs to be implemented for the overlay2 driver. ```mermaid classDiagram class driver class overlay <<interface>> overlay class devicemapper <<interface>> devicemapper driver : +char *name driver : +char *home driver : +char *backingfs driver : +bool support_dtype driver : +bool support_quota driver : +struct pquotacontrol *quotactrl driver : +struct graphdriver_ops* driver : +init() driver : +create_rw() driver : +create_ro() driver : +rm_layer() driver : +mount_layer() driver : +umount_layer() driver : +exists() driver : +apply_diff() driver : +getlayermetadata() driver : +getdriverstatus() driver : +clean_up() driver : +rm_layer() driver <|-- devicemapper driver <|-- overlay ``` ````c int graphdriverinit(const char *name, const char *isuladroot, char storage_opts, sizet storageopts_len); ```` ````c int graphdrivercreaterw(const char id, const char parent, struct drivercreateopts *create_opts); ```` ````c int graphdrivercreatero(const char id, const char parent, const struct drivercreateopts *create_opts); ```` ````c int graphdriverrmlayer(const char *id) ```` ````c char graphdriver_mount_layer(const char id, const struct drivermountopts *mount_opts) ```` ````c int graphdriverumountlayer(const char *id) ```` ````c bool graphdriverlayerexists(const char *id) ```` ````c int graphdriverapplydiff(const char id, const struct io_read_wrapper content, int64t *layersize) ```` ````c int graphdrivergetlayermetadata(const char *id, jsonmapstringstring *map_info) ```` ````c struct graphdriverstatus *graphdriverget_status(void) ```` ````c int graphdriver_cleanup(void) ```` Driver initialization initialization process: Overlay module initialization process: Devicemapper module initialization process: ````c struct drivercreateopts { char *mount_label; jsonmapstringstring *storageopt; }; ```` According to the incoming ID, parent layer ID and configuration creation options, call the actual driver create_rw interface to realize the function of creating layer layer. When creating the read-write layer, you need to determine whether the quota option is set. If the quota option is not set, the quota value of the default configuration of the daemon will be added as the quota limit of the read-write layer of the container. If the quota option is set and the current file system does not support setting quota, an error will be reported. The current overlay only supports the quota option. If there are other creation options, an error will be reported. ````c struct drivercreateopts { char *mount_label; jsonmapstringstring *storageopt; }; ```` According to the incoming ID, parent layer ID and creation options in configuration, call the actual driver create_ro interface to create a layer read-only layer. When creating a read-only layer, you need to determine whether the quota option is set. If the quota option is set, an error will be" }, { "data": "If there are other creation options, an error will be reported. Call the actual driver rm_layer interface according to the incoming ID to delete the corresponding layer. 
````c struct drivermountopts { char *mount_label; char options; sizet optionslen; }; ```` Call the actual driver mount_layer interface according to the incoming ID to mount the corresponding layer and return the mounted file system path. Call the actual driver umount_layer interface according to the incoming ID to umount the corresponding layer. Call the actual driver exists interface according to the incoming ID to query whether the corresponding layer exists. ````c struct ioreadwrapper { void *context; ioreadfunc_t read; ioclosefunc_t close; }; ```` Call the actual driver apply_diff interface according to the incoming ID to decompress the data. Overlay driver: When decompressing data, special processing is required for the overlay .whout file. If it is a file starting with .wh., it is marked as deleted and needs to be converted to char data, and the file needs to be skipped for subsequent decompression. For example, after deleting the home directory, the corresponding layer data is decompressed locally, and the corresponding home needs to create a character device with the same name. ````c drwxr-xr-x 4 root root 55 Mar 16 15:52 . drwxrwxrwt. 26 root root 4096 Mar 26 12:02 .. drwxr-xr-x 2 root root 38 Mar 16 12:49 etc c 1 root root 0, 0 Mar 16 15:52 home -rw-r--r-- 1 root root 140543 Mar 13 12:12 index.html dr-xr-x 2 root root 26 Mar 13 12:13 root ```` The decompressed data should be chrooted to the corresponding directory to prevent soft link attacks. Call the actual driver getlayermetadata interface according to the incoming ID to query the metadata of the corresponding layer. The metadata supported by overlay query is as follows: | key | value | | :-- | - | | WorkDir | the work path of the overlay layer | | MergedDir | the work path of the overlay layer | | UpperDir | the diff path of the overlay layer | | LowerDir | the underlying path of the overlay layer, including all the underlying paths, divided by: | ````c struct graphdriver_status { char *driver_name; char *backing_fs; char *status; }; ```` Query the status of the driver. The driver status that supports query is as follows: | key | value | | :- | | | driver_name | driver name | | backing_fs | the name of the file system where storage is located | | status | corresponds to the status of the underlying driver<br />the status return information supported by overlay is:<br />Backing Filesystem<br /> Supports d_type: true | Clean up the resources of the corresponding driver according to the call to the clean_up interface of the underlying driver The overlay driver implements uninstalling the storage home directory" } ]
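To make the whiteout handling above more concrete, here is a minimal sketch of the conversion from a `.wh.` tar entry to the character device shown in the listing; the diff-directory path is an illustrative assumption rather than iSulad's real storage layout, and the command must run as root:

```bash
#!/bin/sh
# Sketch of the overlay whiteout conversion described above (run as root).
# The diff directory below is a placeholder, not iSulad's actual layout.
layer_diff=${1:-/tmp/layer-diff}
mkdir -p "$layer_diff"

# A tar entry named ".wh.home" marks "home" as deleted in this layer.
# Instead of extracting the marker file, the driver creates a character
# device (major 0, minor 0) with the original name, matching the
# "c 1 root root 0, 0 ... home" entry in the listing above.
entry=".wh.home"
name=${entry#.wh.}
mknod "$layer_diff/$name" c 0 0
ls -l "$layer_diff"
```

When overlayfs later merges this layer with the lower layers, the 0:0 character device hides any `home` entry beneath it.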
{ "category": "Runtime", "file_name": "image_storage_driver_design.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-health completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-health completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-health_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: Isolating Applications on a Weave Network menu_order: 5 search_type: Documentation A single Weave network can host multiple, isolated applications where each application's containers are able to communicate with each other, but not with the containers of other applications. To isolate applications, you can make use of `isolation-through-subnets` technique. This common strategy is an example of how with Weave Net many of your `on metal` techniques can still be used to deploy applications to a container network. To begin isolating an application (or parts of an application), configure Weave Net's IP allocator to manage multiple subnets. Using , configure multiple subsets: host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 host1$ eval $(weave env) host2$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 $HOST1 host2$ eval $(weave env) This delegates the entire 10.2.0.0/16 subnet to Weave Net, and instructs it to allocate from 10.2.1.0/24 within that, if a specific subnet is not specified. Next, launch the two netcat containers onto the default subnet: host1$ docker run --name a1 -ti weaveworks/ubuntu host2$ docker run --name a2 -ti weaveworks/ubuntu And then to test the isolation, launch a few more containers onto a different subnet: host1$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b1 -ti weaveworks/ubuntu host2$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b2 -ti weaveworks/ubuntu Ping each container to confirm that they can talk to each other, but not to the containers of our first subnet: root@b1:/# ping -c 1 -q b2 PING b2.weave.local (10.2.2.128) 56(84) bytes of data. b2.weave.local ping statistics 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 1.338/1.338/1.338/0.000 ms root@b1:/# ping -c 1 -q a1 PING a1.weave.local (10.2.1.2) 56(84) bytes of data. a1.weave.local ping statistics 1 packets transmitted, 0 received, 100% packet loss, time 0ms root@b1:/# ping -c 1 -q a2 PING a2.weave.local (10.2.1.130) 56(84) bytes of data. a2.weave.local ping statistics 1 packets transmitted, 0 received, 100% packet loss, time 0ms If required, a container can also be attached to multiple subnets when it is started using: host1$ docker run -e WEAVE_CIDR=\"net:default net:10.2.2.0/24\" -ti weaveworks/ubuntu `net:default` is used to request the allocation of an address from the default subnet in addition to one from an explicitly specified range. Important: Containers must be prevented from capturing and injecting raw network packets - this can be accomplished by starting them with the `--cap-drop net_raw` option. Note: By default docker permits communication between containers on the same host, via their docker-assigned IP addresses. For complete isolation between application containers, that feature needs to be disabled by in the docker daemon configuration. See Also *" } ]
{ "category": "Runtime", "file_name": "application-isolation.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "```{toctree} :maxdepth: 1 daemon-behavior Debug Incus <debugging> Requirements </requirements> Packaging recommendations </packaging> environment syscall-interception User namespace setup <userns-idmap> Configuration option index </config-options> ```" } ]
{ "category": "Runtime", "file_name": "internals.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "A Gomega release is a tagged sha and a GitHub release. To cut a release: Ensure CHANGELOG.md is up to date. Use `git log --pretty=format:'- %s [%h]' HEAD...vX.X.X` to list all the commits since the last release Categorize the changes into Breaking Changes (requires a major version) New Features (minor version) Fixes (fix version) Maintenance (which in general should not be mentioned in `CHANGELOG.md` as they have no user impact) Update GOMEGAVERSION in `gomegadsl.go` Push a commit with the version number as the commit message (e.g. `v1.3.0`) Create a new with the version number as the tag (e.g. `v1.3.0`). List the key changes in the release notes." } ]
{ "category": "Runtime", "file_name": "RELEASING.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "This document presents the strategy to ensure continued network connectivity for multiple clones created from a single Firecracker microVM snapshot. This document also provides an overview of the scalability benchmarks we performed. There are two things which prevent network connectivity from resuming out-of-the-box for clones created from the same snapshot: Firecracker currently saves and attempts to restore network devices using the initially configured TAP names, and each guest will be resumed with the same network configuration, most importantly with the same IP address(es). To work around the former, each clone should be started within a separate network namespace (we can have multiple TAP interfaces with the same name, as long as they reside in distinct network namespaces). The latter can be mitigated by leveraging `iptables` `SNAT` and `DNAT` support. We choose a clone address (CA) for each clone, which is the new address thats going to represent the guest, and make it so all packets leaving the VM have their source address rewritten to CA, and all incoming packets with the destination address equal to CA have it rewritten to the IP address configured inside the guest (which remains the same for all clones). Each individual clone continues to believe its using the original address, but outside the VM packets are assigned a different one for every clone. Lets have a more detailed look at this approach. We assume each VM has a single network interface attached. If multiple interfaces with full connectivity are required, we simply repeat the relevant parts of this process for each additional interface. A typical setup right before taking a snapshot involves having a VM with a network interface backed by a TAP device (named `vmtap0`, for example) with an IP address (referred to as the TAP IP address, for example `192.168.241.1/29`), and an IP address configured inside the guest for the corresponding virtio device (referred to as the guest IP address, for example `192.168.241.2/29`). Attempting to restore multiple clones from the same snapshot faces the problem of every single one of them attempting to use a TAP device with the original name, which is not possible by default. Therefore, we need to start each clone in a separate network namespace. This is already possible using the netns jailer parameter, described in the . The specified namespace must already exist, so we have to create it first using ```bash sudo ip netns add fc0 ``` (where `fc0` is the name of the network namespace we plan to use for this specific clone - `clone0`). A new network namespace is initially empty, so we also have to create a new tap interface within using ```bash sudo ip netns exec fc0 ip tuntap add name vmtap0 mode tap ``` The `ip netns exec <ns_name> <command>` allows us to execute `command` in the context of the specified network namespace (in the previous case, the secondary command creates a new tap" }, { "data": "Next we configure the new TAP interface to match the expectations of the snapshotted guest by running ```bash sudo ip netns exec fc0 ip addr add 192.168.241.1/29 dev vmtap0 sudo ip netns exec fc0 ip link set vmtap0 up ``` At this point we can start multiple clones, each in its separate namespace, but they wont have connectivity to the rest of the host, only the respective TAP interfaces. 
However, interaction over the network is still possible; for example we can connect over ssh to clone0 using ```bash sudo ip netns exec fc0 ssh root@192.168.241.2 ``` In order to obtain full connectivity we have to begin by connecting the network namespace to the rest of the host, and then solving the same guest IP problem. The former requires the use of `veth` pairs - *virtual interfaces that are link-local to each other (any packet sent through one end of the pair is immediately received on the other, and the other way around)*. One end resides inside the network namespace, while the other is moved into the parent namespace (the host global namespace in this case), and packets flow in or out according to the network configuration. We have to pick IP addresses for both ends of the veth pair. For clone index `idx`, lets use `10.<idx / 30>.<(idx % 30) * 8>.1/24` for the endpoint residing in the host namespace, and the same address ending with `2` for the other end which remains inside the clone's namespace. Thus, for `clone 0` the former is `10.0.0.1` and the latter `10.0.0.2`. The first endpoint must have an unique name on the host, for example chosen as `veth(idx + 1) (so veth1 for clone 0)`. To create and setup the veth pair, we use the following commands (for namespace `fc0`): ```bash sudo ip netns exec fc0 ip link add veth1 type veth peer name veth0 sudo ip netns exec fc0 ip link set veth1 netns 1 sudo ip netns exec fc0 ip addr add 10.0.0.2/24 dev veth0 sudo ip netns exec fc0 ip link set dev veth0 up sudo ip addr add 10.0.0.1/24 dev veth1 sudo ip link set dev veth1 up sudo ip netns exec fc0 ip route add default via 10.0.0.1 ``` The last step involves adding the `iptables` rules which change the source/destination IP address of packets on the fly (thus allowing all clones to have the same internal IP). We need to choose a clone address, which is unique on the host for each VM. In the demo, we use `192.168.<idx / 30>.<(idx % 30) * 8 + 3>` (which is `192.168.0.3` for `clone 0`): ```bash sudo ip netns exec fc0 iptables -t nat -A POSTROUTING -o veth0 \\ -s 192.168.241.2 -j SNAT --to 192.168.0.3 sudo ip netns exec fc0 iptables -t nat -A PREROUTING -i veth0 \\ -d 192.168.0.3 -j DNAT --to 192.168.241.2 sudo ip route add 192.168.0.3 via 10.0.0.2 ``` Full connectivity to/from the clone should be present at this" }, { "data": "To make sure the guest also adjusts to the new environment, you can explicitly clear the ARP/neighbour table in the guest: ```bash ip -family inet neigh flush any ip -family inet6 neigh flush any ``` Otherwise, packets originating from the guest might be using old Link Layer Address for up to arp cache timeout seconds. After said timeout period, connectivity will work both ways even without an explicit flush. We ran synthetic tests to determine the impact of the addtional iptables rules and namespaces on network performance. We compare the case where each VM runs as regular Firecracker (gets assigned a TAP interface and a unique IP address in the global namespace) versus the setup with a separate network namespace for each VM (together with the veth pair and additional rules). We refer to the former as the basic case, while the latter is the ns case. We measure latency with the `ping` command and throughput with `iperf`. The experiments, ran on an Amazon AWS `m5d.metal` EC2 instace, go as follows: Set up 3000 network resource slots (different TAP interfaces for the basic case, and namespaces + everything else for ns). 
This is mainly to account for any difference the setup itself might make, even if there are not as many active endpoints at any given time. Start 1000 Firecracker VMs, and pick `N < 1000` as the number of active VMs that are going to generate network traffic. For ping experiments, we ping each active VM from the host every 500ms for 30 seconds. For `iperf` experiments, we measure the average bandwidth of connections from the host to every active VM lasting 40 seconds. There is one separate client process per VM. When `N = 100`, in the basic case we get average latencies of `0.315 ms (0.160/0.430 min/max)` for `ping`, and an average throughput of `2.25 Gbps (1.62/3.21 min/max)` per VM for `iperf`. In the ns case, the ping results are bumped higher by around 10-20 us, while the `iperf` results are virtually the same on average, with a higher minimum (1.73 Gbps) and a lower maximum (3 Gbps). When `N = 1000`, we start facing desynchronizations caused by difficulties in starting (and thus finishing) the client processes all at the same time, which creates a wider distribution of results. In the basic case, the average latency for ping experiments has gone down to 0.305 ms, the minimum decreased to 0.155 ms, but the maximum increased to 0.640 ms. The average `iperf` per VM throughput is around `440 Mbps (110/3936 min/max)`. In the ns case, average `ping` latency is now `0.318 ms (0.170/0.650 min/max)`. For `iperf`, the average throughput is very close to basic at `~430 Mbps`, while the minimum and maximum values are lower at `85/3803 Mbps`. The above measurements give a significant degree of confidence in the scalability of the solution (subject to repeating for different values of the experimental parameters, if necessary). The increase in latency is almost negligible considering usual end-to-end delays. The lower minimum throughput from the iperf measurements might be significant, but only if that magnitude of concurrent, data-intensive transfers is likely. Moreover, the basic measurements are close to an absolute upper bound." } ]
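For convenience, the per-clone steps described earlier (network namespace, TAP device, veth pair, routes and NAT rules) can be gathered into a single helper script. The sketch below is illustrative only: it reuses the exact commands and addressing formulas from this document, but the script name, the `fc<idx>` namespace convention and the `veth<idx+1>` naming are assumptions rather than anything Firecracker itself requires.
```bash
#!/usr/bin/env bash
# Illustrative sketch: prepare networking for clone number $1, run as root.
set -euo pipefail

IDX="$1"                                   # clone index, e.g. 0
NS="fc${IDX}"                              # network namespace for this clone
VETH_HOST="veth$((IDX + 1))"               # host end of the veth pair
HOST_IP="10.$((IDX / 30)).$(((IDX % 30) * 8)).1"
PEER_IP="10.$((IDX / 30)).$(((IDX % 30) * 8)).2"
CLONE_ADDR="192.168.$((IDX / 30)).$(((IDX % 30) * 8 + 3))"
GUEST_IP="192.168.241.2"                   # address configured inside the guest
TAP_CIDR="192.168.241.1/29"                # TAP address expected by the snapshot

# Namespace and TAP device matching the snapshotted guest's expectations.
ip netns add "$NS"
ip netns exec "$NS" ip tuntap add name vmtap0 mode tap
ip netns exec "$NS" ip addr add "$TAP_CIDR" dev vmtap0
ip netns exec "$NS" ip link set vmtap0 up

# veth pair connecting the namespace to the host.
ip netns exec "$NS" ip link add "$VETH_HOST" type veth peer name veth0
ip netns exec "$NS" ip link set "$VETH_HOST" netns 1
ip netns exec "$NS" ip addr add "${PEER_IP}/24" dev veth0
ip netns exec "$NS" ip link set dev veth0 up
ip addr add "${HOST_IP}/24" dev "$VETH_HOST"
ip link set dev "$VETH_HOST" up
ip netns exec "$NS" ip route add default via "$HOST_IP"

# NAT rules mapping the shared guest IP to this clone's unique clone address.
ip netns exec "$NS" iptables -t nat -A POSTROUTING -o veth0 \
    -s "$GUEST_IP" -j SNAT --to "$CLONE_ADDR"
ip netns exec "$NS" iptables -t nat -A PREROUTING -i veth0 \
    -d "$CLONE_ADDR" -j DNAT --to "$GUEST_IP"
ip route add "$CLONE_ADDR" via "$PEER_IP"
```
Invoking it as, for example, `sudo ./setup-clone-net.sh 3` would prepare namespace `fc3` before starting the corresponding clone with the netns jailer parameter.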
{ "category": "Runtime", "file_name": "network-for-clones.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "vmadmd(8) -- virtual machine management daemon =============================================== /usr/vm/sbin/vmadmd The vmadmd daemon is designed to run in a SmartOS global zone and support vmadm(8) by performing some actions which require maintaining open connections or waiting for state change events that cannot be handle in a single run of vmadm(8). The functions vmadmd handles are: autobooting KVM VMs exposing KVM VM VNC consoles via TCP handling stopping and rebooting timeouts for KVM VMs handling sending QMP commands to KVM VMs Each of these is described in more detail below. These actions are exposed through an HTTP server listening on a unix socket /tmp/vmadmd.http. The HTTP interface is expected to only be used by vmadm(8) and endpoints should be considered a private interface. It is documented here in order to aid understanding of its behaviour. When the vmadmd process starts it checks for a file /tmp/.autoboot_vmadmd and if this file does not exist, it is created and vmadmd runs through the autoboot process. Since the /tmp filesystem is tmpfs, this file will only not exist on the first vmadmd startup after each boot. The autoboot process involves loading the list of zones with brand=kvm and checking the vm-autoboot property on these zones. If that property is set to true and the VM is not running, it is booted. VMs in a SmartOS system run within zones. The zone in which a VM is running has no network interfaces itself as the vnics for the VM are attached to the qemu process rather than being plumbed in the zone. As such there is no way to have the qemu processes VNC listening on a TCP socket in the zone. To still provide access to the VNC service, VMs run VNC connected to a unix socket inside their zonepath which is then exposed on a TCP socket by vmadmd in the global zone. In order to know when to bring the TCP sockets redirecting to the unix sockets up and down, vmadmd watches for zone status sysevents. When a 'kvm' branded zone goes to the 'running' state and on vmadmd startup, vmadmd will connect to the running VM's /zones/<uuid>/root/tmp/vm.vnc socket and opens a new TCP port that is redirected to this. When a zone is seen coming out of the running state, the TCP socket is closed. The port chosen for the VNC console is random, but can be discovered through the 'vmadm info' command. Note that this port will change if either the VM or vmadmd are restarted. The handling of stopping a KVM VM is complicated by the fact that sending a shutdown request to a guest requires cooperation from that guest. Since guest kernels are not always willing or able to cooperate with these shutdown requests, a 'vmadm stop' command marks a VM as being in transition to 'stopped' and sets an expiry for that transition to complete. Reboot for VMs is implemented as a stop and start so this also applies when running 'vmadm" }, { "data": "Since the vmadm(8) process returns once a stop request is sent, it is up to vmadmd to ensure that if the VM does not complete its transition by the expiry, the stop or reboot is forced. On startup, vmadmd checks all kvm branded zones and sends a forced stop to any which have an expired running transition. If the transition is to start the VM is then started. If the transition is not yet expired when the VM is loaded, vmadmd sets an internal timer to check again at expire time. Since vmadmd also handles all stop requests, this timer is also set when any new stop request comes in. 
Qemu processes support a protocol called QMP for sending several types of management commands to the hypervisor. When created with vmadm(8) the qmp socket will be listening on a unix socket /zones/<uuid>/root/tmp/vm.qmp. vmadmd exposes some of these commands and handles sending them through QMP. The commands exposed are described below. info (GET /vm/:id[?type=type,type,...]) This command actually sends several requests through QMP and adds some 'virtual' results for the VNC socket. By default all possible information is returned. Optionally the caller can specify a list of specific types of info in which case the result will include only that info. The result is returned as a JSON object. Types of info available are: block: Information about the block devices attached to this VM. blockstats: Counters for blocks read/written, number of operations and highest write offset for each block device. chardev: Information about the special character devices attached to this VM. cpus: Information about the virtual CPUs attached to this VM. kvm: Information about the availability of the KVM driver in this VM. pci: Information about each device on the virtual PCI bus attached to this VM. version: Qemu version information. vnc: The IP, port and VNC display number for the TCP socket we're listening on for this VM. stop (POST /vm/:id?action=stop&timeout=secs) This sends a acpi shutdown request to the VM. The guest kernel needs to be configured to handle this request and should immediately begin a shutdown in order to prevent data loss. If the shutdown sequence is not completed within the timeout number of seconds, the VM is forcibly shut down. reloadvnc (POST /vm/:id?action=reloadvnc) This recreates the VNC listener for the VM after loading the new parameters. If the vncpassword and vncport are unchanged, this is a NO-OP. The intention is that this be used after modifying a VM's VNC settings. reset (POST /vm/:id?action=reset) This is the action used by 'vmadm reboot <uuid> -F' command. It acts similarly to pressing the virtual reset switch on your VM. The guest OS will not be given warning or have time to respond to this request. sysrq (POST /vm/:id?action=sysrq&request=[nmi|screenshot]) There are two types of request you can make through the sysrq command: nmi: This sends a non-maskable interrupt to the VM. The guest kernel needs to be configured for this to do anything. screenshot: This takes a screenshot of the VMs console. The screenshot will be written to the /zones/<uuid>/root/tmp/vm.ppm. zones(5), vmadm(8), zonecfg(8), zoneadm(8) The vmadmd service is managed by the service management facility, smf(7), under the service identifier: svc:/system/smartdc/vmadmd:default Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(8). The service's status can be queried using the svcs(1) command." } ]
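Because the HTTP interface above listens on the unix socket /tmp/vmadmd.http, it can be exercised with any HTTP client that supports unix sockets. The sketch below uses curl's --unix-socket option; the hostname part of the URL is arbitrary, the UUID is a placeholder, and in normal operation this private interface is driven by vmadm(8) rather than by hand:
```
curl --unix-socket /tmp/vmadmd.http \
    "http://localhost/vm/54f1cc77-68f1-42ab-acac-5c4f64f5d6e0?type=vnc"

curl -X POST --unix-socket /tmp/vmadmd.http \
    "http://localhost/vm/54f1cc77-68f1-42ab-acac-5c4f64f5d6e0?action=stop&timeout=180"
```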
{ "category": "Runtime", "file_name": "vmadmd.8.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "confd logs everything to stdout. You can control the types of messages that get printed by using the `-log-level` flag and corresponding configuration file settings. See the for more details. Example log messages: ```Bash 2013-11-03T19:04:53-08:00 confd[21356]: INFO SRV domain set to confd.io 2013-11-03T19:04:53-08:00 confd[21356]: INFO Starting confd 2013-11-03T19:04:53-08:00 confd[21356]: INFO etcd nodes set to http://etcd0.confd.io:4001, http://etcd1.confd.io:4001 2013-11-03T19:04:54-08:00 confd[21356]: INFO /tmp/myconf2.conf has md5sum ae5c061f41de8895b6ef70803de9a455 should be 50d4ce679e1cf13e10cd9de90d258996 2013-11-03T19:04:54-08:00 confd[21356]: INFO Target config /tmp/myconf2.conf out of sync 2013-11-03T19:04:54-08:00 confd[21356]: INFO Target config /tmp/myconf2.conf has been updated ```" } ]
{ "category": "Runtime", "file_name": "logging.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "This document provides tips on how to debug Sysbox. Before initiating a debugging session, we must ensure that the binaries that we will be operating on have been built with compiler-optimizations disabled. The following sysbox Makefile targets have been created for this purpose: ``` sysbox-debug sysbox-runc-debug sysbox-fs-debug sysbox-mgr-debug ``` Example: ``` $ make sysbox-debug && sudo make install ``` In some cases, it's desirable to debug process initialization phases, so in those cases we must pick a convenient location where to place a `sleep` instruction that provides user with enough time to launch the debugger. Example (sysbox-runc): ```diff diff --git a/create.go b/create.go index bb551950..a2b29beb 100644 a/create.go +++ b/create.go @@ -2,6 +2,7 @@ package main import ( \"os\" \"time\" @@ -59,6 +60,7 @@ command(s) that get executed on start, edit the args parameter of the spec. See if err := revisePidFile(context); err != nil { return err } time.Sleep(time.Second * 30) if err := sysbox.CheckHostConfig(context); err != nil { return err } ``` Even though GDB offers Golang support, in reality there are a few key features missing today, such as proper understanding of Golang's concurrency constructs (e.g. goroutines). In consequence, in this document i will be focusing on Delve debugger, which is not as feature-rich as the regular GDB, but it fully supports Golang's runtime. Luckily, most of the existing Delve instructions fully match GDB ones, so I will mainly concentrate on those that (slightly) deviate. Installation: ```console rodny@vm-1:~$ go get -u github.com/derekparker/delve/cmd/dlv ``` Change working directory to the sysbox workspace location: ```console rodny@vm-1:~$ cd ~/wsp/sysbox ``` Attaching to a running process: Let's pick sysbox-runc as an example. First, we need to find the PID of the running sysbox-runc process. Use `pstree -SlpgT | grep sysbox` or `ps -ef | grep sysbox` to help with this. Then start the debugger and attach it to the sysbox-runc process via the PID: ```console rodny@vm-1:~/wsp/sysbox/sysbox$ sudo env \"PATH=$PATH\" env \"GOROOT=$GOROOT\" env \"GOPATH=$GOPATH\" env \"PWD=$PWD\" $(which dlv) attach $(pidof sysbox-runc) ``` Notice that to allow Golang runtime to operate as we expect, we must export the existing Golang related env-vars to the newly-spawn delve process. Delve command reference: https://github.com/go-delve/delve/blob/master/Documentation/cli/README.md Setting break-points: Depending on the level of granularity required, we can set breakpoints attending to either one of these approches: Package + Receiver + Method: (dlv) b libcontainer.(*initProcess).start File + Line: (dlv) b process_linux.go:290 Example: ```console (dlv) b libcontainer.(*initProcess).start Breakpoint 1 set at 0x55731c80152d for github.com/opencontainers/runc/libcontainer.(*initProcess).start() /home/rodny/go/src/github.com/opencontainers/runc/libcontainer/process_linux.go:263 ``` Process iteration: We can make use of the typical `n` (next), `s` (step), `c` (continue) instruccions to iterate through a process' instruction-set. 
Example: ```console (dlv) c > github.com/opencontainers/runc/libcontainer.(*initProcess).start() /home/rodny/go/src/github.com/opencontainers/runc/libcontainer/process_linux.go:263 (hits goroutine(1):1 total:1) (PC: 0x55731c80152d) 258: p.cmd.Process = process 259: p.process.ops = p 260: return nil 261: } 262: => 263: func (p *initProcess) start() error { 264: defer p.parentPipe.Close() 265: err := p.cmd.Start() 266: p.process.ops = p 267:" }, { "data": "268: if err != nil { ``` Inspecting the stack-trace: ```console (dlv) bt 0 0x00000000004ead9a in syscall.Syscall6 at /usr/local/go/src/syscall/asmlinuxamd64.s:53 1 0x0000000000524f55 in os.(*Process).blockUntilWaitable at /usr/local/go/src/os/wait_waitid.go:31 2 0x00000000005194ae in os.(*Process).wait at /usr/local/go/src/os/exec_unix.go:22 3 0x00000000005180a1 in os.(*Process).Wait at /usr/local/go/src/os/exec.go:125 4 0x00000000007d870f in os/exec.(*Cmd).Wait at /usr/local/go/src/os/exec/exec.go:501 5 0x0000000000d6c2fa in github.com/opencontainers/runc/libcontainer. (*initProcess).wait at /root/nestybox/sysbox/sysbox-runc/libcontainer/process_linux.go:655 6 0x0000000000d6c43f in github.com/opencontainers/runc/libcontainer.(*initProcess).terminate at /root/nestybox/sysbox/sysbox-runc/libcontainer/process_linux.go:668 7 0x0000000000d89f35 in github.com/opencontainers/runc/libcontainer.(*initProcess).start.func1 at /root/nestybox/sysbox/sysbox-runc/libcontainer/process_linux.go:353 8 0x0000000000d6bace in github.com/opencontainers/runc/libcontainer.(*initProcess).start at /root/nestybox/sysbox/sysbox-runc/libcontainer/process_linux.go:592 9 0x0000000000d3f3ae in github.com/opencontainers/runc/libcontainer.(*linuxContainer).start at /root/nestybox/sysbox/sysbox-runc/libcontainer/container_linux.go:390 10 0x0000000000d3e426 in github.com/opencontainers/runc/libcontainer.(*linuxContainer).Start at /root/nestybox/sysbox/sysbox-runc/libcontainer/container_linux.go:287 11 0x0000000000e1da2e in main.(*runner).run at /root/nestybox/sysbox/sysbox-runc/utils_linux.go:383 12 0x0000000000e1f08f in main.startContainer at /root/nestybox/sysbox/sysbox-runc/utils_linux.go:553 13 0x0000000000e1f78c in main.glob..func2 at /root/nestybox/sysbox/sysbox-runc/create.go:108 14 0x0000000000bac838 in github.com/urfave/cli.HandleAction at /go/pkg/mod/github.com/urfave/cli@v1.22.1/app.go:523 15 0x0000000000bade00 in github.com/urfave/cli.Command.Run at /go/pkg/mod/github.com/urfave/cli@v1.22.1/command.go:174 16 0x0000000000baa123 in github.com/urfave/cli.(*App).Run at /go/pkg/mod/github.com/urfave/cli@v1.22.1/app.go:276 17 0x0000000000e11880 in main.main at /root/nestybox/sysbox/sysbox-runc/main.go:145 18 0x000000000043ad24 in runtime.main at /usr/local/go/src/runtime/proc.go:203 19 0x000000000046c0b1 in runtime.goexit at /usr/local/go/src/runtime/asm_amd64.s:1357 (dlv) ``` Configure the source-code path: Sysbox compilation process is carried out inside a docker container. In order to do this, we bind-mount the user's Sysbox workspace (i.e. \"sysbox\" folder) into this path within the container: `/root/nestybox/sysbox`. Golang compiler includes this path into the generated Sysbox binaries. Thereby, if you are debugging Sysbox daemon in your host, unless your workspace path fully matches the one above (unlikely), Delve will not be able to display the Sysbox source-code. The typical solution in these cases is to modify Delve's configuration to replace the containerized path with the one of your local environment. 
```console (dlv) config substitute-path /root/nestybox/sysbox /home/rodny/wsp/sysbox ``` The source-code should be now properly shown: ```console (dlv) frame 10 > syscall.Syscall6() /usr/local/go/src/syscall/asmlinuxamd64.s:53 (PC: 0x4ead9a) Frame 10: /root/nestybox/sysbox/sysbox-runc/libcontainer/container_linux.go:287 (PC: d3e426) 282: if err := c.setupShiftfsMarks(); err != nil { 283: return err 284: } 285: } 286: } => 287: if err := c.start(process); err != nil { 288: if process.Init { 289: c.deleteExecFifo() 290: } 291: return err 292: } (dlv) ``` Inspecting POSIX threads: ```console (dlv) threads Thread 2955507 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955508 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955509 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955510 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955511 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955512 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955517 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955520 at 0x4ead9a /usr/local/go/src/syscall/asmlinuxamd64.s:53 syscall.Syscall6 Thread 2955523 at 0x46dfd3 /usr/local/go/src/runtime/syslinuxamd64.s:536 runtime.futex Thread 2955564 at 0x46e180 /usr/local/go/src/runtime/syslinuxamd64.s:673 runtime.epollwait (dlv) ``` Inspecting goroutines: ```console (dlv) goroutines Goroutine 1 - User: /usr/local/go/src/syscall/asmlinuxamd64.s:53 syscall.Syscall6 (0x4ead9a) (thread 2955520) Goroutine 2 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 3 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 4 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 5 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 9 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 18 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 20 - User: /usr/local/go/src/runtime/sigqueue.go:147 os/signal.signal_recv (0x450dec) Goroutine 23 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 26 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) Goroutine 32 - User: /usr/local/go/src/runtime/proc.go:305 runtime.gopark (0x43b0db) [11 goroutines] (dlv) ``` NOTE: Use `goroutines -t` to show a full stack trace for each goroutine. Then use `frame X` to switch to the desired" }, { "data": "Get a list of DLV configs: ```console (dlv) config -list ``` Configure print length of strings: ```console (dlv) config max-string-len 1000 ``` Configure max array size: ```console (dlv) config max-array-values 600 ``` Configure depth of variable recursion: ```console (dlv) config max-variable-recurse 2 ``` For unit tests, use `dlv test <package> -test.run <test-name>`: ```console dlv test github.com/nestybox/sysbox-runc/libcontainer/integration -test.run TestSysctl ``` Then set a breakpoint at the desired test line and press `c` (continue). In some cases you need the test to be built with special tags: ```console go test -tags idmapped_mnt ``` Or if you want to attach the Delve debugger to it, first build the test then run it with the debugger. 
```console go test -c -tags idmapped_mnt -gcflags=\"all=-N -l\" sudo env \"PATH=$PATH\" env \"GOROOT=$GOROOT\" env \"GOPATH=$GOPATH\" env \"PWD=$PWD\" $(which dlv) --check-go-version=false exec <path-to-compiled-test>.test ``` As it's usually the case, core-dumps can be generated either through the `gcore` tool (provided as part of the `gdb` package), or within the `dlv` debugger itself. Refer to this for details about the former procedure. For the later, proceed as below. Let's pick sysbox-fs as an example ... ```console dev-vm1:~/wsp/sysbox$ sudo env \"PATH=$PATH\" env \"GOROOT=$GOROOT\" env \"GOPATH=$GOPATH\" env \"PWD=$PWD\" $(which dlv) attach `pidof sysbox-fs` Type 'help' for list of commands. (dlv) dump core.sysbox-fs.1 Dumping memory 203239424 / 203239424... (dlv) quit Would you like to kill the process? [Y/n] n dev-vm1:~/wsp/04-14-2021/sysbox$ ls -lh core.sysbox-fs.1 -rw-r--r-- 1 root root 194M Apr 24 00:04 core.sysbox-fs.1 dev-vm1:~/wsp/sysbox$ sudo tar -zcvf core.sysbox-fs.1.tar.gz core.sysbox-fs.1 core.sysbox-fs.1 dev-vm1:~/wsp/sysbox$ ls -lh core.sysbox-fs.1.tar.gz -rw-r--r-- 1 root root 7.2M Apr 24 00:05 core.sysbox-fs.1.tar.gz ``` To load and debug a previously generated core-dump do the following. Install the Sysbox debugging package corresponding to the release being executed (no symbols are included in the official / production binaries): ```console sudo apt-get install ~/sysbox-ce-dbgsym0.4.0-0.ubuntu-focalamd64.ddeb ``` If debugging from your host: ```console $ sudo env \"PATH=$PATH\" env \"GOROOT=$GOROOT\" env \"GOPATH=$GOPATH\" env \"PWD=$PWD\" $(which dlv) core $(which sysbox-fs) ./core.sysbos-fs.1 ``` If debugging from Sysbox's dev/test container: ```console $ dlv core $(which sysbox-fs) ./core.sysbox-fs.1 ``` In both cases above, `sysbox-fs/sysbox-fs` refers to the path where to find the binary being debugged. Obviously, this binary should fully match the one utilized to generate the original core-dump. To debug cgo code, you must use gdb (delve does not work). Instructions: 1) Build the cgo code with \"go build --buildmode=exe\"; do not use \"--buildmode=pie\", as the position independent code confuses gdb. There may be a gdb option/command to get around this, but it's easier to just build with \"--buildmode=exe\" during debug. 2) In the golang file that calls cgo, use the \"-g\" switch, to tell gccgo to generate debug symbols. ```console ``` 3) If needed, instrument the binary to allow you time to attach the debugger to" }, { "data": "For example, to attach to the sysbox-runc nsenter child process which is normally ephemeral, add an debug \"sleep()\" to an appropriate location within the nsenter (to give you time to find the nsenter pid and attach the debugger to it), then execute sysbox-runc, find the pid with pstree, and attach gdb to it (next step). 3) Attach gdb to the target process (need root access): ```console GNU gdb (Ubuntu 8.3-0ubuntu1) 8.3 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type \"show copying\" and \"show warranty\" for details. This GDB was configured as \"x86_64-linux-gnu\". Type \"show configuration\" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. 
For help, type \"help\". Type \"apropos word\" to search for commands related to \"word\". Attaching to process 17089 No executable file now. warning: Could not load vsyscall page because no executable was specified 0x00007f0d78abf2e2 in ?? () ``` 4) Point gdb to the sysbox-runc binary so it can load the symbols: ```console (gdb) file /usr/bin/sysbox-runc A program is being debugged already. Are you sure you want to change the file? (y or n) y Reading symbols from /usr/bin/sysbox-runc... Loading Go Runtime support. (gdb) bt ``` 5) Then use gdb as usual: ```console (gdb) break nsexec.c:650 Breakpoint 1 at 0xbb4f68: file nsexec.c, line 650. (gdb) c Continuing. Breakpoint 1, updateoomscore_adj (len=4, data=0xe2f62e \"-999\") at nsexec.c:650 650 updateoomscore_adj(\"-999\", 4); (gdb) n nsexec () at nsexec.c:662 662 if (config.namespaces) { (gdb) p config $1 = {data = 0x1b552a0 \"\\b\", cloneflags = 2114060288, oomscoreadj = 0x1b552dc \"0\", oomscoreadjlen = 2, uidmap = 0x1b552ac \"0 165536 65536\\n\", uidmaplen = 16, gidmap = 0x1b552c0 \"0 165536 65536\\n\", gidmaplen = 16, namespaces = 0x0, namespaceslen = 0, issetgroup = 1 '\\001', isrootless_euid = 0 '\\000', uidmappath = 0x0, uidmappathlen = 0, gidmappath = 0x0, gidmappathlen = 0, preprootfs = 1 '\\001', useshiftfs = 1 '\\001', makeparentpriv = 0 '\\000', rootfsprop = 540672, rootfs = 0x1b5530c \"/var/lib/docker/overlay2/d764bae04e3e81674c0f0c8ccfc8dec1ef2483393027723bac6519133fa7a4a2/merged\", rootfslen = 97, parentmount = 0x0, parentmountlen = 0, shiftfsmounts = 0x1b55374 \"/lib/modules/5.3.0-46-generic,/usr/src/linux-headers-5.3.0-46,/usr/src/linux-headers-5.3.0-46-generic,/var/lib/docker/containers/cbf6dfe2bef0563532770ed664829032d00eb278367176de32cd03b7290ea1ac\", shiftfsmountslen = 194} (gdb) set print pretty (gdb) p config $2 = { data = 0x1b552a0 \"\\b\", cloneflags = 2114060288, oomscoreadj = 0x1b552dc \"0\", oomscoreadj_len = 2, uidmap = 0x1b552ac \"0 165536 65536\\n\", uidmap_len = 16, gidmap = 0x1b552c0 \"0 165536 65536\\n\", gidmap_len = 16, namespaces = 0x0, namespaces_len = 0, is_setgroup = 1 '\\001', isrootlesseuid = 0 '\\000', uidmappath = 0x0, uidmappath_len = 0, gidmappath = 0x0, gidmappath_len = 0, prep_rootfs = 1 '\\001', --Type <RET> for more, q to quit, c to continue without paging--q Quit ``` Tip: if you are running sysbox-runc inside the test container, run gdb at host level, use pstree to figure out the pid of sysbox-runc nsenter child process inside the test container, and point gdb to the sysbox-runc binary inside the test container (e.g., `file /var/lib/docker/overlay2/860f62b3bd74c36be6754c8ed8e3f77a63744a2c6b16bef058b22ba0185e2877/merged/usr/bin/sysbox-runc`)." } ]
{ "category": "Runtime", "file_name": "debug.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "title: OpenShift adds a number of security and other enhancements to Kubernetes. In particular, allow the cluster admin to define exactly which permissions are allowed to pods running in the cluster. You will need to define those permissions that allow the Rook pods to run. The settings for Rook in OpenShift are described below, and are also included in the : : Creates the security context constraints and starts the operator deployment : Creates an object store with rgw listening on a valid port number for OpenShift To create an OpenShift cluster, the commands basically include: ```console oc create -f crds.yaml -f common.yaml oc create -f operator-openshift.yaml oc create -f cluster.yaml ``` Configuration required for Openshift is automatically created by the Helm charts, such as the SecurityContextConstraints. See the . To orchestrate the storage platform, Rook requires the following access in the cluster: Create `hostPath` volumes, for persistence by the Ceph mon and osd pods Run pods in `privileged` mode, for access to `/dev` and `hostPath` volumes Host networking for the Rook agent and clusters that require host networking Ceph OSDs require host PIDs for communication on the same node Before starting the Rook operator or cluster, create the security context constraints needed by the Rook pods. The following yaml is found in `operator-openshift.yaml` under `/deploy/examples`. !!! hint Older versions of OpenShift may require `apiVersion: v1`. Important to note is that if you plan on running Rook in namespaces other than the default `rook-ceph`, the example scc will need to be modified to accommodate for your namespaces where the Rook pods are running. To create the scc you will need a privileged account: ```console oc login -u system:admin ``` We will create the security context constraints with the operator in the next section. There are some Rook settings that also need to be adjusted to work in OpenShift. There is an environment variable that needs to be set in the operator spec that will allow Rook to run in OpenShift clusters. `ROOKHOSTPATHREQUIRES_PRIVILEGED`: Must be set to `true`. Writing to the hostPath is required for the Ceph mon and osd pods. Given the restricted permissions in OpenShift with SELinux, the pod must be running privileged in order to write to the hostPath volume. ```yaml name: ROOKHOSTPATHREQUIRES_PRIVILEGED value: \"true\" ``` Now create the security context constraints and the operator: ```console oc create -f operator-openshift.yaml ``` The cluster settings in `cluster.yaml` are largely isolated from the differences in OpenShift. There is perhaps just one to take note of: `dataDirHostPath`: Ensure that it points to a valid, writable path on the host systems. In OpenShift, ports less than 1024 cannot be bound. In the , ensure the port is modified to meet this requirement. ```yaml gateway: port: 8080 ``` You can expose a different port such as `80` by creating a service. A sample object store can be created with these settings: ```console oc create -f object-openshift.yaml ```" } ]
{ "category": "Runtime", "file_name": "ceph-openshift.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "The ttrpc protocol is client/server protocol to support multiple request streams over a single connection with lightweight framing. The client represents the process which initiated the underlying connection and the server is the process which accepted the connection. The protocol is currently defined as asymmetrical, with clients sending requests and servers sending responses. Both clients and servers are able to send stream data. The roles are also used in determining the stream identifiers, with client initiated streams using odd number identifiers and server initiated using even number. The protocol may be extended in the future to support server initiated streams, that is not supported in the latest version. The ttrpc protocol is designed to be lightweight and optimized for low latency and reliable connections between processes on the same host. The protocol does not include features for handling unreliable connections such as handshakes, resets, pings, or flow control. The protocol is designed to make low-overhead implementations as simple as possible. It is not intended as a suitable replacement for HTTP2/3 over the network. Each Message Frame consists of a 10-byte message header followed by message data. The data length and stream ID are both big-endian 4-byte unsigned integers. The message type is an unsigned 1-byte integer. The flags are also an unsigned 1-byte integer and use is defined by the message type. ++ | Data Length (32) | ++ | Stream ID (32) | ++--+ | Msg Type (8) | ++ | Flags (8) | ++--+ | Data (*) | ++ The Data Length field represents the number of bytes in the Data field. The total frame size will always be Data Length + 10 bytes. The maximum data length is 4MB and any larger size should be rejected. Due to the maximum data size being less than 16MB, the first frame byte should always be zero. This first byte should be considered reserved for future use. The Stream ID must be odd for client initiated streams and even for server initiated streams. Server initiated streams are not currently supported. | Message Type | Name | Description | |--|-|-| | 0x01 | Request | Initiates stream | | 0x02 | Response | Final stream data and terminates | | 0x03 | Data | Stream data | The request message is used to initiate stream and send along request data for properly routing and handling the stream. The stream may indicate unary without any inbound or outbound stream data with only a response is expected on the stream. The request may also indicate the stream is still open for more data and no response is expected until data is finished. If the remote indicates the stream is closed, the request may be considered non-unary but without anymore stream data" }, { "data": "In the case of `remote closed`, the remote still expects to receive a response or stream data. For compatibility with non streaming clients, a request with empty flags indicates a unary request. | Flag | Name | Description | ||--|--| | 0x01 | `remote closed` | Non-unary, but no more data expected from remote | | 0x02 | `remote open` | Non-unary, remote is still sending data | The response message is used to end a stream with data, an empty response, or an error. A response message is the only expected message after a unary request. A non-unary request does not require a response message if the server is sending back stream data. A non-unary stream may return a single response message but no other stream data may follow. No response flags are defined at this time, flags should be empty. 
The data message is used to send data on an already initialized stream. Either client or server may send data. A data message is not allowed on a unary stream. A data message should not be sent after indicating `remote closed` to the peer. The last data message on a stream must set the `remote closed` flag. The `no data` flag is used to indicate that the data message does not include any data. This is normally used with the `remote closed` flag to indicate the stream is now closed without transmitting any data. Since ttrpc normally transmits a single object per message, a zero length data message may be interpreted as an empty object. For example, transmitting the number zero as a protobuf message ends up with a data length of zero, but the message is still considered data and should be processed. | Flag | Name | Description | ||--|--| | 0x01 | `remote closed` | No more data expected from remote | | 0x04 | `no data` | This message does not have data | All ttrpc requests use streams to transfer data. Unary streams will only have two messages sent per stream, a request from a client and a response from the server. Non-unary streams, however, may send any numbers of messages from the client and the server. This makes stream management more complicated than unary streams since both client and server need to track additional state. To keep this management as simple as possible, ttrpc minimizes the number of states and uses two flags instead of control frames. Each stream has two states while a stream is still alive: `local closed` and `remote closed`. Each peer considers local and remote from their own perspective and sets flags from the other peer's perspective. For example, if a client sends a data frame with the `remote closed` flag, that is indicating that the client is now `local closed` and the server will be `remote" }, { "data": "A unary operation does not need to send these flags since each received message always indicates `remote closed`. Once a peer is both `local closed` and `remote closed`, the stream is considered finished and may be cleaned up. Due to the asymmetric nature of the current protocol, a client should always be in the `local closed` state before `remote closed` and a server should always be in the `remote closed` state before `local closed`. This happens because the client is always initiating requests and a client always expects a final response back from a server to indicate the initiated request has been fulfilled. This may mean server sends a final empty response to finish a stream even after it has already completed sending data before the client. 
+--+ +--+ | Client | | Server | ++-+ +-++ | ++ | local >+ Request +--> remote closed | ++ | closed | | | +-+ | finished <--+ Response +--< finished | +-+ | | | RC: `remote closed` flag RO: `remote open` flag +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | >-+ Request [RO] +--> | +--+ | | | | ++ | >--+ Data +> | ++ | | | | +--+ | local >+ Data [RC] +> remote closed | +--+ | closed | | | +-+ | finished <--+ Response +--< finished | +-+ | | | +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | local >-+ Request [RC] +--> remote closed | +--+ | closed | | | ++ | <--+ Data +< | ++ | | | | +--+ | finished <+ Data [RC] +< finished | +--+ | | | +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | >-+ Request [RO] +--> | +--+ | | | | ++ | >--+ Data +> | ++ | | | | ++ | <--+ Data +< | ++ | | | | ++ | >--+ Data +> | ++ | | | | +--+ | local >+ Data [RC] +> remote closed | +--+ | closed | | | ++ | <--+ Data +< | ++ | | | | +--+ | finished <+ Data [RC] +< finished | +--+ | | | While this protocol is defined primarily to support Remote Procedure Calls, the protocol does not define the request and response types beyond the messages defined in the protocol. The implementation provides a default protobuf definition of request and response which may be used for cross language rpc. All implementations should at least define a request type which support routing by procedure name and a response type which supports call status. | Version | Features | ||| | 1.0 | Unary requests only | | 1.2 | Streaming support |" } ]
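As an illustration of the framing described earlier, the 10-byte header can be encoded and decoded with a few lines of Go. This is a sketch for clarity only, not the implementation used by the ttrpc library:
```go
package frame

import (
	"encoding/binary"
	"errors"
)

const (
	headerSize  = 10
	maxDataSize = 4 << 20 // 4MB data limit from the protocol
)

type header struct {
	Length   uint32 // number of bytes in the data field
	StreamID uint32 // odd: client initiated, even: server initiated
	Type     uint8  // 0x01 request, 0x02 response, 0x03 data
	Flags    uint8  // interpretation depends on the message type
}

// encode writes the big-endian header into a 10-byte buffer.
func encode(buf []byte, h header) {
	binary.BigEndian.PutUint32(buf[0:4], h.Length)
	binary.BigEndian.PutUint32(buf[4:8], h.StreamID)
	buf[8] = h.Type
	buf[9] = h.Flags
}

// decode parses a 10-byte header, rejecting frames over the 4MB limit.
func decode(buf []byte) (header, error) {
	if len(buf) < headerSize {
		return header{}, errors.New("short header")
	}
	h := header{
		Length:   binary.BigEndian.Uint32(buf[0:4]),
		StreamID: binary.BigEndian.Uint32(buf[4:8]),
		Type:     buf[8],
		Flags:    buf[9],
	}
	if h.Length > maxDataSize {
		return header{}, errors.New("frame data exceeds 4MB limit")
	}
	return h, nil
}
```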
{ "category": "Runtime", "file_name": "PROTOCOL.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "containerd offers a fully namespaced API so multiple consumers can all use a single containerd instance without conflicting with one another. Namespaces allow multi-tenancy within a single daemon. This removes the need for the common pattern of using nested containers to achieve this separation. Consumers are able to have containers with the same names but with settings and/or configurations that vary drastically. For example, system or infrastructure level containers can be hidden in one namespace while user level containers are kept in another. Underlying image content is still shared via content addresses but image names and metadata are separate per namespace. It is important to note that namespaces, as implemented, is an administrative construct that is not meant to be used as a security feature. It is trivial for clients to switch namespaces. The client specifies the namespace via the `context`. There is a `github.com/containerd/containerd/v2/namespaces` package that allows a user to get and set the namespace on a context. ```go // set a namespace ctx := namespaces.WithNamespace(context.Background(), \"my-namespace\") // get the namespace ns, ok := namespaces.Namespace(ctx) ``` Because the client calls containerd's gRPC API to interact with the daemon, all API calls require a context with a namespace set. Note that a namespace cannot be named `\"version\"` (). Namespaces are passed through the containerd API to the underlying plugins providing functionality. Plugins must be written to take namespaces into account. Filesystem paths, IDs, and other system level resources must be namespaced for a plugin to work properly. Simply create a new `context` and set your application's namespace on the `context`. Make sure to use a unique namespace for applications that does not conflict with existing namespaces. The namespaces API, or the `ctr namespaces` client command, can be used to query/list and create new namespaces. ```go ctx := context.Background() var ( docker = namespaces.WithNamespace(ctx, \"docker\") vmware = namespaces.WithNamespace(ctx, \"vmware\") ecs = namespaces.WithNamespace(ctx, \"aws-ecs\") cri = namespaces.WithNamespace(ctx, \"cri\") ) ``` Namespaces can have a list of labels associated with the namespace. This can be useful for associating metadata with a particular namespace. Labels can also be used to configure the defaults for containerd, for example: ```bash sudo ctr namespaces label k8s.io containerd.io/defaults/snapshotter=btrfs sudo ctr namespaces label k8s.io containerd.io/defaults/runtime=testRuntime ``` This will set the default snapshotter as `btrfs` and runtime as `testRuntime`. Note that currently only these two labels are used to configure the defaults and labels of `default` namespace are not considered for the same. If we need to inspect containers, images, or other resources in various namespaces the `ctr` tool allows you to do this. Simply set the `--namespace,-n` flag on `ctr` to change the namespace. If you do not provide a namespace, `ctr` client commands will all use the default namespace, which is simply named \"`default`\". ```bash sudo ctr -n docker tasks sudo ctr -n cri tasks ``` You can also use the `CONTAINERD_NAMESPACE` environment variable to specify the default namespace to use for any of the `ctr` client commands." } ]
{ "category": "Runtime", "file_name": "namespaces.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "We take all security reports seriously. When we receive such reports, we will investigate and subsequently address any potential vulnerabilities as quickly as possible. If you discover a potential security issue in this project, please notify AWS/Amazon Security via our or directly via email to . Please do not create a public GitHub issue in this project." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "title: Setup description: Install Virtual Kubelet using one of several methods weight: 1 You can install Virtual Kubelet by building it . First, make sure that you have a set. Then clone the Virtual Kubelet repository and run `make build`: ```bash mkdir -p ${GOPATH}/src/github.com/virtual-kubelet cd ${GOPATH}/src/github.com/virtual-kubelet git clone https://github.com/virtual-kubelet/virtual-kubelet cd virtual-kubelet && make build ``` This method adds a `virtual-kubelet` executable to the `bin` folder. To run it: ```bash bin/virtual-kubelet ``` Once you have Virtual Kubelet installed, you can move on to the documentation." } ]
{ "category": "Runtime", "file_name": "setup.md", "project_name": "Virtual Kubelet", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage transparent encryption ``` -h, --help help for encrypt ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Flushes the current IPsec state - Display the current encryption state" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_encrypt.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Users face difficulty in knowing the progress of backup/restore operations of volume snapshotters. This is very similar to the issues faced by users to know progress for restic backup/restore, like, estimation of operation, operation in-progress/hung etc. Each plugin might be providing a way to know the progress, but, it need not uniform across the plugins. Even though plugins provide the way to know the progress of backup operation, this information won't be available to user during restore time on the destination cluster. So, apart from the issues like progress, status of operation, volume snapshotters have unique problems like not being uniform across plugins not knowing the backup information during restore operation need to be optional as few plugins may not have a way to provide the progress information This document proposes an approach for plugins to follow to provide backup/restore progress, which can be used by users to know the progress. Provide uniform way of visibility into backup/restore operations performed by volume snapshotters Plugin implementation for this approach (Omitted, see introduction) Progress will be updated by volume snapshotter in VolumePluginBackup CR which is specific to that backup operation. Progress will be updated by volume snapshotter in VolumePluginRestore CR which is specific to that restore operation. Existing `Snapshot` Go struct from `volume` package have most of the details related to backup operation performed by volumesnapshotters. This struct also gets backed up to backup location. But, this struct doesn't get synced on other clusters at regular intervals. It is currently synced only during restore operation, and velero CLI shows few of its contents. At a high level, in this approach, this struct will be converted to a CR by adding new fields (related to Progress tracking) to it, and gets rid of `volume.Snapshot` struct. Instead of backing up of Go struct, proposal is: to backup CRs to backup location, and sync them into other cluster by backupSyncController running in that cluster. There is one addition to volume.SnapshotSpec, i.e., ProviderName to convert it to CR's spec. Below is the updated VolumePluginBackup CR's Spec: ``` type VolumePluginBackupSpec struct { // BackupName is the name of the Velero backup this snapshot // is associated with. BackupName string `json:\"backupName\"` // BackupUID is the UID of the Velero backup this snapshot // is associated with. BackupUID string `json:\"backupUID\"` // Location is the name of the VolumeSnapshotLocation where this snapshot is stored. Location string `json:\"location\"` // PersistentVolumeName is the Kubernetes name for the volume. PersistentVolumeName string `json:\"persistentVolumeName\"` // ProviderVolumeID is the provider's ID for the volume. ProviderVolumeID string `json:\"providerVolumeID\"` // Provider is the Provider field given in VolumeSnapshotLocation Provider string `json:\"provider\"` // VolumeType is the type of the disk/volume in the cloud provider // API. VolumeType string `json:\"volumeType\"` // VolumeAZ is the where the volume is provisioned // in the cloud provider. VolumeAZ string `json:\"volumeAZ,omitempty\"` // VolumeIOPS is the optional value of provisioned IOPS for the // disk/volume in the cloud provider API. 
VolumeIOPS *int64 `json:\"volumeIOPS,omitempty\"` } ``` Few fields (except first two) are added to volume.SnapshotStatus to convert it to CR's" }, { "data": "Below is the updated VolumePluginBackup CR's status: ``` type VolumePluginBackupStatus struct { // ProviderSnapshotID is the ID of the snapshot taken in the cloud // provider API of this volume. ProviderSnapshotID string `json:\"providerSnapshotID,omitempty\"` // Phase is the current state of the VolumeSnapshot. Phase SnapshotPhase `json:\"phase,omitempty\"` // PluginSpecific are a map of key-value pairs that plugin want to provide // to user to identify plugin properties related to this backup // +optional PluginSpecific map[string]string `json:\"pluginSpecific,omitempty\"` // Message is a message about the volume plugin's backup's status. // +optional Message string `json:\"message,omitempty\"` // StartTimestamp records the time a backup was started. // Separate from CreationTimestamp, since that value changes // on restores. // The server's time is used for StartTimestamps // +optional // +nullable StartTimestamp *metav1.Time `json:\"startTimestamp,omitempty\"` // CompletionTimestamp records the time a backup was completed. // Completion time is recorded even on failed backups. // Completion time is recorded before uploading the backup object. // The server's time is used for CompletionTimestamps // +optional // +nullable CompletionTimestamp *metav1.Time `json:\"completionTimestamp,omitempty\"` // Progress holds the total number of bytes of the volume and the current // number of backed up bytes. This can be used to display progress information // about the backup operation. // +optional Progress VolumeOperationProgress `json:\"progress,omitempty\"` } type VolumeOperationProgress struct { TotalBytes int64 BytesDone int64 } type VolumePluginBackup struct { metav1.TypeMeta `json:\",inline\"` // +optional metav1.ObjectMeta `json:\"metadata,omitempty\"` // +optional Spec VolumePluginBackupSpec `json:\"spec,omitempty\"` // +optional Status VolumePluginBackupStatus `json:\"status,omitempty\"` } ``` For every backup operation of volume, Velero creates VolumePluginBackup CR before calling volumesnapshotter's CreateSnapshot API. In order to know the CR created for the particular backup of a volume, Velero adds following labels to CR: `velero.io/backup-name` with value as Backup Name, and, `velero.io/pv-name` with value as volume that is undergoing backup Backup name being unique won't cause issues like duplicates in identifying the CR. Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35). If Plugin supports showing progress of the operation it is performing, it does following: finds the VolumePluginBackup CR related to this backup operation by using `tags` passed in CreateSnapshot call updates the CR with the progress regularly. After return from `CreateSnapshot` in `takePVSnapshot`, currently Velero adds `volume.Snapshot` to `backupRequest`. Instead of this, CR will be added to `backupRequest`. During persistBackup call, this CR also will be backed up to backup location. In backupSyncController, it checks for any VolumePluginBackup CRs that need to be synced from backup location, and syncs them to cluster if needed. VolumePluginBackup will be useful as long as backed up data is available at backup location. When the Backup is deleted either by manually or due to expiry, VolumePluginBackup also can be deleted. 
`processRequest` of `backupDeletionController` will perform deletion of VolumePluginBackup before volumesnapshotter's DeleteSnapshot is called. Currently `volume.Snapshot` is backed up as `<backupname>-volumesnapshots.json.gz` file in the backup location. As the VolumePluginBackup CR is backed up instead of `volume.Snapshot`, to provide backward compatibility, CR will be backed as the same file i.e., `<backupname>-volumesnapshots.json.gz` file in the backup location. For backward compatibility on restore side, consider below possible cases wrt Velero version on restore side and format of json.gz file at object location: older version of Velero, older json.gz file (backupname-volumesnapshots.json.gz) older version of Velero, newer json.gz file newer version of Velero, older json.gz file newer version of Velero, newer json.gz file First and last should be" }, { "data": "For second case, decode in `GetBackupVolumeSnapshots` on the restore side should fill only required fields of older version and should work. For third case, after decode, metadata.name will be empty. `GetBackupVolumeSnapshots` decodes older json.gz into the CR which goes fine. It will be modified to return []VolumePluginBackupSpec, and the changes are done accordingly in its caller. If decode fails in second case during implementation, this CR need to be backed up to different file. And, for backward compatibility, newer code should check for old file existence, and follow older code if exists. If it doesn't exists, check for newer file and follow the newer code. `backupSyncController` on restore clusters gets the `<backupname>-volumesnapshots.json.gz` object from backup location and decodes it to in-memory VolumePluginBackup CR. If its `metadata.name` is populated, controller creates CR. Otherwise, it will not create the CR on the cluster. It can be even considered to create CR on the cluster. ``` // VolumePluginRestoreSpec is the specification for a VolumePluginRestore CR. type VolumePluginRestoreSpec struct { // SnapshotID is the identifier for the snapshot of the volume. // This will be used to relate with output in 'velero describe backup' SnapshotID string `json:\"snapshotID\"` // BackupName is the name of the Velero backup from which PV will be // created. BackupName string `json:\"backupName\"` // Provider is the Provider field given in VolumeSnapshotLocation Provider string `json:\"provider\"` // VolumeType is the type of the disk/volume in the cloud provider // API. VolumeType string `json:\"volumeType\"` // VolumeAZ is the where the volume is provisioned // in the cloud provider. VolumeAZ string `json:\"volumeAZ,omitempty\"` } // VolumePluginRestoreStatus is the current status of a VolumePluginRestore CR. type VolumePluginRestoreStatus struct { // Phase is the current state of the VolumePluginRestore. Phase string `json:\"phase\"` // VolumeID is the PV name to which restore done VolumeID string `json:\"volumeID\"` // Message is a message about the volume plugin's restore's status. // +optional Message string `json:\"message,omitempty\"` // StartTimestamp records the time a restore was started. // Separate from CreationTimestamp, since that value changes // on restores. // The server's time is used for StartTimestamps // +optional // +nullable StartTimestamp *metav1.Time `json:\"startTimestamp,omitempty\"` // CompletionTimestamp records the time a restore was completed. // Completion time is recorded even on failed restores. 
// The server's time is used for CompletionTimestamps // +optional // +nullable CompletionTimestamp *metav1.Time `json:\"completionTimestamp,omitempty\"` // Progress holds the total number of bytes of the snapshot and the current // number of restored bytes. This can be used to display progress information // about the restore operation. // +optional Progress VolumeOperationProgress `json:\"progress,omitempty\"` // PluginSpecific are a map of key-value pairs that plugin want to provide // to user to identify plugin properties related to this restore // +optional PluginSpecific map[string]string `json:\"pluginSpecific,omitempty\"` } type VolumePluginRestore struct { metav1.TypeMeta `json:\",inline\"` // +optional metav1.ObjectMeta `json:\"metadata,omitempty\"` // +optional Spec VolumePluginRestoreSpec `json:\"spec,omitempty\"` // +optional Status VolumePluginRestoreStatus `json:\"status,omitempty\"` } ``` For every restore operation, Velero creates VolumePluginRestore CR before calling volumesnapshotter's CreateVolumeFromSnapshot API. In order to know the CR created for the particular restore of a volume, Velero adds following labels to CR: `velero.io/backup-name` with value as Backup Name, and, `velero.io/snapshot-id` with value as snapshot id that need to be restored" }, { "data": "with value as `Provider` in `VolumeSnapshotLocation` Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35). Plugin will be able to identify CR by using snapshotID that it received as parameter of CreateVolumeFromSnapshot API, and plugin's Provider name. It updates the progress of restore operation regularly if plugin supports feature of showing progress. Velero deletes VolumePluginRestore CR when it handles deletion of Restore CR. This approach is different to approach 1 only with respect to Backup. ``` // VolumePluginBackupSpec is the specification for a VolumePluginBackup CR. type VolumePluginBackupSpec struct { // Volume is the PV name to be backed up. Volume string `json:\"volume\"` // Backup name Backup string `json:\"backup\"` // Provider is the Provider field given in VolumeSnapshotLocation Provider string `json:\"provider\"` } // VolumePluginBackupStatus is the current status of a VolumePluginBackup CR. type VolumePluginBackupStatus struct { // Phase is the current state of the VolumePluginBackup. Phase string `json:\"phase\"` // SnapshotID is the identifier for the snapshot of the volume. // This will be used to relate with output in 'velero describe backup' SnapshotID string `json:\"snapshotID\"` // Message is a message about the volume plugin's backup's status. // +optional Message string `json:\"message,omitempty\"` // StartTimestamp records the time a backup was started. // Separate from CreationTimestamp, since that value changes // on restores. // The server's time is used for StartTimestamps // +optional // +nullable StartTimestamp *metav1.Time `json:\"startTimestamp,omitempty\"` // CompletionTimestamp records the time a backup was completed. // Completion time is recorded even on failed backups. // Completion time is recorded before uploading the backup object. 
// The server's time is used for CompletionTimestamps // +optional // +nullable CompletionTimestamp *metav1.Time `json:\"completionTimestamp,omitempty\"` // PluginSpecific are a map of key-value pairs that plugin want to provide // to user to identify plugin properties related to this backup // +optional PluginSpecific map[string]string `json:\"pluginSpecific,omitempty\"` // Progress holds the total number of bytes of the volume and the current // number of backed up bytes. This can be used to display progress information // about the backup operation. // +optional Progress VolumeOperationProgress `json:\"progress,omitempty\"` } type VolumeOperationProgress struct { TotalBytes int64 BytesDone int64 } type VolumePluginBackup struct { metav1.TypeMeta `json:\",inline\"` // +optional metav1.ObjectMeta `json:\"metadata,omitempty\"` // +optional Spec VolumePluginBackupSpec `json:\"spec,omitempty\"` // +optional Status VolumePluginBackupStatus `json:\"status,omitempty\"` } ``` For every backup operation of volume, volume snapshotter creates VolumePluginBackup CR in Velero namespace. It keep updating the progress of operation along with other details like Volume name, Backup Name, SnapshotID etc as mentioned in the CR. In order to know the CR created for the particular backup of a volume, volume snapshotters adds following labels to CR: `velero.io/backup-name` with value as Backup Name, and, `velero.io/volume-name` with value as volume that is undergoing backup Backup name being unique won't cause issues like duplicates in identifying the CR. Plugin need to sanitize the value that can be set for above labels. Label need to be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35). Though no restrictions are required on the name of CR, as a general practice, volume snapshotter can name this CR with the value same as return value of CreateSnapshot. After return from `CreateSnapshot` in `takePVSnapshot`, if VolumePluginBackup CR exists for particular backup of the volume, velero adds this CR to `backupRequest`. During persistBackup call, this CR also will be backed up to backup" }, { "data": "In backupSyncController, it checks for any VolumePluginBackup CRs that need to be synced from backup location, and syncs them to cluster if needed. `processRequest` of `backupDeletionController` will perform deletion of VolumePluginBackup before volumesnapshotter's DeleteSnapshot is called. Another alternative is: Deletion of `VolumePluginBackup` CR can be delegated to plugin. Plugin can perform deletion of VolumePluginBackup using the `snapshotID` passed in volumesnapshotter's DeleteSnapshot request. Creation of the VolumePluginBackup/VolumePluginRestore CRDs at installation time Persistence of VolumePluginBackup CRs towards the end of the backup operation As part of backup synchronization, VolumePluginBackup CRs related to the backup will be synced. Deletion of VolumePluginBackup when volumeshapshotter's DeleteSnapshot is called Deletion of VolumePluginRestore as part of handling deletion of Restore CR In case of approach 1, converting `volume.Snapshot` struct as CR and its related changes creation of VolumePlugin(Backup|Restore) CRs before calling volumesnapshotter's API `GetBackupVolumeSnapshots` and its callers related changes for change in return type from []volume.Snapshot to []VolumePluginBackupSpec. 
In the 'velero describe' CLI, the required CRs are fetched from the API server and their contents, such as backupName, PVName (if changed due to the label size limitation), and the size of the PV snapshot, are shown in the output. When the CRs get upgraded, Velero can also support older API versions (until they are deprecated) to identify the CRs that need to be persisted to the backup location, though it can give preference to the latest supported API. If new fields are added without changing the API version, this won't cause any problems, as these resources are intended to provide information and there is no reconciliation on them. A plugin that supports this CR should handle the situation gracefully when the CRDs are not installed, and it can handle any errors that occur during creation or update of the CRs. Plugins that are not Kubernetes-native will not be able to implement this, as they cannot create the CRs. The proposed approach has the limitation that a plugin needs to be Kubernetes-native in order to create and update the CRs. An alternative would be to add a new 'Progress' method to the interface; the Velero server would regularly poll this 'Progress' method and update the VolumePluginBackup CR on behalf of the plugin. However, this involves a good amount of change and needs a path for backward compatibility. As volume plugins are mostly Kubernetes-native, it is acceptable to go ahead with the current limitation. Instead of creating new CRs, plugins could directly update the status of the Backup CR, but this deviates from the current approach of having separate CRs, like PodVolumeBackup/PodVolumeRestore, for tracking operation progress. Instead of using labels to identify the CR related to a particular backup of a volume, a restriction could be placed on the name of the VolumePluginBackup CR, requiring it to be the same as the value returned from CreateSnapshot; however, this can cause issues when the volume snapshotter crashes without returning a snapshot ID to Velero. If the CR were backed up to an object other than `#backup-volumesnapshots.json.gz` in the backup location, the restore controller would need to follow a 'fall-back' model: first check for the new kind of object and, if it doesn't exist, fall back to the old model. To avoid this error-prone 'fall-back' model, the VolumePluginBackup CR is backed up to the same location as the `volume.Snapshot` data. Currently everything runs under the same `velero` service account, so all plugins have broad access, which would include being able to modify CRs created by another plugin." } ]
{ "category": "Runtime", "file_name": "plugin-backup-and-restore-progress-design.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This document lists the tasks required to create a Kata Release. GitHub permissions to run workflows. The Kata Containers project uses for all releases. Semantic versions are comprised of three fields in the form: ``` MAJOR.MINOR.PATCH ``` When `MINOR` increases, the new release adds new features but without changing the existing behavior. When `MAJOR` increases, the new release adds new features, bug fixes, or both and which changes the behavior from the previous release (incompatible with previous releases). A major release will also likely require a change of the container manager version used, -for example Containerd or CRI-O. Please refer to the release notes for further details. Important : the Kata Containers project doesn't have stable branches (see for details). Bug fixes are released as part of `MINOR` or `MAJOR` releases only. `PATCH` is always `0`. When the `kata-containers/kata-containers` repository is ready for a new release, first create a PR to set the release in the `VERSION` file and have it merged. We make use of in the file from the `kata-containers/kata-containers` repository to build and upload release artifacts. The action is manually triggered and is responsible for generating a new release (including a new tag), pushing those to the `kata-containers/kata-containers` repository. The new release is initially created as a draft. It is promoted to an official release when the whole workflow has completed successfully. Check the [actions status page](https://github.com/kata-containers/kata-containers/actions) to verify all steps in the actions workflow have completed successfully. On success, a static tarball containing Kata release artifacts will be uploaded to the [Release page](https://github.com/kata-containers/kata-containers/releases). If the workflow fails because of some external environmental causes, e.g. network timeout, simply re-run the failed jobs until they eventually succeed. If for some reason you need to cancel the workflow or re-run it entirely, go first to the and delete the draft release from the previous run. Release notes are auto-generated by the GitHub CLI tool used as part of our release workflow. However, some manual tweaking may still be necessary in order to highlight the most important features and bug fixes in a specific release. With this in mind, please, poke @channel on #kata-dev and people who worked on the release will be able to contribute to that. Publish in [Slack and Kata mailing list](https://github.com/kata-containers/community#join-us) that new release is ready." } ]
{ "category": "Runtime", "file_name": "Release-Process.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark backup logs\" layout: docs Get backup logs Get backup logs ``` ark backup logs BACKUP [flags] ``` ``` -h, --help help for logs --timeout duration how long to wait to receive logs (default 1m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups" } ]
{ "category": "Runtime", "file_name": "ark_backup_logs.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The following assumes that you have applied the monitoring stack onto your cluster. Monitor the Kilo DaemonSet with: ```shell kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/podmonitor.yaml ``` Monitor the WireGuard interfaces with: ```shell kubectl create ns kilo kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/wg-exporter.yaml ``` The manifest will deploy the as a DaemonSet and a . By default the kube-prometheus stack only monitors the `default`, `kube-system` and `monitoring` namespaces. In order to allow Prometheus to monitor the `kilo` namespace, apply the Role and RoleBinding with: ```shell kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/wg-exporter-role-kube-prometheus.yaml ``` Kilo exports some standard metrics with the Prometheus GoCollector and ProcessCollector. It also exposes some Kilo-specific metrics. ``` ``` The exports the following metrics: ``` ``` If your laptop is a Kilo peer of the cluster you can access the Prometheus UI by navigating your browser directly to the cluster IP of the `prometheus-k8s` service. Otherwise use `port-forward`: ```shell kubectl -n monitoring port-forward svc/prometheus-k8s 9090 ``` and navigate your browser to `localhost:9090`. Check if you can see the PodMonitors for Kilo and the WireGuard Exporter under Status -> Targets in the Prometheus web UI. If you don't see them, check the logs of the `prometheus-k8s` Pods; it may be that Prometheus doesn't have the permission to get Pods in the `kilo` namespace. In this case, you need to apply the Role and RoleBinding from above. Navigate to Graph and try to execute a simple query, e.g. type `kilo_nodes` and click on `execute`. You should see some data. Let's navigate to the Grafana dashboard. Again, if your laptop is not a Kilo peer, use `port-forward`: ```shell kubectl -n monitoring port-forward svc/grafana 3000 ``` Now navigate your browser to `localhost:3000`. The default user and password is `admin` `admin`. An example configuration for a dashboard displaying Kilo metrics can be found . You can import this dashboard by hitting + -> Import on the Grafana dashboard. The dashboard looks like this: <img src=\"./graphs/kilo.png\" />" } ]
{ "category": "Runtime", "file_name": "monitoring.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-bugtool for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Collects agent & system information useful for bug reporting - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh" } ]
{ "category": "Runtime", "file_name": "cilium-bugtool_completion.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Previous change logs can be found at Optimize ansible script, optimize build script, optimize log printing. Translate some document and code comment from chinese to english. Add a script for k8s to attach curve volume. curveopstool improve: - Hardware: 6 nodes, each with: 20x SATA SSD Intel SSD DC S3500 Series 800G 2x Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz 2x Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection, bond mode is 802.3ad with layer2+3 hash policy 251G RAM Performance test is based on curve-nbd, the size of each block device is 200GB, all configurations are default, and each Chunkserver is deployed on one SSD. 1 NBD block device: | item | iops/bandwidth | avg-latency | 99th-latency | striped volume<br>iops/bandwidth | striped volume<br>avg-latency | striped volume<br>99th-latency | | :-: | :-: | :-: | :-: | :-: |:-: |:-: | | 4K randwrite, 128 depth | 97,700 iops | 1.3 ms | 2.0 ms | 97,100 iops | 1.3 ms | 3.0 ms | | 4K randread, 128 depth | 119,000 iops | 1.07 ms | 1.8 ms | 98,000 iops | 1.3 ms | 2.2 ms | | 512K write, 128 depth | 208 MB/s | 307 ms | 347 ms | 462 MB/s | 138 ms | 228 ms | | 512K read, 128 depth | 311 MB/s | 206 ms | 264 ms | 843 MB/s | 75 ms | 102 ms | 10 NBD block device: | item | iops/bandwidth | avg-latency | 99th-latency | striped volume<br>iops/bandwidth | striped volume<br>avg-latency | striped volume<br>99th-latency | | :-: | :-: | :-: | :-: | :-: |:-: |:-: | | 4K randwrite, 128 depth | 231,000 iops | 5.6 ms | 50 ms | 227,000 iops | 5.9 ms | 53 ms | | 4K randread, 128 depth | 350,000 iops | 3.7 ms | 8.2 ms | 345,000 iops | 3.8 ms | 8.2 ms | | 512K write, 128 depth | 805 MB/s | 415 ms | 600 ms | 1,077 MB/s | 400 ms | 593 ms | | 512K read, 128 depth | 2,402 MB/s | 267 ms | 275 ms | 3,313 MB/s | 201 ms | 245 ms | <hr/> <hr/>" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.2.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "| Parameter | Required| Description |Option Values |Default | | |-|--| --|| | `diskSelector.name` |Yes |Disk group name | | | | `diskSelector.re` |Yes |Matches the disk group policy supports regular expressions | | | | `diskSelector.policy` |Yes |Disk group name matching policy | | | | `diskSelector.nodeLabel` |Yes |Disk group name matching node label | | | | `diskScanInterval` |Yes |Disk scan interval, 0 to close the local disk scanning | | | | `schedulerStrategy` |Yes |Disk group name scheduling policies : binpack select the disk capacity for PV just met requests. storage node, spreadout of the most select the remaining disk capacity for PV nodes | `binpack``spreadout` | `spreadout` | ```yaml config.json: |- { \"diskSelector\": [ { \"name\": \"carina-vg-ssd\", \"re\": [\"loop2+\"], \"policy\": \"LVM\", \"nodeLabel\": \"kubernetes.io/hostname\" }, { \"name\": \"carina-raw-hdd\", \"re\": [\"vdb+\", \"sd+\"], \"policy\": \"RAW\", \"nodeLabel\": \"kubernetes.io/hostname\" } ], \"diskScanInterval\": \"300\", \"schedulerStrategy\": \"spreadout\" } ``` | Parameter |Required| Description |Option Values |Default | | --|-|--| --|| | `csi.storage.k8s.io/fstype` |No |Mount the device file format |`xfs`,`ext4` |`ext4` | | `carina.storage.io/backend-disk-group-name` |No |Back - end storage devices, disk type, fill out the slow disk group name |User - configured disk group name | | | `carina.storage.io/cache-disk-group-name` |No |Cache device type of disk, fill out the quick disk group name |User - configured disk group name | | | `carina.storage.io/cache-disk-ratio` |No |Cache range from 1-100 per cent, the rate equation is `cache-disk-size = backend-disk-size * cache-disk-ratio / 100` | 1-100 | | | `carina.storage.io/cache-policy` |Yes |Cache policy |`writethrough`,`writeback`,`writearound` | | | `carina.storage.io/disk-group-name` |No |disk group name |User - configured disk group name | | | `carina.storage.io/exclusively-raw-disk` |No |When using a raw disk whether to use exclusive disk |`true`,`false` |`false` | | `reclaimPolicy` |No |GC policy |`Delete`,`Retain` |`Delete` | | `allowVolumeExpansion` |Yes |Whether to allow expansion |`true`,`false` |`true` | | `volumeBindingMode` |Yes |Scheduling policy : waitforfirstconsumer means binding schedule after creating the container Once you create a PVC pv,immediate also completes the preparation of volumes bound and dynamic.| `WaitForFirstConsumer`,`Immediate` | | | `allowedTopologies` |No |Only volumebindingmode : immediate that contains the type of support based on matchlabelexpressions select PV nodes | | | ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-carina-sc provisioner: carina.storage.io parameters: csi.storage.k8s.io/fstype: xfs carina.storage.io/backend-disk-group-name: hdd carina.storage.io/cache-disk-group-name: carina-vg-ssd carina.storage.io/cache-disk-ratio: \"50\" carina.storage.io/cache-policy: writethrough carina.storage.io/disk-group-name: \"carina-vg-hdd\" carina.storage.io/exclusively-raw-disk: false reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate mountOptions: allowedTopologies: matchLabelExpressions: key: beta.kubernetes.io/os values: arm64 amd64 ``` | Parameter |Required|Description | Option Values |Default | | -- | |--| - |-| | `carina.storage.io/blkio.throttle.readbpsdevice` |Yes |Set disk read BPS value | | | | `carina.storage.io/blkio.throttle.writebpsdevice` |Yes |Set the disk is write BPS value | | | | 
`carina.storage.io/blkio.throttle.readiopsdevice` |Yes |Set disk read IOPS value | | | | `carina.storage.io/blkio.throttle.writeiopsdevice` |Yes |Set the disk is write IOPS value | | | | `carina.storage.io/allow-pod-migration-if-node-notready` |No |Whether to migrate when node is down |`true`,`false`|`false` | ```yaml metadata: annotations: carina.storage.io/blkio.throttle.readbpsdevice: \"10485760\" carina.storage.io/blkio.throttle.readiopsdevice: \"10000\" carina.storage.io/blkio.throttle.writebpsdevice: \"10485760\" carina.storage.io/blkio.throttle.writeiopsdevice: \"100000\" carina.storage.io/allow-pod-migration-if-node-notready: true ```" } ]
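For reference, a PersistentVolumeClaim that consumes the StorageClass defined above could look like the following minimal sketch; the claim name, namespace, and requested size are placeholders:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-carina-pvc          # placeholder name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-carina-sc   # StorageClass from the example above
```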
{ "category": "Runtime", "file_name": "configrations.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark backup delete\" layout: docs Delete a backup Delete a backup ``` ark backup delete NAME [flags] ``` ``` --confirm Confirm deletion -h, --help help for delete ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups" } ]
{ "category": "Runtime", "file_name": "ark_backup_delete.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Command Reference sidebar_position: 1 slug: /command_reference description: Descriptions, usage and examples of all commands and options included in JuiceFS Client. import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; Running `juicefs` by itself and it will print all available commands. In addition, you can add `-h/--help` flag after each command to get more information, e.g., `juicefs format -h`. ``` NAME: juicefs - A POSIX file system built on Redis and object storage. USAGE: juicefs [global options] command [command options] [arguments...] VERSION: 1.1.0 COMMANDS: ADMIN: format Format a volume config Change configuration of a volume quota Manage directory quotas destroy Destroy an existing volume gc Garbage collector of objects in data storage fsck Check consistency of a volume restore restore files from trash dump Dump metadata into a JSON file load Load metadata from a previously dumped JSON file version Show version INSPECTOR: status Show status of a volume stats Show real time performance statistics of JuiceFS profile Show profiling of operations completed in JuiceFS info Show internal information of a path or inode debug Collect and display system static and runtime information summary Show tree summary of a directory SERVICE: mount Mount a volume umount Unmount a volume gateway Start an S3-compatible gateway webdav Start a WebDAV server TOOL: bench Run benchmarks on a path objbench Run benchmarks on an object storage warmup Build cache for target directories/files rmr Remove directories recursively sync Sync between two storages clone clone a file or directory without copying the underlying data compact Trigger compaction of chunks GLOBAL OPTIONS: --verbose, --debug, -v enable debug log (default: false) --quiet, -q show warning and errors only (default: false) --trace enable trace log (default: false) --log-id value append the given log id in log, use \"random\" to use random uuid --no-agent disable pprof (:6060) agent (default: false) --pyroscope value pyroscope address --no-color disable colors (default: false) --help, -h show help (default: false) --version, -V print version only (default: false) COPYRIGHT: Apache License 2.0 ``` To enable commands completion, simply source the script provided within directory. For example: <Tabs groupId=\"juicefs-cli-autocomplete\"> <TabItem value=\"bash\" label=\"Bash\"> ```shell source hack/autocomplete/bash_autocomplete ``` </TabItem> <TabItem value=\"zsh\" label=\"Zsh\"> ```shell source hack/autocomplete/zsh_autocomplete ``` </TabItem> </Tabs> Please note the auto-completion is only enabled for the current session. If you want to apply it for all new sessions, add the `source` command to `.bashrc` or `.zshrc`: <Tabs groupId=\"juicefs-cli-autocomplete\"> <TabItem value=\"bash\" label=\"Bash\"> ```shell echo \"source path/to/bash_autocomplete\" >> ~/.bashrc ``` </TabItem> <TabItem value=\"zsh\" label=\"Zsh\"> ```shell echo \"source path/to/zsh_autocomplete\" >> ~/.zshrc ``` </TabItem> </Tabs> Alternatively, if you are using bash on a Linux system, you may just copy the script to `/etc/bash_completion.d` and rename it to `juicefs`: ```shell cp hack/autocomplete/bashautocomplete /etc/bashcompletion.d/juicefs source /etc/bash_completion.d/juicefs ``` Create and format a file system, if a volume already exists with the same `META-URL`, this command will skip the format step. To adjust configurations for existing volumes, use . 
```shell juicefs format [command options] META-URL NAME juicefs format sqlite3://myjfs.db myjfs juicefs format redis://localhost myjfs --storage=s3 --bucket=https://mybucket.s3.us-east-2.amazonaws.com juicefs format mysql://jfs:mypassword@(127.0.0.1:3306)/juicefs myjfs META_PASSWORD=mypassword juicefs format mysql://jfs:@(127.0.0.1:3306)/juicefs myjfs juicefs format sqlite3://myjfs.db myjfs --inodes=1000000 --capacity=102400 juicefs format sqlite3://myjfs.db myjfs --trash-days=0 ``` |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see for details.| |`NAME`|Name of the file system| |`--force`|overwrite existing format (default: false)| |`--no-update`|don't update existing volume (default: false)| |Items|Description| |-|-| |`--storage=file`|Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `file`, refer to for all supported object storage types)| |`--bucket=/var/jfs`|A bucket URL to store data (default:" }, { "data": "or `/var/jfs`)| |`--access-key=value`|Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see for more.| |`--secret-key value`|Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see for more.| |`--session-token=value`|session token for object storage, see for more.| |`--storage-class value` <VersionAdd>1.1</VersionAdd> |the default storage class| |Items|Description| |-|-| |`--block-size=4096`|size of block in KiB (default: 4096). 4M is usually a better default value because many object storage services use 4M as their internal block size, thus using the same block size in JuiceFS usually yields better performance.| |`--compress=none`|compression algorithm, choose from `lz4`, `zstd`, `none` (default). Enabling compression will inevitably affect performance. Among the two supported algorithms, `lz4` offers a better performance, while `zstd` comes with a higher compression ratio, Google for their detailed comparison.| |`--encrypt-rsa-key=value`|A path to RSA private key (PEM)| |`--encrypt-algo=aes256gcm-rsa`|encrypt algorithm (aes256gcm-rsa, chacha20-rsa) (default: \"aes256gcm-rsa\")| |`--hash-prefix`|For most object storages, if object storage blocks are sequentially named, they will also be closely stored in the underlying physical regions. When loaded with intensive concurrent consecutive reads, this can cause hotspots and hinder object storage performance.<br/><br/>Enabling `--hash-prefix` will add a hash prefix to name of the blocks (slice ID mod 256, see ), this distributes data blocks evenly across actual object storage regions, offering more consistent performance. Obviously, this option dictates object naming pattern and should be specified when a file system is created, and cannot be changed on-the-fly.<br/><br/>Currently, had already made improvements and no longer require application side optimization, but for other types of object storages, this option still recommended for large scale scenarios.| |`--shards=0`|If your object storage limit speed in a bucket level (or you're using a self-hosted object storage with limited performance), you can store the blocks into N buckets by hash of key (default: 0), when N is greater than 0, `bucket` should to be in the form of `%d`, e.g. `--bucket \"juicefs-%d\"`. `--shards` cannot be changed afterwards and must be planned carefully ahead.| |Items|Description| |-|-| |`--capacity=0`|storage space limit in GiB, default to 0 which means no limit. 
Capacity will include trash files, if is enabled.| |`--inodes=0`|Limit the number of inodes, default to 0 which means no limit.| |`--trash-days=1`|By default, delete files are put into , this option controls the number of days before trash files are expired, default to 1, set to 0 to disable trash.| |`--enable-acl=true` <VersionAdd>1.2</VersionAdd>|enable it is irreversible. | Change config of a volume. Note that after updating some settings, the client may not take effect immediately, and it needs to wait for a certain period of time. The specific waiting time can be controlled by the option. ```shell juicefs config [command options] META-URL juicefs config redis://localhost juicefs config redis://localhost --inodes 10000000 --capacity 1048576 juicefs config redis://localhost --trash-days 7 juicefs config redis://localhost --min-client-version 1.0.0 --max-client-version 1.1.0 ``` |Items|Description| |-|-| |`--yes, -y`|automatically answer 'yes' to all prompts and run non-interactively (default: false)| |`--force`|skip sanity check and force update the configurations (default: false)| |Items|Description| |-|-| |`--storage=file` <VersionAdd>1.1</VersionAdd> |Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `\"file\"`, refer to for all supported object storage types).| |`--bucket=/var/jfs`|A bucket URL to store data (default: `$HOME/.juicefs/local` or `/var/jfs`)| |`--access-key=value`|Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see for more.| |`--secret-key value`|Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see for more.| |`--session-token=value`|session token for object storage, see for more.| |`--storage-class value`" }, { "data": "|the default storage class| |`--upload-limit=0`|bandwidth limit for upload in Mbps (default: 0)| |`--download-limit=0`|bandwidth limit for download in Mbps (default: 0)| |Items|Description| |-|-| |`--capacity value`|limit for space in GiB| |`--inodes value`|limit for number of inodes| |`--trash-days value`|number of days after which removed files will be permanently deleted| |`--encrypt-secret`|encrypt the secret key if it was previously stored in plain format (default: false)| |`--min-client-version value` <VersionAdd>1.1</VersionAdd> |minimum client version allowed to connect| |`--max-client-version value` <VersionAdd>1.1</VersionAdd> |maximum client version allowed to connect| |`--dir-stats` <VersionAdd>1.1</VersionAdd> |enable dir stats, which is necessary for fast summary and dir quota (default: false)| |`--enable-acl` <VersionAdd>1.2</VersionAdd>|enable POSIX ACL(irreversible), min-client-version will be set to v1.2| Manage directory quotas ```shell juicefs quota command [command options] META-URL juicefs quota set redis://localhost --path /dir1 --capacity 1 --inodes 100 juicefs quota get redis://localhost --path /dir1 juicefs quota list redis://localhost juicefs quota delete redis://localhost --path /dir1 juicefs quota check redis://localhost ``` |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see \"\" for details.| |`--path value`|full path of the directory within the volume| |`--capacity value`|hard quota of the directory limiting its usage of space in GiB (default: 0)| |`--inodes value`|hard quota of the directory limiting its number of inodes (default: 0)| |`--repair`|repair inconsistent quota (default: false)| |`--strict`|calculate total usage of directory in strict mode (NOTE: may be slow for huge directory) (default: 
false)| Destroy an existing volume, will delete relevant data in metadata engine and object storage. See . ```shell juicefs destroy [command options] META-URL UUID juicefs destroy redis://localhost e94d66a8-2339-4abd-b8d8-6812df737892 ``` |Items|Description| |-|-| |`--yes, -y` <VersionAdd>1.1</VersionAdd> |automatically answer 'yes' to all prompts and run non-interactively (default: false)| |`--force`|skip sanity check and force destroy the volume (default: false)| If for some reason, a object storage block escape JuiceFS management completely, i.e. the metadata is gone, but the block still persists in the object storage, and cannot be released, this is called an \"object leak\". If this happens without any special file system manipulation, it could well indicate a bug within JuiceFS, file a to let us know. Meanwhile, you can run this command to deal with leaked objects. It also deletes stale slices produced by file overwrites. See . ```shell juicefs gc [command options] META-URL juicefs gc redis://localhost juicefs gc redis://localhost --compact juicefs gc redis://localhost --delete ``` |Items|Description| |-|-| |`--delete`|delete leaked objects (default: false)| |`--compact`|compact all chunks with more than 1 slices (default: false).| |`--threads=10`|number of threads to delete leaked objects (default: 10)| Check consistency of file system. ```shell juicefs fsck [command options] META-URL juicefs fsck redis://localhost ``` |Items|Description| |-|-| |`--path value` <VersionAdd>1.1</VersionAdd> |absolute path within JuiceFS to check| |`--repair` <VersionAdd>1.1</VersionAdd> |repair specified path if it's broken (default: false)| |`--recursive, -r` <VersionAdd>1.1</VersionAdd> |recursively check or repair (default: false)| |`--sync-dir-stat` <VersionAdd>1.1</VersionAdd> |sync stat of all directories, even if they are existed and not broken (NOTE: it may take a long time for huge trees) (default: false)| Rebuild the tree structure for trash files, and put them back to original directories. ```shell juicefs restore [command options] META HOUR ... juicefs restore redis://localhost/1 2023-05-10-01 ``` |Items|Description| |-|-| |`--put-back value`|move the recovered files into original directory (default: false)| |`--threads value`|number of threads (default: 10)| Dump metadata into a JSON file. Refer to for more information. ```shell juicefs dump [command options] META-URL [FILE] juicefs dump redis://localhost meta-dump.json juicefs dump redis://localhost sub-meta-dump.json --subdir /dir/in/jfs ``` |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see for details.| |`FILE`|Export file path, if not specified, it will be exported to standard output. If the filename ends with `.gz`, it will be automatically compressed.| |`--subdir=path`|Only export metadata for the specified subdirectory.| |`--keep-secret-key` <VersionAdd>1.1</VersionAdd> |Export object storage authentication information, the default is `false`. Since it is exported in plain text, pay attention to data security when using" }, { "data": "If the export file does not contain object storage authentication information, you need to use to reconfigure object storage authentication information after the subsequent import is completed.| |`--fast` <VersionAdd>1.2</VersionAdd>|Use more memory to speedup dump.| |`--skip-trash` <VersionAdd>1.2</VersionAdd>|Skip files and directories in trash.| Load metadata from a previously dumped JSON file. Read to learn more. 
```shell juicefs load [command options] META-URL [FILE] juicefs load redis://127.0.0.1:6379/1 meta-dump.json ``` |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see for details.| |`FILE`|Import file path, if not specified, it will be imported from standard input. If the filename ends with `.gz`, it will be automatically decompressed.| |`--encrypt-rsa-key=path` <VersionAdd>1.0.4</VersionAdd> |The path to the RSA private key file used for encryption.| |`--encrypt-alg=aes256gcm-rsa` <VersionAdd>1.0.4</VersionAdd> |Encryption algorithm, the default is `aes256gcm-rsa`.| Show status of JuiceFS. ```shell juicefs status [command options] META-URL juicefs status redis://localhost ``` |Items|Description| |-|-| |`--session=0, -s 0`|show detailed information (sustained inodes, locks) of the specified session (SID) (default: 0)| |`--more, -m` <VersionAdd>1.1</VersionAdd> |show more statistic information, may take a long time (default: false)| Show runtime statistics, read for more. ```shell juicefs stats [command options] MOUNTPOINT juicefs stats /mnt/jfs juicefs stats /mnt/jfs -l 1 ``` |Items|Description| |-|-| |`--schema=ufmco`|schema string that controls the output sections (`u`: usage, `f`: FUSE, `m`: metadata, `c`: block cache, `o`: object storage, `g`: Go) (default: `ufmco`)| |`--interval=1`|interval in seconds between each update (default: 1)| |`--verbosity=0`|verbosity level, 0 or 1 is enough for most cases (default: 0)| Show profiling of operations completed in JuiceFS, based on . read for more. ```shell juicefs profile [command options] MOUNTPOINT/LOGFILE juicefs profile /mnt/jfs cat /mnt/jfs/.accesslog > /tmp/jfs.alog juicefs profile /tmp/jfs.alog juicefs profile /tmp/jfs.alog --interval 0 ``` |Items|Description| |-|-| |`--uid=value, -u value`|only track specified UIDs (separated by comma)| |`--gid=value, -g value`|only track specified GIDs (separated by comma)| |`--pid=value, -p value`|only track specified PIDs (separated by comma)| |`--interval=2`|flush interval in seconds; set it to 0 when replaying a log file to get an immediate result (default: 2)| Show internal information for given paths or inodes. 
```shell juicefs info [command options] PATH or INODE juicefs info /mnt/jfs/foo cd /mnt/jfs juicefs info -i 100 ``` |Items|Description| |-|-| |`--inode, -i`|use inode instead of path (current dir should be inside JuiceFS) (default: false)| |`--recursive, -r`|get summary of directories recursively (NOTE: it may take a long time for huge trees) (default: false)| |`--strict` <VersionAdd>1.1</VersionAdd> |get accurate summary of directories (NOTE: it may take a long time for huge trees) (default: false)| |`--raw`|show internal raw information (default: false)| It collects and displays information from multiple dimensions such as the operating environment and system logs to help better locate errors ```shell juicefs debug [command options] MOUNTPOINT juicefs debug /mnt/jfs juicefs debug --out-dir=/var/log /mnt/jfs juicefs debug --out-dir=/var/log --limit=1000 /mnt/jfs ``` |Items|Description| |-|-| |`--out-dir=./debug/`|The output directory of the results, automatically created if the directory does not exist (default: `./debug/`)| |`--stats-sec=5`|The number of seconds to sample .stats file (default: 5)| |`--limit=value`|The number of log entries collected, from newest to oldest, if not specified, all entries will be collected| |`--trace-sec=5`|The number of seconds to sample trace metrics (default: 5)| |`--profile-sec=30`|The number of seconds to sample profile metrics (default: 30)| It is used to show tree summary of target" }, { "data": "```shell juicefs summary [command options] PATH juicefs summary /mnt/jfs/foo juicefs summary --depth 5 /mnt/jfs/foo juicefs summary --entries 20 /mnt/jfs/foo juicefs summary --strict /mnt/jfs/foo ``` |Items|Description| |-|-| |`--depth value, -d value`|depth of tree to show (zero means only show root) (default: 2)| |`--entries value, -e value`|show top N entries (sort by size) (default: 10)| |`--strict`|show accurate summary, including directories and files (may be slow) (default: false)| |`--csv`|print summary in csv format (default: false)| Mount a volume. The volume must be formatted in advance. JuiceFS can be mounted by root or normal user, but due to their privilege differences, cache directory and log path will vary, read below descriptions for more. ```shell juicefs mount [command options] META-URL MOUNTPOINT juicefs mount redis://localhost /mnt/jfs juicefs mount redis://:mypassword@localhost /mnt/jfs -d META_PASSWORD=mypassword juicefs mount redis://localhost /mnt/jfs -d juicefs mount redis://localhost /mnt/jfs --subdir /dir/in/jfs juicefs mount redis://localhost /mnt/jfs -d --writeback juicefs mount redis://localhost /mnt/jfs -d --read-only juicefs mount redis://localhost /mnt/jfs --backup-meta 0 ``` |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see for details.| |`MOUNTPOINT`|file system mount point, e.g. 
`/mnt/jfs`, `Z:`.| |`-d, --background`|run in background (default: false)| |`--no-syslog`|disable syslog (default: false)| |`--log=path`|path of log file when running in background (default: `$HOME/.juicefs/juicefs.log` or `/var/log/juicefs.log`)| |`--update-fstab` <VersionAdd>1.1</VersionAdd> |add / update entry in `/etc/fstab`, will create a symlink from `/sbin/mount.juicefs` to JuiceFS executable if not existing (default: false)| |Items|Description| |-|-| |`--enable-xattr`|enable extended attributes (xattr) (default: false)| |`--enable-ioctl` <VersionAdd>1.1</VersionAdd> |enable ioctl (support GETFLAGS/SETFLAGS only) (default: false)| |`--root-squash value` <VersionAdd>1.1</VersionAdd> |mapping local root user (UID = 0) to another one specified as UID:GID| |`--prefix-internal` <VersionAdd>1.1</VersionAdd> |add '.jfs' prefix to all internal files (default: false)| |`-o value`|other FUSE options, see | |Items|Description| |-|-| |`--subdir=value`|mount a sub-directory as root (default: \"\")| |`--backup-meta=3600`|interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: \"3600\")| |`--backup-skip-trash` <VersionAdd>1.2</VersionAdd>|skip files and directories in trash when backup metadata.| |`--heartbeat=12`|interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: \"12\")| |`--read-only`|allow lookup/read operations only (default: false)| |`--no-bgjob`|Disable background jobs, default to false, which means clients by default carry out background jobs, including:<br/><ul><li>Clean up expired files in Trash (look for `cleanupDeletedFiles`, `cleanupTrash` in )</li><li>Delete slices that's not referenced (look for `cleanupSlices` in )</li><li>Clean up stale client sessions (look for `CleanStaleSessions` in )</li></ul>Note that compaction isn't affected by this option, it happens automatically with file reads and writes, client will check if compaction is in need, and run in background (take Redis for example, look for `compactChunk` in ).| |`--atime-mode=noatime` <VersionAdd>1.1</VersionAdd> |Control atime (last time the file was accessed) behavior, support the following modes:<br/><ul><li>`noatime` (default): set when the file is created or when `SetAttr` is explicitly called. Accessing and modifying the file will not affect atime, tracking atime comes at a performance cost, so this is the default behavior</li><li>`relatime`: update inode access times relative to mtime (last time when the file data was modified) or ctime (last time when file metadata was changed). Only update atime if atime was earlier than the current mtime or ctime, or the file's atime is more than 1 day old</li><li>`strictatime`: always update atime on access</li></ul>| |`--skip-dir-nlink value` <VersionAdd>1.1</VersionAdd> |number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20)| For metadata cache description and usage, refer to and . 
|Items|Description| |-|-| |`--attr-cache=1`|attributes cache timeout in seconds (default: 1), read | |`--entry-cache=1`|file entry cache timeout in seconds (default: 1), read | |`--dir-entry-cache=1`|dir entry cache timeout in seconds (default: 1), read | |`--open-cache=0`|open file cache timeout in seconds (0 means disable this feature) (default: 0)| |`--open-cache-limit value` <VersionAdd>1.1</VersionAdd> |max number of open files to cache (soft limit, 0 means unlimited) (default: 10000)| |Items|Description| |-|-| |`--storage=file`|Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `\"file\"`, refer to for all supported object storage types).| |`--storage-class value`" }, { "data": "|the storage class for data written by current client| |`--bucket=value`|customized endpoint to access object storage| |`--get-timeout=60`|the max number of seconds to download an object (default: 60)| |`--put-timeout=60`|the max number of seconds to upload an object (default: 60)| |`--io-retries=10`|number of retries after network failure (default: 10)| |`--max-uploads=20`|Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes, with such write pattern, increasing upload concurrency usually demands higher `--buffer-size`, learn more at . But for random writes around 100K, 20 might not be enough and can cause congestion at high load, consider using a larger upload concurrency, or try to consolidate small writes in the application end. | |`--max-deletes=10`|number of threads to delete objects (default: 10)| |`--upload-limit=0`|bandwidth limit for upload in Mbps (default: 0)| |`--download-limit=0`|bandwidth limit for download in Mbps (default: 0)| |Items|Description| |-|-| |`--buffer-size=300`|total read/write buffering in MiB (default: 300), see | |`--prefetch=1`|prefetch N blocks in parallel (default: 1), see | |`--writeback`|upload objects in background (default: false), see | |`--upload-delay=0`|When `--writeback` is enabled, you can use this option to add a delay to object storage upload, default to 0, meaning that upload will begin immediately after write. Different units are supported, including `s` (second), `m` (minute), `h` (hour). If files are deleted during this delay, upload will be skipped entirely, when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to .| |`--cache-dir=value`|directory paths of local cache, use `:` (Linux, macOS) or `;` (Windows) to separate multiple paths (default: `$HOME/.juicefs/cache` or `/var/jfsCache`), see | |`--cache-mode value` <VersionAdd>1.1</VersionAdd> |file permissions for cached blocks (default: \"0600\")| |`--cache-size=102400`|size of cached object for read in MiB (default: 102400), see | |`--free-space-ratio=0.1`|min free space ratio (default: 0.1), if is enabled, this option also controls write cache size, see | |`--cache-partial-only`|cache random/small read only (default: false), see | |`--verify-cache-checksum value` <VersionAdd>1.1</VersionAdd> |Checksum level for cache data. After enabled, checksum will be calculated on divided parts of the cache blocks and stored on disks, which are used for verification during reads. 
The following strategies are supported:<br/><ul><li>`none`: Disable checksum verification, if local cache data is tampered, bad data will be read;</li><li>`full` (default): Perform verification when reading the full block, use this for sequential read scenarios;</li><li>`shrink`: Perform verification on parts that's fully included within the read range, use this for random read scenarios;</li><li>`extend`: Perform verification on parts that fully include the read range, this causes read amplifications and is only used for random read scenarios demanding absolute data integrity.</li></ul>| |`--cache-eviction value` <VersionAdd>1.1</VersionAdd> |cache eviction policy (none or 2-random) (default: \"2-random\")| |`--cache-scan-interval value` <VersionAdd>1.1</VersionAdd> |interval (in seconds) to scan cache-dir to rebuild in-memory index (default: \"3600\")| ||Items|Description| |-|-| |`--metrics=127.0.0.1:9567`|address to export metrics (default: `127.0.0.1:9567`)| |`--custom-labels`|custom labels for metrics, format: `key1:value1;key2:value2` (default: \"\")| |`--consul=127.0.0.1:8500`|Consul address to register (default: `127.0.0.1:8500`)| |`--no-usage-report`|do not send usage report (default: false)| Unmount a volume. ```shell juicefs umount [command options] MOUNTPOINT juicefs umount /mnt/jfs ``` |Items|Description| |-|-| |`-f, --force`|force unmount a busy mount point (default: false)| |`--flush` <VersionAdd>1.1</VersionAdd> |wait for all staging chunks to be flushed (default: false)| Start an S3-compatible gateway, read for more. ```shell juicefs gateway [command options] META-URL ADDRESS export MINIOROOTUSER=admin export MINIOROOTPASSWORD=12345678 juicefs gateway redis://localhost localhost:9000 ``` Apart from options listed below, this command shares options with `juicefs mount`, be sure to refer to as well. |Items|Description| |-|-| | `--log value`<VersionAdd>1.2</VersionAdd> | path for gateway log | |`META-URL`|Database URL for metadata storage, see for details.| |`ADDRESS`|S3 gateway address and listening port, for example: `localhost:9000`| |`--access-log=path`|path for JuiceFS access log.| | `--background," }, { "data": "| run in background (default: false) | |`--no-banner`|disable MinIO startup information (default: false)| |`--multi-buckets`|use top level of directories as buckets (default: false)| |`--keep-etag`|save the ETag for uploaded objects (default: false)| |`--umask=022`|umask for new file and directory in octal (default: 022)| | `--domain value`<VersionAdd>1.2</VersionAdd> |domain for virtual-host-style requests| Start a WebDAV server, refer to for more. ```shell juicefs webdav [command options] META-URL ADDRESS juicefs webdav redis://localhost localhost:9007 ``` Apart from options listed below, this command shares options with `juicefs mount`, be sure to refer to as well. |Items|Description| |-|-| |`META-URL`|Database URL for metadata storage, see for details.| |`ADDRESS`|WebDAV address and listening port, for example: `localhost:9007`.| |`--cert-file` <VersionAdd>1.1</VersionAdd> |certificate file for HTTPS| |`--key-file` <VersionAdd>1.1</VersionAdd> |key file for HTTPS| |`--gzip`|compress served files via gzip (default: false)| |`--disallowList`|disallow list a directory (default: false)| | `--log value`<VersionAdd>1.2</VersionAdd> | path for WebDAV log| |`--access-log=path`|path for JuiceFS access log.| | `--background, -d`<VersionAdd>1.2</VersionAdd> | run in background (default: false)| Run benchmark, including read/write/stat for big and small files. 
For a detailed introduction to the `bench` subcommand, refer to the . ```shell juicefs bench [command options] PATH juicefs bench /mnt/jfs -p 4 juicefs bench /mnt/jfs --big-file-size 0 ``` |Items|Description| |-|-| |`--block-size=1`|block size in MiB (default: 1)| |`--big-file-size=1024`|size of big file in MiB (default: 1024)| |`--small-file-size=0.1`|size of small file in MiB (default: 0.1)| |`--small-file-count=100`|number of small files (default: 100)| |`--threads=1, -p 1`|number of concurrent threads (default: 1)| Run basic benchmarks on the target object storage to test if it works as expected. Read for more. ```shell juicefs objbench [command options] BUCKET ACCESSKEY=myAccessKey SECRETKEY=mySecretKey juicefs objbench --storage=s3 https://mybucket.s3.us-east-2.amazonaws.com -p 6 ``` |Items|Description| |-|-| |`--storage=file`|Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `file`, refer to for all supported object storage types)| |`--access-key=value`|Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see for more.| |`--secret-key value`|Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see for more.| |`--block-size=4096`|size of each IO block in KiB (default: 4096)| |`--big-object-size=1024`|size of each big object in MiB (default: 1024)| |`--small-object-size=128`|size of each small object in KiB (default: 128)| |`--small-objects=100`|number of small objects (default: 100)| |`--skip-functional-tests`|skip functional tests (default: false)| |`--threads=4, -p 4`|number of concurrent threads (default: 4)| Download data to local cache in advance, to achieve better performance on application's first read. You can specify a mount point path to recursively warm-up all files under this path. You can also specify a file through the `--file` option to only warm-up the files contained in it. If the files needing warming up resides in many different directories, you should specify their names in a text file, and pass to the `warmup` command using the `--file` option, allowing `juicefs warmup` to download concurrently, which is significantly faster than calling `juicefs warmup` multiple times, each with a single file. ```shell juicefs warmup [command options] [PATH ...] juicefs warmup /mnt/jfs/datadir echo '/jfs/f1 /jfs/f2 /jfs/f3' > /tmp/filelist.txt juicefs warmup -f /tmp/filelist.txt ``` |Items|Description| |-|-| |`--file=path, -f path`|file containing a list of paths (each line is a file path)| |`--threads=50, -p 50`|number of concurrent workers, default to 50. Reduce this number in low bandwidth environment to avoid download timeouts| |`--background, -b`|run in background (default: false)| Remove all the files and subdirectories, similar to `rm -rf`, except this command deals with metadata directly (bypassing kernel), thus is much faster. If trash is enabled, deleted files are moved into trash. Read more at . ```shell juicefs rmr PATH ... juicefs rmr /mnt/jfs/foo ``` Sync between two storage, read for more. ```shell juicefs sync [command options] SRC DST juicefs sync oss://mybucket.oss-cn-shanghai.aliyuncs.com s3://mybucket.s3.us-east-2.amazonaws.com juicefs sync s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/ juicefs sync --exclude='a?/b*'" }, { "data": "jfs://META-URL/ juicefs sync --include='a1/b1' --exclude='a[1-9]/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/ juicefs sync --include='a1/b1' --exclude='a*' --include='b2' --exclude='b?' 
s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/ ``` As shown in the examples, the format of both source (`SRC`) and destination (`DST`) paths is: ``` @]BUCKET ``` In which: `NAME`: JuiceFS supported data storage types like `s3`, `oss`, refer to for a full list. `ACCESSKEY` and `SECRETKEY`: The credential required to access the data storage, refer to . `TOKEN` token used to access the object storage, as some object storage supports the use of temporary token to obtain permission for a limited time `BUCKET. `[/PREFIX]`: Optional, a prefix for the source and destination paths that can be used to limit synchronization of data only in certain paths. |Items|Description| |-|-| |`--start=KEY, -s KEY, --end=KEY, -e KEY`|Provide object storage key range for syncing.| |`--exclude=PATTERN`|Exclude keys matching PATTERN.| |`--include=PATTERN`|Include keys matching PATTERN, need to be used with `--exclude`.| |`--limit=-1`|Limit the number of objects that will be processed, default to -1 which means unlimited.| |`--update, -u`|Update existing files if the source files' `mtime` is newer, default to false.| |`--force-update, -f`|Always update existing file, default to false.| |`--existing, --ignore-non-existing` <VersionAdd>1.1</VersionAdd> |Skip creating new files on destination, default to false.| |`--ignore-existing` <VersionAdd>1.1</VersionAdd> |Skip updating files that already exist on destination, default to false.| |Items|Description| |-|-| |`--dirs`|Sync empty directories as well.| |`--perms`|Preserve permissions, default to false.| |`--links, -l`|Copy symlinks as symlinks default to false.| |`--delete-src, --deleteSrc`|Delete objects that already exist in destination. Different from rsync, files won't be deleted at the first run, instead they will be deleted at the next run, after files are successfully copied to the destination.| |`--delete-dst, --deleteDst`|Delete extraneous objects from destination.| |`--check-all`|Verify the integrity of all files in source and destination, default to false. Comparison is done on byte streams, which comes at a performance cost.| |`--check-new`|Verify the integrity of newly copied files, default to false. Comparison is done on byte streams, which comes at a performance cost.| |`--dry`|Don't actually copy any file.| |Items|Description| |-|-| |`--threads=10, -p 10`|Number of concurrent threads, default to 10.| |`--list-threads=1` <VersionAdd>1.1</VersionAdd> |Number of `list` threads, default to 1. Read to learn its usage.| |`--list-depth=1` <VersionAdd>1.1</VersionAdd> |Depth of concurrent `list` operation, default to 1. Read to learn its usage.| |`--no-https`|Do not use HTTPS, default to false.| |`--storage-class value` <VersionAdd>1.1</VersionAdd> |the storage class for destination| |`--bwlimit=0`|Limit bandwidth in Mbps default to 0 which means unlimited.| |Items| Description | |-|--| |`--manager-addr=ADDR`| The listening address of the Manager node in distributed synchronization mode in the format: `<IP>:[port]`. If not specified, it listens on a random port. If this option is omitted, it listens on a random local IPv4 address and a random port. | |`--worker=ADDR,ADDR`| Worker node addresses used in distributed syncing, comma separated. | Quickly clone directories or files within a single JuiceFS mount point. The cloning process involves copying only the metadata without copying the data blocks, making it extremely fast. Read for more. 
```shell juicefs clone [command options] SRC DST juicefs clone /mnt/jfs/file1 /mnt/jfs/file2 juicefs clone /mnt/jfs/dir1 /mnt/jfs/dir2 juicefs clone -p /mnt/jfs/file1 /mnt/jfs/file2 ``` |Items|Description| |-|-| |`--preserve, -p`|By default, the executor's UID and GID are used for the clone result, and the mode is recalculated based on the user's umask. Use this option to preserve the UID, GID, and mode of the file.| Performs fragmentation optimization, merging, or cleaning of non-contiguous slices in the given directory to improve read performance. For detailed information, refer to . ```shell juicefs compact [command options] PATH juicefs compact /mnt/jfs ``` | Item | Description | |-|-| | `--threads, -p` | Number of threads to concurrently execute tasks (default: 10) |" } ]
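Putting several of the commands above together, a minimal end-to-end session on a single machine might look like the sketch below; the metadata URL, file system name, and mount point are placeholders, and all options keep their defaults:
```shell
# Create a file system backed by a local SQLite metadata engine and local object storage.
juicefs format sqlite3://myjfs.db myjfs

# Mount it in the background and check the volume status.
juicefs mount sqlite3://myjfs.db /mnt/jfs -d
juicefs status sqlite3://myjfs.db

# Run a quick benchmark against the mount point, then unmount.
juicefs bench /mnt/jfs -p 4
juicefs umount /mnt/jfs
```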
{ "category": "Runtime", "file_name": "command_reference.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "(image-format)= Images contain a root file system and a metadata file that describes the image. They can also contain templates for creating files inside an instance that uses the image. Images can be packaged as either a unified image (single file) or a split image (two files). Images for containers have the following directory structure: ``` metadata.yaml rootfs/ templates/ ``` Images for VMs have the following directory structure: ``` metadata.yaml rootfs.img templates/ ``` For both instance types, the `templates/` directory is optional. The `metadata.yaml` file contains information that is relevant to running the image in Incus. It includes the following information: ```yaml architecture: x86_64 creation_date: 1424284563 properties: description: Ubuntu 22.04 LTS Intel 64bit os: Ubuntu release: jammy 22.04 templates: ... ``` The `architecture` and `creation_date` fields are mandatory. The `properties` field contains a set of default properties for the image. The `os`, `release`, `name` and `description` fields are commonly used, but are not mandatory. The `templates` field is optional. See {ref}`imageformattemplates` for information on how to configure templates. For containers, the `rootfs/` directory contains a full file system tree of the root directory (`/`) in the container. Virtual machines use a `rootfs.img` `qcow2` file instead of a `rootfs/` directory. This file becomes the main disk device. (imageformattemplates)= You can use templates to dynamically create files inside an instance. To do so, configure template rules in the `metadata.yaml` file and place the template files in a `templates/` directory. As a general rule, you should never template a file that is owned by a package or is otherwise expected to be overwritten by normal operation of an instance. For each file that should be generated, create a rule in the `metadata.yaml` file. For example: ```yaml templates: /etc/hosts: when: create rename template: hosts.tpl properties: foo: bar /etc/hostname: when: start template: hostname.tpl /etc/network/interfaces: when: create template: interfaces.tpl create_only: true /home/foo/setup.sh: when: create template: setup.sh.tpl create_only: true uid: 1000 gid: 1000 mode: 755 ``` The `when` key can be one or more of: `create` - run at the time a new instance is created from the image `copy` - run when an instance is created from an existing one `start` - run every time the instance is started The `template` key points to the template file in the `templates/` directory. You can pass user-defined template properties to the template file through the `properties` key. Set the `create_only` key if you want Incus to create the file if it doesn't exist, but not overwrite an existing" }, { "data": "The `uid`, `gid` and `mode` keys can be used to control the file ownership and permissions. Template files use the format. 
They always receive the following context: | Variable | Type | Description | |--|--|-| | `trigger` | `string` | Name of the event that triggered the template | | `path` | `string` | Path of the file that uses the template | | `instance` | `map[string]string` | Key/value map of instance properties (name, architecture, privileged and ephemeral) | | `config` | `map[string]string` | Key/value map of the instance's configuration | | `devices` | `map[string]map[string]string` | Key/value map of the devices assigned to the instance | | `properties` | `map[string]string` | Key/value map of the template properties specified in `metadata.yaml` | For convenience, the following functions are exported to the Pongo2 templates: `config_get(\"user.foo\", \"bar\")` - Returns the value of `user.foo`, or `\"bar\"` if not set. Incus supports two Incus-specific image formats: a unified tarball and split tarballs. These tarballs can be compressed. Incus supports a wide variety of compression algorithms for tarballs. However, for compatibility purposes, you should use `gzip` or `xz`. (image-format-unified)= A unified tarball is a single tarball (usually `*.tar.xz`) that contains the full content of the image, including the metadata, the root file system and optionally the template files. This is the format that Incus itself uses internally when publishing images. It is usually easier to work with; therefore, you should use the unified format when creating Incus-specific images. The image identifier for such images is the SHA-256 of the tarball. (image-format-split)= A split image consists of two files. The first is a tarball containing the metadata and optionally the template files (usually `*.tar.xz`). The second can be a tarball, `squashfs` or `qcow2` image containing the actual instance data. For containers, the second file is most commonly a SquashFS-formatted file system tree, though it can also be a tarball of the same tree. For virtual machines, the second file is always a `qcow2` formatted disk image. Tarballs can be externally compressed (`.tar.xz`, `.tar.gz`, ...) whereas `squashfs` and `qcow2` can be internally compressed through their respective native compression options. This format is designed to allow for easy image building from existing non-Incus rootfs tarballs that are already available. You should also use this format if you want to create images that can be consumed by both Incus and other tools. The image identifier for such images is the SHA-256 of the concatenation of the metadata and data files (in that order)." } ]
{ "category": "Runtime", "file_name": "image_format.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Yesterday, Feb. 23, we had the containerd summit for contributors and maintainers. We started off by getting everyone up to speed on the project, roadmap, and goals before diving down into specific issues and design of containerd. We should have the videos up soon for the various sessions and Q&A. We discussed the use of the containerd shim, the costs that it adds, as well as its benefits. Overall, most of us agreed that the extra memory overhead is worth it for the feature set that it provides. However, we did make a few changes to the shim in the master branch; a grpc API so that the shim is straightforward to interact with and much more robust in terms of error handling. We also have one shim per container vs the one shim per process, which was the current implementation. There were a couple of additional changes we discussed. One being launching the shim inside the cgroup of the container so that any IO, CPU, and memory that the shim consumes is \"charged\" to the container. I'll expand a little more on why this matters in the next section when it comes to logging. Logging is a big issue for containers and its not exactly straightforward. Since containers can be used in many different ways the support of logging and connecting to a containers interactively can add complexity to the runtime. Right now, the shim is responsible for handling the IO of a container and making sure that it is kept open in the event that containerd ( the daemon ) or clients above containerd crash and need to reconnect to containers that they launched. You can use fifos for reconnecting to container logs but there is concerns about the buffer limit when nothing is reading from the fifo. If the fifo fills up, then the application in the container will block. In the end, most container systems log to files on disk but in various formats. The only part of the container's IO that needs to be a pipe, or fifo for reconnection is STDIN. STDOUT and STDERR can go directly to a file in most cases. However, logging the raw output for a container directly to a file is not ideal as both Docker and Kube have specific on disk formats. So our initial idea that was discussed is to move some type of \"log formatter\" into the" }, { "data": "This will allow a single container to have its logs written directly to disk from the shim without using fifos for OUT and ERR. The other problem that this would solve is when a single container has excessive logs. If a single daemon is writing all the logs to disk, you will see high CPU on the entire daemon. If we place the shim in the cgroup for the container, then only that container is affected when it starts to log excessively without affecting the performance of the entire system. The CPU and IO could be charged to the container. We still have to work out how we will specify the log format in the shim. Go 1.8 plugins are not looking like a good solution for this. Ill be opening an issue so that we can figure this out on the repo. A use case for containerd is that its a solid base for multiple container platforms. When you have a single containerd installed on a system but have Kube, Docker, and Swarm all using it, different operations such as listing of containers and access to content need to be scoped appropriately so that Docker does not return containers created by Kube and vice versa. We were already looking to design a metadata store for clients and internal data of containerd and this seems like a good fit for this problem. 
In the breakout session we discussed having system details like content and containers stay in a flat namespace inside the containerd core, while query and create operations by clients should be namespaced. A simple example would be: `/docker/images/redis/3.2` `/docker/containers/redis-master` `/kube/services/lb` `/kube/pods/redis/containers/logger` `/kube/pods/redis/containers/master` These are just simple examples and much of the format will be left up to the clients for storing metadata within the system. Clients should be able to query based on the namespace of keys as well as query `/` to see the entire system as a whole. Looking at the latest Go 1.8 release, plugins looked like a perfect solution for extending containerd by third parties. However, after discussions at the summit, and with help from Tim to get clarification on the feature from the Go team, it does not look very promising in terms of implementation. We currently have the code for plugins via Go 1.8 merged into master but we will have to rethink our extension method going forward." }, { "data": "During the image distribution breakout we first covered the current state of snapshots and where we expect to go next. The discussion started with how new snapshotters will be integrated using Go plugins or package importing rather than relying on grpc. Snapshots themselves are not currently exposed as a grpc service but there is some desire to have lower level access for debugging, content sideloading, or building. The exposed interface for snapshots beyond the existing pull/push/run workflows will be designed with this in mind. Additionally there was some discussion related to the possibility of having the mounter be pluggable and opening up snapshots to volumes use cases. The snapshot model was discussed in depth, including how we will differentiate between terminal commits and online snapshots. Differentiating between the two could be important to optimize different use cases around building, importing, and backing up container changes. The next major topic of discussion was around distribution, primarily focused on the resolver and fetcher interfaces. We discussed what a locator will look like for finding remote content as well as how it can be integrated with a `git remote`-like model. Stephen expressed his intent to keep the locator as opaque as possible, but to stay away from including protocol definitions in the locator value. Having a remotes table was proposed as a way to keep naming simple but allow defining more specific protocols. The current fetch interface was found to be unclear. It was proposed to remove the hints and integrate the media type checks and expectations into the resolver. The role of the content store in validating content and converging ingests of duplicate data was also discussed. Next week we are still working towards an end-to-end PoC with the various subsystems of containerd working together behind the GRPC api. The API is still under development but the PoC should give people a good idea how the various subsystems will interact with one another. We will also try to get issues up on the repository from discussions at the summit so that the discussions can continue with the broader community. Rethinking an extension model. Extending containerd is important for many people. It makes sure that third parties do not have to commit code to the core to gain the functionality that they require. However, it's not simple in Go, and Go 1.8 plugins do not look like the solution.
We need to work more on this." } ]
{ "category": "Runtime", "file_name": "2017-02-24.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "As long as the debian directory is in the sub-directory you need to link or copy it in the top directory. In the top directory do this: ```sh rm -rf debian cp -r dist/debian . ``` Make sure all the dependencies are met. See the `INSTALL.md` for this. Otherwise `debuild` will complain and quit. To do some configuration for the build, some environment variables can be used. Due to the fact, that `debuild` filters out some variables, all the configuration variables need to be prefixed by `DEB_` See `mconfig --help` for details about the configuration options. `export DEB_NOSUID=1` adds --without-suid `export DEB_NONETWORK=1` adds --without-network `export DEB_NOSECCOMP=1` adds --without-seccomp `export DEB_NOALL=1` adds all of the above To select a specific profile for `mconfig`. REMINDER: to build with seccomp you need to install `libseccomp-dev` package ! For real production environment us this configuration: ```sh export DEBSCPROFILE=release-stripped ``` or if debugging is needed use this. ```sh export DEBSCPROFILE=debug ``` In case a different build directory is needed: ```sh export DEBSCBUILDDIR=builddir ``` One way to update the changelog would be that the developer of singularity update the Debian changelog on every commit. As this is double work, because of the CHANGELOG.md in the top directory, the changelog is automatically updated with the version of the source which is currently checked out. Which means you can easily build Debian packages for all the different tagged versions of the software. See `INSTALL.md` on how to checkout a specific version. Be aware, that `debchange` will complain about a lower version as the top in the current changelog. Which means you have to cleanup the changelog if needed. If you did not change anything in the debian directory manually, it might be easiest to . Be aware, that the Debian install directory as you see it now, might not be available in older versions (branches, tags). Make sure you have a clean copy of the debian directory before you switch to (checkout) an older version. Usually `debchange` is configured by the environment variables `DEBFULLNAME` and `EMAIL`. As `debuild` creates a clean environment it filters out most of the environment variables. To set `DEBFULLNAME` for the `debchange` command in the makefile, you have to set `DEB_FULLNAME`. If these variables are not set, `debchange` will try to find appropriate values from the system configuration. Usually by using the login name and the domain-name. ```sh export DEB_FULLNAME=\"Your Name\" export EMAIL=\"you@example.org\" ``` As usual for creating a Debian package you can use `dpkg-buildpackage` or `debuild` which is a kind of wrapper for the first and includes the start of `lintian`, too. ```sh dpkg-buildpackage --build=binary --no-sign lintian --verbose --display-info --show-overrides ``` or all in one ```sh debuild --build=binary --no-sign --lintian-opts --display-info --show-overrides ``` After successful build the Debian package can be found in the parent directory. To clean up the temporary files created by `debuild` use the command: ```sh dh clean ``` To cleanup the copy of the debian directory, make sure you saved your changes (if any) and remove it. ```sh rm -rf debian ``` For details on Debian package building see the man-page of `debuild` and `dpkg-buildpackage` and `lintian` In the current version this is by far not ready for using it in official Debian Repositories. This might change in future. 
I updated the old debian directory so that it simply works, for the people who need it. Any help is welcome in providing a Debian installer that can build a package suitable for the official Debian repositories." } ]
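For convenience, the individual steps above can be strung together into one hypothetical session. The paths are assumptions (they presume the current directory is the top of the Singularity source tree), so adjust them to your checkout:

```bash
# Copy the packaging files into place, set the changelog identity, build, then clean up
rm -rf debian && cp -r dist/debian .
export DEB_FULLNAME="Your Name"
export EMAIL="you@example.org"
debuild --build=binary --no-sign --lintian-opts --display-info --show-overrides
ls ../*.deb                      # the resulting package lands in the parent directory
dh clean && rm -rf debian        # clean up the temporary files and the copied directory
```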
{ "category": "Runtime", "file_name": "DEBIAN_PACKAGE.md", "project_name": "Singularity", "subcategory": "Container Runtime" }
[ { "data": "Kuasar has made some changes to the official containerd v1.7.0 based on commit `1f236dc57aff44eafd95b3def26683a235b97241`. Please note that for compatibility with containerd, it is recommended to use Go version 1.19 or later. Clone the containerd fork from the Kuasar repository with `git`: ```bash git clone https://github.com/kuasar-io/containerd.git cd containerd make make install ``` Modify the configuration file (default path `/etc/containerd/config.toml`) according to the following snippets. Important: the AppArmor feature is not supported yet, so you need to set `disable_apparmor = true` in the config file. For vmm: ```toml [proxy_plugins.vmm] type = \"sandbox\" address = \"/run/vmm-sandboxer.sock\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.vmm] runtime_type = \"io.containerd.kuasar.v1\" sandboxer = \"vmm\" io_type = \"hvsock\" ``` For quark: ```toml [proxy_plugins.quark] type = \"sandbox\" address = \"/run/quark-sandboxer.sock\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.quark] runtime_type = \"io.containerd.quark.v1\" sandboxer = \"quark\" ``` For wasm: ```toml [proxy_plugins.wasm] type = \"sandbox\" address = \"/run/wasm-sandboxer.sock\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.wasm] runtime_type = \"io.containerd.wasm.v1\" sandboxer = \"wasm\" ``` To start containerd, run `ENABLE_CRI_SANDBOXES=1 containerd`. In order to use the containerd Sandbox API, the containerd daemon must be started with the environment variable `ENABLE_CRI_SANDBOXES=1`." } ]
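On a Kubernetes node that uses this containerd build through the CRI plugin, one common way to expose a handler to pods is a RuntimeClass. The sketch below is an assumption rather than part of the Kuasar docs: the `vmm` handler name comes from the config snippet above, while the RuntimeClass name, pod name and image are placeholders.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kuasar-vmm   # arbitrary name, referenced by pods below
handler: vmm          # must match the runtime key under plugins."io.containerd.grpc.v1.cri".containerd.runtimes
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-vmm
spec:
  runtimeClassName: kuasar-vmm
  containers:
  - name: nginx
    image: nginx:alpine
```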
{ "category": "Runtime", "file_name": "containerd.md", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": "The Antrea Multi-cluster feature is introduced from v1.5.0. There is no data-plane related changes from release v1.5.0, so Antrea deployment and Antrea Multi-cluster deployment are indenpendent. However, we suggest to keep Antrea and Antrea Multi-cluster in the same version considering there will be data-plane change involved in the future. Please refer to to learn the requirement of Antrea upgrade. This doc focuses on Multi-cluster deployment only. The goal is to support 'graceful' upgrade. Multi-cluster upgrade will not have disruption to data-plane of member clusters, but there can be downtime of processing new configurations when individual components restart: During Leader Controller restart, a new member cluster, ClusterSet or ResourceExport will not be processed. This is because the Controller also runs the validation webhooks for MemberClusterAnnounce, ClusterSet and ResourceExport. During Member Controller restart, a new ClusterSet will not be processed, this is because the Controller runs the validation webhooks for ClusterSet. Our goal is to support version skew for different Antrea Multi-cluster components, but the Multi-cluster feature is still in Alpha version, and the API is not stable yet. Our recommendation is always to upgrade Antrea Multi-cluster to the same version for a ClusterSet. Antrea Leader Controller: must be upgraded first Antrea Member Controller: must the same version as the Antrea Leader Controller. Antctl: must not be newer than the Antrea Leader/Member Controller. Please notice Antctl for Multi-cluster is added since v1.6.0. In one ClusterSet, We recommend all member and leader clusters deployed with the same version. During Leader controller upgrade, resource export/import between member clusters is not supported. Before all member clusters are upgraded to the same version as Leader controller, the feature introduced in old version should still work cross clusters, but no guarantee for the feature in new version. It should have no impact during upgrade to those imported resources like Service, Endpoints or AntreaClusterNetworkPolicy. Prior to Antrea v1.13, the `ClusterClaim` CRD is used to define both the local Cluster ID and the ClusterSet ID. Since Antrea v1.13, the `ClusterClaim` CRD is removed, and the `ClusterSet` CRD solely defines a ClusterSet. The name of a `ClusterSet` CR must match the ClusterSet ID, and a new `clusterID` field specifies the local Cluster ID. After upgrading Antrea Multi-cluster Controller from a version older than v1.13, the new version Multi-cluster Controller can still recognize and work with the old version `ClusterClaim` and `ClusterSet`" }, { "data": "However, we still suggest updating the `ClusterSet` CR to the new version after upgrading Multi-cluster Controller. You just need to update the existing `ClusterSet` CR and add the right `clusterID` to the spec. An example `ClusterSet` CR is like the following: ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset # This value must match the ClusterSet ID. namespace: kube-system spec: clusterID: test-cluster-north # The new added field since v1.13. leaders: clusterID: test-cluster-north secret: \"member-north-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` You may also delete the `ClusterClaim` CRD after the upgrade, and then all existing `ClusterClaim` CRs will be removed automatically after the CRD is deleted. 
```bash kubectl delete crds clusterclaims.multicluster.crd.antrea.io ``` The Antrea Multi-cluster APIs are built using K8s CustomResourceDefinitions and we follow the same versioning scheme as the K8s APIs and the same . Other than the most recent API versions in each track, older API versions must be supported after their announced deprecation for a duration of no less than: GA: 12 months Beta: 9 months Alpha: N/A (can be removed immediately) K8s has a on the removal ofAPI object versions that have been persisted to storage. We adopt the following rules for the CustomResources which are persisted by the K8s apiserver. Alpha API versions may be removed at any time. The must be used for CRDs to indicate that a particular version of the resource has been deprecated. Beta and GA API versions must be supported after deprecation for the respective durations stipulated above before they can be removed. For deprecated Beta and GA API versions, a must be provided along with each Antrea release, until the API version is removed altogether. Please refer to to learn the details. Following is the Antrea Multi-cluster feature list. For the details of each feature, please refer to . | Feature | Supported in | | -- | | | Service Export/Import | v1.5.0 | | ClusterNetworkPolicy Replication | v1.6.0 | When you are trying to directly apply a newer Antrea Multi-cluster YAML manifest, as provided with , you will probably meet an issue like below if you are upgrading Multi-cluster components from v1.5.0 to a newer one: ```log label issue:The Deployment \"antrea-mc-controller\" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app\":\"antrea\", \"component\":\"antrea-mc-controller\"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable ``` The issue is caused by the label change introduced by . The reason is mutation of label selectors on Deployments is not allowed in `apps/v1beta2` and forward. You need to delete the Deployment \"antrea-mc-controller\" first, then run `kubectl apply -f` with the manifest of the newer version." } ]
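As a sketch of the recovery described above, assuming the member controller runs in the `kube-system` namespace and the newer manifest is available on disk (both the namespace and the file name are placeholders; use the values from your own deployment):

```bash
# Delete the Deployment whose label selector can no longer be mutated, then re-apply the new manifest
kubectl delete deployment antrea-mc-controller -n kube-system
kubectl apply -f antrea-multicluster-member.yml
```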
{ "category": "Runtime", "file_name": "upgrade.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Get cluster information, including cluster name, address, number of volumes, number of nodes, and usage rate, etc. ```bash cfs-cli cluster info ``` Get usage, status, etc. of metaNodes and dataNodes by region. ```bash cfs-cli cluster stat ``` Freeze the cluster. After setting it to `true`, when the partition is full, the cluster will not automatically allocate new partitions. ```bash cfs-cli cluster freeze [true/false] ``` Set the memory threshold for each MetaNode in the cluster. If the memory usage reaches this threshold, all the metaPartition will be readOnly. [float] should be a float number between 0 and 1. ```bash cfs-cli cluster threshold [float] ``` Set the `volDeletionDelayTime` configuration, measured in hours, which represents the number of hours after enabling delayed volume deletion when the volume will be permanently deleted. Prior to that, it will be marked for deletion and can be recovered. default is 48 hours. ```bash cfs-cli cluster volDeletionDelayTime [VOLDELETIONDELAYTIME] ``` Set the configurations of the cluster. ```bash cfs-cli cluster set [flags] ``` ```bash Flags: --autoRepairRate string DataNode auto repair rate --batchCount string MetaNode delete batch count --deleteWorkerSleepMs string MetaNode delete worker sleep time with millisecond. if 0 for no sleep -h, --help help for set --loadFactor string Load Factor --markDeleteRate string DataNode batch mark delete limit rate. if 0 for no infinity limit --maxDpCntLimit string Maximum number of dp on each datanode, default 3000, 0 represents setting to default ```" } ]
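Putting the subcommands together, a typical inspection-and-tuning session might look like the following. All values are placeholders taken from the defaults above, not recommendations:

```bash
cfs-cli cluster info                      # name, address, volume/node counts, usage
cfs-cli cluster stat                      # per-zone metaNode/dataNode status
cfs-cli cluster freeze false              # keep automatic partition allocation enabled
cfs-cli cluster threshold 0.75            # metaPartitions turn read-only at 75% memory usage
cfs-cli cluster volDeletionDelayTime 48   # keep the default 48h deletion grace period
cfs-cli cluster set --maxDpCntLimit 3000  # default limit of data partitions per datanode
```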
{ "category": "Runtime", "file_name": "cluster.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "is a platform for running serverless workloads on Kubernetes. This guide will show you how to run basic Knative workloads in gVisor. This guide assumes you have have a cluster that is capable of running gVisor workloads. This could be a enabled cluster on Google Cloud Platform or one you have set up yourself using . This guide will also assume you have Knative installed using as the network layer. You can follow the to install Knative. Knative allows the use of various parameters on Pods via . We will enable the feature flag to enable the use of the Kubernetes . Edit the feature flags ConfigMap. ```bash kubectl edit configmap config-features -n knative-serving ``` Add the `kubernetes.podspec-runtimeclassname: enabled` to the `data` field. Once you are finished the ConfigMap will look something like this (minus all the system fields). ```yaml apiVersion: v1 kind: ConfigMap metadata: name: config-features namespace: knative-serving labels: serving.knative.dev/release: v0.22.0 data: kubernetes.podspec-runtimeclassname: enabled ``` After you have set the Runtime Class feature flag you can now create Knative services that specify a `runtimeClassName` in the spec. ```bash cat <<EOF | kubectl apply -f - apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: spec: runtimeClassName: gvisor containers: image: gcr.io/knative-samples/helloworld-go env: name: TARGET value: \"gVisor User\" EOF ``` You can see the pods running and their Runtime Class. ```bash kubectl get pods -o=custom-columns='NAME:.metadata.name,RUNTIME CLASS:.spec.runtimeClassName,STATUS:.status.phase' ``` Output should look something like the following. Note that your service might scale to zero. If you access it via it's URL you should get a new Pod. ``` NAME RUNTIME CLASS STATUS helloworld-go-00002-deployment-646c87b7f5-5v68s gvisor Running ``` Congrats! Your Knative service is now running in gVisor!" } ]
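To sanity-check the service, you can fetch its URL from the Knative Service object and call it. The greeting shown below is illustrative and assumes the `TARGET` value used in the manifest above:

```bash
URL=$(kubectl get ksvc helloworld-go -o jsonpath='{.status.url}')
curl "$URL"
# Hello gVisor User!
```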
{ "category": "Runtime", "file_name": "knative.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Before executing a remotely fetched ACI, rkt will verify it based on attached signatures generated by the ACI creator. Before this can happen, rkt needs to know which creators you trust, and therefore are trusted to run images on your machine. The identity of each ACI creator is established with a public key, which is placed in rkt's key store on disk. When adding a trusted key, a prefix can scope the level of established trust to a subset of images. A few examples: ``` ``` ``` ``` To trust a key for an entire root domain, you must use the `--root` flag, with a path to a key file (no discovery). ``` ``` The easiest way to trust a key is through meta discovery. rkt will find and download a public key that the creator has published on their website. The . The TL;DR is rkt will find a meta tag that looks like: ```html <meta name=\"ac-discovery-pubkeys\" content=\"coreos.com/etcd https://coreos.com/dist/pubkeys/aci-pubkeys.gpg\"> ``` And use it to download the public key and present it to you for approval: ``` Prefix: \"coreos.com/etcd\" Key: \"https://coreos.com/dist/pubkeys/aci-pubkeys.gpg\" GPG key fingerprint is: 8B86 DE38 890D DB72 9186 7B02 5210 BD88 8818 2190 CoreOS ACI Builder <release@coreos.com> Are you sure you want to trust this key (yes/no)? yes Trusting \"https://coreos.com/dist/pubkeys/aci-pubkeys.gpg\" for prefix \"coreos.com/etcd\". Added key for prefix \"coreos.com/etcd\" at \"/etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190\" ``` If rkt can't find a key using meta discovery, an error will be printed: ``` Error determining key location: --prefix meta discovery error: found no ACI meta tags ``` If you know where a public key is located, you can request it directly from disk or via HTTPS: ``` Prefix: \"coreos.com/etcd\" Key: \"https://coreos.com/dist/pubkeys/aci-pubkeys.gpg\" GPG key fingerprint is: 8B86 DE38 890D DB72 9186 7B02 5210 BD88 8818 2190 CoreOS ACI Builder <release@coreos.com> Are you sure you want to trust this key (yes/no)? yes Trusting \"https://coreos.com/dist/pubkeys/aci-pubkeys.gpg\" for prefix \"coreos.com/etcd\". Added key for prefix \"coreos.com/etcd\" at \"/etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190\" ``` Trusted public keys can be pre-populated by placing them in the appropriate location on disk for the desired prefix. ``` $ find /etc/rkt/trustedkeys/ /etc/rkt/trustedkeys/ /etc/rkt/trustedkeys/prefix.d /etc/rkt/trustedkeys/prefix.d/coreos.com /etc/rkt/trustedkeys/prefix.d/coreos.com/etcd /etc/rkt/trustedkeys/prefix.d/coreos.com/etcd/8b86de38890ddb7291867b025210bd8888182190 /etc/rkt/trustedkeys/root.d /etc/rkt/trustedkeys/root.d/d8685c1eff3b2276e5da37fd65eea12767432ac4 ``` | Flag | Default | Options | Description | | | | | | | `--insecure-allow-http` | `false` | `true` or `false` | Allow HTTP use for key discovery and/or retrieval | | `--prefix` | `` | A URL prefix | Prefix to limit trust to | | `--root` | `false` | `true` or `false` | Add root key from filesystem without a prefix | | `--skip-fingerprint-review` | `false` | `true` or `false` | Accept key without fingerprint confirmation | See the table with ." } ]
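For reference, typical `rkt trust` invocations matching the scenarios above look like this. The prefix and key URL are taken from the examples; the root key path is a placeholder:

```bash
sudo rkt trust --prefix coreos.com/etcd                                           # key found via meta discovery
sudo rkt trust --prefix coreos.com/etcd https://coreos.com/dist/pubkeys/aci-pubkeys.gpg
sudo rkt trust --root /path/to/root-key.gpg                                       # trust a key for all images
```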
{ "category": "Runtime", "file_name": "trust.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "This enhancement allows the user to customize the default disks and node configurations in Longhorn for newly added nodes using Kubernetes label and annotation, instead of using Longhorn API or UI. https://github.com/longhorn/longhorn/issues/1053 https://github.com/longhorn/longhorn/issues/991 Allow users to customize the disks and node configuration for new nodes without using Longhorn API or UI. This will make it much easier for users to scale the cluster since it will eliminate the necessity to configure Longhorn manually for each newly added node if the node contains more than one disk or the disk configuration is different between the nodes. Allow users to define node tags for newly added nodes without using the Longhorn API or UI. This enhancement will not keep the node label/annotation in sync with the Longhorn node/disks configuration. Longhorn directly uses the node annotation to set the node tags once the node contains no tag. Longhorn uses the setting `Create Default Disk on Labeled Nodes` to decide if to enable the default disks customization. If the setting is enabled, Longhorn will wait for the default disks customization set, instead of directly creating the Longhorn default disk for the node without disks (new node is included). Then Longhorn relies on the value of the node label `node.longhorn.io/create-default-disk` to decide how to customize default disks: If the value is `config`, the annotation will be parsed and used as the default disks customization. If the value is boolean value `true`, the data path setting will be used for the default disk. And other values will be treated as `false` and no default disk will be created. Before the enhancement, when the users want to scale up the Kubernetes cluster and add tags on the node, they would need access to the Longhorn API/UI to do that. After the enhancement, the users can add a specified annotation to the new nodes to define the tags. In this way, the users don't need to work with Longhorn API/UI directly during the process of scaling up a cluster. Before the enhancement, when the users want to scale up the Kubernetes cluster and customize the disks on the node, they would need to: Enable the Longhorn setting `Create Default Disk on Labeled Nodes` to prevent the default disk to be created automatically on the node. Add new nodes to the Kubernetes cluster, e.g. by using Rancher or Terraform, etc. After the new node was recognized by Longhorn, edit the node to add disks using either Longhorn UI or API. The third step here needs to be done for every node separately, making it inconvenient for the operation. After the enhancement, the steps the user would take is: Enable the Longhorn setting `Create Default Disk on Labeled Nodes`. Add new nodes to the Kubernetes cluster, e.g. by using Rancher or Terraform, etc. Add the label and annotations to the node to define the default disk(s) for the new node. Longhorn will pick it up automatically and add the disk(s) for the new node. In this way, the user doesn't need to work with Longhorn API/UI directly during the process of scaling up a cluster. The user adds the default node tags annotation `node.longhorn.io/default-node-tags=<node tag list>` to a Kubernetes" }, { "data": "If the Longhorn node tag list was empty before step 1, the user should see the tag list for that node updated according to the annotation. Otherwise, the user should see no change to the tag list. The users enable the setting `Create Default Disk on Labeled Nodes`. 
The users add a new node, then they will get a node without any disk. By deleting all disks on an existing node, the users can get the same result. After patching the label `node.longhorn.io/create-default-disk=config` and the annotation `node.longhorn.io/default-disks-config=<customized default disks>` for the Kubernetes node, the node disks should be updated according to the annotation. If: The Longhorn node contains no tag. The Kubernetes node object of the same name contains an annotation `node.longhorn.io/default-node-tags`, for example: ``` node.longhorn.io/default-node-tags: '[\"fast\",\"storage\"]' ``` The annotation can be parsed successfully. Then Longhorn will update the Longhorn node object with the new tags specified by the annotation. The process will be done as a part of the node controller reconciliation logic in the Longhorn manager. If: The Longhorn node contains no disk. The setting `Create Default Disk on Labeled Nodes` is enabled. The Kubernetes node object of the same name contains the label `node.longhorn.io/create-default-disk: 'config'` and an annotation `node.longhorn.io/default-disks-config`, for example: ``` node.longhorn.io/default-disks-config: '[{\"path\":\"/mnt/disk1\",\"allowScheduling\":false}, {\"path\":\"/mnt/disk2\",\"allowScheduling\":false,\"storageReserved\":1024,\"tags\":[\"ssd\",\"fast\"]}]' ``` The annotation can be parsed successfully. Then Longhorn will create the customized default disk(s) specified by the annotation. The process will be done as a part of the node controller reconciliation logic in the Longhorn manager. If the label/annotations failed validation, no partial configuration will be applied and the whole annotation will be ignored. No change will be done for the node tag/disks. The validation failure can be caused by: The annotation format is invalid and cannot be parsed to tags/disks configuration. The format is valid but there is an unqualified tag in the tag list. The format is valid but there is an invalid disk parameter in the disk list. e.g., duplicate disk path, non-existing disk path, multiple disks with the same file system, the reserved storage size being out of range... The users deploy Longhorn system. The users enable the setting `Create Default Disk on Labeled Nodes`. The users scale the cluster. Then the newly introduced nodes should contain no disk and no tag. The users pick up a new node, create 2 random data path in the container then patch the following valid node label and annotations: ``` labels: node.longhorn.io/create-default-disk: \"config\" }, annotations: node.longhorn.io/default-disks-config: '[{\"path\":\"<random data path 1>\",\"allowScheduling\":false}, {\"path\":\"<random data path 2>\",\"allowScheduling\":true,\"storageReserved\":1024,\"tags\":[\"ssd\",\"fast\"]}]' node.longhorn.io/default-node-tags: '[\"fast\",\"storage\"]' ``` After the patching, the node disks and tags will be created and match the annotations. The users use Longhorn UI to modify the node configuration. They will find that the node annotations keep unchanged and don't match the current node tag/disk configuration. The users delete all node tags and disks via UI. Then the node tags/disks will be recreated immediately and match the annotations. 
The users pick up another new node, directly patch the following invalid node label and annotations: ``` labels: node.longhorn.io/create-default-disk: \"config\" }, annotations: node.longhorn.io/default-disks-config: '[{\"path\":\"<non-existing data path>\",\"allowScheduling\":false}, node.longhorn.io/default-node-tags: '[\"slow\",\".*invalid-tag\"]' ``` Then they should find that the tag and disk list are still empty. The users create a random data path then correct the annotation for the node: ``` annotations: node.longhorn.io/default-disks-config: '[{\"path\":\"<random data path>\",\"allowScheduling\":false}, node.longhorn.io/default-node-tags: '[\"slow\",\"storage\"]' ``` Now they will see that the node tags and disks are created correctly and match the annotations. N/A." } ]
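One way to apply the customization used in the test cases above is plain `kubectl`; the node name and disk paths are placeholders:

```bash
kubectl label node <node-name> node.longhorn.io/create-default-disk=config
kubectl annotate node <node-name> \
  node.longhorn.io/default-node-tags='["fast","storage"]' \
  node.longhorn.io/default-disks-config='[{"path":"/mnt/disk1","allowScheduling":false},{"path":"/mnt/disk2","allowScheduling":true,"storageReserved":1024,"tags":["ssd","fast"]}]'
```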
{ "category": "Runtime", "file_name": "20200319-default-disks-and-node-configuration.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Support Process\" layout: docs Velero provides best effort support through the process on this page for the current version of Velero and n-1 Velero version, including all patch releases in the supported minor releases. For example, if the current version is 1.9, the Velero maintainers would offer best effort support for v1.9 and v1.8. If you have a question about a previous Velero version (for example, 1.7), please note that maintainers may ask you to upgrade to a supported version before doing any investigation into your issue. For more information about Velero testing and supported Kubernetes versions, see Velero's . The Velero maintainers use a weekly rotation to manage community support. Each week, a different maintainer is the point person for responding to incoming support issues via Slack, GitHub, and the Google group. The point person is not expected to be on-call 24x7. Instead, they choose one or more hour(s) per day to be available/responding to incoming issues. They will communicate to the community what that time slot will be each week. We will update the public Slack channel's topic to indicate that you are the point person for the week, and what hours you'll be available. `#velero` public Slack channel in Kubernetes org in GitHub (`velero`, `velero-plugin-for-[aws|gcp|microsoft-azure|csi]`, `helm-charts`) Generally speaking, new GitHub issues will fall into one of several categories. We use the following process for each: Feature request Label the issue with `Enhancement/User` or `Enhancement/Dev` Leave the issue in the `New Issues` swimlane for triage by product mgmt Bug Label the issue with `Bug` Leave the issue in the `New Issues` swimlane for triage by product mgmt User question/problem that does not clearly fall into one of the previous categories When you start investigating/responding, label the issue with `Investigating` Add comments as you go, so both the user and future support people have as much context as possible Use the `Needs Info` label to indicate an issue is waiting for information from the user. Remove/re-add the label as needed. If you resolve the issue with the user, close it out If the issue ends up being a feature request or a bug, update the title and follow the appropriate process for it If the reporter becomes unresponsive after multiple pings, close out the issue due to inactivity and comment that the user can always reach out again as needed We ensure all GitHub issues worked on during the week on are labeled with `Investigating` and `Needs Info` (if appropriate), and have updated comments so the next person can pick them up." } ]
{ "category": "Runtime", "file_name": "support-process.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Mounts are the main interaction mechanism in containerd. Container systems of the past typically end up having several disparate components independently perform mounts, resulting in complex lifecycle management and buggy behavior when coordinating large mount stacks. In containerd, we intend to keep mount syscalls isolated to the container runtime component, opting to have various components produce a serialized representation of the mount. This ensures that the mounts are performed as a unit and unmounted as a unit. From an architecture perspective, components produce mounts and runtime executors consume them. More imaginative use cases include the ability to virtualize a series of mounts from various components without ever having to create a runtime. This will aid in testing and implementation of satellite components. The `Mount` type follows the structure of the historic mount syscall: | Field | Type | Description | |-|-|-| | Type | `string` | Specific type of the mount, typically operating system specific | | Target | `string` | Intended filesystem path for the mount destination. | | Source | `string` | The object which originates the mount, typically a device or another filesystem path. | | Options | `[]string` | Zero or more options to apply with the mount, possibly `=`-separated key value pairs. | We may want to further parameterize this to support mounts with various helpers, such as `mount.fuse`, but this is out of scope, for now." } ]
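As an illustration of the table above (not a definitive serialization, since the concrete encoding depends on the API surface in use), an overlay mount produced by a snapshotter could carry values along these lines; the snapshot paths are hypothetical:

```
Type:    overlay
Source:  overlay
Target:  /run/containerd/io.containerd.runtime.v2.task/default/redis/rootfs
Options: lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs,
         upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/2/fs,
         workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/2/work
```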
{ "category": "Runtime", "file_name": "mounts.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "WasmEdge applications can be plugged into existing application frameworks or platforms. WasmEdge provides a safe and efficient extension mechanism for those frameworks. In this chapter, we will introduce several such frameworks and platforms. support WasmEdge to run as containers for microservices. We will cover distributed application framework , service mesh , and event mesh . support WasmEdge as an embedded function or plug-in runtime. We will cover streaming data framework and Go function schedulder / framework . allows WasmEdge programs to run as serverless functions in their infrastructure. We will cover , , , , and ." } ]
{ "category": "Runtime", "file_name": "frameworks.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.8.0 `velero/velero:v1.8.0` https://velero.io/docs/v1.8 https://velero.io/docs/v1.8/upgrade-to-1.8/ Versions 1.4 of the Velero plugins for AWS, Azure and GCP now support snapshotting and restoring the persistent volumes provisioned by CSI driver via the APIs of the cloud providers. With this enhancement, users can backup and restore the persistent volumes on these cloud providers without using the Velero CSI plugin. The CSI plugin will remain beta and the feature flag `EnableCSI` will be disabled by default. For the version of the plugins and the CSI drivers they support respectively please see the table: | Plugin | Version | CSI Driver | | | -- | - | | velero-plugin-for-aws | v1.4.0 | ebs.csi.aws.com | | velero-plugin-for-microsoft-azure | v1.4.0 | disk.csi.azure.com | | velero-plugin-for-gcp | v1.4.0 | pd.csi.storage.gke.io | We've verified the functionality of Velero on IPv6 dual stack by successfully running the E2E test on IPv6 dual stack environment. In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases. More test cases have been added to the E2E test suite to improve the release health. The creation time is now taken into account to calculate the next run for scheduled backup. When a Backup Storage Location (BSL) is deleted, backup and Restic repository resources will also be deleted. Starting in v1.8, Velero will only support Kubernetes v1 CRD meaning that Velero v1.8+ will only run on Kubernetes v1.16+. Before upgrading, make sure you are running a supported Kubernetes version. For more information, see our . Item Snapshotter plugin API was merged. This will support both Upload Progress monitoring and the planned Data Mover. Upload Progress monitoring PRs are in progress for 1.9. E2E test on ssr object with controller namespace mix-ups (#4521, @mqiu) Check whether the volume is provisioned by CSI driver or not by the annotation as well (#4513, @ywk253100) Initialize the labels field of `velero backup-location create` option to avoid #4484 (#4491, @ywk253100) Fix e2e 2500 namespaces scale test timeout problem (#4480, @mqiu) Add backup deletion e2e test (#4401, @danfengliu) Return the error when getting backup store in backup deletion controller (#4465, @reasonerjt) Ignore the provided port is already allocated error when restoring the LoadBalancer service (#4462, @ywk253100) Revert #4423 migrate backup sync controller to kubebuilder. (#4457, @jxun) Add rbac and annotation test cases (#4455, @mqiu) remove --crds-version in velero install command. (#4446, @jxun) Upgrade e2e test vsphere plugin (#4440, @mqiu) Fix e2e test failures for the inappropriate optimize of velero install (#4438, @mqiu) Limit backup namespaces on test resource filtering cases (#4437, @mqiu) Bump up Go to 1.17 (#4431, @reasonerjt) Added `<backup" }, { "data": "to the backup format. This file exists when item snapshots are taken and contains an array of volume.Itemsnapshots containing the information about the snapshots. This will not be used unless upload progress monitoring and item snapshots are enabled and an ItemSnapshot plugin is used to take snapshots. Also added DownloadTargetKindBackupItemSnapshots for retrieving the signed URL to download only the `<backup name>`-itemsnapshots.json.gz part of a backup for use by `velero backup describe`. 
(#4429, @dsmithuchida) Migrate backup sync controller from code-generator to kubebuilder. (#4423, @jxun) Added UploadProgressFeature flag to enable Upload Progress Monitoring and Item Snapshotters. (#4416, @dsmithuchida) Added BackupWithResolvers and RestoreWithResolvers calls. Will eventually replace Backup and Restore methods. Adds ItemSnapshotters to Backup and Restore workflows. (#4410, @dsu) Build for darwin-arm64 (#4409, @epk) Add resource filtering test cases (#4404, @mqiu) Fix the issue that the backup cannot be deleted after the application uninstalled (#4398, @ywk253100) Add restoreactionitem plugin to handle admission webhook configurations (#4397, @reasonerjt) Keep the annotation \"pv.kubernetes.io/provisioned-by\" when restoring PVs (#4391, @ywk253100) Adjust structure of e2e test codes (#4386, @mqiu) feat: migrate velero controller from kubebuilder v2 to v3 From Velero v1.8, apiextesions.k8s.io/v1beta1 is no longer supported, which means only CRD of apiextensions.k8s.io/v1 is supported, and the supported Kubernetes version is updated to v1.16 and later. (#4382, @jxun) Delete backups and Restic repos associated with deleted BSL(s) (#4377, @codegold79) Add the key for GKE zone for AZ collection (#4376, @reasonerjt) Fix statefulsets volumeClaimTemplates storageClassName when use Changing PV/PVC Storage Classes (#4375, @Box-Cube) Fix snapshot e2e test issue of jsonpath (#4372, @danfengliu) Modify the timestamp in the name of a backup generated from schedule to use UTC. (#4353, @jxun) Read Availability zone from nodeAffinity requirements (#4350, @reasonerjt) Use factory.Namespace() to replace hardcoded velero namespace (#4346, @half-life666) Return the error if velero failed to detect S3 region for restic repo (#4343, @reasonerjt) Add init log option for velero controller-runtime manager. (#4341, @jxun) Ignore the `provided port is already allocated` error when restoring the `NodePort` service (#4336, @ywk253100) Fixed an issue with the `backup-location create` command where the BSL Credential field would be set to an invalid empty SecretKeySelector when no credential details were provided. (#4322, @zubron) fix buggy pager func (#4306, @alaypatel07) Don't create a backup immediately after creating a schedule (#4281, @ywk253100) Fix CVE-2020-29652 and CVE-2020-26160 (#4274, @ywk253100) Refine tag-release.sh to align with change in release process (#4185, @reasonerjt) Fix plugins incompatible issue in upgrade test (#4141, @danfengliu) Verify group before treating resource as cohabiting (#4126, @sseago) Added ItemSnapshotter plugin definition and plugin framework - addresses #3533. Part of the Upload Progress enhancement (#3533) (#4077, @dsmithuchida) Add upgrade test in E2E test (#4058, @danfengliu) Handle namespace mapping for PVs without snapshots on restore (#3708, @sseago)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.8.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "containerd is meant to be a simple daemon to run on any system. It provides a minimal config with knobs to configure the daemon and what plugins are used when necessary. ``` NAME: containerd - _ _ / /_ _(_) _/ / / _/ \\/ \\/ / `/ / \\/ \\/ / / / // /_/ / / / / /_/ /_/ / / / / / / / / /_/ / \\_/\\/_/ /_/\\/\\,_/_/_/ /_/\\// \\,_/ high performance container runtime USAGE: containerd [global options] command [command options] [arguments...] VERSION: v2.0.0-beta.0 DESCRIPTION: containerd is a high performance container runtime whose daemon can be started by using this command. If none of the config, publish, oci-hook, or help commands are specified, the default action of the containerd command is to start the containerd daemon in the foreground. A default configuration is used if no TOML configuration is specified or located at the default file location. The containerd config command can be used to generate the default configuration for containerd. The output of that command can be used and modified as necessary as a custom configuration. COMMANDS: config Information on the containerd config publish Binary to publish events to containerd oci-hook Provides a base for OCI runtime hooks to allow arguments to be injected. help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --config value, -c value Path to the configuration file (default: \"/etc/containerd/config.toml\") --log-level value, -l value Set the logging level [trace, debug, info, warn, error, fatal, panic] --address value, -a value Address for containerd's GRPC server --root value containerd root directory --state value containerd state directory --help, -h Show help --version, -v Print the version ``` While a few daemon level options can be set from CLI flags the majority of containerd's configuration is kept in the configuration file. The default path for the config file is located at `/etc/containerd/config.toml`. You can change this path via the `--config,-c` flags when booting the daemon. If you are using systemd as your init system, which most modern linux OSs are, the service file requires a few modifications. ```systemd [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target [Service] ExecStartPre=-/sbin/modprobe overlay ExecStart=/usr/local/bin/containerd Delegate=yes KillMode=process [Install] WantedBy=multi-user.target ``` `Delegate=yes` and `KillMode=process` are the two most important changes you need to make in the `[Service]` section. `Delegate` allows containerd and its runtimes to manage the cgroups of the containers that it" }, { "data": "Without setting this option, systemd will try to move the processes into its own cgroups, causing problems for containerd and its runtimes to properly account for resource usage with the containers. `KillMode` handles when containerd is being shut down. By default, systemd will look in its named cgroup and kill every process that it knows about for the service. This is not what we want. As ops, we want to be able to upgrade containerd and allow existing containers to keep running without interruption. Setting `KillMode` to `process` ensures that systemd only kills the containerd daemon and not any child processes such as the shims and containers. 
The following `systemd-run` command starts containerd in a similar way: ``` sudo systemd-run -p Delegate=yes -p KillMode=process /usr/local/bin/containerd ``` In the containerd config file you will find settings for persistent and runtime storage locations as well as grpc, debug, and metrics addresses for the various APIs. There are a few settings that are important for ops. The first setting is the `oom_score`. Because containerd will be managing multiple containers, we need to ensure that containers are killed before the containerd daemon gets into an out of memory condition. We also do not want to make containerd unkillable, but we want to lower its score to the level of other system daemons. containerd also exports its own metrics as well as container level metrics via the Prometheus metrics format under `/v1/metrics`. Currently, Prometheus only supports TCP endpoints, therefore, the metrics address should be a TCP address that your Prometheus infrastructure can scrape metrics from. containerd also has two different storage locations on a host system. One is for persistent data and the other is for runtime state. `root` will be used to store any type of persistent data for containerd. Snapshots, content, metadata for containers and image, as well as any plugin data will be kept in this location. The root is also namespaced for plugins that containerd loads. Each plugin will have its own directory where it stores data. containerd itself does not actually have any persistent data that it needs to store, its functionality comes from the plugins that are loaded. ``` /var/lib/containerd/ io.containerd.content.v1.content blobs ingest io.containerd.metadata.v1.bolt meta.db io.containerd.runtime.v2.task default example io.containerd.snapshotter.v1.btrfs io.containerd.snapshotter.v1.overlayfs metadata.db snapshots ``` `state` will be used to store any type of ephemeral data. Sockets, pids, runtime state, mount points, and other plugin data that must not persist between reboots are stored in this location. ``` /run/containerd containerd.sock debug.sock io.containerd.runtime.v2.task default redis config.json init.pid" }, { "data": "rootfs bin data dev etc home lib media mnt proc root run sbin srv sys tmp usr var runc default redis state.json ``` Both the `root` and `state` directories are namespaced for plugins. Both directories are an implementation detail of containerd and its plugins. They should not be tampered with as corruption and bugs can and will happen. External apps reading or watching changes in these directories have been known to cause `EBUSY` and stale file handles when containerd and/or its plugins try to cleanup resources. ```toml version = 2 root = \"/var/lib/containerd\" state = \"/run/containerd\" oom_score = -999 [grpc] address = \"/run/containerd/containerd.sock\" uid = 0 gid = 0 [debug] address = \"/run/containerd/debug.sock\" uid = 0 gid = 0 level = \"info\" [metrics] address = \"127.0.0.1:1234\" ``` At the end of the day, containerd's core is very small. The real functionality comes from plugins. Everything from snapshotters, runtimes, and content are all plugins that are registered at runtime. Because these various plugins are so different we need a way to provide type safe configuration to the plugins. The only way we can do this is via the config file and not CLI flags. In the config file you can specify plugin level options for the set of plugins that you use via the `[plugins.<name>]` sections. 
You will have to read the plugin specific docs to find the options that your plugin accepts. See The bolt metadata plugin allows configuration of the content sharing policy between namespaces. The default mode \"shared\" will make blobs available in all namespaces once it is pulled into any namespace. The blob will be pulled into the namespace if a writer is opened with the \"Expected\" digest that is already present in the backend. The alternative mode, \"isolated\" requires that clients prove they have access to the content by providing all of the content to the ingest before the blob is added to the namespace. Both modes share backing data, while \"shared\" will reduce total bandwidth across namespaces, at the cost of allowing access to any blob just by knowing its digest. The default is \"shared\". While this is largely the most desired policy, one can change to \"isolated\" mode with the following configuration: ```toml version = 2 [plugins.\"io.containerd.metadata.v1.bolt\"] contentsharingpolicy = \"isolated\" ``` In \"isolated\" mode, it is also possible to share only the contents of a specific namespace by adding the label `containerd.io/namespace.shareable=true` to that namespace. This will make its blobs available in all other namespaces even if the content sharing policy is set to \"isolated\". If the label value is set to anything other than `true`, the namespace content will not be shared." } ]
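When running in "isolated" mode, marking a namespace as shareable can be done with `ctr`. The namespace name below is a placeholder and this is a sketch rather than a required procedure:

```bash
ctr namespaces create build
ctr namespaces label build containerd.io/namespace.shareable=true
ctr namespaces ls    # the label shows up in the LABELS column
```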
{ "category": "Runtime", "file_name": "ops.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "This document outlines the requirements for all documentation in the [Kata Containers](https://github.com/kata-containers) project. All documents must: Be written in simple English. Be written in format. Have a `.md` file extension. Be linked to from another document in the same repository. Although GitHub allows navigation of the entire repository, it should be possible to access all documentation purely by navigating links inside the documents, starting from the repositories top-level `README`. If you are adding a new document, ensure you add a link to it in the \"closest\" `README` above the directory where you created your document. If the document needs to tell the user to manipulate files or commands, use a to specify the commands. If at all possible, ensure that every command in the code blocks can be run non-interactively. If this is possible, the document can be tested by the CI which can then execute the commands specified to ensure the instructions are correct. This avoids documents becoming out of date over time. Note: Do not add a table of contents (TOC) since GitHub will auto-generate one. Linking between documents is strongly encouraged to help users and developers navigate the material more easily. Linking also avoids repetition - if a document needs to refer to a concept already well described in another section or document, do not repeat it, link to it (the principle). Another advantage of this approach is that changes only need to be applied in one place: where the concept is defined (not the potentially many places where the concept is referred to using a link). Important information that is not part of the main document flow should be added as a Note in bold with all content contained within a block quote: Note: This is a really important point! This particular note also spans multiple lines. The entire note should be included inside the quoted block. If there are multiple notes, bullets should be used: Notes: I am important point 1. I am important point 2. I am important point n. Use the same approach as for . For example: Warning: Running this command assumes you understand the risks of doing so. Other examples: Warnings: Do not unplug your computer! Always read the label. Do not pass go. Do not collect $200. Tip: Read the manual page for further information on available options. Hint: Look behind you! All filenames and command names should be rendered in a fixed-format font using backticks: Run the `foo` command to make it work. Modify the `bar` option in file `/etc/baz/baz.conf`. Render any options that need to be specified to the command in the same manner: Run `bar -axz --apply foo.yaml` to make the changes. For standard system commands, it is also acceptable to specify the name along with the manual page section that documents the command in brackets: The command to list files in a directory is called `ls(1)`. This section lists requirements for displaying commands and command output. The requirements must be adhered to since documentation containing code blocks is validated by the CI system, which executes the command blocks with the help of the" }, { "data": "If a document includes commands the user should run, they MUST be shown in a bash code block with every command line prefixed with `$ ` to denote a shell prompt: <pre> ```bash $ echo \"Hi - I am some bash code\" $ sudo docker run -ti busybox true $ [ $? -eq 0 ] && echo \"success\" ``` <pre> If a command needs to be run as the `root` user, it must be run using `sudo(8)`. 
```bash $ sudo echo \"I'm running as root\" ``` All lines beginning `# ` should be comment lines, NOT commands to run as the `root` user. Try to avoid showing the output of commands. The reasons for this: Command output can change, leading to confusion when the output the user sees does not match the output in the documentation. There is the risk the user will get confused between what parts of the block refer to the commands they should type and the output that they should not. It can make the document look overly \"busy\" or complex. In the unusual case that you need to display command output, use an unadorned code block (\\`\\`\\`): <pre> The output of the `ls(1)` command is expected to be: ``` ls: cannot access '/foo': No such file or directory ``` <pre> Long lines should not span across multiple lines by using the `\\` continuation character. GitHub automatically renders such blocks with scrollbars. Consequently, backslash continuation characters are not necessary and are a visual distraction. These characters also mess up a user's shell history when commands are pasted into a terminal. All binary image files must be in a standard and well-supported format such as PNG. This format is preferred for vector graphics such as diagrams because the information is stored more efficiently, leading to smaller file sizes. JPEG images are acceptable, but this format is more appropriate to store photographic images. When possible, generate images using freely available software. Every binary image file MUST be accompanied by the \"source\" file used to generate it. This guarantees that the image can be modified by updating the source file and re-generating the binary format image file. Ideally, the format of all image source files is an open standard, non-binary one such as SVG. Text formats are highly preferable because you can manipulate and compare them with standard tools (e.g. `diff(1)`). Since this project uses a number of terms not found in conventional dictionaries, we have a that checks both dictionary words and the additional terms we use. Run the spell checking tool on your document before raising a PR to ensure it is free of mistakes. If your document introduces new terms, you need to update the custom dictionary used by the spell checking tool to incorporate the new words. Occasionally documents need to specify the name of people. Write such names in backticks. The main reason for this is to keep the happy (since it cannot manage all possible names). However, since backticks render in a fixed-width font, this makes the names clearer: Welcome to `Clark Kent`, the newest member of the Kata Containers Architecture Committee. Write version number in backticks. This keeps the happy and since backticks render in a fixed-width font, it also makes the numbers clearer: Ensure you are using at least version `1.2.3-alpha3.wibble.1` of the tool. The apostrophe character (`'`) must only be used for showing possession (\"Peter's book\") and for standard contractions (such as \"don't\"). Use double-quotes (\"...\") in all other circumstances you use quotes outside of ." } ]
{ "category": "Runtime", "file_name": "Documentation-Requirements.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: \"Upgrading to Velero 1.12\" layout: docs Velero installed. If you're not yet running at least Velero v1.7, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Caution: From Velero v1.10, except for using restic to do file-system level backup and restore, kopia is also been integrated, it could be upgraded from v1.10 or higher to v1.12 directly, but it would be a little bit of difference when upgrading to v1.12 from a version lower than v1.10.0. Install the Velero v1.12 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.12.0 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: Since velero v1.10.0 only v1 CRD will be supported during installation, therefore, the v1.10.0 will only work on Kubernetes version >= v1.16 Update the container image and objects fields used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl get deploy -n velero -ojson \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.12.0\\\"#g\" \\ | sed \"s#\\\"server\\\",#\\\"server\\\",\\\"--uploader-type=$uploader_type\\\",#g\" \\ | sed \"s#default-volumes-to-restic#default-volumes-to-fs-backup#g\" \\ | sed \"s#default-restic-prune-frequency#default-repo-maintain-frequency#g\" \\ | sed \"s#restic-timeout#fs-backup-timeout#g\" \\ | kubectl apply -f - echo $(kubectl get ds -n velero restic -ojson) \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.12.0\\\"#g\" \\ | sed \"s#\\\"name\\\"\\: \\\"restic\\\"#\\\"name\\\"\\: \\\"node-agent\\\"#g\" \\ | sed \"s#\\[ \\\"restic\\\",#\\[ \\\"node-agent\\\",#g\" \\ | kubectl apply -f - kubectl delete ds -n velero restic --force --grace-period 0 ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.12.0 Git commit: <git SHA> Server: Version: v1.12.0 ``` If it's directly upgraded from v1.10 or higher, the other steps remain the same only except for step 3 above. The details as below: Update the container image used by the Velero deployment, plugin and, optionally, the node agent daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.12.0 \\ velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.8.0 \\ --namespace velero kubectl set image daemonset/node-agent \\ node-agent=velero/velero:v1.12.0 \\ --namespace velero ``` If upgraded from v1.9.x, there still remains some resources left over in the cluster and never used in v1.12.x, which could be deleted through kubectl and it is based on your desire: resticrepository CRD and related CRs velero-restic-credentials secret in velero install namespace" } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.12.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Access metric status of the operator ``` -h, --help help for metrics ``` - Run cilium-operator - List all metrics for the operator" } ]
{ "category": "Runtime", "file_name": "cilium-operator_metrics.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "English | There are two technologies in cloud-native networking: \"overlay network\" and \"underlay network\". Despite no strict definition for underlay and overlay networks in cloud-native networking, we can simply abstract their characteristics from many CNI projects. The two technologies meet the needs of different scenarios. Spiderpool is designed for underlay networks, and the following comparison of the two solutions can better illustrate the features and usage scenarios of Spiderpool. These solutions implement the decoupling of the Pod network and host network, such as , and other CNI plugins. Typically, they use tunnel technology such as vxlan to build an overlay network plane, and use NAT technology for north-south traffic. These IPAM solutions have the following characteristics: Divide Pod subnets into node-based IP blocks In terms of a smaller subnet mask, the Pod subnet is divided into smaller IP blocks, and each node is assigned one or more IP blocks depending on the actual IP allocation account. First, since the IPAM plugin on each node only needs to allocate and release IP addresses in the local IP block, there is no IP allocation conflict with IPAM on other nodes, achieving more efficient allocation. Second, a specific IP address follows an IP block and is allocated within one node all the time, so it cannot be assigned on other nodes together with a bound Pod. Sufficient IP address resources Subnets not overlapping with any CIDR, could be used by the cluster, so the cluster has enough IP address resources as long as NAT technology is used in an appropriate manner. As a result, IPAM components face less pressure to reclaim abnormal IP addresses. No requirement for static IP addresses For the static IP address requirement, there is a difference between a stateless application and a stateful application. Regarding stateless application like deployment, the Pod's name will change when the Pod restarts, and the business logic of the application itself is stateless. Thus static IP addresses means that all the Pod replicas are fixed in a set of IP addresses; for stateful applications such as statefulset, considering both the fixed information including Pod's names and stateful business logic, the strong binding of one Pod and one specific IP address needs to be implemented for static IP addresses. The \"overlay network solution\" mostly exposes the ingress and source addresses of services to the outside of the cluster with the help of NAT technology, and realizes the east-west communication through DNS, clusterIP and other technologies. In addition, although the IP block of IPAM fixes the IP to one node, it does not guarantee the application replicas follow the scheduling. Therefore, there is no scope for the static IP address capability. Most of the mainstream CNIs in the community have not yet supported \"static IP addressed\", or support it in a rough way. The advantage of the \"overlay network solution\" is that the CNI plugins are highly compatible with any underlying network environment, and can provide independent subnets with sufficient IP addresses for Pods. These solutions share the node's network for Pods, which means Pods can directly obtain IP addresses in the node network. Thus, applications can directly use their own IP addresses for east-west and north-south communications. 
There are two typical scenarios for underlay network solutions: clusters deployed on a \"legacy network\" and clusters deployed on an IAAS environment, such as a public" }, { "data": "The following summarizes the IPAM characteristics of the \"legacy network scenario\": An IP address able to be assigned to any node As the number of network devices in the data center increases and multi-cluster technology evolves, IPv4 address resources become scarce, thus requiring IPAM to improve the efficiency of IP usage. As the Pod replicas of the applications requiring \"static IP addresses\" could be scheduled to any node in the cluster and drift between nodes, IP addresses might drift together. Therefore, an IP address should be able to be allocated to a Pod on any node. Different replicas within one application could obtain IP addresses across subnets Take as an example one node could access subnet 172.20.1.0/24 while another node just only access subnet 172.20.2.0/24. In this case, when the replicas within one application need to be deployed across subnets, IPAM is required to be able to assign subnet-matched IP addresses to the application on different nodes. Static IP addresses For some traditional applications, the source IPs or destination IPs need to be sensed in the microservice. And network admins are used to enabling fine-grained network security control via firewalls and other means. Therefore, in order to reduce the transformation chores after the applications move to the Kubernetes, applications need static IP addresses. Pods with Multiple NICs need IP addresses of different underlay subnets Since the Pod is connected to an underlay network, it has the need for multiple NICs to reach different underlay subnets. IP conflict Underlay networks are more prone to IP conflicts. For instance, Pods conflict with host IPs outside the cluster, or conflict with other clusters under the same subnet. But it is difficult for IPAM to discover these conflicting IP addresses externally unless CNI plugins are involved for real-time IP conflict detection. Release and recover IP addresses Because of the scarcity of IP addresses in underlay networks and the static IP address requirements of applications, a newly launched Pod may fail due to the lack of IP addresses owing to some IP addresses not being released by abnormal Pods. This requires IPAMs to have a more accurate, efficient and timely IP recovery mechanism. The advantages of the underlay network solution include: no need for network NAT mapping, which makes cloud-based network transformation for applications way more convenient; the underlying network firewall and other devices can achieve relatively fine control of Pod communication; no tunneling technology contributes to improved throughput and latency performance of network communications. Any CNI project compatible with third-party IPAM plugins can work well with Spiderpool IPAM, such as: , , , , , , , When creating a Pod, it will follow the steps below to get IP allocations.The lifecycle of IP allocation involves three major stages: `candidate pool acquisition`, `candidate pool filtering`, and `candidate pool sorting`. Candidate pool acquisition: Spiderpool follows a strict rule of selecting pools from high to low priority. It identifies all pools that match the high priority rules and marks them as candidates for further consideration. 
Candidate pool filtering: Spiderpool applies filtering mechanisms such as affinity to select the appropriate candidate pools from the available options. This ensures that specific requirements or complex usage scenarios are satisfied. Candidate pool sorting: in cases where multiple candidate pools exist, Spiderpool sorts them based on the priority rules defined in the SpiderIPPool object. IP addresses are then allocated sequentially, starting from the pool with available" }, { "data": "IPs. Spiderpool offers a variety of pool selection rules when assigning IP addresses to Pods. The selection process strictly adheres to a high-to-low priority order. The following rules are listed in descending order of priority; if multiple rules apply at the same time, the preceding rule overrides the subsequent one. The 1st priority: SpiderSubnet annotation. The SpiderSubnet resource represents a collection of IP addresses. When an application requires a fixed IP address, the application administrator needs to inform their platform counterparts about the available IP addresses and routing attributes. However, as they belong to different operational departments, this process becomes cumbersome, resulting in complex workflows for creating each application. To simplify this, Spiderpool's SpiderSubnet feature automatically allocates IP addresses from subnets to IPPools and assigns fixed IP addresses to applications. This greatly reduces operational costs. When creating an application, you can use the `ipam.spidernet.io/subnets` or `ipam.spidernet.io/subnet` annotation to specify the Subnet. This allows an IP pool to be created automatically, with IP addresses randomly selected from the subnet, which can then be allocated as fixed IPs for the application. For more details, please refer to . The 2nd priority: SpiderIPPool annotation. Different IP addresses within a Subnet can be stored in separate instances of IPPool (Spiderpool ensures that there is no overlap between the address sets of IPPools). The size of the IP collection in a SpiderIPPool can vary based on requirements. This design is particularly beneficial when dealing with limited IP address resources in the underlay network. When creating an application, the SpiderIPPool annotation `ipam.spidernet.io/ippools` or `ipam.spidernet.io/ippool` can be used to bind different IPPools or to share the same IPPool. This allows all applications to share the same Subnet while maintaining \"micro-isolation\". For more details, please refer to . The 3rd priority: namespace default IP pool. By setting the annotation `ipam.spidernet.io/default-ipv4-ippool` or `ipam.spidernet.io/default-ipv6-ippool` on a namespace, you can specify that namespace's default IP pool. When creating an application in that namespace, if no other higher-priority pool rule applies, an IP address is allocated from that namespace's available candidate pools. For more details, please refer to . The 4th priority: CNI configuration file. The global CNI default pool can be set by configuring the `default_ipv4_ippool` and `default_ipv6_ippool` fields in the CNI configuration file. Multiple IP pools can be defined as alternative pools. When an application uses this CNI configuration network and invokes Spiderpool, each application replica is sequentially assigned an IP address according to the order of elements in the \"IP pool array\". 
In scenarios where nodes belong to different regions or data centers, if the node where an application replica is scheduled matches the node affinity rule of the first IP pool, the Pod obtains an IP from that pool. If it doesn't meet the criteria, Spiderpool attempts to assign an IP from the alternative pools until all options have been exhausted. For more information, please refer to . The 5th priority: cluster default IP pool. Within the SpiderIPPool CR object, setting the spec.default field to `true` designates the pool as the cluster's default IP pool (the default value is `false`). For more information, please refer to . To determine the availability of candidate IP pools for IPv4 and IPv6, Spiderpool filters them using the following rules: IP pools in the `terminating` state are filtered out. The `spec.disable` field of an IP pool indicates its availability. A value of `true` means the IP pool is not usable. Check if the" }, { "data": "`IPPool.Spec.NodeName` and `IPPool.Spec.NodeAffinity` match the Pod's scheduling node. Mismatching values result in filtering out the IP pool. Check if the `IPPool.Spec.NamespaceName` and `IPPool.Spec.NamespaceAffinity` match the Pod's namespace. Mismatching values lead to filtering out the IP pool. Check if the `IPPool.Spec.PodAffinity` matches the Pod's `matchLabels`. Mismatching values lead to filtering out the IP pool. Check if the `IPPool.Spec.MultusName` matches the current NIC Multus configuration of the Pod. If there is no match, the IP pool is filtered out. Check if all IPs within the IP pool are included in the IPPool instance's `exclude_ips` field. If they are, the IP pool is filtered out. Check if all IPs in the pool are reserved in a ReservedIP instance. If they are, the IP pool is filtered out. An IP pool will be filtered out if its available IP resources are exhausted. After filtering the candidate pools, Spiderpool may have multiple pools remaining. To determine the order of IP address allocation, Spiderpool applies custom priority rules to sort these candidates. IP addresses are then selected from the pools with available IPs in the following manner: IP pool resources with the `IPPool.Spec.PodAffinity` property are given the highest priority. IPPool resources with either the `IPPool.Spec.NodeName` or `IPPool.Spec.NodeAffinity` property are given the second-highest priority. `NodeName` takes precedence over `NodeAffinity`. Following that, IP pool resources with either the `IPPool.Spec.NamespaceName` or `IPPool.Spec.NamespaceAffinity` property hold the third-highest priority. `NamespaceName` takes precedence over `NamespaceAffinity`. IP pool resources with the `IPPool.Spec.MultusName` property receive the lowest priority. Here are some simple examples to illustrate these rules. IPPoolA with the properties `IPPool.Spec.PodAffinity` and `IPPool.Spec.NodeName` has higher priority than IPPoolB with the single affinity property `IPPool.Spec.PodAffinity`. IPPoolA with the single property `IPPool.Spec.PodAffinity` has higher priority than IPPoolB with the properties `IPPool.Spec.NodeName` and `IPPool.Spec.NamespaceName`. IPPoolA with the properties `IPPool.Spec.PodAffinity` and `IPPool.Spec.NodeName` has higher priority than IPPoolB with the properties `IPPool.Spec.PodAffinity`, `IPPool.Spec.NamespaceName` and `IPPool.Spec.MultusName`. If a Pod belongs to a StatefulSet, IP addresses that meet the aforementioned rules will be allocated with priority. When a Pod is restarted, it will attempt to reuse the previously assigned IP address. 
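The filtering and sorting rules above reference several `IPPool.Spec` fields; a rough SpiderIPPool sketch tying them together is shown below. It is illustrative only: the `apiVersion` and the exact field casing are assumptions and should be checked against the Spiderpool CRD installed in your cluster.

```yaml
apiVersion: spiderpool.spidernet.io/v2beta1   # assumed API version
kind: SpiderIPPool
metadata:
  name: worker-pool-v4
spec:
  subnet: 172.20.1.0/24
  ips:
    - 172.20.1.10-172.20.1.50
  excludeIPs:
    - 172.20.1.20          # never handed out
  default: false           # true would make this the cluster default pool
  disable: false           # true would filter this pool out entirely
  nodeAffinity:            # only Pods scheduled to matching nodes may use the pool
    matchLabels:
      topology.kubernetes.io/zone: zone-a
  podAffinity:             # only Pods with matching labels may use the pool
    matchLabels:
      app: demo
```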
When a pod is deleted normally, the CNI plugin is called to clean up the IP on the pod interface and mark the IP as free in the IPAM database. This makes sure all IPs are managed correctly and that no IP leakage occurs. In some cases, however, things go wrong and an IP in the IPAM database stays marked as used by a nonexistent pod, because the CNI plugin was not called correctly during pod deletion. This can happen in cases like the following: When the CNI plugin is called, its network communication goes wrong and it fails to release the IP. The container runtime goes wrong and fails to call the CNI plugin. A node breaks down and never recovers; the api-server puts the pods of the broken node into the `deleting` status, but the CNI plugin is never called. Incidentally, this fault can be simply simulated by removing the CNI binary on a host before deleting a pod. This issue leads to two bad results: a new pod may fail to run because the expected IP is still occupied, and the IP resources are gradually exhausted even though the actual number of pods does not grow. Some CNI or IPAM plugins cannot handle this issue. For some CNIs, the administrator needs to find the affected IPs and use a CLI tool to reclaim" }, { "data": "them. For some CNIs, an interval job finds the affected IPs but does not reclaim them in time. For some CNIs, there is no mechanism at all to fix the issue. For some CNIs, the IP CIDR is big enough that the leaked IP issue is not urgent. For Spiderpool, all IP resources are managed by the administrator and an application can be bound to fixed IPs, so IP reclamation needs to be finished in time. To prevent IPs from leaking when an ippool resource is deleted, Spiderpool has some rules: For an ippool whose IPs are still taken by pods, Spiderpool uses a webhook to reject the deletion request of the ippool resource. For a deleting ippool, the IPAM plugin stops assigning IPs from it, but can still release IPs from it. A finalizer is set on the ippool by the spiderpool controller once it is created. After the ippool goes into the `deleting` status, the spiderpool controller removes the finalizer when all IPs in the ippool are free, and the ippool object is then deleted. Once a pod is created and gets IPs from a `SpiderIPPool`, Spiderpool creates a corresponding `SpiderEndpoint` object at the same time. It carries a finalizer (except for StatefulSet pods) and its `OwnerReference` is set to the pod. When a pod is deleted, Spiderpool releases its IPs using the data recorded in the corresponding `SpiderEndpoint` object; the spiderpool controller then removes the `Current` data of the SpiderEndpoint object and removes its finalizer. (For a StatefulSet `SpiderEndpoint`, Spiderpool will delete it directly once its `Current` data has been cleaned up.) In Kubernetes, garbage collection (GC for short) is very important for the recycling of IP addresses. The availability of IP addresses is critical to whether a Pod can start successfully. The GC mechanism automatically reclaims unused IP addresses, avoiding waste of resources and exhaustion of IP addresses. This article introduces Spiderpool's GC capabilities. The IP addresses assigned to Pods are recorded in IPAM, but those Pods may no longer exist in the Kubernetes cluster. Such IPs can be called `zombie IPs`. Spiderpool can recycle `zombie IPs`. 
Its implementation principle is as follows: when a Pod is deleted in the cluster but, due to problems such as a network exception or a cni binary crash, the call to `cni delete` fails, the IP address is not reclaimed by the CNI. In failure scenarios such as a `cni delete` failure, if a Pod that has been assigned an IP is destroyed but the IP address is still recorded in the IPAM, the phenomenon of a zombie IP is formed. For this kind of problem, Spiderpool automatically recycles these zombie IP addresses based on a periodic and event-driven scanning mechanism. In some accidents, a stateless Pod gets stuck in the `Terminating` phase; Spiderpool will automatically release its IP address after the Pod's `spec.terminationGracePeriodSeconds` + `SPIDERPOOL_GC_ADDITIONAL_GRACE_DELAY` period. This feature can be controlled by the environment variable `SPIDERPOOL_GC_STATELESS_TERMINATING_POD_ON_READY_NODE_ENABLED`. This capability can be used to handle the failure scenario of `unexpected Pod downtime with Node ready`. After a node goes down unexpectedly, Pods on it remain permanently in the `Terminating` phase in the cluster, and the IP addresses occupied by those Pods cannot be released. For a stateless Pod in the `Terminating` phase, Spiderpool will automatically release its IP address after the Pod's `spec.terminationGracePeriodSeconds`. This feature can be controlled by the environment variable `SPIDERPOOL_GC_STATELESS_TERMINATING_POD_ON_NOT_READY_NODE_ENABLED`. This capability can be used to handle the failure scenario of `unexpected node downtime`." } ]
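To see how these GC switches are currently set, one option (assuming Spiderpool was installed into `kube-system` with a controller deployment named `spiderpool-controller`; adjust both to match your installation) is:

```bash
kubectl -n kube-system get deployment spiderpool-controller -o yaml | grep -A1 SPIDERPOOL_GC
```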
{ "category": "Runtime", "file_name": "ipam-des.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Sorry for slacking off last week on the report. We totally spaced it. This week we will go over what happened last week and this week. https://github.com/containerd/containerd/pull/496 After receiving feedback on the `snapshot.Driver` interface, now known as the `Snapshotter`, it was found that the behavior of active and committed snapshots was confusing. Specifically, it was hard to tell which methods to use based on the state of the snapshot. It was also confusing which identifier to use based on the state. To clean this up, we moved to using \"Active\" and \"Committed\" as the type of a snapshot, as opposed to the state of the snapshot. These lifecycle relationships are such that \"Active\" snapshots can only be created from a \"Committed\" snapshot and \"Committed\" snapshots must come from an \"Active\" snapshot. We retain the concept of a parent for \"Active\" and \"Committed\" snapshots, but clarified that only \"Committed\" snapshots can be a parent. As part of this, we unified the keyspace of snapshots. For common operations, such as removal and stat, we only have a single method that works for both active and committed snapshots. For methods that take one or the other, the restriction is called out. `Remove` and `Delete` were consolidated as part of this. This also makes it easier to handle scenarios that use the snapshot identity as a lock to coordinate multiple writers. `Exists` and `Parent` have also be consolidated into single `Stat` call. This returns an `Info` structure which includes information about state and parentage. We also simplify the `Walk` method as part of this. Effectively, we now have snapshots that are either active or committed and a much smaller interface! We spend time talking with people implementing Windows support as well as a few other" }, { "data": "One the major issues with our current approach was that bundles were a central part of our architecture. The content and storage subsystems would produce bundles and the execution subsystem would consume them. However, with a bundle being on the filesystem, having this concept does not work as smoothly on Windows as it would for Unix platforms. So the major design change is that bundles will be an implementation detail of the runtime and not a core part of the API. You will no longer pass the bundle path to containerd, it will manage bundles internally and the root filesystem mounts along with the spec, passed via the `Any` type, will be API fields for the create request of a container. With the bundles change above we also need to make sure changes for where containers are created and who does the supervision after creation. The runtimes in containerd, things such as Linux, Windows, and Solaris, will be responsible for the creation of containers and loading of containers when containerd boots. The containerd core will be responsible for interfacing with the GRPC API and managing a common `Container` interface that the runtimes produce. The supervision of containers will be handled in the core. This is not much of a change from today, just refining where responsibilities lie. Overall design has been a large focus for us at the moment. While containerd can run containers today it is not optimized in terms of speed or design. With the work we did this week to make sure that containerd will work across many different platforms with first class support, not as an after thought, we are in a good position to really start on the development work and get a POC out within the next few weeks. 
This POC should give you a good idea of what containerd can do, its APIs, and how users will interact with its various subsystems." } ]
{ "category": "Runtime", "file_name": "2017-02-10.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "Extend CSI snapshot to support Longhorn snapshot Before this feature, if the user uses , they can only create Longhorn backups (out of cluster). We want to extend the CSI Snapshotter to support creating for Longhorn snapshot (in-cluster) as well. https://github.com/longhorn/longhorn/issues/2534 Extend the CSI Snapshotter to support: Creating Longhorn snapshot Deleting Longhorn snapshot Creating a new PVC from a CSI snapshot that is associated with a Longhorn snapshot Longhorn snapshot Reverting is not a goal because CSI snapshotter doesn't support replace in place for now: https://github.com/container-storage-interface/spec/blob/master/spec.md#createsnapshot Before this feature is implemented, users can only use CSI Snapshotter to create/restore Longhorn backups. This means that users must set up a backup target outside of the cluster. Uploading/downloading data from backup target is a long/costly operation. Sometimes, users might just want to use CSI Snapshotter to take an in-cluster Longhorn snapshot and create a new volume from that snapshot. The Longhorn snapshot operation is cheap and faster than the backup operation and doesn't require setting up a backup target. To use this feature, users need to do: Deploy the CSI snapshot CRDs, Controller as instructed at https://longhorn.io/docs/1.2.3/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/ Deploy a VolumeSnapshotClass with the parameter `type: longhorn-snapshot`. I.e., ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: longhorn-snapshot driver: driver.longhorn.io deletionPolicy: Delete parameters: type: longhorn-snapshot ``` To create a new CSI snapshot associated with a Longhorn snapshot of the volume `test-vol`, users deploy the following VolumeSnapshot CR: ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot spec: volumeSnapshotClassName: longhorn-snapshot source: persistentVolumeClaimName: test-vol ``` A new Longhorn snapshot is created for the volume `test-vol` To create a new PVC from the CSI snapshot, users can deploy the following yaml: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-snapshot-pvc spec: storageClassName: longhorn dataSource: name: test-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi # should be the same as the size of `test-vol` ``` A new PVC will be created with the same content as in the VolumeSnapshot `test-snapshot` Deleting the VolumeSnapshot `test-snapshot` will lead to the deletion of the corresponding Longhorn snapshot of the volume `test-vol` None We follow the specification in when supporting the CSI snapshot. We define a new parameter in the VolumeSnapshotClass `type`. The value of the parameter `type` can be `longhorn-snapshot` or `longhorn-backup`. When `type` is `longhorn-snapshot` it means that the CSI VolumeSnapshot created with this VolumeSnapshotClass is associated with a Longhorn snapshot. When `type` is `longhorn-backup` it means that the CSI VolumeSnapshot created with this VolumeSnapshotClass is associated with a Longhorn backup. In , we get the value of parameter" }, { "data": "If it is `longhorn-backup`, we take a Longhorn backup as before. If it is `longhorn-snapshot` we do: Get the name of the Longhorn volume Check if the volume is in attached state. If it is not, return `codes.FailedPrecondition`. We cannot take a snapshot of non-attached volume. 
Check if a Longhorn snapshot with the same name as the requested CSI snapshot already exists. If yes, return OK without taking a new Longhorn snapshot. Take a new Longhorn snapshot. Encode the snapshotId in the format `snap://volume-name/snapshot-name`. This snaphotId will be used in the later CSI CreateVolume and DeleteSnapshot call. In : If the VolumeContentSource is a `VolumeContentSource_Snapshot` type, decode the snapshotId in the format from the above step. Create a new volume with the `dataSource` set to `snap://volume-name/snapshot-name`. This will trigger Longhorn to clone the content of the snapshot to the new volume. Note that if the source volume is not attached, Longhorn cannot verify the existence of the snapshot inside the Longhorn volume. This means that and new PVC cannot be provisioned. In : Decode the snapshotId in the format from the above step. If the type is `longhorn-backup` we delete the backup as before. If the type is `longhorn-snapshot`, we delete the corresponding Longhorn snapshot of the source volume. If the source volume or the snapshot is no longer exist, we return OK as specified in Integration test plan. Deploy the CSI snapshot CRDs, Controller as instructed at https://longhorn.io/docs/1.2.3/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/ Deploy 4 VolumeSnapshotClass: ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: longhorn-backup-1 driver: driver.longhorn.io deletionPolicy: Delete ``` ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: longhorn-backup-2 driver: driver.longhorn.io deletionPolicy: Delete parameters: type: longhorn-backup ``` ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: longhorn-snapshot driver: driver.longhorn.io deletionPolicy: Delete parameters: type: longhorn-snapshot ``` ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: invalid-class driver: driver.longhorn.io deletionPolicy: Delete parameters: type: invalid ``` Create Longhorn volume `test-vol` of 5GB. Create PV/PVC for the Longhorn volume. Create a workload that uses the volume. Write some data to the volume. Make sure data persist to the volume by running `sync` Set up a backup target for Longhorn `type` is `longhorn-backup` or `\"\"` Create a VolumeSnapshot with the following yaml ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-longhorn-backup spec: volumeSnapshotClassName: longhorn-backup-1 source: persistentVolumeClaimName: test-vol ``` Verify that a backup is created. Delete the `test-snapshot-longhorn-backup` Verify that the backup is deleted Create the `test-snapshot-longhorn-backup` VolumeSnapshot with `volumeSnapshotClassName: longhorn-backup-2` Verify that a backup is created. `type` is `longhorn-snapshot` volume is in detached state. Scale down the workload of `test-vol` to detach the volume. Create `test-snapshot-longhorn-snapshot` VolumeSnapshot with `volumeSnapshotClassName: longhorn-snapshot`. Verify the error `volume" }, { "data": "invalid state ... for taking snapshot` in the Longhorn CSI plugin. volume is in attached state. Scale up the workload to attach `test-vol` Verify that a Longhorn snapshot is created for the `test-vol`. invalid type Create `test-snapshot-invalid` VolumeSnapshot with `volumeSnapshotClassName: invalid-class`. Verify the error `invalid snapshot type: %v. 
Must be %v or %v or` in the Longhorn CSI plugin. Delete `test-snapshot-invalid` VolumeSnapshot. From `longhorn-backup` type Create a new PVC with the flowing yaml: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-pvc spec: storageClassName: longhorn dataSource: name: test-snapshot-longhorn-backup kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Attach the PVC `test-restore-pvc` and verify the data Delete the PVC From `longhorn-snapshot` type Source volume is attached && Longhorn snapshot exist Create a PVC with the following yaml: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-pvc spec: storageClassName: longhorn dataSource: name: test-snapshot-longhorn-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Attach the PVC `test-restore-pvc` and verify the data Delete the PVC Source volume is detached Scale down the workload to detach the `test-vol` Create the same PVC `test-restore-pvc` as in the `Source volume is attached && Longhorn snapshot exist` section Verify that PVC provisioning failed because the source volume is detached so Longhorn cannot verify the existence of the Longhorn snapshot in the source volume. Scale up the workload to attach `test-vol` Wait for PVC to finish provisioning and be bounded Attach the PVC `test-restore-pvc` and verify the data Delete the PVC Source volume is attached && Longhorn snapshot doesnt exist Find the VolumeSnapshotContent of the VolumeSnapshot `test-snapshot-longhorn-snapshot`. Find the Longhorn snapshot name inside the field `VolumeSnapshotContent.snapshotHandle`. Go to Longhorn UI. Delete the Longhorn snapshot. Repeat steps in the section `Longhorn snapshot exist` above. PVC should be stuck in provisioning because Longhorn snapshot of the source volume doesn't exist. Delete the PVC `test-restore-pvc` PVC `longhorn-backup` type Done in the above step `longhorn-snapshot` type volume is attached && snapshot doesnt exist Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot` and verify that the VolumeSnapshot is deleted. volume is attached && snapshot exist Recreate the VolumeSnapshot `test-snapshot-longhorn-snapshot` Verify the creation of Longhorn snapshot with the name in the field `VolumeSnapshotContent.snapshotHandle` Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot` Verify that Longhorn snapshot is removed or marked as removed Verify that the VolumeSnapshot `test-snapshot-longhorn-snapshot` is deleted. volume is detached Recreate the VolumeSnapshot `test-snapshot-longhorn-snapshot` Scale down the workload to detach `test-vol` Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot` Verify that VolumeSnapshot `test-snapshot-longhorn-snapshot` is stuck in deleting No upgrade strategy needed We need to update the docs and examples to reflect the new parameter in the VolumeSnapshotClass, `type`." } ]
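When walking through these test cases, the `snap://volume-name/snapshot-name` encoding described in the design can be read straight from the CSI objects, for example (the second command takes the content name printed by the first):

```bash
kubectl get volumesnapshot test-snapshot-longhorn-snapshot \
  -o jsonpath='{.status.boundVolumeSnapshotContentName}'
kubectl get volumesnapshotcontent <content-name-from-above> \
  -o jsonpath='{.status.snapshotHandle}'
```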
{ "category": "Runtime", "file_name": "20220110-extend-csi-snapshot-to-support-longhorn-snapshot.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Announcing a new GitHub home for Velero slug: announcing-gh-move image: /img/posts/vmware-tanzu.png excerpt: The next Velero release (v1.2) will be built out of a new GitHub organization, and we have significant changes to our plugins. author_name: Carlisia Thompson author_avatar: /img/contributors/carlisia-thompson.png categories: ['velero'] tags: ['Carlisia Thompson'] We are now part of a brand new GitHub organization: . VMware Tanzu is a new family of projects, products and services for the cloud native world. With the Velero project being a cloud native technology that extends Kubernetes, it is only natural that it would be moved to sit alongside all the other VMware-supported cloud native repositories. You can read more about this change in this . The new Velero repository can now be found at . Past issues, pull requests, commits, contributors, etc., have all been moved to this repo. The next Velero release, version 1.2, will be built out of this new repository and is slated to come out at the end of October. The main for version 1.2 is the restructuring around how we will be handling all Object Store and Volume Snapshotter plugins. Previously, Velero included both types of plugins for AWS, Microsoft Azure, and Google Cloud Platform (GCP) in-tree. Beginning with Velero 1.2, these plugins will be moved out of tree and installed like any other plugin. With more and more providers wanting to support Velero, it gets more difficult to justify excluding new plugins from being in-tree while continuing to maintain the AWS, Microsoft Azure, and GCP plugins in-tree. At the same time, if we were to include any more plugins in-tree, it would ultimately become the responsibility of the Velero team to maintain an increasing number of plugins in an unsustainable way. As the opportunity to move to a new GitHub organization presented itself, we thought it was a good time to make structural changes. The three original native plugins and their respective documentation will each have their own repo under the new VMware Tanzu GitHub organization as of version 1.2. You will be able to find them by looking up our list of . Maintenance of these plugins will continue to be done by the Velero core team as usual, although we will gladly promote active contributors to maintainers. This change mainly aims to achieve the following goals: Interface with all plugins equally and consistently Encourage developers to get involved with the smaller code base of each plugin and potentially be promoted to plugin maintainers Iterate on plugins separately from the core codebase Reduce the size of the Velero binaries and images by extracting these SDKs and having a separate release for each individual provider Instructions for upgrading to version 1.2 and installing Velero and its plugins will be added to . As always, we welcome feedback and participation in the development of Velero. All information on how to contact us or become involved can be found here: https://velero.io/community/" } ]
{ "category": "Runtime", "file_name": "2019-10-01-announcing-gh-move.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide shows you how to set the DRBD Module Loader when using [Talos Linux]. To complete this guide, you should be familiar with: editing `LinstorSatelliteConfiguration` resources. using the `talosctl` command line to to access Talos Linux nodes. using the `kubectl` command line tool to access the Kubernetes cluster. By default, the DRBD Module Loader will try to find the necessary header files to build DRBD from source on the host system. In [Talos Linux] these header files are not included in the host system. Instead, the Kernel modules is packed into a system extension. Ensure Talos has the correct `drbd` loaded for the running Kernel. This can be achieved by updating the machine config: ```yaml machine: install: extensions: image: ghcr.io/siderolabs/drbd:9.2.0-v1.3.6 kernel: modules: name: drbd parameters: usermode_helper=disabled name: drbdtransporttcp ``` NOTE: Replace `v1.3.6` with the Talos version running. Validate `drbd` module is loaded: ```shell $ talosctl -n <NODE_IP> read /proc/modules drbdtransporttcp 28672 - - Live 0xffffffffc046c000 (O) drbd 643072 - - Live 0xffffffffc03b9000 (O) ``` Validate `drbd` module parameter `usermode_helper` is set to `disabled`: ```shell $ talosctl -n <NODEIP> read /sys/module/drbd/parameters/usermodehelper disabled ``` To change the DRBD Module Loader, so that it uses the modules provided by system extension, apply the following `LinstorSatelliteConfiguration`: ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: talos-loader-override spec: podTemplate: spec: initContainers: name: drbd-shutdown-guard $patch: delete name: drbd-module-loader $patch: delete volumes: name: run-systemd-system $patch: delete name: run-drbd-shutdown-guard $patch: delete name: systemd-bus-socket $patch: delete name: lib-modules $patch: delete name: usr-src $patch: delete name: etc-lvm-backup hostPath: path: /var/etc/lvm/backup type: DirectoryOrCreate name: etc-lvm-archive hostPath: path: /var/etc/lvm/archive type: DirectoryOrCreate ``` Explanation: `/etc/lvm/` is read-only in Talos and therefore can't be used. Let's use `/var/etc/lvm/` instead. Talos does not ship with Systemd, so everything Systemd related needs to be removed `/usr/lib/modules` and `/usr/src` are not needed as the Kernel module is already compiled and needs just to be used." } ]
{ "category": "Runtime", "file_name": "talos.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "Code quality: move magefile in its own subdir/submodule to remove magefile dependency on logrus consumer improve timestamp format documentation Fixes: fix race condition on logger hooks Correct versioning number replacing v1.7.1. Beware this release has introduced a new public API and its semver is therefore incorrect. Code quality: use go 1.15 in travis use magefile as task runner Fixes: small fixes about new go 1.13 error formatting system Fix for long time race condiction with mutating data hooks Features: build support for zos Fixes: the dependency toward a windows terminal library has been removed Features: a new buffer pool management API has been added a set of `<LogLevel>Fn()` functions have been added Fixes: end of line cleanup revert the entry concurrency bug fix whic leads to deadlock under some circumstances update dependency on go-windows-terminal-sequences to fix a crash with go 1.14 Features: add an option to the `TextFormatter` to completely disable fields quoting Code quality: add golangci linter run on travis Fixes: add mutex for hooks concurrent access on `Entry` data caller function field for go1.14 fix build issue for gopherjs target Feature: add an hooks/writer sub-package whose goal is to split output on different stream depending on the trace level add a `DisableHTMLEscape` option in the `JSONFormatter` add `ForceQuote` and `PadLevelText` options in the `TextFormatter` Fixes build break for plan9, nacl, solaris This new release introduces: Enhance TextFormatter to not print caller information when they are empty (#944) Remove dependency on golang.org/x/crypto (#932, #943) Fixes: Fix Entry.WithContext method to return a copy of the initial entry (#941) This new release introduces: Add `DeferExitHandler`, similar to `RegisterExitHandler` but prepending the handler to the list of handlers (semantically like `defer`) (#848). Add `CallerPrettyfier` to `JSONFormatter` and `TextFormatter` (#909, #911) Add `Entry.WithContext()` and `Entry.Context`, to set a context on entries to be used e.g. in hooks (#919). Fixes: Fix wrong method calls `Logger.Print` and `Logger.Warningln` (#893). Update `Entry.Logf` to not do string formatting unless the log level is enabled (#903) Fix infinite recursion on unknown `Level.String()` (#907) Fix race condition in `getCaller` (#916). 
This new release introduces: Log, Logf, Logln functions for Logger and Entry that take a Level Fixes: Building prometheus node_exporter on AIX (#840) Race condition in TextFormatter (#468) Travis CI import path (#868) Remove coloured output on Windows (#862) Pointer to func as field in JSONFormatter (#870) Properly marshal Levels (#873) This new release introduces: A new method `SetReportCaller` in the `Logger` to enable the file, line and calling function from which the trace has been issued A new trace level named `Trace` whose level is below `Debug` A configurable exit function to be called upon a Fatal trace The `Level` object now implements `encoding.TextUnmarshaler` interface This is a bug fix" }, { "data": "fix the build break on Solaris don't drop a whole trace in JSONFormatter when a field param is a function pointer which can not be serialized This new release introduces: several fixes: a fix for a race condition on entry formatting proper cleanup of previously used entries before putting them back in the pool the extra new line at the end of message in text formatter has been removed a new global public API to check if a level is activated: IsLevelEnabled the following methods have been added to the Logger object IsLevelEnabled SetFormatter SetOutput ReplaceHooks introduction of go module an indent configuration for the json formatter output colour support for windows the field sort function is now configurable for text formatter the CLICOLOR and CLICOLOR\\_FORCE environment variable support in text formater This new release introduces: a new api WithTime which allows to easily force the time of the log entry which is mostly useful for logger wrapper a fix reverting the immutability of the entry given as parameter to the hooks a new configuration field of the json formatter in order to put all the fields in a nested dictionnary a new SetOutput method in the Logger a new configuration of the textformatter to configure the name of the default keys a new configuration of the text formatter to disable the level truncation Fix hooks race (#707) Fix panic deadlock (#695) Fix race when adding hooks (#612) Fix terminal check in AppEngine (#635) Replace example files with testable examples bug: quote non-string values in text formatter (#583) Make (Logger) SetLevel a public method bug: fix escaping in text formatter (#575) Officially changed name to lower-case bug: colors on Windows 10 (#541) bug: fix race in accessing level (#512) feature: add writer and writerlevel to entry (#372) bug: fix undefined variable on solaris (#493) formatter: configure quoting of empty values (#484) formatter: configure quoting character (default is `\"`) (#484) bug: fix not importing io correctly in non-linux environments (#481) bug: fix windows terminal detection (#476) bug: fix tty detection with custom out (#471) performance: Use bufferpool to allocate (#370) terminal: terminal detection for app-engine (#343) feature: exit handler (#375) feature: Add a test hook (#180) feature: `ParseLevel` is now case-insensitive (#326) feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308) performance: avoid re-allocations on `WithFields` (#335) logrus/text_formatter: don't emit empty msg logrus/hooks/airbrake: move out of main repository logrus/hooks/sentry: move out of main repository logrus/hooks/papertrail: move out of main repository logrus/hooks/bugsnag: move out of main repository logrus/core: run tests with `-race` logrus/core: detect TTY based on `stderr` logrus/core: support `WithError` 
on logger logrus/core: Solaris support logrus/core: fix possible race (#216) logrus/doc: small typo fixes and doc improvements hooks/raven: allow passing an initialized client logrus/core: revert #208 formatter/text: fix data race (#218) logrus/core: fix entry log level (#208) logrus/core: improve performance of text formatter by 40% logrus/core: expose `LevelHooks` type logrus/core: add support for DragonflyBSD and NetBSD formatter/text: print structs more verbosely logrus: fix more Fatal family functions logrus: fix not exiting on `Fatalf` and `Fatalln` logrus: defaults to stderr instead of stdout hooks/sentry: add special field for `http.Request` formatter/text: ignore Windows for colors formatter/\\: allow configuration of timestamp layout formatter/text: Add configuration option for time format (#158)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "The base policy for a VM with its firewall enabled is: block all inbound traffic allow all outbound traffic All firewall rules applied to a VM are applied on top of those defaults. Firewall rules can affect one VM (using the vm target) or many (using the tag or all vms target types). Adding and updating rules takes effect immediately. Adding or removing tags on a VM causes rules that apply to those tags to be added or removed immediately. In the case of two rules that affect the same VM and port and have the same priority level (0 when one isn't specified), the rule that goes counter to the default policy takes precedence by default. This means: If you have an incoming BLOCK and an incoming ALLOW rule for the same VM and port of the same priority, the ALLOW will override. Give the BLOCK a higher priority to have it applied first. If you have an outgoing BLOCK and an outgoing ALLOW rule for the same VM and port of the same priority, the BLOCK will override. Give the ALLOW a higher priority to have it applied first. Rules are created and updated using a JSON payload as in this example: { \"rule\": \"FROM any TO all vms ALLOW tcp port 22\", \"enabled\": true, \"owner_uuid\": \"5c3ea269-75b8-42fa-badc-956684fb4d4e\" } The properties of this payload are: rule* (required): the firewall rule. See the Rule Syntax section below for the syntax. enabled* (boolean, optional): If set to true, the rule will be applied to VMs. If set to false, the rule will be added but not applied. global* (boolean, optional): If set, the rule will be applied to all VMs in the datacenter, regardless of owner. owner_uuid* (UUID, optional): If set, restricts the set of VMs that the rule can be applied to VMs owned by this UUID. Note that only one of owner_uuid or global can be set at a time for a rule. Firewall rules are in the following format: <p style=\"text-align: center\"> <img alt=\"Rules are of the form: FROM targetlist TO targetlist action protocol ports/types\" src=\"./media/img/rule.svg\" /> </p> Affected sources and destinations can be defined as a list of targets in the following syntax: <p style=\"text-align: center\"> <img alt=\"Target List Keywords: ALL VMS, ANY, or a list of targets separated by OR\" src=\"./media/img/target-list.svg\" /> </p> from targets and to targets can be any of the following types (see the Target Types section below): <p style=\"text-align: center\"> <img alt=\"Target Keywords: VM, IP, SUBNET, TAG\" src=\"./media/img/target.svg\" /> </p> Protocols can be targeted using: <p style=\"text-align: center\"> <img alt=\"Protocol Keywords: TCP, UDP, ICMP, ICMP6, AH, ESP\" src=\"./media/img/protocol.svg\" /> </p> The limits for the parameters are: 24 from targets 24 to targets 8 ports or types vm <uuid> Targets the VM with that UUID. Example: FROM any to vm 04128191-d2cb-43fc-a970-e4deefe970d8 ALLOW tcp port 80 Allows HTTP traffic from any host to VM" }, { "data": "ip <IP address> Targets the specified IPv4 or IPv6 address. Example: FROM all vms to (ip 10.2.0.1 OR ip fd22::1234) BLOCK tcp port 25 Blocks SMTP traffic to that IP. subnet <subnet CIDR> Targets the specified IPv4 or IPv6 subnet range. Example: FROM subnet 10.8.0.0/16 TO vm 0f570678-c007-4610-a2c0-bbfcaab9f4e6 ALLOW tcp port 443 Allows HTTPS traffic from a private IPv4 /16 to the specified VM. Example: FROM subnet fd22::/64 TO vm 0f570678-c007-4610-a2c0-bbfcaab9f4e6 ALLOW tcp port 443 Allows HTTPS traffic from a private IPv6 /64 to the specified VM. 
tag <name> tag <name> = <value> tag \"<name with spaces>\" = \"<value with spaces>\" Targets all VMs with the specified tag, or all VMs with the specified tag and value. Both tag name and value can be quoted if they contain spaces. Examples: FROM all vms TO tag syslog ALLOW udp port 514 Allows syslog traffic from all VMs to syslog servers. FROM tag role = db TO tag role = www ALLOW tcp port 5432 Allows database traffic from databases to webservers. All other VMs with role tags (role = staging, for example) will not be affected by this rule. FROM all vms TO tag \"VM type\" = \"LDAP server\" ALLOW tcp PORT 389 Allow LDAP access from all VMs to LDAP servers. all vms Targets all VMs. Example: FROM all vms TO all vms ALLOW tcp port 22 Allows ssh traffic between all VMs. any Targets any host (any IPv4 address). Example: FROM any TO all vms ALLOW tcp port 80 Allows HTTP traffic from any IP to all VMs. ( <target> OR <target> OR ... ) The vm, ip, subnet and tag target types can be combined into a list surrounded by parentheses and joined by OR. Example: FROM (vm 163dcedb-828d-43c9-b076-625423250ee2 OR tag db) TO (subnet 10.2.2.0/24 OR ip 10.3.0.1) BLOCK tcp port 443 Blocks HTTPS traffic to an internal subnet and IP. ALLOW BLOCK Actions can be one of ALLOW or BLOCK. Note that certain combinations of actions and directions will essentially have no effect on the behaviour of a VM's firewall. For example: FROM any TO all vms BLOCK tcp port 143 Since the default rule set blocks all incoming ports, this rule doesn't really have an effect on the VMs. Another example: FROM all vms TO any ALLOW tcp port 25 Since the default policy allows all outbound traffic, this rule doesn't have an effect. tcp udp icmp icmp6 ah esp The protocol can be one of tcp, udp, icmp(6), ah or esp. The protocol dictates whether ports or types can be used (see the Ports section below). <p style=\"text-align: center\"> <img alt=\"All, specific, or ranges of ports can be allowed and blocked\" src=\"./media/img/port-list.svg\" /> </p> For TCP and UDP, this specifies the port numbers that the rule applies to. Port numbers must be between 1 and 65535," }, { "data": "Ranges are written as two port numbers separated by a - (hyphen), with the lower number coming first, with optional spaces around the hyphen. Port ranges are inclusive, so writing the range \"20 - 22\" would cause the rule to apply to the ports 20, 21 and 22. Examples: FROM tag www TO any ALLOW tcp (port 80 AND port 443) Allows HTTP and HTTPS traffic from any IP to all webservers. FROM tag www TO any ALLOW tcp ports 80, 443, 8000-8100 Allows traffic on HTTP, HTTPS and common alternative HTTP ports from any IP to all webservers. <p style=\"text-align: center\"> <img alt=\"All ICMP types can be specified, or a list of specific ones\" src=\"./media/img/type-list.svg\" /> </p> For ICMP, this specifies the ICMP type and optional code that the rule applies to. Types and codes must be between 0 and 255, inclusive. Examples: FROM any TO all vms ALLOW icmp TYPE 8 CODE 0 Allows pinging all VMs. The IPv6 equivalent would be: FROM any TO all vms ALLOW icmp6 TYPE 128 CODE 0 And to block outgoing replies: FROM all vms TO any BLOCK icmp TYPE 0 FROM all vms TO any BLOCK icmp6 TYPE 129 priority <level> Specifying a priority for a rule allows defining its relation with other rules. By default, a rule has a priority level of 0, the lowest priority. Rules with a higher priority will be used before ones with a lower priority. The highest level that can be specified is 100. 
Examples: FROM any TO tag mta ALLOW tcp PORT 25 FROM subnet 10.20.30.0/24 TO tag mta BLOCK tcp PORT 25 PRIORITY 1 Allows traffic from anyone but 10.20.30.0/24 to access an MTA. FROM all vms TO any BLOCK tcp PORT all FROM all vms TO any ALLOW tcp PORT 22 PRIORITY 1 Blocks all outbound traffic, overriding the default outbound policy, except for SSH. FROM all vms TO tag syslog ALLOW udp port 514 Allows syslog traffic from all VMs to syslog servers. FROM tag role = db TO tag role = www ALLOW tcp port 5432 Allows database traffic from databases to webservers. FROM all vms TO all vms ALLOW tcp port 22 Allows ssh traffic between all VMs. FROM any TO all vms ALLOW tcp port 80 Allow HTTP traffic from any host to all VMs. FROM any TO all vms ALLOW ah FROM any TO all vms ALLOW esp FROM any TO all vms ALLOW udp (PORT 500 and PORT 4500) Allows traffic from any host to all VMs. This section explains error messages. The rule you're trying to create doesn't contain any targets that will actually cause rules to be applied to VMs. Targets that will cause rules to be applied are: tag vm all vms Some examples of rules that would cause this message include: FROM any TO any ALLOW tcp port 22 FROM ip 192.168.1.3 TO subnet 192.168.1.0/24 ALLOW tcp port 22" } ]
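On SmartOS, rules in the JSON payload format shown earlier are typically managed with the `fwadm` tool; a minimal sketch (treat the exact invocation as an assumption and check the `fwadm` man page) would be:

```bash
echo '{
  "rule": "FROM any TO all vms ALLOW tcp port 80",
  "enabled": true
}' | fwadm add
fwadm list
```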
{ "category": "Runtime", "file_name": "rules.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "Status: Accepted This document proposes a solution to select an API group version to restore from the versions backed up using the feature flag EnableAPIGroupVersions. It is possible that between the time a backup has been made and a restore occurs that the target Kubernetes version has incremented more than one version. In such a case where at least a versions of Kubernetes was skipped, the preferred source cluster's API group versions for resources may no longer be supported by the target cluster. With , all supported API group versions were backed up if the EnableAPIGroupVersions feature flag was set for Velero. The next step (outlined by this design proposal) will be to see if any of the backed up versions are supported in the target cluster and if so, choose one to restore for each backed up resource. Choose an API group to restore from backups given a priority system or a user-provided prioritization of versions. Restore resources using the chosen API group version. Allow users to restore onto a cluster that is running a Kubernetes version older than the source cluster. The changes proposed here only allow for skipping ahead to a newer Kubernetes version, but not going backward. Allow restoring from backups created using Velero version 1.3 or older. This proposal will only work on backups created using Velero 1.4+. Modifying the compressed backup tarball files. We don't want to risk corrupting the backups. Using plugins to restore a resource when the target supports none of the source cluster's API group versions. The ability to use plugins will hopefully be something added in the future, but not at this time. During restore, the proposal is that Velero will determine if the `APIGroupVersionsFeatureFlag` was enabled in the target cluster and `Status.FormatVersion 1.1.0` was used during backup. Only if these two conditions are met will the changes proposed here take effect. The proposed code starts with creating three lists for each backed up resource. The three lists will be created by (1) reading the directory names in the backup tarball file and seeing which API group versions were backed up from the source cluster, (2) looking at the target cluster and determining which API group versions are supported, and (3) getting ConfigMaps from the target cluster in order to get user-defined prioritization of versions. The three lists will be used to create a map of chosen versions for each resource to restore. If there is a user-defined list of priority versions, the versions will be checked against the supported versions lists. The highest user-defined priority version that is/was supported by both target and source clusters will be the chosen version for that resource. If no user specified versions are supported by neither target nor source, the versions will be logged and the restore will continue with other prioritizations. Without a user-defined prioritization of versions, the following version prioritization will be followed, starting from the highest priority: target cluster preferred version, source cluster preferred version, and a common supported version. Should there be multiple common supported versions, the one that will be chosen will be based on the" }, { "data": "Once the version to restore is chosen, the file path to the backed up resource in the tarball will be modified such that it points to the resources' chosen API group version. 
If no version is found in common between the source and target clusters, the chosen version will default to the source cluster's preferred version (the version being restored currently without the changes proposed here). Restore will be allowed to continue as before. There are six objectives to achieve the above stated goals: Determine if the APIGroupVersionsFeatureFlag is enabled and Backup Objects use Status.FormatVersion 1.1.0. List the backed up API group versions. List the API group versions supported by the target cluster. Get the user-defined version priorities. Use a priority system to determine which version to restore. The source preferred version will be the default if the priorities fail. Modify the paths to the backup files in the tarball in the resource restore process. For restore to be able to choose from multiple supported backed up versions, the feature flag must have been enabled during the restore processes. Backup objects must also have . The reason for checking for the feature flag during restore is to ensure the user would like to restore a version that might not be the source cluster preferred version. This check is done via `features.IsEnabled(velerov1api.APIGroupVersionsFeatureFlag)`. The reason for checking `Status.FormatVersion` is to ensure the changes made by this proposed design is backward compatible. Only with Velero version 1.4 and forward was Format Version 1.1.0 used to structure the backup directories. Format Version 1.1.0 is required for the restore process proposed in this design doc to work. Before v1.4, the backed up files were in a directory structure that will not be recognized by the proposed code changes. In this case, restore should not attempt to restore from multiple versions as they will not exist. The is stored in a `restoreContext` struct field called . The full chain is `ctx.backup.Status.FormatVersion`. The above two checks can be done inside a new method on the `*restoreContext` object with the method signature `meetsAPIGVRestoreReqs() bool`. This method can remain in the `restore` package, but for organizational purposes, it can be moved to a file called `prioritizegroupversion.go`. Currently, in `pkg/restore/restore.go`, in the `execute(...)` method, around , the resources and their backed up items are saved in a map called `backupResources`. At this point, the feature flag and format versions can be checked (described in Objective #1). If the requirements are met, the `backedupResources` map can be sent to a method (to be created) with the signature `ctx.chooseAPIVersionsToRestore(backupResources)`. The `ctx` object has the type `*restore.Context`. The `chooseAPIVersionsToRestore` method can remain in the `restore` package, but for organizational purposes, it can be moved to a file called `prioritizegroupversion.go`. Inside the `chooseAPIVersionsToRestore` method, we can take advantage of the `archive` package's `Parser` type. `ParseGroupVersions(backupDir string) (map[string]metav1.APIGroup, error)`. The `ParseGroupVersions(...)` method will loop through the `resources`, `resource.group`, and group version directories to populate a map called `sourceRGVersions`. The `sourceRGVersions` map's keys will be strings in the format `<resource>.<group>`, e.g. \"horizontalpodautoscalers.autoscaling\". The values will be APIGroup structs. The API Group struct can be imported from k8s.io/apimachinery/pkg/apis/meta/v1. Order the APIGroup.Versions slices using a sort function copied from `k8s.io/apimachinery/pkg/version`. 
```go sort.SliceStable(gvs, func(i, j int) bool { return version.CompareKubeAwareVersionStrings(gvs[i].Version," }, { "data": "> 0 }) ``` Still within the `chooseAPIVersionsToRestore` method, the target cluster's resource group versions can now be obtained. ```go targetRGVersions := ctx.discoveryHelper.APIGroups() ``` Order the APIGroup.Versions slices using a sort function copied from `k8s.io/apimachinery/pkg/version`. ```go sort.SliceStable(gvs, func(i, j int) bool { return version.CompareKubeAwareVersionStrings(gvs[i].Version, gvs[j].Version) > 0 }) ``` Still within the `chooseAPIVersionsToRestore` method, the user-defined version priorities can be retrieved. These priorities are expected to be in a config map named `enableapigroupversions` in the `velero` namespace. An example config map is ```yaml apiVersion: v1 kind: ConfigMap metadata: name: enableapigroupversions namespace: velero data: restoreResourcesVersionPriority: | - rockbands.music.example.io=v2beta1,v2beta2 orchestras.music.example.io=v2,v3alpha1 subscriptions.operators.coreos.com=v2,v1 ``` In the config map, the resources and groups and the user-defined version priorities will be listed in the `data.restoreResourcesVersionPriority` field following the following general format: `<group>.<resource>=<version 1>[, <version n> ...]`. A map will be created to store the user-defined priority versions. The map's keys will be strings in the format `<resource>.<group>`. The values will be APIGroup structs that will be imported from `k8s.io/apimachinery/pkg/apis/meta/v1`. Within the APIGroup structs will be versions in the order that the user provides in the config map. The PreferredVersion field in APIGroup struct will be left empty. Determining the priority will also be done in the `chooseAPIVersionsToRestore` method. Once a version is chosen, it will be stored in a new map of the form `map[string]ChosenGRVersion` where the key is the `<resource>.<group>` and the values are of the `ChosenGroupVersion` struct type (shown below). The map will be saved to the `restore.Context` object in a field called `chosenGrpVersToRestore`. ```go type ChosenGroupVersion struct { Group string Version string Dir string } ``` The first method called will be `ctx.gatherSTUVersions()` and it will gather the source cluster group resource and versions (`sgvs`), target cluster group versions (`tgvs`), and custom user resource and group versions (`ugvs`). Loop through the source cluster resource and group versions (`sgvs`). Find the versions for the group in the target cluster. An attempt will first be made to `findSupportedUserVersion`. Loop through the resource.groups in the custom user resource and group versions (`ugvs`) map. If a version is supported by both `tgvs` and `sgvs`, that will be set as the chosen version for the corresponding resource in `ctx.chosenGrpVersToRestore` If no three-way match can be made between the versions in `ugvs`, `tgvs`, and `sgvs`, move on to attempting to use the target cluster preferred version. Loop through the `sgvs` versions for the resource and see if any of them match the first item in the `tgvs` version list. Because the versions in `tgvs` have been ordered, the first version in the version slide will be the preferred version. If target preferred version cannot be used, attempt to choose the source cluster preferred version. Loop through the target versions and see if any of them match the first item in the source version slice, which will be the preferred version due to Kubernetes version ordering. 
If neither clusters' preferred version can be used, look through remaining versions in the target version list and see if there is a match with the remaining versions in the source versions list. If none of the previous checks produce a chosen version, the source preferred version will be the default and the restore process will continue. Here is another way to list the priority versions described above: Priority 0 ((User" }, { "data": "Users determine restore version priority using a config map Priority 1. Target preferred version can be used. Priority 2. Source preferred version can be used. Priority 3. A common supported version can be used. This means target supported version == source supported version if multiple support versions intersect, choose the version using the If there is no common supported version between target and source clusters, then the default `ChosenGRVersion` will be the source preferred version. This is the version that would have been assumed for restore before the changes proposed here. Note that adding a field to `restore.Context` will mean having to make a map for the field during instantiation. To see example cases with version priorities, see a blog post written by Rafael Brito: https://github.com/brito-rafa/k8s-webhooks/tree/master/examples-for-projectvelero. The method doing the bulk of the restoration work is `ctx.restoreResource(...)`. Inside this method, around in `pkg/restore/restore.go`, the path to backup json file for the item being restored is set. After the groupResource is instantiated at pkg/restore/restore.go:733, and before the `for` loop that ranges through the `items`, the `ctx.chosenGRVsToRestore` map can be checked. If the groupResource exists in the map, the path saved to `resource` variable can be updated. Currently, the item paths look something like ```bash /var/folders/zj/vc4ln5h14djg9svz7x_t1d0r0000gq/T/620385697/resources/horizontalpodautoscalers.autoscaling/namespaces/myexample/php-apache-autoscaler.json ``` This proposal will have the path changed to something like ```bash /var/folders/zj/vc4ln5h14djg9svz7x_t1d0r0000gq/T/620385697/resources/horizontalpodautoscalers.autoscaling/v2beta2/namespaces/myexample/php-apache-autoscaler.json ``` The `horizontalpodautoscalers.autoscaling` part of the path will be updated to `horizontalpodautoscalers.autoscaling/v2beta2` using ```go version, ok := ctx.chosenGVsToRestore[groupResource.String()] if ok { resource = filepath.Join(groupResource.String(), version.VerDir) } ``` The restore can now proceed as normal. Look for plugins if no common supported API group version could be found between the target and source clusters. We had considered searching for plugins that could handle converting an outdated resource to a new one that is supported in the target cluster, but it is difficult, will take a lot of time, and currently will not be useful because we are not aware of such plugins. It would be better to keep the initial changes simple to see how it works out and progress to more complex solutions as demand necessitates. It was considered to modify the backed up json files such that the resources API versions are supported by the target but modifying backups is discouraged for several reasons, including introducing data corruption. I can't think of any additional risks in terms of Velero security here. I have made it such that the changes in code will only affect Velero installations that have `APIGroupVersionsFeatureFlag` enabled during restore and Format Version 1.1.0 was used during backup. 
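As a quick way to confirm the directory layout this proposal relies on, the backed-up group versions can be inspected directly in a downloaded backup tarball (the file name here is hypothetical). With the EnableAPIGroupVersions feature flag set during backup, each resource directory contains one subdirectory per backed-up API group version:

```bash
tar -tzf mybackup.tar.gz | grep 'resources/horizontalpodautoscalers.autoscaling/' | head
```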
If these two requirements are not both met, the changes will have no effect on the restore process, making the changes here entirely backward compatible. This first draft of the proposal will be submitted Oct. 30, 2020. Once this proposal is approved, I can have the code and unit tests written within a week and submit a PR that fixes Issue #2551. At the time of writing this design proposal, I had not seen any of @jenting's work for solving Issue #2551. He had independently covered the first two priorities I mentioned above before I was even aware of the issue. I hope not to let his efforts go to waste and welcome incorporating his ideas here to make this design proposal better." } ]
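As a usage sketch for the user-defined prioritization described earlier, the `enableapigroupversions` ConfigMap from the example above could be created in the target cluster before triggering the restore (the resource and version names are the illustrative ones used in this document):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: enableapigroupversions
  namespace: velero
data:
  restoreResourcesVersionPriority: |
    rockbands.music.example.io=v2beta1,v2beta2
    orchestras.music.example.io=v2,v3alpha1
EOF
```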
{ "category": "Runtime", "file_name": "restore-with-EnableAPIGroupVersions-feature.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "stratovirt-img is an offline tool for virtual disks. Usage: ```shell stratovirt-img command [command options] ``` Command parameters: img_path: the path for image. fmt: disk format. img_size: size for image, the unit can be K, M, G or none for bytes. options is a comma separated list of format specific options in a name=value format. Following commands are supported now: Create virtual disk with different format. Command syntax: ```shell create [-f fmt] [-o options] imgpath imgsize ``` Sample Configuration ```shell stratovirt-img create -f raw imgpath imgsize stratovirt-img create -f qcow2 -o cluster-size=65536 imgpath imgsize ``` Note: 1. The cluster size can be only be set for `qcow2` or default to 65536. 2. Disk format is default to raw. Check if there are some mistakes on the image and choose to fix. Command syntax: ```shell check [-r {leaks|all}] [-noprinterror] [-f fmt] img_path ``` -r: `leaks` means only the leaked cluster will be fixed, `all` means all repairable mistake will be fixed. -noprinterror: do not print detailed error messages. Sample Configuration ```shell stratovirt-img check img_path ``` Note: The command of check is not supported by raw format. Change the virtual size of the disk. `+size`means increase from old size, while `size` means resize to new size. Command syntax: ```shell resize [-f fmt] img_path [+]size ``` Sample Configuration ```shell stratovirt-img resize -f qcow2 img_path +size stratovirt-img resize -f raw img_path +size ``` Note: Shrink operation is not supported now. Operating internal snapshot for disk, it is only supported by qcow2. Command syntax: ```shell snapshot [-l | -a snapshotname | -c snapshotname | -d snapshotname] imgpath ``` -a snapshot_name: applies a snapshot (revert disk to saved state). -c snapshot_name: creates a snapshot. -d snapshot_name: deletes a snapshot. -l: lists all snapshots in the given image. Sample Configuration ```shell stratovirt-img snapshot -c snapshotname imgpath stratovirt-img snapshot -a snapshotname imgpath stratovirt-img snapshot -d snapshotname imgpath stratovirt-img snapshot -l img_path ``` Note: The internal snapshot is not supported by raw." } ]
{ "category": "Runtime", "file_name": "stratovirt-img.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "Currently, Longhorn allows the choice between a number of behaviors (node drain policies) when a node is cordoned or drained: `Block If Contains Last Replica` ensures the `instance-manager` pod cannot be drained from a node as long as it is the last node with a healthy replica for some volume. Benefits: Protects data by preventing the drain operation from completing until there is a healthy replica available for each volume available on another node. Drawbacks: If there is only one replica for the volume, or if its other replicas are unhealthy, the user may need to manually (through the UI) request the eviction of replicas from the disk or node. Volumes may be degraded after the drain is complete. If the node is rebooted, redundancy is reduced until it is running again. If the node is removed, redundancy is reduced until another replica rebuilds. `Allow If Last Replica Is Stopped` is similar to the above, but only prevents an `instance-manager` pod from draining if it has the last RUNNING replica. Benefits: Allows the drain operation to proceed in situations where the node being drained is expected to come back online (data will not be lost) and the replicas stored on the node's disks are not actively being used. Drawbacks: Similar drawbacks to `Block If Contains Last Replica`. If, for some reason, the node never comes back, data is lost. `Always Allow` never prevents an `instance-manager` pod from draining. Benefits: The drain operation completes quickly without Longhorn getting in the way. Drawbacks: There is no opportunity for Longhorn to protect data. This proposal seeks to add a fourth and fifth behavior (node drain policy) with the following properties: `Block For Eviction` ensures the `instance-manager` pod cannot be drained from a node as long as it contains any replicas for any volumes. Replicas are automatically evicted from the node as soon as it is cordoned. Benefits: Protects data by preventing the drain operation from completing until all replicas have been relocated. Automatically evicts replicas, so the user does not need to do it manually (through the UI). Maintains replica redundancy at all times. Drawbacks: The drain operation is significantly slower than for other behaviors. Every replica must be rebuilt on another node before it can complete. The drain operation is data-intensive, especially when replica auto balance is enabled, as evicted replicas may be moved back to the drained node when/if it comes back online. Like all of these policies, it triggers on cordon, not on drain (it is not possible for Longhorn to distinguish between a node that is actively being drained and one that is cordoned for some other reason). If a user regularly cordons nodes without draining them, replicas will be rebuilt pointlessly. `Block For Eviction If Contains Last Replica` ensures the `instance-manager` pod cannot be drained from a node as long as it is the last node with a healthy replica for some volume. Replicas that meet this condition are automatically evicted from the node as soon as it is cordoned. Benefits: Protects data by preventing the drain operation from completing until there is a healthy replica available for each volume available on another node. Automatically evicts replicas, so the user does not need to do it manually (through the" }, { "data": "The drain operation is only as slow and data-intensive as is necessary to protect data. Drawbacks: Volumes may be degraded after the drain is complete. 
If the node is rebooted, redundancy is reduced until it is running again. If the node is removed, redundancy is reduced until another replica rebuilds. Like all of these policies, it triggers on cordon, not on drain (it is not possible for Longhorn to distinguish between a node that is actively being drained and one that is cordoned for some other reason). If a user regularly cordons nodes without draining them, replicas will be rebuilt pointlessly. Given the drawbacks, `Block For Eviction` should likely not be the default node drain policy moving forward. However, some users may find it helpful to switch to `Block For Eviction`, especially during cluster upgrade operations. See for additional insight. `Block For Eviction If Contains Last Replica` is a more efficient behavior that may make sense as the long-term setting in some clusters. It has largely the same benefits and drawbacks as `Block If Contains Last Replica`, except that it doesn't require users to perform any manual drains. Still, when data is properly backed up and the user is planning to bring a node back online after maintenance, it is costlier than other options. https://github.com/longhorn/longhorn/issues/2238 Add a new `Block For Eviction` node drain policy as described in the summary. Ensure that replicas automatically evict from a cordoned node when `Block For Eviction` is set. Ensure a drain operation can not complete until all replicas are evicted when `Block For Eviction` is set Document recommendations for when to use `Block For Eviction`. Only trigger automatic eviction when a node is actively being drained. It is not possible to distinguish between a node that is only cordoned and one that is actively being drained. I use Rancher to manage RKE2 and K3s Kubernetes clusters. When I upgrade these clusters, the system upgrade controller attempts to drain each node before rebooting it. If a node contains the last healthy replica for a volume, the drain never completes. I know I can manually evict replicas from a node to allow it to continue, but this eliminates the benefit of the automation. After this enhancement, I can choose to set the node drain policy to `Block For Eviction` before kicking off a cluster upgrade. The upgrade may take a long time, but it eventually completes with no additional intervention. I am not comfortable with the reduced redundancy `Block If Contains Last Replica` provides while my drained node is being rebooted. Or, I commonly drain nodes to remove them from the cluster and I am not comfortable with the reduced redundancy `Block If Contains Last Replica` provides while a new replica is rebuilt. It would be nice if I could drain nodes without this discomfort. After this enhancement, I can choose to set the node drain policy to `Block For Eviction` before draining a node or nodes. It may take a long time, but I know my data is safe when the drain completes. Add `block-for-eviction` and `block-for-eviction-if-last-replica` options to the `node-drain-policy` setting. The user chooses these options to opt in to the new behavior. Add a `status.autoEvicting` to the `node.longhorn.io/v1beta2` custom resource. This is not a field users can/should interact with, but they can view it via" }, { "data": "NOTE: We originally experimented with a new `status.conditions` entry in the `node.longhorn.io/v1beta2` custom resource with the type `Evicting`. However, this was a bit less natural, because: Longhorn node conditions generally describe the state a node is in, not what the node is doing. 
During normal operation, `Evicting` should be `False`. The Longhorn UI displays a condition in this state with a red symbol, indicating an error state that should be investigated. Deprecate the `replica.status.evictionRequested` field in favor of a new `replica.spec.evictionRequested` field so that replica eviction can be requested by the node controller instead of the replica controller. The existing eviction logic is well-tested, so there is no reason to significantly refactor it. It works as follows: The user can set `spec.evictionRequested = true` on a node or disk. When the replica controller sees `spec.evictionRequested == true` on the node or disk hosting a replica, it sets `status.evictionRequested = true` on that replica. The volume controller uses `replica.status.evictionRequested == true` to influence replica scheduling/deletion behavior (e.g. rebuild an extra replica to replace the evicting one or delete the evicting one once rebuilding is complete). The user can set `spec.evictionRequested = false` on a node or disk. When the replica controller sees `spec.evictionRequested == false` on the node or disk hosting a replica, it sets `replica.status.evictionRequested = false` on that replica. The volume controller uses `replica.status.evictionRequested == false` to influence replica scheduling/deletion behavior (e.g. don't start a rebuild for a previously evicting replica if one hasn't been started already). NOTE: If a new replica already started rebuilding as part of an eviction, it continues to rebuild and remains in the cluster even after eviction is canceled. It can be cleaned up manually if desired. Make changes so that: The node controller (not the replica controller) sets `replica.spec.evictionRequested = true` when: `spec.evictionRequested == true` on the replica's node (similar to existing behavior moved from the replica controller), OR `spec.evictionRequested == true` on the replica's disk. (similar to existing behavior moved from the replica controller), OR `status.Unschedulable == true` on the associated Kubernetes node object and the node drain policy is `block-for-eviction`, OR `status.Unschedulable == true` on the associated Kubernetes node object, the node drain policy is `block-for-eviction-if-contains-last-replica`, and there are no other PDB-protected replicas for a volume. Much of the logic currently used by the instance manager controller to recognize PDB-protected replicas is moved to utility functions so both the node and instance manager controllers can use it. The volume controller uses `replica.spec.evictionRequested == true` in exactly the same way it previously used `replica.status.evictionRequested` to influence replica scheduling/deletion behavior (e.g. rebuild an extra replica to replace the evicting one or delete the evicting one once rebuilding is complete). The node controller sets `replica.spec.evictionRequested = false` when: The user is not requesting eviction with `node.spec.evictionRequested == true` or `disk.spec.evictionRequested == true`, AND The conditions aren't right for auto-eviction based on the node status and drain policy. The node controller sets `status.autoEvicting = true` when a node has evicting replicas because of the new drain policies and `status.autoEvicting == false` when it does not. This provides a clue to the user (and the UI) while auto eviction is ongoing. Test normal behavior with `block-for-eviction`: Set `node-drain-policy` to `block-for-eviction`. 
Create a" }, { "data": "volume. Ensure (through soft anti-affinity, low replica count, and/or enough disks) that an evicted replica of the volume can be scheduled elsewhere. Write data to the volume. Drain a node one of the volume's replicas is scheduled to. While the drain is ongoing: Verify that the volume never becomes degraded. Verify that `node.status.autoEvicting == true`. Verify that `replica.spec.evictionRequested == true`. Verify the drain completes. Uncordon the node. Verify the replica on the drained node has moved to a different one. Verify that `node.status.autoEvicting == false`. Verify that `replica.spec.evictionRequested == false`. Verify the volume's data. Verify the output of an appropriate event during the test (with reason `EvictionAutomatic`). Test normal behavior with `block-for-eviction-if-contains-last-replica`: Set `node-drain-policy` to `block-for-eviction-if-contains-last-replica`. Create one volume with a single replica and another volume with three replicas. Ensure (through soft anti-affinity, low replica count, and/or enough disks) that evicted replicas of both volumes can be scheduled elsewhere. Write data to the volumes. Drain a node both volumes have a replica scheduled to. While the drain is ongoing: Verify that the volume with one replica never becomes degraded. Verify that the volume with three replicas becomes degraded. Verify that `node.status.autoEvicting == true`. Verify that `replica.spec.evictionRequested == true` on the replica for the volume that only has one. Verify that `replica.spec.evictionRequested == false` on the replica for the volume that has three. Verify the drain completes. Uncordon the node. Verify the replica for the volume with one replica has moved to a different node. Verify the replica for the volume with three replicas has not moved. Verify that `node.status.autoEvicting == false`. Verify that `replica.spec.evictionRequested == false` on all replicas. Verify the data in both volumes. Verify the output of two appropriate events during the test (with reason `EvictionAutomatic`). Verify the output of an appropriate event for the replica that ultimately wasn't evicted during the test (with reason `EvictionCanceled`). Test unschedulable behavior with `block-for-eviction`: Set `node-drain-policy` to `block-for-eviction`. Create a volume. Ensure (through soft anti-affinity, high replica count, and/or not enough disks) that an evicted replica of the volume can not be scheduled elsewhere. Write data to the volume. Drain a node one of the volume's replicas is scheduled to. While the drain is ongoing: Verify that `node.status.autoEvicting == true`. Verify that `replica.spec.evictionRequested == true`. Verify the drain never completes. Uncordon the node. Verify that the volume is healthy. Verify that `node.status.autoEvicting == false`. Verify the volume's data. Verify the output of an appropriate event during the test (with reason `EvictionAutomatic`). Verify the output of an appropriate event during the test (with reason `EvictionCanceled`). Add `status.autoEvicting = false` to all `node.longhorn.io` objects during the upgrade. Add `spec.evictionRequested = status.evictionRequested` to all replica objects during the upgrade. The default node drain policy remains `Block If Contains Last Replica`, so do not make setting changes. I have given some thought to if/how this behavior should be reflected in the UI.
In this draft, I have [chosen not to represent auto-eviction as a node condition](#api-changes), which would have automatically shown it in the UI, but awkwardly. I considered representing it in the `Status` column on the `Node` tab. Currently, the only statuses are `Schedulable` (green), `Unschedulable` (yellow), `Down` (grey), and `Disabled` (red). We could add `AutoEvicting` (yellow), but it would overlap with `Unschedulable`. This might be acceptable, as it could be read as, \"This node is auto-evicting in addition to being unschedulable.\"" } ]
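To exercise the proposed behavior end to end once it is implemented, the flow could look like the sketch below. The Setting CR patch assumes Longhorn's usual settings mechanism and default namespace, `<node-name>` is a placeholder, and `status.autoEvicting` is the field proposed above:

```bash
# opt in to the proposed policy
kubectl -n longhorn-system patch settings.longhorn.io node-drain-policy \
  --type=merge -p '{"value":"block-for-eviction"}'
# drain a node and watch Longhorn evict its replicas
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl -n longhorn-system get nodes.longhorn.io <node-name> -o jsonpath='{.status.autoEvicting}'
# bring the node back once maintenance is done
kubectl uncordon <node-name>
```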
{ "category": "Runtime", "file_name": "20230905-automatically-evict-replicas-while-draining.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Custom Images By default, Rook will deploy the latest stable version of the Ceph CSI driver. Commonly, there is no need to change this default version that is deployed. For scenarios that require deploying a custom image (e.g. downstream releases), the defaults can be overridden with the following settings. The CSI configuration variables are found in the `rook-ceph-operator-config` ConfigMap. These settings can also be specified as environment variables on the operator deployment, though the configmap values will override the env vars if both are specified. ```console kubectl -n $ROOKOPERATORNAMESPACE edit configmap rook-ceph-operator-config ``` The default upstream images are included below, which you can change to your desired images. ```yaml ROOKCSICEPH_IMAGE: \"quay.io/cephcsi/cephcsi:v3.11.0\" ROOKCSIREGISTRAR_IMAGE: \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1\" ROOKCSIPROVISIONER_IMAGE: \"registry.k8s.io/sig-storage/csi-provisioner:v4.0.1\" ROOKCSIATTACHER_IMAGE: \"registry.k8s.io/sig-storage/csi-attacher:v4.5.1\" ROOKCSIRESIZER_IMAGE: \"registry.k8s.io/sig-storage/csi-resizer:v1.10.1\" ROOKCSISNAPSHOTTER_IMAGE: \"registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2\" ROOKCSIADDONSIMAGE: \"quay.io/csiaddons/k8s-sidecar:v0.8.0\" ``` If image version is not passed along with the image name in any of the variables above, Rook will add the corresponding default version to that image. Example: if `ROOKCSICEPH_IMAGE: \"quay.io/private-repo/cephcsi\"` is passed, Rook will add internal default version and consume it as `\"quay.io/private-repo/cephcsi:v3.11.0\"`. If you would like Rook to use the default upstream images, then you may simply remove all variables matching `ROOKCSI*_IMAGE` from the above ConfigMap and/or the operator deployment. You can use the below command to see the CSI images currently being used in the cluster. Note that not all images (like `volumereplication-operator`) may be present in every cluster depending on which CSI features are enabled. ```console kubectl --namespace rook-ceph get pod -o jsonpath='{range .items[]}{range .spec.containers[]}{.image}{\"\\n\"}' -l 'app in (csi-rbdplugin,csi-rbdplugin-provisioner,csi-cephfsplugin,csi-cephfsplugin-provisioner)' | sort | uniq ``` The default images can also be found with each release in the" } ]
{ "category": "Runtime", "file_name": "custom-images.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "The ganesha_mon RA monitors its ganesha.nfsd daemon. While the daemon is running, it creates two attributes: ganesha-active and grace-active. When the daemon stops for any reason, the attributes are deleted. Deleting the ganesha-active attribute triggers the failover of the virtual IP (the IPaddr RA) to another node according to constraint location rules where ganesha.nfsd is still running. The ganesha_grace RA monitors the grace-active attribute. When the grace-active attibute is deleted, the ganesha_grace RA stops, and will not restart. This triggers pacemaker to invoke the notify action in the ganesha_grace RAs on the other nodes in the cluster; which send a DBUS message to their respective ganesha.nfsd. (N.B. grace-active is a bit of a misnomer. while the grace-active attribute exists, everything is normal and healthy. Deleting the attribute triggers putting the surviving ganesha.nfsds into GRACE.) To ensure that the remaining/surviving ganesha.nfsds are put into NFS-GRACE before the IPaddr (virtual IP) fails over there is a short delay (sleep) between deleting the grace-active attribute and the ganesha-active attribute. To summarize, e.g. in a four node cluster: on node 2 ganesha_mon::monitor notices that ganesha.nfsd has died on node 2 ganesha_mon::monitor deletes its grace-active attribute on node 2 ganesha_grace::monitor notices that grace-active is gone and returns OCFERRGENERIC, a.k.a. new error. When pacemaker tries to (re)start ganesha_grace, its start action will return OCFNOTRUNNING, a.k.a. known error, don't attempt further restarts. on nodes 1, 3, and 4, ganesha_grace::notify receives a post-stop notification indicating that node 2 is gone, and sends a DBUS message to its ganesha.nfsd, putting it into NFS-GRACE. on node 2 ganesha_mon::monitor waits a short period, then deletes its ganesha-active attribute. This triggers the IPaddr (virt IP) failover according to constraint location rules." } ]
{ "category": "Runtime", "file_name": "ganesha-ha.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes the versioning policy for this repository. This policy is designed so the following goals can be achieved. Users are provided a codebase of value that is stable and secure. Versioning of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used. Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html) with the following exceptions. New methods may be added to exported API interfaces. All exported interfaces that fall within this exception will include the following paragraph in their public documentation. > Warning: methods may be added to this interface in minor releases. If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/otel/v2/trace\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/otel/v2@v2.0.1`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. Modules will be used to encapsulate signals and components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API will be versioned with a major version greater than `v0`. The decision to make a module stable will be made on a case-by-case basis by the maintainers of this project. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. All stable modules that use the same major version number will use the same entire version number. Stable modules may be released with an incremented minor or patch version even though that module has not been changed, but rather so that it will remain at the same version as other stable modules that did undergo change. When an experimental module becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable modules as well as the newly stable module being released. Versioning of the associated [contrib repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be" }, { "data": "Versions will comply with . 
If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/contrib/instrumentation/host/v2`, `require go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/contrib/instrumentation/host/v2\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/contrib/instrumentation/host/v2@v2.0.1`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. In addition to public APIs, telemetry produced by stable instrumentation will remain stable and backwards compatible. This is to avoid breaking alerts and dashboard. Modules will be used to encapsulate instrumentation, detectors, exporters, propagators, and any other independent sets of related components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API and telemetry will be versioned with a major version greater than `v0`. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. Stable contrib modules cannot depend on experimental modules from this project. All stable contrib modules of the same major version with this project will use the same entire version as this project. Stable modules may be released with an incremented minor or patch version even though that module's code has not been changed. Instead the only change that will have been included is to have updated that modules dependency on this project's stable APIs. When an experimental module in contrib becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable contrib modules, this project's modules, and the newly stable module being released. Contrib modules will be kept up to date with this project's releases. Due to the dependency contrib modules will implicitly have on this project's modules the release of stable contrib modules to match the released version number will be staggered after this project's release. There is no explicit time guarantee for how long after this projects release the contrib release will be. Effort should be made to keep them as close in time as" }, { "data": "No additional stable release in this project can be made until the contrib repository has a matching stable release. No release can be made in the contrib repository after this project's stable release except for a stable release of the contrib repository. GitHub releases will be made for all releases. Go modules will be made available at Go package mirrors. To better understand the implementation of the above policy the following example is provided. 
This project is simplified to include only the following modules and their versions: `otel`: `v0.14.0` `otel/trace`: `v0.14.0` `otel/metric`: `v0.14.0` `otel/baggage`: `v0.14.0` `otel/sdk/trace`: `v0.14.0` `otel/sdk/metric`: `v0.14.0` These modules have been developed to a point where the `otel/trace`, `otel/baggage`, and `otel/sdk/trace` modules have reached a point that they should be considered for a stable release. The `otel/metric` and `otel/sdk/metric` are still under active development and the `otel` module depends on both `otel/trace` and `otel/metric`. The `otel` package is refactored to remove its dependencies on `otel/metric` so it can be released as stable as well. With that done the following release candidates are made: `otel`: `v1.0.0-RC1` `otel/trace`: `v1.0.0-RC1` `otel/baggage`: `v1.0.0-RC1` `otel/sdk/trace`: `v1.0.0-RC1` The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`. A few minor issues are discovered in the `otel/trace` package. These issues are resolved with some minor, but backwards incompatible, changes and are released as a second release candidate: `otel`: `v1.0.0-RC2` `otel/trace`: `v1.0.0-RC2` `otel/baggage`: `v1.0.0-RC2` `otel/sdk/trace`: `v1.0.0-RC2` Notice that all module version numbers are incremented to adhere to our versioning policy. After these release candidates have been evaluated to satisfaction, they are released as version `v1.0.0`. `otel`: `v1.0.0` `otel/trace`: `v1.0.0` `otel/baggage`: `v1.0.0` `otel/sdk/trace`: `v1.0.0` Since both the `go` utility and the Go module system support [the semantic versioning definition of precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release will correctly be interpreted as the successor to the previous release candidates. Active development of this project continues. The `otel/metric` module now has backwards incompatible changes to its API that need to be released and the `otel/baggage` module has a minor bug fix that needs to be released. The following release is made: `otel`: `v1.0.1` `otel/trace`: `v1.0.1` `otel/metric`: `v0.15.0` `otel/baggage`: `v1.0.1` `otel/sdk/trace`: `v1.0.1` `otel/sdk/metric`: `v0.15.0` Notice that, again, all stable module versions are incremented in unison and the `otel/sdk/metric` package, which depends on the `otel/metric` package, also bumped its version. This bump of the `otel/sdk/metric` package makes sense given their coupling, though it is not explicitly required by our versioning policy. As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a point where they should be evaluated for stability. The `otel` module is reintegrated with the `otel/metric` package and the following release is made: `otel`: `v1.1.0-RC1` `otel/trace`: `v1.1.0-RC1` `otel/metric`: `v1.1.0-RC1` `otel/baggage`: `v1.1.0-RC1` `otel/sdk/trace`: `v1.1.0-RC1` `otel/sdk/metric`: `v1.1.0-RC1` All the modules are evaluated and determined to a viable stable release. They are then released as version `v1.1.0` (the minor version is incremented to indicate the addition of new signal). `otel`: `v1.1.0` `otel/trace`: `v1.1.0` `otel/metric`: `v1.1.0` `otel/baggage`: `v1.1.0` `otel/sdk/trace`: `v1.1.0` `otel/sdk/metric`: `v1.1.0`" } ]
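As a small illustration of the module-path rules above (reusing the hypothetical `v2.0.1` example from this document, which will only resolve once such a major version is actually published), released versions can be listed and pinned with the standard Go tooling:

```bash
go list -m -versions go.opentelemetry.io/otel
go get go.opentelemetry.io/otel@v1.1.0
# for a v2+ module the /vN suffix becomes part of the module path
go get go.opentelemetry.io/otel/v2@v2.0.1
```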
{ "category": "Runtime", "file_name": "VERSIONING.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "Please join the Kanister community to give feedback on the roadmap and open issues with your suggestions. Lifecycle for contributors: Roles, Privs, and lifecycle project admin? maintainer (core) reviewer+approval: PR + branch protections? contributor Blueprints Maintenance and Support Policy project test matrix: Kopia vs Restic vs Stow, downstream adopters vs Kanister standalone community maintained examples: move to a new public repo Move the Kanister.io website to GitHub Repo Leverage GitHub Issues and Projects for planning Fork block storage functions, deprecate unused Kanister code Kopia.io Repository Controller with a CR to control the lifecycle of a Kopia Repository Server ActionSet metrics Container image vulnerability scanning Track and log events triggered by Blueprint Actions ARM support Use GitHub pages with Jekyll or Hugo for documentation Vault integration for Repository Server secrets Deprecate Restic Generate Kanister controller using KubeBuilder - current one is based on rook operator Merge the Repository controller into the Kanister controller after 2. Replace package with a supported fork Replace http://gopkg.in/check.v1 with a better test framework Release notes Adopt license scanning tool and OpenSSF Best Practices Badge" } ]
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "This framework allows you to write and run tests against a local multi-cluster environment. The framework is using the `mstart.sh` script in order to setup the environment according to a configuration file, and then uses the test framework to actually run the tests. Tests are written in python2.7, but can invoke shell scripts, binaries etc. Entry point for all tests is `/path/to/ceph/src/test/rgw/testmulti.py`. And the actual tests are located inside the `/path/to/ceph/src/test/rgw/rgwmulti` subdirectory. So, to run all tests use: ``` $ cd /path/to/ceph/src/test/rgw/ $ nosetests test_multi.py ``` This will assume a configuration file called `/path/to/ceph/src/test/rgw/test_multi.conf` exists. To use a different configuration file, set the `RGWMULTITEST_CONF` environment variable to point to that file. Since we use the same entry point file for all tests, running specific tests is possible using the following format: ``` $ nosetests testmulti.py:<specifictest_name> ``` To run multiple tests based on wildcard string, use the following format: ``` $ nosetests test_multi.py -m \"<wildcard string>\" ``` Note that the test to run, does not have to be inside the `test_multi.py` file. Note that different options for running specific and multiple tests exists in the , as well as other options to control the execution of the tests. Following RGW environment variables are taken into consideration when running the tests: `RGW_FRONTEND`: used to change frontend to 'civetweb' or 'beast' (default) `RGWVALGRIND`: used to run the radosgw under valgrind. e.g. RGWVALGRIND=yes Other environment variables used to configure elements other than RGW can also be used as they are used in vstart.sh. E.g. MON, OSD, MGR, MSD The configuration file for the run has 3 sections: This section holds the following parameters: `num_zonegroups`: number of zone groups (integer, default 1) `num_zones`: number of regular zones in each group (integer, default 3) `numazzones`: number of archive zones (integer, default 0, max value 1) `gatewaysperzone`: number of RADOS gateways per zone (integer, default 2) `no_bootstrap`: whether to assume that the cluster is already up and does not need to be setup again. If set to \"false\", it will try to re-run the cluster, so, `mstop.sh` must be called beforehand. Should be set to false, anytime the configuration is changed. Otherwise, and assuming the cluster is already up, it should be set to \"true\" to save on execution time (boolean, default false) `log_level`: console log level of the logs in the tests, note that any program invoked from the test my emit logs regardless of that setting (integer, default 20) 20 and up -> DEBUG 10 and up -> INFO 5 and up -> WARNING 1 and up -> ERROR CRITICAL is always logged `log_file`: log file name. If not set, only console logs exists (string, default None) `fileloglevel`: file log level of the logs in the tests. Similar to `log_level` `tenant`: name of tenant (string, default None) `checkpoint_retries`: TODO (integer, default 60) `checkpoint_delay`: TODO (integer, default 5) `reconfigure_delay`: TODO (integer, default 5) TODO TODO New tests should be added into the `/path/to/ceph/src/test/rgw/rgw_multi` subdirectory. Base classes are in: `/path/to/ceph/src/test/rgw/rgw_multi/multisite.py` `/path/to/ceph/src/test/rgw/rgw_multi/tests.py` holds the majority of the tests, but also many utility and infrastructure functions that could be used in other tests files" } ]
{ "category": "Runtime", "file_name": "test_multi.md", "project_name": "Ceph", "subcategory": "Cloud Native Storage" }
[ { "data": "Configure `isulad` Configure the `pod-sandbox-image` in `/etc/isulad/daemon.json`: ```json \"pod-sandbox-image\": \"my-pause:1.0.0\" ``` Configure the `endpoint`of `isulad`: ```json \"hosts\": [ \"unix:///var/run/isulad.sock\" ] ``` if `hosts` is not configured, the default endpoint is `unix:///var/run/isulad.sock`. Restart `isulad`: ```bash $ sudo systemctl restart isulad ``` Start `kubelet` based on the configuration or default value: ```bash $ /usr/bin/kubelet --container-runtime-endpoint=unix:///var/run/isulad.sock --image-service-endpoint=unix:///var/run/isulad.sock --pod-infra-container-image=my-pause:1.0.0 --container-runtime=remote ... ``` RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a Pod's containers. For more information, please refer to . Currently `isulad` only supports `kata-containers` and `runc`. Configure `isulad` in `/etc/isulad/daemon.json`: ```json \"runtimes\": { \"kata-runtime\": { \"path\": \"/usr/bin/kata-runtime\", \"runtime-args\": [ \"--kata-config\", \"/usr/share/defaults/kata-containers/configuration.toml\" ] } } ``` Extra configuration `iSulad` supports the `overlay2` and `devicemapper` as storage drivers. The default value is `overlay2`. In some scenarios, using block device type as storage drivers is a better choice, such as run a `kata-containers`. The procedure for configuring the `devicemapper` is as follows: First, create ThinPool: ```bash $ sudo pvcreate /dev/sdb1 # /dev/sdb1 for example $ sudo vgcreate isulad /dev/sdb $ sudo echo y | lvcreate --wipesignatures y -n thinpool isulad -L 200G $ sudo echo y | lvcreate --wipesignatures y -n thinpoolmeta isulad -L 20G $ sudo lvconvert -y --zero n -c 512K --thinpool isulad/thinpool --poolmetadata isulad/thinpoolmeta $ sudo lvchange --metadataprofile isulad-thinpool isulad/thinpool ``` Then,add configuration for `devicemapper` in `/etc/isulad/daemon.json`: ```json \"storage-driver\": \"devicemapper\" \"storage-opts\": [ \"dm.thinpooldev=/dev/mapper/isulad-thinpool\", \"dm.fs=ext4\", \"dm.minfreespace=10%\" ] ``` Restart `isulad`: ```bash $ sudo systemctl restart isulad ``` Create `kata-runtime.yaml`. For example: ```yaml apiVersion: node.k8s.io/v1beta1 kind: RuntimeClass metadata: name: kata-runtime handler: kata-runtime ``` Execute `kubectl apply -f kata-runtime.yaml` Create pod spec `kata-pod.yaml`. For example: ```yaml apiVersion: v1 kind: Pod metadata: name: kata-pod-example spec: runtimeClassName: kata-runtime containers: name: kata-pod image: busybox:latest command: [\"/bin/sh\"] args: [\"-c\", \"sleep 1000\"] ``` Run pod: ```bash $ kubectl create -f kata-pod.yaml $ kubectl get pod NAME READY STATUS RESTARTS AGE kata-pod-example 1/1 Running 4 2s ``` iSulad realize the CRI interface to connect to the CNI network, parse the CNI network configuration files, join or exit CNI network. In this section, we call CRI interface to start pod to verify the CNI network configuration for simplicity. Configure `isulad` in `/etc/isulad/daemon.json` : ```json \"network-plugin\": \"cni\", \"cni-bin-dir\": \"/opt/cni/bin\", \"cni-conf-dir\": \"/etc/cni/net.d\", ``` Prepare CNI network plugins: Compile and genetate the CNI plugin binaries, and copy binaries to the directory `/opt/cni/bin`. ```bash $ git clone https://github.com/containernetworking/plugins.git $ cd plugins && ./build_linux.sh $ cd ./bin && ls bandwidth bridge dhcp firewall flannel ... 
``` Prepare CNI network configuration: The configuration file suffix can be `.conflist` or `.conf`; the difference is whether it contains multiple plugins. For example, we create a `10-mynet.conflist` file under the directory `/etc/cni/net.d/` with the following content: ```json { \"cniVersion\": \"0.3.1\", \"name\": \"default\", \"plugins\": [ { \"name\": \"default\", \"type\": \"ptp\", \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } } ] } ``` Configure `sandbox-config.json`: ```json { \"port_mappings\":[{\"protocol\": 1, \"container_port\": 80, \"host_port\": 8080}], \"metadata\": { \"name\": \"test\", \"namespace\": \"default\", \"attempt\": 1, \"uid\": \"hdishd83djaidwnduwk28bcsb\" }, \"labels\": { \"filterlabelkey\": \"filterlabelval\" }, \"linux\": { } } ``` Restart `isulad` and start the Pod: ```sh $ sudo systemctl restart isulad $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock runp sandbox-config.json ``` View the pod network information: ```sh $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock inspectp <pod-id> ```" } ]
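Once the sandbox is up, it can be listed and cleaned up through the same endpoints used above (`<pod-id>` is the ID printed by `runp`):

```bash
$ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock pods
$ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock stopp <pod-id>
$ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock rmp <pod-id>
```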
{ "category": "Runtime", "file_name": "k8s_integration.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage cluster nodes ``` -h, --help help for node ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - List nodes" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_node.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "[toc] Integrate coredns and kube-proxy into fabedge-agent and remove the fab-proxy component; Allow users to set the strongswan port on the connector; Implement a hole-punching feature which helps edge nodes behind NAT networks communicate with each other;" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.8.0.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "During container's lifecycle, different Hooks can be executed to do custom actions. In Kata Containers, we support two types of Hooks, `OCI Hooks` and `Kata Hooks`. The OCI Spec stipulates six hooks that can be executed at different time points and namespaces, including `Prestart Hooks`, `CreateRuntime Hooks`, `CreateContainer Hooks`, `StartContainer Hooks`, `Poststart Hooks` and `Poststop Hooks`. We support these types of Hooks as compatible as possible in Kata Containers. The path and arguments of these hooks will be passed to Kata for execution via `bundle/config.json`. For example: ``` ... \"hooks\": { \"prestart\": [ { \"path\": \"/usr/bin/prestart-hook\", \"args\": [\"prestart-hook\", \"arg1\", \"arg2\"], \"env\": [ \"key1=value1\"] } ], \"createRuntime\": [ { \"path\": \"/usr/bin/createRuntime-hook\", \"args\": [\"createRuntime-hook\", \"arg1\", \"arg2\"], \"env\": [ \"key1=value1\"] } ] } ... ``` In Kata, we support another three kinds of hooks executed in guest VM, including `Guest Prestart Hook`, `Guest Poststart Hook`, `Guest Poststop Hook`. The executable files for Kata Hooks must be packaged in the guest rootfs. The file path to those guest hooks should be specified in the configuration file, and guest hooks must be stored in a subdirectory of `guesthookpath` according to their hook type. For example: In configuration file: ``` guesthookpath=\"/usr/share/hooks\" ``` In guest rootfs, prestart-hook is stored in `/usr/share/hooks/prestart/prestart-hook`. The table below summarized when and where those different hooks will be executed in Kata Containers: | Hook Name | Hook Type | Hook Path | Exec Place | Exec Time | |||||| | `Prestart(deprecated)` | OCI hook | host runtime namespace | host runtime namespace | After VM is started, before container is created. | | `CreateRuntime` | OCI hook | host runtime namespace | host runtime namespace | After VM is started, before container is created, after `Prestart` hooks. | | `CreateContainer` | OCI hook | host runtime namespace | host vmm namespace* | After VM is started, before container is created, after `CreateRuntime` hooks. | | `StartContainer` | OCI hook | guest container namespace | guest container namespace | After container is created, before container is started. | | `Poststart` | OCI hook | host runtime namespace | host runtime namespace | After container is started, before start operation returns. | | `Poststop` | OCI hook | host runtime namespace | host runtime namespace | After container is deleted, before delete operation returns. | | `Guest Prestart` | Kata hook | guest agent namespace | guest agent namespace | During start operation, before container command is executed. | | `Guest Poststart` | Kata hook | guest agent namespace | guest agent namespace | During start operation, after container command is executed, before start operation returns. | | `Guest Poststop` | Kata hook | guest agent namespace | guest agent namespace | During delete operation, after container is deleted, before delete operation returns. | `Hook Path` specifies where hook's path be resolved. `Exec Place` specifies in which namespace those hooks can be executed. For `CreateContainer` Hooks, OCI requires to run them inside the container namespace while the hook executable path is in the host runtime, which is a non-starter for VM-based containers. So we design to keep them running in the host vmm namespace. `Exec Time` specifies at which time point those hooks can be executed." } ]
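As an illustration of the Kata guest hooks described above, the sketch below installs an executable prestart hook into a guest rootfs while it is being built. The `ROOTFS_DIR` variable and the hook's contents are assumptions, and the hook directory must match the guest hook path configured in the runtime's configuration file.

```bash
# ROOTFS_DIR points at the guest rootfs you are building (assumption).
ROOTFS_DIR=${ROOTFS_DIR:-./rootfs}

# Guest hooks live under <guest hook path>/<hook type>/ inside the rootfs.
sudo mkdir -p "${ROOTFS_DIR}/usr/share/hooks/prestart"

sudo tee "${ROOTFS_DIR}/usr/share/hooks/prestart/prestart-hook" > /dev/null <<'EOF'
#!/bin/sh
# Example guest prestart hook: record that it ran before the container command.
echo "guest prestart hook ran at $(date)" >> /tmp/guest-hooks.log
EOF

sudo chmod +x "${ROOTFS_DIR}/usr/share/hooks/prestart/prestart-hook"
```

The agent only executes hooks found in the subdirectory matching their type (`prestart`, `poststart`, `poststop`), so the directory layout matters as much as the configuration entry.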
{ "category": "Runtime", "file_name": "hooks-handling.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: JuiceFS vs. SeaweedFS slug: /comparison/juicefsvsseaweedfs description: This document compares JuiceFS and SeaweedFS, covering their architecture, storage mechanisms, client protocols, and other advanced features. and are both open-source high-performance distributed file storage systems. They operate under the business-friendly Apache License 2.0. However, JuiceFS comes in two versions: a and an , you can use JuiceFS Enterprise Edition as on-premises deployment, or directly. The Enterprise Edition uses a proprietary metadata engine, while its client shares code extensively with the . This document compares the key attributes of JuiceFS and SeaweedFS in a table and then explores them in detail. You can easily see their main differences in the table below and delve into specific topics you're interested in within this article. By highlighting their contrasts and evaluating their suitability for different use cases, this document aims to help you make informed decisions. | Comparison basis | SeaweedFS | JuiceFS | | : | : | : | | Metadata engine | Supports multiple databases | The Community Edition supports various databases; the Enterprise Edition uses an in-house, high-performance metadata engine. | | Metadata operation atomicity | Not guaranteed | The Community Edition ensures atomicity through database transactions; the Enterprise Edition ensures atomicity within the metadata engine. | | Changelog | Supported | Exclusive to the Enterprise Edition | | Data storage | Self-contained | Relies on object storage | | Erasure coding | Supported | Relies on object storage | | Data consolidation | Supported | Relies on object storage | | File splitting | 8MB | 64MB logical blocks + 4MB physical storage blocks | | Tiered storage | Supported | Relies on object storage | | Data compression | Supported (based on file extensions) | Supported (configured globally) | | Storage encryption | Supported | Supported | | POSIX compatibility | Basic | Full | | S3 protocol | Basic | Basic | | WebDAV protocol | Supported | Supported | | HDFS compatibility | Basic | Full | | CSI Driver | Supported | Supported | | Client cache | Supported | Supported | | Cluster data replication | Unidirectional and bidirectional replication is supported | Exclusive to the Enterprise Edition, only unidirectional replication is supported | | Cloud data cache | Supported (manual synchronization) | Exclusive to the Enterprise Edition | | Trash | Unsupported | Supported | | Operations and monitoring | Supported | Supported | | Release date | April 2015 | January 2021 | | Primary maintainer | Individual (Chris Lu) | Company (Juicedata Inc.) | | Programming language | Go | Go | | Open source license | Apache License 2.0 | Apache License 2.0 | The system consists of three components: The volume servers, which store files in the underlying layer The master servers, which manage the cluster An optional component, filer, which provides additional features to the upper layer In the system operation, both the volume server and the master server are used for file storage: The volume server focuses on data read and write operations. The master server primarily functions as a management service for the cluster and volumes. In terms of data access, SeaweedFS implements a similar approach to Haystack. A user-created volume in SeaweedFS corresponds to a large disk file (\"Superblock\" in the diagram" }, { "data": "Within this volume, all files written by the user (\"Needles\" in the diagram) are merged into the large disk file. 
Data write and read process in SeaweedFS: Before a write operation, the client initiates a write request to the master server. SeaweedFS returns a File ID (composed of Volume ID and offset) based on the current data volume. During the writing process, basic metadata information such as file length and chunk details is also written together with the data. After the write is completed, the caller needs to associate the file with the returned File ID and store this mapping in an external system such as MySQL. During a read operation, since the File ID already contains all the necessary information to compute the file's location (offset), the file content can be efficiently retrieved. On top of the underlying storage services, SeaweedFS offers a component called filer, which interfaces with the volume server and the master server. It provides features like POSIX support, WebDAV, and the S3 API. Like JuiceFS, the filer needs to connect to an external database to store metadata information. JuiceFS adopts an architecture that separates data and metadata storage: File data is split and stored in object storage systems such as Amazon S3. Metadata is stored in a user-selected database such as Redis or MySQL. The client connects to the metadata engine for metadata services and writes actual data to object storage, achieving distributed file systems with strong consistency . For details about JuiceFS' architecture, see the document. Both SeaweedFS and JuiceFS support storing file system metadata in external databases: SeaweedFS supports up to . JuiceFS has a high requirement for database transaction capabilities and currently supports . JuiceFS ensures strict atomicity for every operation, which requires strong transaction capabilities from the metadata engine like Redis and MySQL. As a result, JuiceFS supports fewer databases. SeaweedFS provides weaker atomicity guarantees for operations. It only uses transactions of some databases (SQL, ArangoDB, and TiKV) during rename operations, with a lower requirement for database transaction capabilities. Additionally, during the rename operation, SeaweedFS does not lock the original directory or file during the metadata copying process. This may result in data loss under high loads. SeaweedFS generates changelog for all metadata operations. The changelog can be transmitted and replayed. This ensures data safety and enables features like file system data replication and operation auditing. SeaweedFS supports file system data replication between multiple clusters. It offers two asynchronous data replication modes: Active-Active. In this mode, both clusters participate in read and write operations and they synchronize data bidirectionally. When there are more than two nodes in the cluster, certain operations such as renaming directories are subject to certain restrictions. Active-Passive. In this mode, a primary-secondary relationship is established, and the passive side is read-only. Both modes achieve consistency between different cluster data by transmitting and applying changelog. Each changelog has a signature to ensure that the same message is applied only once. The JuiceFS Community Edition does not implement a changelog, but it can use its inherent data replication capabilities from the metadata engine and object storage to achieve file system mirroring. For example, both and only support data replication. 
When combined with , either of them can enable a setup similar to SeaweedFS' Active-Passive mode without relying on" }, { "data": "It's worth noting that the JuiceFS Enterprise Edition implements the metadata engine based on changelog. It supports and . As mentioned earlier, SeaweedFS' data storage is achieved through volume servers + master servers, supporting features like merging small data blocks and erasure coding. JuiceFS' data storage relies on object storage services, and relevant features are provided by the object storage. Both SeaweedFS and JuiceFS split files into smaller chunks before persisting them in the underlying data system: SeaweedFS splits files into 8MB blocks. For extremely large files (over 8GB), it also stores the chunk index in the underlying data system. JuiceFS uses 64MB logical data blocks (chunks), which are further divided into 4MB blocks to be uploaded to object storage. For details, see . For newly created volumes, SeaweedFS stores data locally. For older volumes, SeaweedFS supports uploading them to the cloud to achieve . JuiceFS does not implement tiered storage but directly uses object storage's tiered management services, such as . JuiceFS supports compressing all written data using LZ4 or Zstandard. SeaweedFS determines whether to compress data based on factors such as the file extension and file type. Both support encryption, including encryption during transmission and at rest: SeaweedFS supports encryption both in transit and at rest. When data encryption is enabled, all data written to the volume server is encrypted using random keys. The corresponding key information is managed by the filer that maintains the file metadata. For details, see the . For details about JuiceFS' encryption feature, see . JuiceFS is , while SeaweedFS currently , with ongoing feature enhancements. JuiceFS implements an , enabling direct access to the file system through the S3 API. It supports tools like s3cmd, AWS CLI, and MinIO Client (mc) for file system management. SeaweedFS currently , covering common read, write, list, and delete requests, with some extensions for specific requests like reads. Both support the WebDAV protocol. For details, see: JuiceFS is , including Hadoop 2.x, Hadoop 3.x, and various components within the Hadoop ecosystem. SeaweedFS offers . It lacks support for advanced operations like truncate, concat, checksum, and set attributes. Both support a CSI Driver. For details, see: SeaweedFS client is equipped with , but its documentation weren't located at the time of writing, you can search for `cache` in the . JuiceFS' client supports , allowing users to optimize based on their application's needs. SeaweedFS can be used as an , you can manually warm up specified data to local cache directory, while local modification is asynchronously uploaded to object storage. JuiceFS stores files in split form. Due to its architecture, it does not support serving as a cache for object storage or a cache layer. However, the JuiceFS Enterprise Edition has a standalone feature to provide caching services for existing data in object storage, which is similar to SeaweedFS' object storage gateway. By default, JuiceFS enables the feature. To prevent accidental data loss and ensure data safety, deleted files are retained for a specified time. However, SeaweedFS does not support this feature. Both offer comprehensive maintenance and troubleshooting solutions: JuiceFS provides and to let users view real-time performance metrics. 
It offers a API to integrate monitoring data into Prometheus for visualization and monitoring alerts in Grafana. SeaweedFS uses to interactively execute maintenance tasks, such as checking the current cluster status and listing file directories. It also supports approaches to integrate with Prometheus." } ]
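For readers who want to try both systems, a minimal command sketch follows. It assumes a local Redis instance for JuiceFS metadata and placeholder object-storage settings, and the SeaweedFS flags may differ between versions, so treat it as an outline rather than a reference.

```bash
# JuiceFS: format a volume (metadata in Redis, data in object storage), then mount it.
juicefs format --storage s3 --bucket https://mybucket.s3.example.com \
    redis://127.0.0.1:6379/1 myjfs
juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs

# SeaweedFS: start an all-in-one server (master + volume + filer), then mount the filer.
weed server -dir=/data -filer=true &
weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs
```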
{ "category": "Runtime", "file_name": "juicefs_vs_seaweedfs.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "StratoVirt supports two test modes: unit test and mod test. Note that mod test is not yet fully supported on the x86_64 architecture. Unit tests are Rust functions that verify that the non-test code is functioning in the expected manner. We recommend running the unit tests separately. Run the StratoVirt unit tests as follows: ```shell $ cargo test --workspace --exclude mod_test -- --nocapture --test-threads=1 ``` StratoVirt mod test is an integration testing method. During the test, the StratoVirt process is started as a server and communicates over a socket using QMP to test the StratoVirt module functionality. Before running the mod tests, we need to compile `stratovirt` and `virtiofsd` first, and then export the environment variables `STRATOVIRT_BINARY` and `VIRTIOFSD_BINARY`. Build StratoVirt: ```shell $ cargo build --workspace --bins --release --target=aarch64-unknown-linux-gnu --all-features ``` Build virtiofsd: ```shell $ git clone https://gitlab.com/virtio-fs/virtiofsd.git $ cd virtiofsd $ cargo build --release ``` Export the environment variables `STRATOVIRT_BINARY` and `VIRTIOFSD_BINARY`: ```shell $ export STRATOVIRT_BINARY=\"/path/to/stratovirt\" $ export VIRTIOFSD_BINARY=\"/path/to/virtiofsd\" ``` Run the StratoVirt mod tests as follows: ```shell $ cargo test --all-features -p mod_test -- --nocapture --test-threads=1 ```" } ]
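If you only want to run a single mod test case, standard cargo test filtering works; the test name below is a placeholder.

```bash
# Discover available test names, then run one of them by (partial) name match.
cargo test -p mod_test --all-features -- --list
cargo test -p mod_test --all-features some_test_name -- --nocapture --test-threads=1
```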
{ "category": "Runtime", "file_name": "test.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "<a name=\"top\"/> - - - - - - - - - - - - - - - - <a name=\"api.proto\"/> <p align=\"right\"><a href=\"#top\">Top</a></p> <a name=\"v1alpha.App\"/> App describes the information of an app that&#39;s running in a pod. | Field | Type | Label | Description | | -- | - | -- | -- | | name | | | Name of the app, required. | | image | | | Image used by the app, required. However, this may only contain the image id if it is returned by ListPods(). | | state | | | State of the app. optional, non-empty only if it&#39;s returned by InspectPod(). | | exit_code | | | Exit code of the app. optional, only valid if it&#39;s returned by InspectPod() and the app has already exited. | | annotations | | repeated | Annotations for this app. | <a name=\"v1alpha.Event\"/> Event describes the events that will be received via ListenEvents(). | Field | Type | Label | Description | | -- | - | -- | -- | | type | | | Type of the event, required. | | id | | | ID of the subject that causes the event, required. If the event is a pod or app event, the id is the pod&#39;s uuid. If the event is an image event, the id is the image&#39;s id. | | from | | | Name of the subject that causes the event, required. If the event is a pod event, the name is the pod&#39;s name. If the event is an app event, the name is the app&#39;s name. If the event is an image event, the name is the image&#39;s name. | | time | | | Timestamp of when the event happens, it is the seconds since epoch, required. | | data | | repeated | Data of the event, in the form of key-value pairs, optional. | <a name=\"v1alpha.EventFilter\"/> EventFilter defines the condition that the returned events needs to satisfy in ListImages(). The condition are combined by &#39;AND&#39;. | Field | Type | Label | Description | | -- | - | -- | -- | | types | | repeated | If not empty, then only returns the events that have the listed types. | | ids | | repeated | If not empty, then only returns the events whose &#39;id&#39; is included in the listed ids. | | names | | repeated | If not empty, then only returns the events whose &#39;from&#39; is included in the listed names. | | sincetime | | | If set, then only returns the events after this timestamp. If the server starts after sincetime, then only the events happened after the start of the server will be returned. If since_time is a future timestamp, then no events will be returned until that time. | | until_time | | | If set, then only returns the events before this timestamp. If it is a future timestamp, then the event stream will be closed at that moment. | <a name=\"v1alpha.GetInfoRequest\"/> Request for GetInfo(). <a name=\"v1alpha.GetInfoResponse\"/> Response for GetInfo(). | Field | Type | Label | Description | | -- | - | -- | -- | | info | | | Required. | <a name=\"v1alpha.GetLogsRequest\"/> Request for" }, { "data": "| Field | Type | Label | Description | | -- | - | -- | -- | | pod_id | | | ID of the pod which we will get logs from, required. | | app_name | | | Name of the app within the pod which we will get logs from, optional. If not set, then the logs of all the apps within the pod will be returned. | | lines | | | Number of most recent lines to return, optional. | | follow | | | If true, then a response stream will not be closed, and new log response will be sent via the stream, default is false. | | since_time | | | If set, then only the logs after the timestamp will be returned, optional. | | until_time | | | If set, then only the logs before the timestamp will be returned, optional. 
| <a name=\"v1alpha.GetLogsResponse\"/> Response for GetLogs(). | Field | Type | Label | Description | | -- | - | -- | -- | | lines | | repeated | List of the log lines that returned, optional as the response can contain no logs. | <a name=\"v1alpha.GlobalFlags\"/> GlobalFlags describes the flags that passed to rkt api service when it is launched. | Field | Type | Label | Description | | -- | - | -- | -- | | dir | | | Data directory. | | systemconfigdir | | | System configuration directory. | | localconfigdir | | | Local configuration directory. | | userconfigdir | | | User configuration directory. | | insecure_flags | | | Insecure flags configurates what security features to disable. | | trustkeysfrom_https | | | Whether to automatically trust gpg keys fetched from https | <a name=\"v1alpha.Image\"/> Image describes the image&#39;s information. | Field | Type | Label | Description | | -- | - | -- | -- | | base_format | | | Base format of the image, required. This indicates the original format for the image as nowadays all the image formats will be transformed to ACI. | | id | | | ID of the image, a string that can be used to uniquely identify the image, e.g. sha512 hash of the ACIs, required. | | name | | | Name of the image in the image manifest, e.g. &#39;coreos.com/etcd&#39;, optional. | | version | | | Version of the image, e.g. &#39;latest&#39;, &#39;2.0.10&#39;, optional. | | import_timestamp | | | Timestamp of when the image is imported, it is the seconds since epoch, optional. | | manifest | | | JSON-encoded byte array that represents the image manifest, optional. | | size | | | Size is the size in bytes of this image in the store. | | annotations | | repeated | Annotations on this image. | | labels | | repeated | Labels of this image. | <a name=\"v1alpha.ImageFilter\"/> ImageFilter defines the condition that the returned images need to satisfy in ListImages(). The conditions are combined by &#39;AND&#39;, and different filters are combined by &#39;OR&#39;. | Field | Type | Label | Description | | -- | - | -- | -- | | ids | | repeated | If not empty, the images that have any of the ids will be returned. | | prefixes | | repeated | if not empty, the images that have any of the prefixes in the name will be" }, { "data": "| | base_names | | repeated | If not empty, the images that have any of the base names will be returned. For example, both &#39;coreos.com/etcd&#39; and &#39;k8s.io/etcd&#39; will be returned if &#39;etcd&#39; is included, however &#39;k8s.io/etcd-backup&#39; will not be returned. | | keywords | | repeated | If not empty, the images that have any of the keywords in the name will be returned. For example, both &#39;kubernetes-etcd&#39;, &#39;etcd:latest&#39; will be returned if &#39;etcd&#39; is included, | | labels | | repeated | If not empty, the images that have all of the labels will be returned. | | imported_after | | | If set, the images that are imported after this timestamp will be returned. | | imported_before | | | If set, the images that are imported before this timestamp will be returned. | | annotations | | repeated | If not empty, the images that have all of the annotations will be returned. | | full_names | | repeated | If not empty, the images that have any of the exact full names will be returned. | <a name=\"v1alpha.ImageFormat\"/> ImageFormat defines the format of the image. | Field | Type | Label | Description | | -- | - | -- | -- | | type | | | Type of the image, required. | | version | | | Version of the image format, required. 
| <a name=\"v1alpha.Info\"/> Info describes the information of rkt on the machine. | Field | Type | Label | Description | | -- | - | -- | -- | | rkt_version | | | Version of rkt, required, in the form of Semantic Versioning 2.0.0 (http://semver.org/). | | appc_version | | | Version of appc, required, in the form of Semantic Versioning 2.0.0 (http://semver.org/). | | api_version | | | Latest version of the api that&#39;s supported by the service, required, in the form of Semantic Versioning 2.0.0 (http://semver.org/). | | global_flags | | | The global flags that passed to the rkt api service when it&#39;s launched. | <a name=\"v1alpha.InspectImageRequest\"/> Request for InspectImage(). | Field | Type | Label | Description | | -- | - | -- | -- | | id | | | Required. | <a name=\"v1alpha.InspectImageResponse\"/> Response for InspectImage(). | Field | Type | Label | Description | | -- | - | -- | -- | | image | | | Required. | <a name=\"v1alpha.InspectPodRequest\"/> Request for InspectPod(). | Field | Type | Label | Description | | -- | - | -- | -- | | id | | | ID of the pod which we are querying status for, required. | <a name=\"v1alpha.InspectPodResponse\"/> Response for InspectPod(). | Field | Type | Label | Description | | -- | - | -- | -- | | pod | | | Required. | <a name=\"v1alpha.KeyValue\"/> | Field | Type | Label | Description | | -- | - | -- | -- | | Key | | | Key part of the key-value pair. | | value | | | Value part of the key-value pair. | <a name=\"v1alpha.ListImagesRequest\"/> Request for ListImages(). | Field | Type | Label | Description | | -- | - | -- | -- | | filters | | repeated | Optional. | | detail | | | Optional. | <a name=\"v1alpha.ListImagesResponse\"/> Response for" }, { "data": "| Field | Type | Label | Description | | -- | - | -- | -- | | images | | repeated | Required. | <a name=\"v1alpha.ListPodsRequest\"/> Request for ListPods(). | Field | Type | Label | Description | | -- | - | -- | -- | | filters | | repeated | Optional. | | detail | | | Optional. | <a name=\"v1alpha.ListPodsResponse\"/> Response for ListPods(). | Field | Type | Label | Description | | -- | - | -- | -- | | pods | | repeated | Required. | <a name=\"v1alpha.ListenEventsRequest\"/> Request for ListenEvents(). | Field | Type | Label | Description | | -- | - | -- | -- | | filter | | | Optional. | <a name=\"v1alpha.ListenEventsResponse\"/> Response for ListenEvents(). | Field | Type | Label | Description | | -- | - | -- | -- | | events | | repeated | Aggregate multiple events to reduce round trips, optional as the response can contain no events. | <a name=\"v1alpha.Network\"/> Network describes the network information of a pod. | Field | Type | Label | Description | | -- | - | -- | -- | | name | | | Name of the network that a pod belongs to, required. | | ipv4 | | | Pod&#39;s IPv4 address within the network, optional if IPv6 address is given. | | ipv6 | | | Pod&#39;s IPv6 address within the network, optional if IPv4 address is given. | <a name=\"v1alpha.Pod\"/> Pod describes a pod&#39;s information. If a pod is in Embryo, Preparing, AbortedPrepare state, only id and state will be returned. If a pod is in other states, the pod manifest and apps will be returned when &#39;detailed&#39; is true in the request. A valid pid of the stage1 process of the pod will be returned if the pod is Running has run once. Networks are only returned when a pod is in Running. | Field | Type | Label | Description | | -- | - | -- | -- | | id | | | ID of the pod, in the form of a UUID. 
| | pid | | | PID of the stage1 process of the pod. | | state | | | State of the pod. | | apps | | repeated | List of apps in the pod. | | networks | | repeated | Network information of the pod. Note that a pod can be in multiple networks. | | manifest | | | JSON-encoded byte array that represents the pod manifest of the pod. | | annotations | | repeated | Annotations on this pod. | | cgroup | | | Cgroup of the pod, empty if the pod is not running. | | created_at | | | Timestamp of when the pod is created, nanoseconds since epoch. Zero if the pod is not created. | | started_at | | | Timestamp of when the pod is started, nanoseconds since epoch. Zero if the pod is not started. | | gcmarkedat | | | Timestamp of when the pod is moved to exited-garbage/garbage, in nanoseconds since epoch. Zero if the pod is not moved to exited-garbage/garbage yet. | <a name=\"v1alpha.PodFilter\"/> PodFilter defines the condition that the returned pods need to satisfy in ListPods(). The conditions are combined by &#39;AND&#39;, and different filters are combined" }, { "data": "&#39;OR&#39;. | Field | Type | Label | Description | | -- | - | -- | -- | | ids | | repeated | If not empty, the pods that have any of the ids will be returned. | | states | | repeated | If not empty, the pods that have any of the states will be returned. | | app_names | | repeated | If not empty, the pods that all of the apps will be returned. | | image_ids | | repeated | If not empty, the pods that have all of the images(in the apps) will be returned | | network_names | | repeated | If not empty, the pods that are in all of the networks will be returned. | | annotations | | repeated | If not empty, the pods that have all of the annotations will be returned. | | cgroups | | repeated | If not empty, the pods whose cgroup are listed will be returned. | | podsubcgroups | | repeated | If not empty, the pods whose these cgroup belong to will be returned. i.e. the pod&#39;s cgroup is a prefix of the specified cgroup | <a name=\"v1alpha.AppState\"/> AppState defines the possible states of the app. | Name | Number | Description | | - | | -- | | APPSTATEUNDEFINED | 0 | | | APPSTATERUNNING | 1 | | | APPSTATEEXITED | 2 | | <a name=\"v1alpha.EventType\"/> EventType defines the type of the events that will be received via ListenEvents(). | Name | Number | Description | | - | | -- | | EVENTTYPEUNDEFINED | 0 | | | EVENTTYPEPOD_PREPARED | 1 | Pod events. | | EVENTTYPEPODPREPAREABORTED | 2 | | | EVENTTYPEPOD_STARTED | 3 | | | EVENTTYPEPOD_EXITED | 4 | | | EVENTTYPEPODGARBAGECOLLECTED | 5 | | | EVENTTYPEAPP_STARTED | 6 | App events. | | EVENTTYPEAPP_EXITED | 7 | (XXX)yifan: Maybe also return exit code in the event object? | | EVENTTYPEIMAGE_IMPORTED | 8 | Image events. | | EVENTTYPEIMAGE_REMOVED | 9 | | <a name=\"v1alpha.ImageType\"/> ImageType defines the supported image type. | Name | Number | Description | | - | | -- | | IMAGETYPEUNDEFINED | 0 | | | IMAGETYPEAPPC | 1 | | | IMAGETYPEDOCKER | 2 | | | IMAGETYPEOCI | 3 | | <a name=\"v1alpha.PodState\"/> PodState defines the possible states of the pod. See https://github.com/rkt/rkt/blob/master/Documentation/devel/pod-lifecycle.md for a detailed explanation of each state. | Name | Number | Description | | - | | -- | | PODSTATEUNDEFINED | 0 | | | PODSTATEEMBRYO | 1 | States before the pod is running. Pod is created, ready to entering &#39;preparing&#39; state. | | PODSTATEPREPARING | 2 | Pod is being prepared. On success it will become &#39;prepared&#39;, otherwise it will become &#39;aborted prepared&#39;. 
| | PODSTATEPREPARED | 3 | Pod has been successfully prepared, ready to enter &#39;running&#39; state. it can also enter &#39;deleting&#39; if it&#39;s garbage collected before running. | | PODSTATERUNNING | 4 | State that indicates the pod is running. Pod is running, when it exits, it will become &#39;exited&#39;. | | PODSTATEABORTED_PREPARE | 5 | States that indicates the pod is exited, and will never run. Pod failed to prepare, it will only be garbage collected and will never run again. | | PODSTATEEXITED | 6 | Pod has exited, it now can be garbage" }, { "data": "| | PODSTATEDELETING | 7 | Pod is being garbage collected, after that it will enter &#39;garbage&#39; state. | | PODSTATEGARBAGE | 8 | Pod is marked as garbage collected, it no longer exists on the machine. | <a name=\"v1alpha.PublicAPI\"/> PublicAPI defines the read-only APIs that will be supported. These will be handled over TCP sockets. | Method Name | Request Type | Response Type | Description | | -- | | - | | | GetInfo | | | GetInfo gets the rkt&#39;s information on the machine. | | ListPods | | | ListPods lists rkt pods on the machine. | | InspectPod | | | InspectPod gets detailed pod information of the specified pod. | | ListImages | | | ListImages lists the images on the machine. | | InspectImage | | | InspectImage gets the detailed image information of the specified image. | | ListenEvents | | | ListenEvents listens for the events, it will return a response stream that will contain event objects. | | GetLogs | | | GetLogs gets the logs for a pod, if the app is also specified, then only the logs of the app will be returned. If &#39;follow&#39; in the &#39;GetLogsRequest&#39; is set to &#39;true&#39;, then the response stream will not be closed after the first response, the future logs will be sent via the stream. | | .proto Type | Notes | C++ Type | Java Type | Python Type | | -- | -- | -- | | -- | | <a name=\"double\" /> double | | double | double | float | | <a name=\"float\" /> float | | float | float | float | | <a name=\"int32\" /> int32 | Uses variable-length encoding. Inefficient for encoding negative numbers if your field is likely to have negative values, use sint32 instead. | int32 | int | int | | <a name=\"int64\" /> int64 | Uses variable-length encoding. Inefficient for encoding negative numbers if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | | <a name=\"uint32\" /> uint32 | Uses variable-length encoding. | uint32 | int | int/long | | <a name=\"uint64\" /> uint64 | Uses variable-length encoding. | uint64 | long | int/long | | <a name=\"sint32\" /> sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | | <a name=\"sint64\" /> sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | | <a name=\"fixed32\" /> fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | | <a name=\"fixed64\" /> fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | | <a name=\"sfixed32\" /> sfixed32 | Always four bytes. | int32 | int | int | | <a name=\"sfixed64\" /> sfixed64 | Always eight bytes. 
| int64 | long | int/long | | <a name=\"bool\" /> bool | | bool | boolean | boolean | | <a name=\"string\" /> string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | | <a name=\"bytes\" /> bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str |" } ]
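To make the read-only service above more concrete, here is a hedged Go sketch of a client calling `GetInfo` and `ListPods` over gRPC. The import path, the generated bindings, and the `localhost:15441` listen address are assumptions about a typical `rkt api-service` setup, not something stated on this page.

```go
// Sketch of a read-only v1alpha API client; import path and address are assumptions.
package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	"github.com/rkt/rkt/api/v1alpha"
)

func main() {
	conn, err := grpc.Dial("localhost:15441", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	c := v1alpha.NewPublicAPIClient(conn)

	// GetInfo returns rkt, appc and API versions.
	info, err := c.GetInfo(context.Background(), &v1alpha.GetInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("api version:", info.Info.ApiVersion)

	// ListPods with no filters returns every pod known to rkt.
	pods, err := c.ListPods(context.Background(), &v1alpha.ListPodsRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Pods {
		fmt.Println(p.Id, p.State)
	}
}
```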
{ "category": "Runtime", "file_name": "docs.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "name: Support request about: You are trying to use Antrea and need help title: '' labels: kind/support assignees: '' Describe what you are trying to do <!-- A description of what you are trying to achieve, what you have tried so far and the issues you are facing. -->" } ]
{ "category": "Runtime", "file_name": "support_request.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "BSD/macOS: Fix possible deadlock on closing the watcher on kqueue (thanks @nhooyr and @glycerine) Tests: Fix missing verb on format string (thanks @rchiossi) Linux: Fix deadlock in Remove (thanks @aarondl) Linux: Watch.Add improvements (avoid race, fix consistency, reduce garbage) (thanks @twpayne) Docs: Moved FAQ into the README (thanks @vahe) Linux: Properly handle inotify's INQOVERFLOW event (thanks @zeldovich) Docs: replace references to OS X with macOS Linux: use InotifyInit1 with IN_CLOEXEC to stop leaking a file descriptor to a child process when using fork/exec (thanks @pattyshack) Fix flaky inotify stress test on Linux (thanks @pattyshack) add a String() method to Event.Op (thanks @oozie) Windows: fix for double backslash when watching the root of a drive (thanks @brunoqc) Support linux/arm64 by x/sys/unix and switching to to it from syscall (thanks @suihkulokki) Fix golint errors in windows.go (thanks @tiffanyfj) kqueue: Fix logic for CREATE after REMOVE (thanks @bep) kqueue: fix race condition in Close (thanks @djui for reporting the issue and @ppknap for writing a failing test) inotify: fix race in test enable race detection for continuous integration (Linux, Mac, Windows) inotify: use epoll_create1 for arm64 support (requires Linux 2.6.27 or later) (thanks @suihkulokki) inotify: fix path leaks (thanks @chamaken) kqueue: watch for rename events on subdirectories (thanks @guotie) kqueue: avoid infinite loops from symlinks cycles (thanks @illicitonion) kqueue: don't watch named pipes (thanks @evanphx) inotify: use epoll to wake up readEvents (thanks @PieterD) inotify: closing watcher should now always shut down goroutine (thanks @PieterD) kqueue: close kqueue after removing watches, fixes inotify: Retry read on EINTR (thanks @PieterD) kqueue: rework internals add low-level functions only need to store flags on directories less mutexes done can be an unbuffered channel remove calls to os.NewSyscallError More efficient string concatenation for Event.String() (thanks @mdlayher) kqueue: fix regression in rework causing subdirectories to be watched kqueue: cleanup internal watch before sending remove event kqueue: add dragonfly to the build tags. Rename source code files, rearrange code so exported APIs are at the top. Add done channel to example code. (thanks @chenyukang) (thanks @zhsso) [Fix] Make ./path and path equivalent. (thanks @zhsso) [API] Remove AddWatch on Windows, use Add. Improve documentation for exported identifiers. Minor updates based on feedback from golint. Moved to . Use os.NewSyscallError instead of returning errno (thanks @hariharan-uno) kqueue: fix incorrect mutex used in Close() Update example to demonstrate usage of Op. Fix for String() method on Event (thanks Alex Brainman) Don't build on Plan 9 or Solaris (thanks @4ad) Events channel of type Event rather than Event. [internal] use syscall constants directly for inotify and kqueue. [internal] kqueue: rename events to kevents and fileEvent to event. Go 1.3+ required on Windows (uses syscall.ERRORMOREDATA" }, { "data": "[internal] remove cookie from Event struct (unused). [internal] Event struct has the same definition across every OS. [internal] remove internal watch and removeWatch methods. [API] Renamed Watch() to Add() and RemoveWatch() to Remove(). [API] Pluralized channel names: Events and Errors. [API] Renamed FileEvent struct to Event. [API] Op constants replace methods like IsCreate(). Fix data race on kevent buffer (thanks @tilaks) [API] Remove current implementation of WatchFlags. 
current implementation doesn't take advantage of OS for efficiency provides little benefit over filtering events as they are received, but has extra bookkeeping and mutexes no tests for the current implementation not fully implemented on Windows kqueue: cleanup internal watch before sending remove event (thanks @zhsso) Fix data race on kevent buffer (thanks @tilaks) IsAttrib() for events that only concern a file's metadata (thanks @abustany) (thanks @cespare) [NOTICE] Development has moved to `code.google.com/p/go.exp/fsnotify` in preparation for inclusion in the Go standard library. [API] Remove FD_SET and friends from Linux adapter (thanks @nathany) (reported by @paulhammond) (reported by @mdwhatcott) (reported by @bernerdschaefer) [Doc] specify OS-specific limits in README (thanks @debrando) [Doc] Contributing (thanks @nathany) (thanks @paulhammond) (thanks @nathany) (thanks @jbowtie) [API] Make syscall flags internal [Fix] inotify: ignore event changes (reported by @srid) [Fix] tests on Windows lower case error messages kqueue: Use EVT_ONLY flag on Darwin [Doc] Update README with full example [Fix] inotify: allow monitoring of \"broken\" symlinks (thanks @tsg) (thanks @ChrisBuchholz) (reported by @nbkolchin) (reported by @nbkolchin) [Doc] add Authors (thanks @fsouza) [Fix] Windows path separators [Doc] BSD License kqueue: directory watching improvements (thanks @vmirage) inotify: add `INMOVEDTO` (requested by @cpisto) (reported by @jakerr) [Fix] inotify: fixes from https://codereview.appspot.com/5418045/ (ugorji) (reported by @robfig) [Fix] kqueue: watch the directory even if it isn't a new watch (thanks @robfig) [Fix] kqueue: modify after recreation of file [Fix] kqueue: watch with an existing folder inside the watched folder (thanks @vmirage) [Fix] kqueue: no longer get duplicate CREATE events kqueue: events for created directories [Fix] for renaming files [Feature] FSNotify flags [Fix] inotify: Added file name back to event path kqueue: watch files after directory created (thanks @tmc) [Fix] inotify: remove all watches before Close() [API] kqueue: return errors during watch instead of sending over channel kqueue: match symlink behavior on Linux inotify: add `DELETE_SELF` (requested by @taralx) [Fix] kqueue: handle EINTR (reported by @robfig) (thanks @davecheney) Go 1 released: build with go tool [Feature] Windows support using winfsnotify Windows does not have attribute change notifications Roll attribute notifications into IsModify kqueue: add files when watch directory update to latest Go weekly code kqueue: add watch on file creation to match inotify kqueue: create file event inotify: ignore `IN_IGNORED` events event String() linux: common FileEvent functions initial commit" } ]
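Since several entries above describe API renames (Watch() to Add(), the Events and Errors channels, Op constants replacing the IsCreate()-style helpers), a short Go sketch of the post-rename watcher API may help; the import path is an assumption, as the entries note the project has moved between repositories.

```go
// Minimal watcher sketch using the renamed API; import path is an assumption.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Add() replaces the old Watch() call.
	if err := watcher.Add("/tmp"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event := <-watcher.Events:
			// Op constants replace helpers such as IsModify().
			if event.Op&fsnotify.Write == fsnotify.Write {
				log.Println("modified:", event.Name)
			}
		case err := <-watcher.Errors:
			log.Println("error:", err)
		}
	}
}
```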
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "title: CSI Driver for ZFS PV Provisioning authors: \"@pawanpraka1\" owners: \"@kmova\" \"@vishnuitta\" creation-date: 2019-08-05 last-updated: 2019-08-05 * * * * This is a design proposal to implement a CSI Driver for ZFS volume provisioning for Kubernetes. This design describes how a ZFS dataset will be represented/managed as a kubernetes customer resource, how CSI driver will be implemented support dynamic provisioning of Volumes (ZFS Datasets) on ZPOOL. Using the design/solution described in this document, users will be able to dynamically provision a Local PV backed by a ZFS Volume. This design expects that the administrators have provisioned a ZPOOL on the nodes, as part of adding the node to the Kubernetes Cluster. Using a ZFS Local PV has the following advantages - as opposed to Kubernetes native Local PV backed by direct attached devices: Sharing of the devices among multiple application pods. Enforcing quota on the volumes, making sure the pods dont consume more than the capacity allocated to them. Ability to take snapshots of the Local PV Ability to sustain single disk failures - using the ZPOOL RAID functionality Ability to use data services like compression and encryption. Ubuntu 18.04 Kubernetes 1.14+ Node are installed with ZFS 0.7 or 0.8 ZPOOLs are pre-created by the administrator on the nodes. Zpools on all the nodes will have the same name. StorageClass Topology specification will be used to restrict the Volumes to be provisioned on the nodes where the ZPOOLs are available. I should be able to provide a volume that can be consumed by my application. This volume should get created dynamically during application creation time and the provision should happen from the ZFS pool which is already running on the local nodes. I should be able to delete the volume that was being consumed by my application. This volume should get deleted only when the application is deleted and it should be cleaned up from the ZFS pool running on the node. CSI driver will handle CSI request for volume create CSI driver will read the request parameters and create following resources: ZFSVolume (Kubernetes custom resource) ZFSVolume will be watched for the property change CSI driver will handle CSI request for volume delete CSI driver will read the request parameters and delete corresponding ZFSPV volume resource user will setup all the node and setup the ZFS pool on each of those nodes. user will deploy below sample storage class where we get all the needed zfs properties for creating the volume. The storage class will have allowedTopologies and pool name which will tell us that pool is available on those nodes and it can pick any of that node to schedule the PV. If allowed topologies are not provided then it means the pool is there on all the nodes and scheduler can create the PV anywhere. 
```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-zfspv provisioner: zfs.csi.openebs.io parameters: blocksize: \"4k\" compression: \"on\" dedup: \"on\" thinprovision: \"yes\" poolname: \"zfspv-pool\" allowedTopologies: matchLabelExpressions: key:" }, { "data": "values: gke-zfspv-user-default-pool-c8929518-cgd4 gke-zfspv-user-default-pool-c8929518-dxzc ``` user will deploy a PVC using above storage class ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: demo-zfspv-vol-claim spec: storageClassName: openebs-zfspv accessModes: ReadWriteOnce resources: requests: storage: 5G ``` At CSI, when we get a Create Volume request, it will first try to find a node where it can create the PV object. The driver will trigger the scheduler which will return a node where the PV should be created. In CreateVolume call, we will have the list of nodes where the ZFS pools are present and the volume should be created in anyone of the node present in the list. At this point the ZFS driver will have list of all nodes where ZFS pools are present. It will go through the list and pick the appropriate node to schedule the PV. In this scheduling algorithm the scheduler will pick the node where less number of volumes are provisioned. This is the default scheduling if no scheduling algorithm is provided. Lets say there are 2 nodes node1 and node2 with below pool configuration :- ``` node1 | |--> pool1 | | | |> pvc1 | |> pvc2 |--> pool2 |> pvc3 node2 | |--> pool1 | | | |> pvc4 |--> pool2 |> pvc5 |> pvc6 ``` So if application is using pool1 as shown in the below storage class, then ZFS driver will schedule it on node2 as it has one volume as compared to node1 which has 2 volumes in pool1. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-zfspv provisioner: zfs.csi.openebs.io parameters: blocksize: \"4k\" compression: \"on\" dedup: \"on\" thinprovision: \"yes\" scheduler: \"VolumeWeighted\" poolname: \"pool1\" ``` So if application is using pool2 as shown in the below storage class, then ZFS driver will schedule it on node1 as it has one volume only as compared node2 which has 2 volumes in pool2. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-zfspv provisioner: zfs.csi.openebs.io parameters: blocksize: \"4k\" compression: \"on\" dedup: \"on\" thinprovision: \"yes\" scheduler: \"VolumeWeighted\" poolname: \"pool2\" ``` In case of same number of volumes on all the nodes for the given pool, it can pick any node and schedule the PV on that. In this scheduling algorithm the scheduler will account the available space in ZFS pool into scheduling consideration and schedule the PV to the appropriate ZFS pool where sufficient space is available. Consider the below scenario in a two node cluster setup :- node1 | |--> pool1 (available 1TB) |--> pool2 (available 500GB) node2 | |--> pool1 (available 300 GB) |--> pool2 (available 2TB) Here, if application is using pool1 then the volume will be provisioned on node1 as it has more space available than node2 for pool1. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-zfspv provisioner: zfs.csi.openebs.io parameters: blocksize: \"4k\" compression: \"on\" dedup: \"on\" thinprovision: \"yes\" scheduler: \"CapacityWeighted\" poolname: \"pool1\" ``` If application is using pool2 then volume will be provisioned on node2 as it has more space available that node1 for pool2. 
```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-zfspv provisioner:" }, { "data": "parameters: blocksize: \"4k\" compression: \"on\" dedup: \"on\" thinprovision: \"yes\" scheduler: \"CapacityWeighted\" poolname: \"pool2\" ``` In case if same space is available for all the nodes, the scheduler can pick anyone and create the PV for that. It will create the PV object on scheduled node so that the application using that PV always comes to the same node and also it creates the ZFSVolume object for that volume in order to manage the creation of the ZFS dataset. There will be a watcher at each node which will be watching for the ZFSVolume resource which is aimed for them. The watcher is inbuilt into ZFS node-agent. As soon as ZFSVolume object is created for a node, the corresponding watcher will get the add event and it will create the ZFS dataset with all the volume properties from ZFSVolume custom resource. It will get the pool name from the ZFSVolume custom resource and creates the volume in that pool with pvc name. When the application pod is scheduled on a node, the CSI node-agent will get a NodePublish event. The driver will get the device path from the ZFSVolume customer resource and try to mount the file system as per user request given in the storage class. Once the ZFS volume dataset is created it will put a finalizer and return successful for the NodePublish event. the kubernetes managed ZFS volume will look like this ```yaml apiVersion: openebs.io/v1alpha1 kind: ZFSVolume metadata: creationTimestamp: \"2019-11-03T06:08:02Z\" finalizers: zfs.openebs.io/finalizer generation: 1 labels: kubernetes.io/nodename: gke-pawan-zfspv-default-pool-8b577544-5wgl name: pvc-4b083833-fe00-11e9-b162-42010a8001b1 namespace: openebs resourceVersion: \"147959\" selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfsvolumes/pvc-4b083833-fe00-11e9-b162-42010a8001b1 uid: 4b0fd8bc-fe00-11e9-b162-42010a8001b1 spec: blocksize: 4k capacity: \"4294967296\" compression: \"on\" dedup: \"on\" ownerNodeID: gke-pawan-zfspv-default-pool-8b577544-5wgl poolName: zfspv-pool thinProvison: \"yes\" ``` When CSI will get volume destroy request, it will destroy the created zvol and also deletes the corresponding ZFSVolume custom resource. There will be a watcher watching for this ZFSVolume custom resource in the agent. We can update the ZFSVolume custom resource with the desired property and the watcher of this custom resource will apply the changes to the corresponding volume. When ZFS CSI controller gets a snapshot create request :- ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1beta1 metadata: name: zfspv-snapclass annotations: snapshot.storage.kubernetes.io/is-default-class: \"true\" driver: zfs.csi.openebs.io deletionPolicy: Delete ``` ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: zfspv-snap namespace: openebs spec: volumeSnapshotClassName: zfspv-snapclass source: persistentVolumeClaimName: zfspv-pvc ``` it will create a ZFSSnapshot custom resource. This custom resource will get all the details from the ZFSVolume CR, ownerNodeID will be same here as snapshot will also be there on same node where the original volume is there. The controller will create the request with status as Pending and set a label \"openebs.io/volname\" to the volume name from where the snapshot has to be created. 
```yaml apiVersion: openebs.io/v1alpha1 kind: ZFSSnapshot metadata: name: snapshot-ead3b8ab-306a-4003-8cc2-4efdbb7a9306 namespace: openebs labels: openebs.io/persistence-volum: pvc-34133838-0d0d-11ea-96e3-42010a800114 spec: blocksize: 4k capacity: \"4294967296\" compression: \"on\" dedup: \"on\" ownerNodeID: zfspv-node1 poolName: zfspv-pool status: Pending ``` The watcher of this resource will be the ZFS node agent. The agent running on ownerNodeID node will check this event and create a snapshot with the given name. Once the node agent is able to create the snapshot, it will add a finalizer \"zfs.openebs.io/finalizer\" to this" }, { "data": "Also, it will update the status field to Ready, so that the controller can check this and return successful for the snap create request, the final snapshot object will be :- ```yaml apiVersion: openebs.io/v1alpha1 kind: ZFSSnapshot metadata: name: zfspv-snap namespace: openebs Finalizers: zfs.openebs.io/finalizer labels: openebs.io/persistence-volume: pvc-34133838-0d0d-11ea-96e3-42010a800114 spec: blocksize: 4k capacity: \"4294967296\" compression: \"on\" dedup: \"on\" ownerNodeID: zfspv-node1 poolName: zfspv-pool status: state: Ready ``` We can use GRPC call also to create the snapshot, The controller plugin will call node plugin's grpc server to create the snapshot. Creating a ZFSSnapshot custom resource has advantages :- Even if Node plugin is down we can get the info about snapshots No need to do zfs call every time to get the list of snapshots Controller plugin and Node plugin can run without knowledge of each other Reconciliation is easy to manage as K8s only supports a reconciliation based snapshots as of today. For example, the operation to create a snapshot is triggered via K8s CR. Post that, K8s will be re-trying till a successful snapshot is created. In this case, having an API doesn't add much value. However, there will be use cases where snapshots have to be taken while the file system is frozen to get an application-consistent snapshot. This can required that a snapshot API should be available with blocking API calls that can be aborted after an upper time limit. Maybe, in a future version, we should add a wrapper around the snapshot functionality and expose it through node agent API. This can be tracked in the backlogs. When ZFS CSI controller gets a clone create request :- ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: zfspv-clone spec: dataSource: name: zfspv-snap kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce ``` it will create a ZFSVolume custom resource with same spec as snapshot object. Since clone will be on the same node where snapshot exist so ownerNodeID will also be same. ```yaml apiVersion: openebs.io/v1alpha1 kind: ZFSVolume metadata: name: zfspv-clone namespace: openebs spec: blocksize: 4k capacity: \"4294967296\" compression: \"on\" dedup: \"on\" ownerNodeID: zfspv-node1 poolName: zfspv-pool snapName: pvc-34133838-0d0d-11ea-96e3-42010a800114@snapshot-ead3b8ab-306a-4003-8cc2-4efdbb7a9306 ``` The watcher of this resource will be the ZFS node agent. The agent running on ownerNodeID node will check this event and create a clone from the snapshot with the given name. Once the node agent is able to create the clone from the given snapshot, it will add a finalizer \"zfs.openebs.io/finalizer\" to this resource. 
the final clone ZFSVolume object will be :- ```yaml apiVersion: openebs.io/v1alpha1 kind: ZFSVolume metadata: name: pvc-e1230d2c-b32a-48f7-8b76-ca335b253dcd namespace: openebs Finalizers: zfs.openebs.io/finalizer spec: blocksize: 4k capacity: \"4294967296\" compression: \"on\" dedup: \"on\" ownerNodeID: zfspv-node1 poolName: zfspv-pool snapName: pvc-34133838-0d0d-11ea-96e3-42010a800114@snapshot-ead3b8ab-306a-4003-8cc2-4efdbb7a9306 ``` Note that if ZFSVolume CR does not have snapName, then a normal volume will be created, the snapName field will specify whether we have to create a volume or clone. Provisioning via node selector/affinity. De-Provisioning of volume. Volume Property change support. Support provisioning without Node Selector/Affinity. Monitoring of Devices and ZFS statistics. Alert based on Device and ZFS observability metrics. BDD for ZFSPV. CI pipelines setup to validate the software." } ]
{ "category": "Runtime", "file_name": "20190805-csi-zfspv-volume-provisioning.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Get information about the specified meta partition. ```bash cfs-cli metapartition info [Partition ID] ``` Decommission the specified meta partition replica on the target node and automatically transfer it to other available nodes. ```bash cfs-cli metapartition decommission [Address] [Partition ID] ``` Add a new replica of the meta partition on the target node. ```bash cfs-cli metapartition add-replica [Address] [Partition ID] ``` Delete the replica of the meta partition on the target node. ```bash cfs-cli metapartition del-replica [Address] [Partition ID] ``` Fault diagnosis: find meta partitions that are missing replicas or are mostly unavailable. ```bash cfs-cli metapartition check ```" } ]
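A typical troubleshooting pass chains these commands together; the partition ID and node address below are placeholders.

```bash
# Find unhealthy partitions, inspect one, then move its replica off a node.
cfs-cli metapartition check
cfs-cli metapartition info 1024
cfs-cli metapartition decommission 192.168.0.11:17210 1024
```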
{ "category": "Runtime", "file_name": "metapartition.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes the process for contributing to project. At a very high level, the process to contribute and improve is pretty simple: Submit an issue describing your proposed change Create your development branch Commit your changes Submit your Pull Request Sync with the remote `openebs/external-storage` repository Some general guidelines when submitting issues for OpenEBS volume provisioner: It is advisable to search the existing issues related to openebs-provisioner at if your issue is not listed there then you can move to the next step. If you encounter any issue/bug or have feature request then raise an issue in and label it as `area/volume-provisioning` & `repo/k8s-provisioner`. Fork the repository. Create development branch in your forked repository with the following name convention: \"task-description-#issue\". Reference the issue number along with a brief description in your commits. If you are contributing to the Kubernetes project for the first time then you need to sign otherwise you can proceed to the next steps. Rebase your development branch Submit the PR from the development branch to the `kubernetes-incubator/external-storage:master` Update the PR as per comments given by reviewers. Once the PR is accepted, close the branch. You can request to `openebs/external-storage` maintainer/owners to synchronize the `openebs/external-storage` repository. Your changes will appear in `openebs/external-storage` once it is synced. Fork the `kubernetes-incubator/external-storage` repository into your account $user as referred in the following instructions. ```bash working_dir=$GOPATH/src/github.com/kubernetes-incubator mkdir -p $working_dir cd $working_dir ``` Set `user` to match your Github profile name: ```bash user={your Github profile name} ``` Clone your fork inside `$working_dir` ```bash git clone https://github.com/$user/external-storage.git # Clone your fork $user/external-storage cd external-storage git remote add upstream https://github.com/kubernetes-incubator/external-storage.git git remote set-url --push upstream no_push git remote -v # check info on remote repos ``` ```bash git checkout master git fetch upstream master git rebase upstream/master git status git push origin master ``` ```bash git checkout -b <branch_name> git branch ``` ```bash git checkout <branch-name> git fetch upstream master git rebase upstream/master git status git push origin <branch-name> #After this changes will appear in your $user/external-storage:<branch-name> ``` Once above procedure is followed, you will see your changes in branch `<branch-name>` of `$user/external-storage` on Github. You can create a PR at `kubernetes-incubator/external-storage`. You can add the label `area/openebs` to your PR by commenting `/area openebs` in the comment section of your PR. Once, a PR is merged in `kubernetes-incubator/external-storage`, ask one of the OpenEBS owners to fetch latest changes to `openebs/external-storage`. Owners will fetch latest changes from `kubernetes-incubator/external-storage` to `openebs/external-storage`repo. Your changes will appear here! :smile: If you need any help with git, refer to this and go back to the guide to proceed." } ]
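As a small illustration of the branch-naming and commit conventions described above, the commands below use a made-up issue number 123.

```bash
# Development branch named after the task and issue, per the convention above.
git checkout -b fix-pvc-binding-#123

# Reference the issue number and a brief description in the commit message.
git commit -m "Fix PVC binding failure in openebs-provisioner (#123)"
git push origin fix-pvc-binding-#123
```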
{ "category": "Runtime", "file_name": "CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark schedule describe\" layout: docs Describe schedules Describe schedules ``` ark schedule describe [NAME1] [NAME2] [NAME...] [flags] ``` ``` -h, --help help for describe -l, --selector string only show items matching this label selector ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with schedules" } ]
{ "category": "Runtime", "file_name": "ark_schedule_describe.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "(first-steps)= This tutorial guides you through the first steps with Incus. It covers installing and initializing Incus, creating and configuring some instances, interacting with the instances, and creating snapshots. After going through these steps, you will have a general idea of how to use Incus, and you can start exploring more advanced use cases! Install the Incus package Incus is available on most common Linux distributions. For detailed distribution-specific instructions, refer to {ref}`installing`. Allow your user to control Incus Access to Incus in the packages above is controlled through two groups: `incus` allows basic user access, no configuration and all actions restricted to a per-user project. `incus-admin` allows full control over Incus. To control Incus without having to run all commands as root, you can add yourself to the `incus-admin` group: sudo adduser YOUR-USERNAME incus-admin newgrp incus-admin The `newgrp` step is needed in any terminal that interacts with Incus until you restart your user session. Initialize Incus ```{note} If you are migrating from an existing LXD installation, skip this step and refer to {ref}`server-migrate-lxd` instead. ``` Incus requires some initial setup for networking and storage. This can be done interactively through: incus admin init Or a basic automated configuration can be applied with just: incus admin init --minimal If you want to tune the initialization options, see {ref}`initialize` for more information. Incus is image based and can load images from different image servers. In this tutorial, we will use the . You can list all images that are available on this server with: incus image list images: See {ref}`images` for more information about the images that Incus uses. Now, let's start by launching a few instances. With instance, we mean either a container or a virtual machine. See {ref}`containers-and-vms` for information about the difference between the two instance types. For managing instances, we use the Incus command line client `incus`. Launch a container called `first` using the Ubuntu 22.04 image: incus launch images:ubuntu/22.04 first ```{note} Launching this container takes a few seconds, because the image must be downloaded and unpacked first. ``` Launch a container called `second` using the same image: incus launch images:ubuntu/22.04 second ```{note} Launching this container is quicker than launching the first, because the image is already available. ``` Copy the first container into a container called `third`: incus copy first third Launch a VM called `ubuntu-vm` using the Ubuntu 22.04 image: incus launch images:ubuntu/22.04 ubuntu-vm --vm ```{note} Even though you are using the same image name to launch the instance, Incus downloads a slightly different image that is compatible with" }, { "data": "``` Check the list of instances that you launched: incus list You will see that all but the third container are running. This is because you created the third container by copying the first, but you didn't start it. 
You can start the third container with: incus start third Query more information about each instance with: incus info first incus info second incus info third incus info ubuntu-vm We don't need all of these instances for the remainder of the tutorial, so let's clean some of them up: Stop the second container: incus stop second Delete the second container: incus delete second Delete the third container: incus delete third Since this container is running, you get an error message that you must stop it first. Alternatively, you can force-delete it: incus delete third --force See {ref}`instances-create` and {ref}`instances-manage` for more information. There are several limits and configuration options that you can set for your instances. See {ref}`instance-options` for an overview. Let's create another container with some resource limits: Launch a container and limit it to one vCPU and 192 MiB of RAM: incus launch images:ubuntu/22.04 limited --config limits.cpu=1 --config limits.memory=192MiB Check the current configuration and compare it to the configuration of the first (unlimited) container: incus config show limited incus config show first Check the amount of free and used memory on the parent system and on the two containers: free -m incus exec first -- free -m incus exec limited -- free -m ```{note} The total amount of memory is identical for the parent system and the first container, because by default, the container inherits the resources from its parent environment. The limited container, on the other hand, has only 192 MiB available. ``` Check the number of CPUs available on the parent system and on the two containers: nproc incus exec first -- nproc incus exec limited -- nproc ```{note} Again, the number is identical for the parent system and the first container, but reduced for the limited container. ``` You can also update the configuration while your container is running: Configure a memory limit for your container: incus config set limited limits.memory=128MiB Check that the configuration has been applied: incus config show limited Check the amount of memory that is available to the container: incus exec limited -- free -m Note that the number has changed. Depending on the instance type and the storage drivers that you use, there are more configuration options that you can specify. For example, you can configure the size of the root disk device for a VM: Check the current size of the root disk device of the Ubuntu VM: ```{terminal} :input: incus exec ubuntu-vm -- df -h Filesystem Size Used Avail Use% Mounted on /dev/root 9.6G 1.4G" }, { "data": "15% / tmpfs 483M 0 483M 0% /dev/shm tmpfs 193M 604K 193M 1% /run tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 50M 14M 37M 27% /run/incus_agent /dev/sda15 105M 6.1M 99M 6% /boot/efi ``` Override the size of the root disk device: incus config device override ubuntu-vm root size=30GiB Restart the VM: incus restart ubuntu-vm Check the size of the root disk device again: ```{terminal} :input: incus exec ubuntu-vm -- df -h Filesystem Size Used Avail Use% Mounted on /dev/root 29G 1.4G 28G 5% / tmpfs 483M 0 483M 0% /dev/shm tmpfs 193M 588K 193M 1% /run tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 50M 14M 37M 27% /run/incus_agent /dev/sda15 105M 6.1M 99M 6% /boot/efi ``` See {ref}`instances-configure` and {ref}`instance-config` for more information. You can interact with your instances by running commands in them (including an interactive shell) or accessing the files in the instance. 
Start by launching an interactive shell in your instance: Run the `bash` command in your container: incus exec first -- bash Enter some commands, for example, display information about the operating system: cat /etc/*release Exit the interactive shell: exit Instead of logging on to the instance and running commands there, you can run commands directly from the host. For example, you can install a command line tool on the instance and run it: incus exec first -- apt-get update incus exec first -- apt-get install sl -y incus exec first -- /usr/games/sl See {ref}`run-commands` for more information. You can also access the files from your instance and interact with them: Pull a file from the container: incus file pull first/etc/hosts . Add an entry to the file: echo \"1.2.3.4 my-example\" >> hosts Push the file back to the container: incus file push hosts first/etc/hosts Use the same mechanism to access log files: incus file pull first/var/log/syslog - | less ```{note} Press `q` to exit the `less` command. ``` See {ref}`instances-access-files` for more information. You can create a snapshot of your instance, which makes it easy to restore the instance to a previous state. Create a snapshot called \"clean\": incus snapshot create first clean Confirm that the snapshot has been created: incus list first incus info first ```{note} `incus list` shows the number of snapshots. `incus info` displays information about each snapshot. ``` Break the container: incus exec first -- rm /usr/bin/bash Confirm the breakage: incus exec first -- bash ```{note} You do not get a shell, because you deleted the `bash` command. ``` Restore the container to the state of the snapshot: incus snapshot restore first clean Confirm that everything is back to normal: incus exec first -- bash exit Delete the snapshot: incus snapshot delete first clean See {ref}`instances-snapshots` for more information." } ]
{ "category": "Runtime", "file_name": "first_steps.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-agent completion fish | source To load completions for every new session, execute once: cilium-agent completion fish > ~/.config/fish/completions/cilium-agent.fish You will need to start a new shell for this setup to take effect. ``` cilium-agent completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-agent_completion_fish.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "sidebar_position: 2 sidebar_label: \"Node Expansion\" A storage system is usually expected to expand its capacity by adding a new storage node. In HwameiStor, it can be done with the following steps. Add the node into the Kubernetes cluster, or select a Kubernetes node. The node should have all the required items described in . For example, the new node and disk information are as follows: name: k8s-worker-4 devPath: /dev/sdb diskType: SSD disk After the new node is already added into the Kubernetes cluster, make sure the following HwameiStor pods are already running on this node. ```console $ kubectl get node NAME STATUS ROLES AGE VERSION k8s-master-1 Ready master 96d v1.24.3-2+63243a96d1c393 k8s-worker-1 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-2 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-3 Ready worker 96d v1.24.3-2+63243a96d1c393 k8s-worker-4 Ready worker 1h v1.24.3-2+63243a96d1c393 $ kubectl -n hwameistor get pod -o wide | grep k8s-worker-4 hwameistor-local-disk-manager-c86g5 2/2 Running 0 19h 10.6.182.105 k8s-worker-4 <none> <none> hwameistor-local-storage-s4zbw 2/2 Running 0 19h 192.168.140.82 k8s-worker-4 <none> <none> $ kubectl get localstoragenode k8s-worker-4 NAME IP STATUS AGE k8s-worker-4 10.6.182.103 Ready 8d ``` Construct the storage pool of the node by adding a LocalStorageClaim CR as below: ```console $ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-4 spec: nodeName: k8s-worker-4 owner: local-storage description: diskType: SSD EOF ``` Finally, check if the node has constructed the storage pool by checking the LocalStorageNode CR. ```bash kubectl get localstoragenode k8s-worker-4 -o yaml ``` The output may look like: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: k8s-worker-4 spec: hostname: k8s-worker-4 storageIP: 10.6.182.103 topogoly: region: default zone: default status: pools: LocalStorage_PoolSSD: class: SSD disks: capacityBytes: 214744170496 devPath: /dev/sdb state: InUse type: SSD freeCapacityBytes: 214744170496 freeVolumeCount: 1000 name: LocalStorage_PoolSSD totalCapacityBytes: 214744170496 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 214744170496 volumes: state: Ready ```" } ]
{ "category": "Runtime", "file_name": "node_expansion.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "In a standard Docker Kubernetes cluster, kubelet is running on each node as systemd service and is taking care of communication between runtime and the API service. It is responsible for starting microservices pods (such as `kube-proxy`, `kubedns`, etc. - they can be different for various ways of deploying k8s) and user pods. Configuration of kubelet determines which runtime is used and in what way. Kubelet itself is executed in Docker container (as we can see in `kubelet.service`), but, what is important, it's not a Kubernetes pod (at least for now), so we can keep kubelet running inside the container (as well as directly on the host), and regardless of this, run pods in the chosen runtime. Below, you can find an instruction how to switch one or more nodes on running Kubernetes cluster from Docker to CRI-O. At first, you need to stop kubelet service working on the node: ```shell systemctl stop kubelet ``` and stop all kubelet Docker containers that are still running. ```shell docker stop $(docker ps | grep k8s_ | awk '{print $1}') ``` We have to be sure that `kubelet.service` will start after `crio.service`. It can be done by adding `crio.service` to `Wants=` section in `/etc/systemd/system/kubelet.service`: ```shell $ cat /etc/systemd/system/kubelet.service | grep Wants Wants=docker.socket crio.service ``` If you'd like to change the way of starting kubelet (e.g., directly on the host instead of in a container), you can change it here, but, as mentioned, it's not necessary. Kubelet parameters are stored in `/etc/kubernetes/kubelet.env` file. ```shell $ cat /etc/kubernetes/kubelet.env | grep KUBELET_ARGS KUBELET_ARGS=\"--pod-manifest-path=/etc/kubernetes/manifests --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 --clusterdns=10.233.0.3 --clusterdomain=cluster.local --resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml --require-kubeconfig\" ``` You need to add following parameters to `KUBELET_ARGS`: `--container-runtime-endpoint=unix:///var/run/crio/crio.sock` Socket for remote runtime (default `crio` socket localization). `--runtime-request-timeout=10m` - Optional but useful. Some requests, especially pulling huge images, may take longer than default (2 minutes) and will cause an error. You may need to add following parameter to `KUBELET_ARGS` (prior to Kubernetes `v1.24.0-alpha.2`). This flag is deprecated since `v1.24.0-alpha.2`, and will no longer be available starting from `v1.27.0`: `--container-runtime=remote` - Use remote runtime with provided socket. Kubelet is prepared now. If your cluster is using flannel network, your network configuration should be like: ```shell $ cat /etc/cni/net.d/10-crio.conf { \"name\": \"crio\", \"type\": \"flannel\" } ``` Then, kubelet will take parameters from `/run/flannel/subnet.env` - file generated by flannel kubelet microservice. Start crio first, then kubelet. If you created `crio` service: ```shell systemctl start crio systemctl start kubelet ``` You can follow the progress of preparing node using `kubectl get nodes` or `kubectl get pods --all-namespaces` on Kubernetes control-plane." } ]
{ "category": "Runtime", "file_name": "kubernetes.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "``` bash curl -v \"http://10.196.59.198:17010/raftNode/add?addr=10.196.59.197:17010&id=3\" ``` Adds a new master node to the raft replication group. Parameter List | Parameter | Type | Description | |--|--|-| | addr | string | IP address of the master, in the format of ip:port | | id | uint64 | Node identifier of the master | ``` bash curl -v \"http://10.196.59.198:17010/raftNode/remove?addr=10.196.59.197:17010&id=3\" ``` Removes a node from the raft replication group. Parameter List | Parameter | Type | Description | |--|--|-| | addr | string | IP address of the master, in the format of ip:port | | id | uint64 | Node identifier of the master |" } ]
{ "category": "Runtime", "file_name": "management.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | ||-|-|-|--|-| | E00001 | Assign IP to a pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00002 | Assign IP to deployment/pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00003 | Assign IP to statefulSet/pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00004 | Assign IP to daemonSet/pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00005 | Assign IP to job/pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00006 | Assign IP to replicaset/pod for ipv4, ipv6 and dual-stack case | p1 | true | done | | | E00007 | Successfully run a pod with long yaml for ipv4, ipv6 and dual-stack case | p2 | | done | | | E00008 | Failed to run a pod when IP resource of an IPPool is exhausted | p3 | | done | | | E00009 | The cluster is dual stack, but the spiderpool can allocates ipv4 or ipv6 only with IPPools annotation | p2 | | done | | | E00010 | The cluster is dual stack, but the spiderpool can allocates ipv4 or ipv6 only with Subnet annotation | p2 | | done | |" } ]
{ "category": "Runtime", "file_name": "assignip.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Improve the batch installation of NetworkPolicy rules when the Agent starts: only generate flow operations based on final desired state instead of incrementally. (, [@tnqn]) Fix deadlock when initializing the GroupEntityIndex (in the Antrea Controller) with many groups; this was preventing correct distribution and enforcement of NetworkPolicies. (, [@tnqn]) Use \"os/exec\" package instead of third-party modules to run PowerShell commands and configure host networking on Windows; this change prevents Agent goroutines from getting stuck when configuring routes. (, [@lzhecheng]) [Windows] Fix panic in Agent when calculating the stats for a rule newly added to an existing NetworkPolicy. (, [@tnqn]) Fix bug in iptables rule installation for dual-stack clusters: if a rule was already present for one protocol but not the other, its installation may have been skipped. (, [@lzhecheng]) Upgrade OVS version to 2.14.2 to pick up security fixes for CVE-2015-8011, CVE-2020-27827 and CVE-2020-35498. (, [@antoninbas]) Fix inter-Node ClusterIP Service access when AntreaProxy is disabled. (, [@tnqn]) Fix duplicate group ID allocation in AntreaProxy when using a combination of IPv4 and IPv6 Services in dual-stack clusters; this was causing Service connectivity issues. (, [@hongliangl]) Fix intra-Node ClusterIP Service access when both the AntreaProxy and Egress features are enabled. (, [@tnqn]) Fix invalid clean-up of the HNS Endpoint during Pod deletion, when Docker is used as the container runtime. (, [@wenyingd]) [Windows] Fix race condition on Windows when retrieving the local HNS Network created by Antrea for containers. (, [@tnqn]) [Windows] Fix invalid conversion function between internal and versioned types for controlplane API, which was causing JSON marshalling errors. (, [@tnqn]) Fix implementation of the v1beta1 version of the legacy \"controlplane.antrea.tanzu.vmware.com\" API: the API was incorrectly using some v1beta2 types and it was missing some field selectors. (, [@tnqn]) Enable \"noEncap\" and \"hybrid\" traffic modes for clusters which include Windows Nodes. ( , [@lzhecheng] [@tnqn]) [Windows] Each Agent is responsible for annotating its Node resource with the MAC address of the uplink interface, using the \"node.antrea.io/mac-address\" annotation; the annotation is used to forward Pod traffic Add a generic mechanism to define policy rules enforced on all the network endpoints belonging to the same Namespace as the target of the AppliedTo; this makes it very easy to define an Antrea CNP to only allow same-Namespace traffic (Namespace isolation) across all Namespaces in the cluster or a subset of them. (, [@Dyanngg]) Add support for the \"Reject\" action of Antrea-native policies in the Traceflow observations. (, [@gran-vmv]) Add support for the \"endPort\" field in K8s" }, { "data": "(, [@GraysonWu]) Add support for , [@xliuxu]) Export flow records about connections denied by NetworkPolicies from the FlowExporter and the FlowAggregator; the records include information about the policy responsible for denying the connection when applicable. (, [@zyiou]) Add more NetworkPolicy-related information to IPFIX flow records exported by the FlowAggregator (policy type and rule name). 
(, [@heanlan]) Add live-traffic Traceflow support to the Antrea , , [@luolanzone]) Add crd.antrea.io/v1alpha3/ClusterGroup API resource which removes the deprecated \"ipBlock\" field; a , [@Dyanngg]) Add support for providing an IP address as the source for live-traffic Traceflow; the source can also be omitted altogether in which case any source can be a match. (, [@jianjuns]) Add ICMP echo ID and sequence number to the captured packet for live-traffic Traceflow. (, [@jianjuns]) Add support for dumping OVS groups with the \"antctl get of\" command. (, [@jianjuns]) Add new \"antreaagentdenyconnectioncount\" Prometheus metric to keep track of the number of connections denied because of NetworkPolicies; if too many connections are denied within a short window of time, the metric may undercount. (, [@zyiou]) Generate and check-in clientset code for ClusterGroupMembers and GroupAssociation, to facilitate consumption of these APIs by third-party software. (, [@Dyanngg]) Document requirements for the Node network (how to configure firewalls, security groups, etc.) when running Antrea. (, [@luolanzone]) Rename Antrea Go module from github.com/vmware-tanzu/antrea to antrea.io/antrea, using a vanity import path. (, [@antoninbas]) Enable , [@tnqn]) Change the export mechanism for the FlowAggregator: instead of exporting all flows periodically with a fixed interval, we introduce an \"active timeout\" and an \"inactive timeout\", and flow information is exported differently based on flow activity. (, [@srikartati]) Periodically verify the local gateway's configuration and the gateway routes on each Node, and correct any discrepancy. (, [@hty690]) Remove the \"enableTLSToFlowAggregator\" parameter from the Agent configuration; this information can be provided using the \"flowCollectorAddr\" parameter. (, [@zyiou]) Specify antrea-agent as the default container for kubectl commands using the \"kubectl.kubernetes.io/default-container\" annotation introduced in K8s v1.21. (, [@tnqn]) Improve the OpenAPI schema for Antrea-native policy CRDs to enable a more comprehensive validation. (, [@wenqiq]) Bump K8s dependencies (k8s.io/apiserver, k8s.io/client-go, etc.) to v0.21.0 and replace klog with klog/v2. (, [@xliuxu]) Add nodeSelector for FlowAggregator and ELK Pods in YAML manifests: they must run on amd64 Nodes. (, [@antoninbas]) Update reference Kibana configuration to decode the flowType field and display a human-friendly string instead of an integer. (, [@zyiou]) Package , [@arunvelayutham]) Start enabling Antrea end-to-end tests for Windows Nodes. (, [@lzhecheng]) Parameterize K8s download path in Windows helper" }, { "data": "( , [@jayunit100] [@lzhecheng]) [Windows] It was discovered that the AntreaProxy implementation has an upper-bound for the number of Endpoints it can support for each Service: we increase this upper-bound from ~500 to 800, log a warning for Services with a number of Endpoints greater than 800, and arbitrarily drop some Endpoints so we can still provide load-balancing for the Service. (, [@hongliangl]) Fix Antrea-native policy with multiple AppliedTo selectors: some rules were never realized by the Agents as they thought they had only received partial information from the Controller. (, [@tnqn]) Fix re-installation of the OpenFlow groups when the OVS daemons are restarted to ensure that AntreaProxy keeps functioning. 
(, [@antoninbas]) Configure the MTU correctly in Windows containers, or Path MTU Discovery fails and datagrams with the minimum size are transmitted leading to poor performance in overlay mode. (, [@lzhecheng]) [Windows] Fix IPFIX flow records exported by the Antrea Agent. (, [@zyiou]) If a connection spanned multiple export cycles, it wasn't handled properly and no record was sent for the connection If a connection spanned a single export cycle, a single record was sent but \"delta counters\" were set to 0 which caused flow visualization to omit the flow in dashboards Fix incorrect stats reporting for ingress rules of some NetworkPolicies: some types of traffic were bypassing the OVS table keeping track of statistics once the connection was established, causing packet and byte stats to be incorrect. (, [@ceclinux]) Fix ability of the FlowExporter to connect to the FlowAggregator on Windows: the \"flow-aggregator.flow-aggregator.svc\" DNS name cannot be resolved on Windows because the Agent is running as a process. (, [@dreamtalen]) [Windows] Fix Traceflow for \"hairpinned\" Service traffic. (, [@gran-vmv]) Fix possible crash in the FlowExporter and FlowAggregator when re-establishing a connection for exporting flow records. (, [@srikartati]) Fix local access (from the K8s Node) to the port of a Pod with NodePortLocal enabled running on the same Node. (, [@antoninbas]) Add conntrack label parsing in the FlowExporter when using the OVS netdev datapath, so that NetworkPolicy information can be populated correctly in flow records. (, [@dreamtalen]) Fix the retry logic when enabling the OVS bridge local interface on Windows Nodes. (, [@antoninbas]) [Windows] Sleep for a small duration before injecting Traceflow packet even when the destination is local, to ensure that flow installation can complete and avoid transient errors. (, [@gran-vmv]) Build antrea-cni binary and release binaries without cgo, to avoid dependencies on system libraries. (, [@antoninbas]) Do not populate hostNetwork Pods into AppliedTo groups sent by the Controller to the Agents to avoid unnecessary logs (NetworkPolicies are not enforced on hostNetwork Pods). (, [@Dyanngg]) Fix formatting of K8s code generation tags for Antrea API type declarations, to ensure that auto-generated godocs are rendered correctly. (, [@heshengyuan1311]) Update brew install commands in the documentation for bringing up a local K8s test cluster. (, [@RayBB])" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.1.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Targeted for v0.9 Today the version of Ceph is tied to the version of Rook. Each release of Rook releases a specific version of Ceph that is embedded in the same docker image. This needs to be changed such that the version of Ceph is decoupled from the release of Rook. By separating the decision of which version of Ceph will be deployed with Rook, we have a number of advantages: Admins can choose to run the version of Ceph that meets their requirements. Admins can control when they upgrade the version of Ceph. The data path upgrade needs to be carefully controlled by admins in production environments. Developers can test against any version of Ceph, whether a stable version of Luminous or Mimic, or even a private dev build. Today Rook still includes Luminous, even while Mimic was released several months ago. A frequently asked question from users is when we are going to update to Mimic so they can take advantage of the new features such as the improved dashboard. That question will not be heard anymore after this design change. As soon as a new build of Ceph is available, Rook users will be able to try it out. The approach of embedding Ceph into the Rook image had several advantages that contributed to the design. Simpler development and test matrix. A consistent version of Ceph is managed and there are no unstable Ceph bits running in the cluster. Simpler upgrade path. There is only one version to worry about upgrading. The project is growing out of these requirements and we need to support some added complexity in order to get the benefits of the decoupled versions. There are two versions that will be specified independently in the cluster: the Rook version and the Ceph version. The Rook version is defined by the operator's container `image` tag. All Rook containers launched by the operator will also launch the same version of the Rook image. The full image name is an important part of the version. This allows the container to be loaded from a private repo if desired. In this example, the Rook version is `rook/ceph:v0.8.1`. ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: rook-ceph-operator spec: template: spec: containers: name: rook-ceph-operator image: rook/ceph:v0.8.1 ``` The Ceph version is defined under the property `cephVersion` in the Cluster CRD. All Ceph daemon containers launched by the Rook operator will use this image, including the mon, mgr, osd, rgw, and mds pods. The significance of this approach is that the Rook binary is not included in the daemon containers. All initialization performed by Rook to generate the Ceph config and prepare the daemons must be completed in an . Once the Rook init containers complete their execution, the daemon container will run the Ceph image. The daemon container will no longer have Rook running. In the following Cluster CRD example, the Ceph version is Mimic `13.2.2` built on 23 Oct 2018. ```yaml apiVersion: ceph.rook.io/v1 kind: Cluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: ceph/ceph:v13.2.2-20181023 ``` The operator needs to run the Ceph client tools to manage the cluster. For example, the `ceph` tool is needed for general Ceph configuration and status, while `radosgw-admin` is required for managing an object store. Therefore, all the necessary client tools will still be included in the Rook image. The client tools are tested by the Ceph team to be backward and forward compatible by two versions. 
This means the operator can support a version of Ceph up to two versions older than the client tools it" }, { "data": "With each Rook release, the tools will be included from the latest release of Ceph. For example, in 0.9 Rook will likely include the Mimic tools. Upgrades would be supported from Luminous to Mimic. Rook 0.9 can also be tested to support upgrades to Nautilus since they may be released in the same time frame. Since the Ceph tools are forward compatible, the Mimic tools will be sufficient to support upgrading to Nautilus. If Nautilus is released after Rook 0.9, a patch release can be made to 0.9 so that Rook can officially support the upgrade at that point. The changes in the patch release should be minimal since upgrading to Nautilus could have been mostly planned for in 0.9. The operator will be made to understand differences in the Ceph versions that are necessary for orchestration. Some examples might include: If running Luminous, start the Ceph dashboard on http. If running Mimic, a self-signed cert could be generated to start the dashboard with https. If a new daemon is added in a future Ceph release, the operator would understand to deploy that daemon only if the Ceph version is at least that version. Rook will support a very specific list of major versions. Outside these versions, Rook will not be aware of the needs for configuring and upgrading the cluster. In v0.9, the supported versions will be: luminous (ceph/ceph:v12.2.x) mimic (ceph/ceph:v13.2.x) Depending on the timing of the 0.9 and Nautilus releases, Nautilus will likely be supported either in 0.9 or a patch release. Versions not yet officially supported can be tested with settings in the CRD to be mentioned below. All Rook implementation specific to a Ceph version will apply to all patch releases of that major release. For example, Rook is not expected to have any differences handling various Mimic patch releases. The flexibility during upgrades will now be improved since the upgrade of Rook will be independent from the upgrade to the Ceph version. To upgrade Rook, update the version of the Rook operator container To upgrade Ceph, make sure Rook is running the latest release, then update the `cephVersion.image` in the cluster CRD The versions to be supported during upgrade will be a specific set for each version of Rook. In 0.9, it is anticipated that the only upgrade of Ceph supported would only be Luminous to Mimic. When Rook officially adds support for a release of Ceph (ie. Nautilus), the upgrade path will also be supported from one previous version. For example, after Nautilus support is added, Luminous users would first need to upgrade to Mimic and then Nautilus. While it may be possible to skip versions during upgrade, it is not supported in order to keep the testing more scoped. Each time the operator starts, an idempotent orchestration is executed to ensure the cluster is in the desired state. As part of the orchestration, the version of the operator will be reviewed. If the version has changed, the operator will update each of the daemons in a predictable order such as: mon, mgr, osd, rgw, mds. If the Rook upgrade requires any special steps, they will be handled as each version upgrade requires. When the cluster CRD is updated with a new Ceph version, the same idempotent orchestration is executed to evaluate desired state that needs to be applied to the cluster. Over time as the operator becomes smarter and more versions are supported, the custom upgrade steps will be implemented as needed. 
Daemons will only be restarted when necessary for the" }, { "data": "The Rook upgrade sometimes will not require a restart of the daemons, depending on if the pod spec changed. The Ceph upgrade will always require a restart of the daemons. In either case, a restart will be done in an orderly, rolling manner with one pod at a time along with health checks as the upgrade proceeds. The upgrade will be paused if the cluster becomes unhealthy. See the for more details on the general upgrade approach. To allow more control over the upgrade, we define `upgradePolicy` settings. They will allow the admin to: Upgrade one type of daemon at a time and confirm they are healthy before continuing with the upgrade Allow for testing of future versions that are not officially supported The settings in the CRD to accommodate the design include: `upgradePolicy.cephVersion`: The version of the image to start applying to the daemons specified in the `components` list. `allowUnsupported`: If `false`, the operator would refuse to upgrade the Ceph version if it doesn't support or recognize that version. This would allow testing of upgrade to unreleased versions. The default is `false`. `upgradePolicy.components`: A list of daemons or other components that should be upgraded to the version `newCephVersion`. The daemons include `mon`, `osd`, `mgr`, `rgw`, and `mds`. The ordering of the list will be ignored as Rook will only support ordering as it determines necessary for a version. If there are special upgrade actions in the future, they could be named and added to this list. For example, with the settings below the operator would only upgrade the mons to mimic, while other daemons would remain on luminous. When the admin is ready, he would add more daemons to the list. ```yaml spec: cephVersion: image: ceph/ceph:v12.2.9-20181026 allowUnsupported: false upgradePolicy: cephVersion: image: ceph/ceph:v13.2.2-20181023 allowUnsupported: false components: mon ``` When the admin is completed with the upgrade or he is ready to allow Rook to complete the full upgrade for all daemons, he would set `cephVersion.image: ceph/ceph:v13.2.2`, and the operator would ignore the `upgradePolicy` since the `cephVersion` and `upgradePolicy.cephVersion` match. If the admin wants to pause or otherwise control the upgrade closely, there are a couple of natural back doors: Deleting the operator pod will effectively pause the upgrade. Starting the operator pod up again would resume the upgrade. If the admin wants to manually upgrade the daemons, he could stop the operator pod, then set the container image on each of the Deployments (pods) he wants to update. The difficulty with this approach is if there are any changes to the pod specs that are made between versions of the daemons. The admin could update the pod specs manually, but it would be error prone. If a developer wants to test the upgrade from mimic to nautilus, he would first create the cluster based on mimic. Then he would update the crd with the \"unrecognized version\" attribute in the CRD to specify nautilus such as: ```yaml spec: cephVersion: image: ceph/ceph:v14.1.1 allowUnsupported: true ``` Until Nautilus builds are released, the latest Nautilus build can be tested by using the image `ceph/daemon-base:latest-master`. For backward compatibility, if the `cephVersion` property is not set, the operator will need to internally set a default version of Ceph. The operator will assume the desired Ceph version is Luminous 12.2.7, which was shipped with Rook v0.8. 
This default will allow the Rook upgrade from v0.8 to v0.9 to only impact the Rook version and hold the Ceph version at Luminous. After the Rook upgrade to v0.9, the user can choose to set the `cephVersion` property to some newer version of Ceph such as mimic." } ]
{ "category": "Runtime", "file_name": "decouple-ceph-version.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "`Proxy` is the proxy module, mainly responsible for message forwarding, volume allocation and renewal proxy, caching, etc., mainly to alleviate the service pressure of clustermgr. The configuration of the proxy is based on the , and the following configuration instructions mainly focus on the private configuration of the proxy. ::: tip Note Starting from version v3.3.0, Proxy node supports caching of volume and disk information. ::: | Configuration Item | Description | Required | |:--|:|:| | Public configuration | Such as service port, running logs, audit logs, etc., refer to the section | Yes | | host | Current host information, used for reporting to clustermgr for service discovery, for example, http://serviceip:bindport | Yes | | cluster_id | Cluster ID | Yes | | idc | IDC ID | Yes | | retainintervals | Renewal interval cycle, used in conjunction with the volume expiration time set by cm | Yes | | initvolumenum | The number of volumes requested from clustermgr when starting up, set according to the size of the cluster | Yes | | defaultallocvols_num | The number of volumes requested from clustermgr each time, set according to the size of the cluster | Yes | | mq | Kafka producer configuration | Yes | | diskvbasepath | Persistent path for caching volume and disk information | Yes | ```json { \"heartbeatintervals\": \"Interval for sending heartbeat to Clustermgr. The heartbeat time is heartbeatTicks * tickInterval\", \"heartbeatticks\": \"Used in conjunction with heartbeatinterval_s\", \"expires_ticks\": \"\", \"diskvbasepath\": \"Local persistent path for caching information\", \"volume_capacity\": \"Capacity of memory volume information, default is 1M\", \"volumeexpirationseconds\": \"Expiration time of memory volume information, default is 0, which means no expiration\", \"disk_capacity\": \"Capacity of memory disk information, default is 1M\", \"diskexpirationseconds\": \"Expiration time of memory disk information, default is 0, which means no expiration\", \"clustermgr\": { \"hosts\": \"List of clustermgr hosts, [`http://ip:port`,`http://ip1:port`]\", \"rpc\": \"Refer to the rpc LbClient configuration introduction\" }, \"bidallocnums\": \"Maximum number of bids that access can request from proxy each time\", \"host\": \"Current host information, used for reporting to clustermgr for service discovery, for example, http://serviceip:bindport\", \"cluster_id\": \"Cluster ID\", \"idc\": \"IDC ID\", \"retainintervals\": \"Renewal interval cycle, used in conjunction with the volume expiration time set by cm\", \"initvolumenum\": \"The number of volumes requested from clustermgr when starting up, set according to the size of the cluster\", \"defaultallocvols_num\": \"The number of volumes requested from clustermgr each time, access allocation requests can trigger\", \"retainvolumebatch_num\": \"The number of retain volumes in batches, which can relieve the pressure of a single retain, the default is 400\", \"retainbatchinterval_s\": \"Batch retain interval\", \"metricreportinterval_s\": \"Time interval for proxy to report running status to Prometheus\", \"mq\": { \"blobdeletetopic\": \"Topic name for delete messages\", \"shardrepairtopic\": \"Topic name for repair messages\", \"shardrepairpriority_topic\": \"Messages with high-priority repair will be delivered to this topic, usually when a bid has missing chunks in multiple chunks\", \"version\": \"kafka version, default is 2.1.0\", \"msg_sender\": { \"kafka\": \"Refer to the Kafka producer usage configuration introduction\" } } 
} ``` ```json { \"bind_addr\": \":9600\", \"host\": \"http://127.0.0.1:9600\", \"idc\": \"z0\", \"cluster_id\": 1, \"defaultallocvols_num\" : 2, \"initvolumenum\": 4, \"diskvbasepath\": \"./run/cache\", \"clustermgr\": { \"hosts\": [ \"http://127.0.0.1:9998\", \"http://127.0.0.1:9999\", \"http://127.0.0.1:10000\" ] }, \"mq\": { \"blobdeletetopic\": \"blob_delete\", \"shardrepairtopic\": \"shard_repair\", \"shardrepairprioritytopic\": \"shardrepair_prior\", \"version\": \"0.10.2.0\", \"msg_sender\": { \"broker_list\": [\"127.0.0.1:9092\"] } }, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/proxy.log\" } } ```" } ]
{ "category": "Runtime", "file_name": "proxy.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark client config set\" layout: docs Set client configuration file values Set client configuration file values ``` ark client config set KEY=VALUE [KEY=VALUE]... [flags] ``` ``` -h, --help help for set ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Get and set client configuration file values" } ]
{ "category": "Runtime", "file_name": "ark_client_config_set.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "containerd uses the issues and milestones to define its roadmap. `ROADMAP.md` files are common in open source projects, but we find they quickly become out of date. We opt for an issues and milestone approach that our maintainers and community can keep up-to-date as work is added and completed. Issues tagged with the `roadmap` label are high level roadmap items. They are tasks and/or features that the containerd community wants completed. Smaller issues and pull requests can reference back to the main roadmap issue that is tagged to help detail progress towards the overall goal. Milestones define when an issue, pull request, and/or roadmap item is to be completed. Issues are the what, milestones are the when. Development is complex therefore roadmap items can move between milestones depending on the remaining development and testing required to release a change. To find the roadmap items currently planned for containerd you can filter on the `roadmap` label. After searching for roadmap items you can view what milestone they are scheduled to be completed in along with the progress." } ]
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "(howto-storage-pools)= See the following sections for instructions on how to create, configure, view and resize {ref}`storage-pools`. (storage-create-pool)= Incus creates a storage pool during initialization. You can add more storage pools later, using the same driver or different drivers. To create a storage pool, use the following command: incus storage create <poolname> <driver> [configurationoptions...] Unless specified otherwise, Incus sets up loop-based storage with a sensible default size (20% of the free disk space, but at least 5 GiB and at most 30 GiB). See the {ref}`storage-drivers` documentation for a list of available configuration options for each driver. See the following examples for how to create a storage pool using different storage drivers. `````{tabs} ````{group-tab} Directory Create a directory pool named `pool1`: incus storage create pool1 dir Use the existing directory `/data/incus` for `pool2`: incus storage create pool2 dir source=/data/incus ```` ````{group-tab} Btrfs Create a loop-backed pool named `pool1`: incus storage create pool1 btrfs Use the existing Btrfs file system at `/some/path` for `pool2`: incus storage create pool2 btrfs source=/some/path Create a pool named `pool3` on `/dev/sdX`: incus storage create pool3 btrfs source=/dev/sdX ```` ````{group-tab} LVM Create a loop-backed pool named `pool1` (the LVM volume group will also be called `pool1`): incus storage create pool1 lvm Use the existing LVM volume group called `my-pool` for `pool2`: incus storage create pool2 lvm source=my-pool Use the existing LVM thin pool called `my-pool` in volume group `my-vg` for `pool3`: incus storage create pool3 lvm source=my-vg lvm.thinpool_name=my-pool Create a pool named `pool4` on `/dev/sdX` (the LVM volume group will also be called `pool4`): incus storage create pool4 lvm source=/dev/sdX Create a pool named `pool5` on `/dev/sdX` with the LVM volume group name `my-pool`: incus storage create pool5 lvm source=/dev/sdX lvm.vg_name=my-pool ```` ````{group-tab} ZFS Create a loop-backed pool named `pool1` (the ZFS zpool will also be called `pool1`): incus storage create pool1 zfs Create a loop-backed pool named `pool2` with the ZFS zpool name `my-tank`: incus storage create pool2 zfs zfs.pool_name=my-tank Use the existing ZFS zpool `my-tank` for `pool3`: incus storage create pool3 zfs source=my-tank Use the existing ZFS dataset `my-tank/slice` for `pool4`: incus storage create pool4 zfs source=my-tank/slice Use the existing ZFS dataset `my-tank/zvol` for `pool5` and configure it to use ZFS block mode: incus storage create pool5 zfs source=my-tank/zvol volume.zfs.block_mode=yes Create a pool named `pool6` on `/dev/sdX` (the ZFS zpool will also be called `pool6`): incus storage create pool6 zfs source=/dev/sdX Create a pool named `pool7` on `/dev/sdX` with the ZFS zpool name `my-tank`: incus storage create pool7 zfs source=/dev/sdX zfs.pool_name=my-tank ```` ````{group-tab} Ceph RBD Create an OSD storage pool named `pool1` in the default Ceph cluster (named `ceph`): incus storage create pool1 ceph Create an OSD storage pool named `pool2` in the Ceph cluster `my-cluster`: incus storage create pool2 ceph ceph.cluster_name=my-cluster Create an OSD storage pool named `pool3` with the on-disk name `my-osd` in the default Ceph cluster: incus storage create pool3 ceph ceph.osd.pool_name=my-osd Use the existing OSD storage pool `my-already-existing-osd` for `pool4`: incus storage create pool4 ceph source=my-already-existing-osd Use the existing OSD erasure-coded pool 
`ecpool` and the OSD replicated pool `rpl-pool` for `pool5`: incus storage create pool5 ceph source=rpl-pool ceph.osd.datapoolname=ecpool ```` ````{group-tab} CephFS ```{note} Each CephFS file system consists of two OSD storage pools, one for the actual data and one for the file" }, { "data": "``` Use the existing CephFS file system `my-filesystem` for `pool1`: incus storage create pool1 cephfs source=my-filesystem Use the sub-directory `my-directory` from the `my-filesystem` file system for `pool2`: incus storage create pool2 cephfs source=my-filesystem/my-directory Create a CephFS file system `my-filesystem` with a data pool called `my-data` and a metadata pool called `my-metadata` for `pool3`: incus storage create pool3 cephfs source=my-filesystem cephfs.createmissing=true cephfs.datapool=my-data cephfs.meta_pool=my-metadata ```` ````{group-tab} Ceph Object ```{note} When using the Ceph Object driver, you must have a running Ceph Object Gateway URL available beforehand. ``` Use the existing Ceph Object Gateway `https://www.example.com/radosgw` to create `pool1`: incus storage create pool1 cephobject cephobject.radosgw.endpoint=https://www.example.com/radosgw ```` ````` (storage-pools-cluster)= If you are running an Incus cluster and want to add a storage pool, you must create the storage pool for each cluster member separately. The reason for this is that the configuration, for example, the storage location or the size of the pool, might be different between cluster members. Therefore, you must first create a pending storage pool on each member with the `--target=<cluster_member>` flag and the appropriate configuration for the member. Make sure to use the same storage pool name for all members. Then create the storage pool without specifying the `--target` flag to actually set it up. For example, the following series of commands sets up a storage pool with the name `my-pool` at different locations and with different sizes on three cluster members: ```{terminal} :input: incus storage create my-pool zfs source=/dev/sdX size=10GiB --target=vm01 Storage pool my-pool pending on member vm01 :input: incus storage create my-pool zfs source=/dev/sdX size=15GiB --target=vm02 Storage pool my-pool pending on member vm02 :input: incus storage create my-pool zfs source=/dev/sdY size=10GiB --target=vm03 Storage pool my-pool pending on member vm03 :input: incus storage create my-pool zfs Storage pool my-pool created ``` Also see {ref}`cluster-config-storage`. ```{note} For most storage drivers, the storage pools exist locally on each cluster member. That means that if you create a storage volume in a storage pool on one member, it will not be available on other cluster members. This behavior is different for Ceph-based storage pools (`ceph`, `cephfs` and `cephobject`) where each storage pool exists in one central location and therefore, all cluster members access the same storage pool with the same storage volumes. ``` See the {ref}`storage-drivers` documentation for the available configuration options for each storage driver. General keys for a storage pool (like `source`) are top-level. Driver-specific keys are namespaced by the driver name. 
Use the following command to set configuration options for a storage pool: incus storage set <pool_name> <key> <value> For example, to turn off compression during storage pool migration for a `dir` storage pool, use the following command: incus storage set my-dir-pool rsync.compression false You can also edit the storage pool configuration by using the following command: incus storage edit <pool_name> You can display a list of all available storage pools and check their configuration. Use the following command to list all available storage pools: incus storage list The resulting table contains the storage pool that you created during initialization (usually called `default` or `local`) and any storage pools that you added. To show detailed information about a specific pool, use the following command: incus storage show <pool_name> To see usage information for a specific pool, run the following command: incus storage info <pool_name> (storage-resize-pool)= If you need more storage, you can increase the size of your storage pool by changing the `size` configuration key: incus storage set <poolname> size=<newsize> This will only work for loop-backed storage pools that are managed by Incus. You can only grow the pool (increase its size), not shrink it." } ]
{ "category": "Runtime", "file_name": "storage_pools.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "% runc-checkpoint \"8\" runc-checkpoint - checkpoint a running container runc checkpoint [option ...] container-id The checkpoint command saves the state of the running container instance with the help of criu(8) tool, to be restored later. --image-path path : Set path for saving criu image files. The default is ./checkpoint. --work-path path : Set path for saving criu work files and logs. The default is to reuse the image files directory. --parent-path path : Set path for previous criu image files, in pre-dump. --leave-running : Leave the process running after checkpointing. --tcp-established : Allow checkpoint/restore of established TCP connections. See . --ext-unix-sk : Allow checkpoint/restore of external unix sockets. See . --shell-job : Allow checkpoint/restore of shell jobs. --lazy-pages : Use lazy migration mechanism. See . --status-fd fd : Pass a file descriptor fd to criu. Once lazy-pages server is ready, criu writes \\0 (a zero byte) to that fd. Used together with --lazy-pages. --page-server IP-address:port : Start a page server at the specified IP-address and port. This is used together with criu lazy-pages. See . --file-locks : Allow checkpoint/restore of file locks. See . --pre-dump : Do a pre-dump, i.e. dump container's memory information only, leaving the container running. See . --manage-cgroups-mode soft|full|strict|ignore. : Cgroups mode. Default is soft. See . --empty-ns namespace : Checkpoint a namespace, but don't save its properties. See . --auto-dedup : Enable auto deduplication of memory images. See . criu(8), runc-restore(8), runc(8), criu(8)." } ]
{ "category": "Runtime", "file_name": "runc-checkpoint.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "`virtcontainers` has a few prerequisites for development: CNI golang To build `virtcontainers`, at the top level directory run: ```bash ``` Before testing `virtcontainers`, ensure you have met the . To test `virtcontainers`, at the top level run: ``` ``` This will: run static code checks on the code base. run `go test` unit tests from the code base. For details on the format and how to submit changes, refer to the document." } ]
{ "category": "Runtime", "file_name": "Developers.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "Firecracker uses KVM for the actual resource virtualization, hence setting up a development environment requires either a bare-metal machine (with hardware virtualization), or a virtual machine that supports nested virtualization. The different options are outlined below. Once the environment is set up, one can continue with the specific steps of setting up Firecracker (e.g., as outlined in the instructions). `[TODO]` Note that Firecracker development on macOS has no hard dependency on VMware Fusion or Ubuntu. All that is required is a Linux VM that supports nested virtualization. This is but one example of that setup: Download and install . Download an ISO image. Open VMware Fusion, open the File menu, and select New... to bring up the Select the Installation Method window. Find the ISO image you downloaded in step 2, and drag it onto the VMware window opened in step 3. You should now be at the Create a New Virtual Machine window. Ensure the Ubuntu 18.04.2 image is highlighted, and click Continue. On the Linux Easy Install window, leave the Use Easy Install option checked, enter a password, and click Continue. On the Finish window, click Finish, and save the `.vmwarevm` file if prompted. After the VM starts up, open the Virtual Machine menu, and select Shut Down. After the VM shuts down, open the Virtual Machine menu, and select Settings.... From the settings window, select Processors & Memory, and then unfurl the Advanced options section. Check the Enable hypervisor applications in this virtual machine option, close the settings window, open the Virtual Machine menu, and select Start Up. If you receive a Cannot connect the virtual device sata0:1 because no corresponding device is available on the host. error, you can respond No to the prompt. Once the VM starts up, log in as the user you created in step 6. After logging in, open the Terminal app, and run `sudo apt install curl -y` to install cURL. Now you can continue with the Firecracker instructions to install and configure Firecracker in the new VM. Firecracker development environment on AWS can be setup using bare metal instances. Follow these steps to create a bare metal instance. If you don't already have an AWS account, create one using the . Login to . You must select a region that offers bare metal EC2 instances. To check which regions support bare-metal, visit and look for `*.metal` instance types. Click on `Launch a virtual machine` in `Build Solution` section. Firecracker requires a relatively new kernel, so you should use a recent Linux distribution - such as `Ubuntu Server 22.04 LTS (HVM), SSD Volume Type`. In `Step 2`, scroll to the bottom and select `c5.metal` instance type. Click on `Next: Configure Instance Details`. In `Step 3`, click on `Next: Add Storage`. In `Step 4`, click on `Next: Add Tags`. In `Step 5`, click on `Next: Configure Security Group`. In `Step 6`, take the default security group. This opens up port 22 and is needed so that you can ssh into the machine later. Click on `Review and Launch`. Verify the details and click on `Launch`. If you do not have an existing key pair, then you can select `Create a new key pair` to create a key pair. This is needed so that you can use it later to ssh into the machine. Click on the instance id in the green box. 
Copy `Public DNS` from the `Description` tab of the selected" }, { "data": "Login to the newly created instance: ```console ssh -i <ssh-key> ubuntu@<public-ip> ``` Now you can continue with the Firecracker instructions to use Firecracker to create a microVM. One of the options to set up Firecracker for development purposes is to use a VM on Google Compute Engine (GCE), which supports nested virtualization and allows to run KVM. If you don't have a Google Cloud Platform (GCP) account, you can find brief instructions in the Addendum . Here is a brief summary of steps to create such a setup (full instructions to set up a Ubuntu-based VM on GCE with nested KVM enablement can be found in GCE ). Select a GCP project and zone ```console $ FCPROJECT=yourname-firecracker $ FC_REGION=us-east1 $ FC_ZONE=us-east1-b ``` <details><summary>Click here for instructions to create a new project</summary> <p> It might be convenient to keep your Firecracker-related GCP resources in a separate project, so that you can keep track of resources more easily and remove everything easily once your are done. For convenience, give the project a unique name (e.g., your_name-firecracker), so that GCP does not need to create a project id different than project name (by appending randomized numbers to the name you provide). ```console $ gcloud projects create ${FC_PROJECT} --enable-cloud-apis --set-as-default ``` </p> </details> ```console $ gcloud config set project ${FC_PROJECT} $ gcloud config set compute/region ${FC_REGION} $ gcloud config set compute/zone ${FC_ZONE} ``` The next step is to create a VM image able to run nested KVM (as outlined ). Now we create the VM: Keep in mind that you will need an instance type that supports nested virtualization. `E2` and `N2D` instances will not work. If you want to use a `N1` instance (default in some regions), make sure it uses at least a processor of the `Haswell` architecture by specifying `--min-cpu-platform=\"Intel Haswell\"` when you create the instance. Alternatively, use `N2` instances (such as with `--machine-type=\"n2-standard-2\"`). ```console $ FC_VM=firecracker-vm $ gcloud compute instances create ${FC_VM} --enable-nested-virtualization \\ --zone=${FC_ZONE} --min-cpu-platform=\"Intel Haswell\" \\ --machine-type=n1-standard-2 ``` Connect to the VM via SSH. ```console $ gcloud compute ssh ${FC_VM} ``` When doing it for the first time, a key-pair will be created for you (you will be propmpted for a passphrase - can just keep it empty) and uploaded to GCE. Done! You should see the prompt of the new VM: ```console [YOURUSERNAME]@firecracker-vm:~$ ``` Verify that VMX is enabled, enable KVM ```console $ grep -cw vmx /proc/cpuinfo 1 $ apt-get update $ apt-get install acl $ sudo setfacl -m u:${USER}:rw /dev/kvm $ [ -r /dev/kvm ] && [ -w /dev/kvm ] && echo \"OK\" || echo \"FAIL\" OK ``` Depending on your machine you will get a different number, but anything except 0 means `KVM` is enabled. Now you can continue with the Firecracker instructions to install and configure Firecracker in the new VM. In a nutshell, setting up a GCP account involves the following steps: Log in to GCP with your Google credentials. If you don't have account, you will be prompted to join the trial. Install GCP CLI & SDK (full instructions can be found ). 
```console $ export CLOUDSDKREPO=\"cloud-sdk-$(lsb_release -c -s)\" $ echo \"deb http://packages.cloud.google.com/apt $CLOUDSDKREPO main\" \\ | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list $ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg \\ | sudo apt-key add - $ sudo apt-get update && sudo apt-get install -y google-cloud-sdk ``` Configure the `gcloud` CLI by running: ```console $ gcloud init --console-only ``` Follow the prompts to authenticate (open the provided link, authenticate, copy the token back to console) and select the default project. `[TODO]`" } ]
{ "category": "Runtime", "file_name": "dev-machine-setup.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "This project aims to create an API to enable multiple network interface for Rook storage providers. Currently, Rook providers only choice is to use `hostNetwork` or not. The API will be used to define networks resource for Rook clusters. It enables more fine-grained control over network access. To achieve non-flat networking model, Rook can choose to enable `hostNetwork` and expose host network interfaces to Storage Provider pods. Ceph Rook cluster network definition example: ```yaml network: hostNetwork: true ``` The Ceph operator without specifying this configuration will always default to pod networking. Rook operators can define storage cluster's network using network provider. Network provider example includes host, and multus. To configure the cluster network, cluster CRD needs to tell the network provider the appropriate `NetworkInterfaceSelector`. `NetworkInterfaceSelector` will be provided as list of `interfaces` key-value. ```yaml network: provider: <network-provider> interfaces: <key>: <network-interface-selector> <key>: <network-interface-selector> ``` Network provider determines multi-homing method and the network interface selector. Using host network provider, pod can use host's network namespace and its interfaces to use as cluster network. Using popular multi-plugin CNI plugin such as , Rook operators can attach multiple network interfaces to pods. One thing to remember, leaving the network configuration empty will default to kubernetes cluster default networking. The network interface selectors with key values `public` and `cluster` are specified for Ceph. Network interface selector is fed to network provider to connect pod to cluster network. This selector may vary from one network provider to another. For example, host network provider only needs to know the interface name. On the other hand, multi-plugin CNI plugin needs to know the network attachment definition's name and vice versa. Multi-plugin such as multus may seem to follow Network Attachment Selection Annotation documented by Kubernetes Network Plumbing Working Group's [Multi-Net Spec]. Any future multi-homed network provider that implements the Network Plumbing WG's Multi-net spec may use the `multus` network. Previously-identified providers [CNI-Genie] and [Knitter] have since gone dormant (upd. 2023), and Multus continues to be the most prominent known" }, { "data": "Rook will implement a user-runnable routine that validates a multi-net configuration based on the [Multi-Net Spec]. Users can input one or both of the Network Attachment Definitions (NADs) for Ceph's `multus` network, and the routine will perform cursory validation of the ability of the specified networks to support Ceph. The routine will start up a single web server with the specified networks attached (via NADs). The routine will also start up a number of clients on each node that will test the network(s)'s connections by HTTP(S) requests to the web server. A client represents a simplified view of a Ceph daemon from the networking perspective. The number of clients is intended to represent the number of Ceph daemons that could run on a node in the worst possible failure case where all daemons are rescheduled to a single node. This helps verify that the network provider is able to successfully assign IP addresses to Ceph daemon pods in the worst case. Some network interface software/hardware may limit the number of addresses that can be assigned to an interface on a given node, and this helps verify that issue is not present. 
It is important that the web server be only a single instance. If clients collocated on the same node with the web server can make successful HTTP(S) requests but clients on other nodes are not, that is an important indicator of inter-node traffic being blocked. The validation test routine will run in the Rook operator container. Rook's kubectl [kubectl plugin] may facilitate running the test, but it is useful to have the option of executing a long-running validation routine running in the Kubernetes cluster instead of on an administrator's log-in session. Additionally, the results of the validation tester may become important for users wanting to get help with Multus configuration, and having a tool that is present in the Rook container image will allow users to run the routine for bug reports more readily than if they needed to install the kubectl plugin. <!-- LINKS -->" } ]
{ "category": "Runtime", "file_name": "multi-net-multus.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "MIT License Copyright (c) 2019 Josh Bleecher Snyder Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." } ]
{ "category": "Runtime", "file_name": "license.md", "project_name": "Multus", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Backup API Type\" layout: docs Use the `Backup` API type to request the Velero server to perform a backup. Once created, the Velero Server immediately starts the backup process. Backup belongs to the API group version `velero.io/v1`. Here is a sample `Backup` object with each of the fields documented: ```yaml apiVersion: velero.io/v1 kind: Backup metadata: name: a namespace: velero spec: csiSnapshotTimeout: 10m includedNamespaces: '*' excludedNamespaces: some-namespace includedResources: '*' excludedResources: storageclasses.storage.k8s.io includeClusterResources: null labelSelector: matchLabels: app: velero component: server orLabelSelectors: matchLabels: app: velero matchLabels: app: data-protection snapshotVolumes: null storageLocation: aws-primary volumeSnapshotLocations: aws-primary gcp-primary ttl: 24h0m0s defaultVolumesToRestic: true hooks: resources: name: my-hook includedNamespaces: '*' excludedNamespaces: some-namespace includedResources: pods excludedResources: [] labelSelector: matchLabels: app: velero component: server pre: exec: container: my-container command: /bin/uname -a onError: Fail timeout: 10s post: status: version: 1 expiration: null phase: \"\" validationErrors: null startTimestamp: 2019-04-29T15:58:43Z completionTimestamp: 2019-04-29T15:58:56Z volumeSnapshotsAttempted: 2 volumeSnapshotsCompleted: 1 warnings: 2 errors: 0 failureReason: \"\" ```" } ]
{ "category": "Runtime", "file_name": "backup.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark describe backups\" layout: docs Describe backups Describe backups ``` ark describe backups [NAME1] [NAME2] [NAME...] [flags] ``` ``` -h, --help help for backups -l, --selector string only show items matching this label selector --volume-details display details of restic volume backups ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Describe ark resources" } ]
{ "category": "Runtime", "file_name": "ark_describe_backups.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Performance Profiling !!! warn This is an advanced topic please be aware of the steps you're performing or reach out to the experts for further guidance. There are some cases where the debug logs are not sufficient to investigate issues like high CPU utilization of a Ceph process. In that situation, coredump and perf information of a Ceph process is useful to be collected which can be shared with the Ceph team in an issue. To collect this information, please follow these steps: Edit the rook-ceph-operator deployment and set `ROOKHOSTPATHREQUIRES_PRIVILEGED` to `true`. Wait for the pods to get reinitialized: ```console ``` Enter the respective pod of the Ceph process which needs to be investigated. For example: ```console ``` Install `gdb` , `perf` and `git` inside the pod. For example: ```console ``` Capture perf data of the respective Ceph process: ```console ``` Grab the `pid` of the respective Ceph process to collect its backtrace at multiple time instances, attach `gdb` to it and share the output `gdb.txt`: ```console set pag off set log on thr a a bt full # This captures the complete backtrace of the process backtrace Ctrl+C backtrace Ctrl+C backtrace Ctrl+C backtrace set log off q (to exit out of gdb) ``` Grab the live coredump of the respective process using `gcore`: ```console ``` Capture the data for the respective Ceph process and share the output `gdbpmp.data` generated: ```console ``` Collect the `perf.data`, `perf_report`, backtrace of the process `gdb.txt` , `core` file and profiler data `gdbpmp.data` and upload it to the tracker issue for troubleshooting purposes." } ]
{ "category": "Runtime", "file_name": "performance-profiling.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Running Trino with Alluxio This guide describes how to configure to access Alluxio. is an open source distributed SQL query engine for running interactive analytic queries on data at a large scale. This guide describes how to run queries against Trino with Alluxio as a distributed caching layer, for any data storage systems that Alluxio supports (AWS S3, HDFS, Azure Blob Store, NFS, and more). Alluxio allows Trino to access data regardless of the data source and transparently cache frequently accessed data (e.g., tables commonly used) into Alluxio distributed storage. Co-locating Alluxio workers with Trino workers improves data locality and reduces the I/O access latency when other storage systems are remote or the network is slow or congested. Setup Java for Java 11, at least version 11.0.7, 64-bitas required by Trino Setup Python version 2.6.x, 2.7.x, or 3.x, as required by Trino . This guide is tested with `Trino-352`. Alluxio has been set up and is running. Make sure that the Alluxio client jar is available. This Alluxio client jar file can be found at `{{site.ALLUXIOCLIENTJAR_PATH}}` in the tarball downloaded from Alluxio . Make sure that Hive Metastore is running to serve metadata information of Hive tables. Trino gets the database and table metadata information (including file system locations) from the Hive Metastore, via Trino's Hive connector. Here is a example Trino configuration file `${TRINO_HOME}/etc/catalog/hive.properties`, for a catalog using the Hive connector, where the metastore is located on `localhost`. ```properties connector.name=hive-hadoop2 hive.metastore.uri=thrift://localhost:9083 ``` In order for Trino to be able to communicate with the Alluxio servers, the Alluxio client jar must be in the classpath of Trino servers. Put the Alluxio client jar `{{site.ALLUXIOCLIENTJAR_PATH}}` into the directory `${TRINO_HOME}/plugin/hive-hadoop2/` (this directory may differ across versions) on all Trino servers. Restart the Trino workers and coordinator: ```shell $ ${TRINO_HOME}/bin/launcher restart ``` After completing the basic configuration, Trino should be able to access data in Alluxio. To configure more advanced features for Trino (e.g., connect to Alluxio with HA), please follow the instructions at . Here is an example to create an internal table in Hive backed by files in Alluxio. You can download a data file (e.g., `ml-100k.zip`) from . Unzip this file and upload the file `u.user` into `/ml-100k/` in Alluxio: ```shell $ ./bin/alluxio fs mkdir /ml-100k $ ./bin/alluxio fs cp file:///path/to/ml-100k/u.user alluxio:///ml-100k ``` Create an external Hive table pointing to the Alluxio file" }, { "data": "```sql hive> CREATE TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION 'alluxio://master_hostname:port/ml-100k'; ``` You can see the directory and files that Hive creates by viewing the Alluxio WebUI at `http://master_hostname:19999` Ensure your Hive Metastore service is running. Hive Metastore listens on port `9083` by default. If it is not running, execute the following command to start the metastore: ```shell $ ${HIVE_HOME}/bin/hive --service metastore ``` Start your Trino server. 
Trino server runs on port `8080` by default (configurable with `http-server.http.port` in `${TRINO_HOME}/etc/config.properties` ): ```shell $ ${TRINO_HOME}/bin/launcher run ``` Follow to download the `trino-cli-<Trino_VERSION>-executable.jar`, rename it to `trino`, and make it executable with `chmod +x` (sometimes the executable `trino` exists in `${trino_HOME}/bin/trino` and you can use it directly). Run a single query (replace `localhost:8080` with your actual Trino server hostname and port): ```shell $ ./trino --server localhost:8080 --execute \"use default; select * from u_user limit 10;\" \\ --catalog hive --debug ``` To configure additional Alluxio properties, you can append the conf path (i.e. `${ALLUXIO_HOME}/conf`) containing to Trino's JVM config at `etc/jvm.config` under Trino folder. The advantage of this approach is to have all the Alluxio properties set within the same file of `alluxio-site.properties`. ```shell $ -Xbootclasspath/a:<path-to-alluxio-conf> ``` Alternatively, add Alluxio properties to the Hadoop configuration files (`core-site.xml`, `hdfs-site.xml`), and use the Trino property `hive.config.resources` in the file `${TRINO_HOME}/etc/catalog/hive.properties` to point to the Hadoop resource locations for every Trino worker. ```properties hive.config.resources=/<PATHTOCONF>/core-site.xml,/<PATHTOCONF>/hdfs-site.xml ``` If the Alluxio HA cluster uses internal leader election, set the Alluxio cluster property appropriately in the `alluxio-site.properties` file which is on the classpath. ```properties alluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Alternatively you can add the property to the Hadoop `core-site.xml` configuration which is contained by `hive.config.resources`. ```xml <configuration> <property> <name>alluxio.master.rpc.addresses</name> <value>masterhostname1:19998,masterhostname2:19998,masterhostname3:19998</value> </property> </configuration> ``` For information about how to connect to Alluxio HA cluster using Zookeeper-based leader election, please refer to . For example, change `alluxio.user.file.writetype.default` from default `ASYNCTHROUGH` to `CACHETHROUGH`. One can specify the property in `alluxio-site.properties` and distribute this file to the classpath of each Trino node: ```properties alluxio.user.file.writetype.default=CACHE_THROUGH ``` Alternatively, modify `conf/hive-site.xml` to include: ```xml <property> <name>alluxio.user.file.writetype.default</name> <value>CACHE_THROUGH</value> </property> ``` Trino's Hive connector uses the config `hive.max-split-size` to control the parallelism of the query. For Alluxio 1.6 or earlier, it is recommended to set this size no less than Alluxio's block size to avoid the read contention within the same block. For later Alluxio versions, this is no longer an issue because of Alluxio's async caching abilities. It is recommended to increase `alluxio.user.streaming.data.timeout` to a bigger value (e.g `10min`) to avoid a timeout failure when reading large files from remote workers." } ]
{ "category": "Runtime", "file_name": "Trino.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "The fly stage1 is an alternative stage1 that runs a single-application ACI with only `chroot`-isolation. The motivation of the fly feature is to add the ability to run applications with full privileges on the host but still benefit from the image management and discovery from rkt. The Kubernetes is one candidate for rkt fly. In comparison to the default stage1, there is no process manager involved in the stage1. This a visual illustration for the differences in the process tree between the default and the fly stage1: stage1-coreos.aci: ``` host OS rkt systemd-nspawn systemd chroot user-app1 ``` stage1-fly.aci: ``` host OS rkt chroot user-app1 ``` The rkt application sets up bind mounts for `/dev`, `/proc`, `/sys`, and the user-provided volumes. In addition to the bind mounts, an additional tmpfs mount is done at `/tmp`. After the mounts are set up, rkt `chroot`s to the application's RootFS and finally executes the application. The fly stage1 makes use of Linux . If a volume source path is a mountpoint on the host, this mountpoint is made recursively shared before the host path is mounted on the target path in the container. Hence, changes to the mounts inside the container will be propagated back to the host. The bind mounts for `/dev`, `/proc`, and `/sys` are done automatically and are recursive, because their hierarchy contains mounts which also need to be available for the container to function properly. User-provided volumes are not mounted recursively. This is a safety measure to prevent system crashes when multiple containers are started that mount `/` into the container. You can either use `stage1-fly.aci` from the official release, or build rkt yourself with the right options: ``` $ ./autogen.sh && ./configure --with-stage1-flavors=fly && make ``` For more details about configure parameters, see the . This will build the rkt binary and the stage1-fly.aci in `build-rkt-1.30.0+git/bin/`. Here is a quick example of how to use a container with the official fly stage1: ``` ``` If the image is not in the store, `--stage1-name` will perform discovery and fetch the image. By design, the fly stage1 does not provide the same isolation and security features as the default stage1. Specifically, the following constraints are not available when using the fly stage1: network namespace isolation CPU isolators Memory isolators CAPABILITY bounding SELinux When using systemd on the host it is possible to to provide additional isolation. For more information please consult the systemd manual." } ]
{ "category": "Runtime", "file_name": "running-fly-stage1.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Before raising a PR containing code changes, we suggest you consider the following to ensure a smooth and fast process. Note: All the advice in this document is optional. However, if the advice provided is not followed, there is no guarantee your PR will be merged. All the check tools will be run automatically on your PR by the CI. However, if you run them locally first, there is a much better chance of a successful initial CI run. This document assumes you have already read (and in the case of the code of conduct agreed to): The . The . Do not write architecture-specific code if it is possible to write the code generically. Do not write code to impress: instead write code that is easy to read and understand. Always consider which user will run the code. Try to minimise the privileges the code requires. Always add comments if the intent of the code is not obvious. However, try to avoid comments if the code could be made clearer (for example by using more meaningful variable names). Don't embed magic numbers and strings in functions, particularly if they are used repeatedly. Create constants at the top of the file instead. Ensure all new files contain a copyright statement and an SPDX license identifier in the comments at the top of the file. If the code contains areas that are not fully implemented, make this clear a comment which provides a link to a GitHub issue that provides further information. Do not just rely on comments in this case though: if possible, return a \"`BUG: feature X not implemented see {bug-url}`\" type error. Keep functions relatively short (less than 100 lines is a good \"rule of thumb\"). Document functions if the parameters, return value or general intent of the function is not obvious. Always return errors where possible. Do not discard error return values from the functions this function calls. Don't use multiple log calls when a single log call could be used. Use structured logging where possible to allow be able to extract the log fields. Give functions, macros and variables clear and meaningful names. Unlike Rust, Go does not enforce that all structure members be set. This has lead to numerous bugs in the past where code like the following is used: ```go type Foo struct { Key string Value string } // BUG: Key not set, but nobody noticed! ;( let foo1 = Foo { Value: \"foo\", } ``` A much safer approach is to create a constructor function to enforce integrity: ```go type Foo struct { Key string Value string } func NewFoo(key, value string) (*Foo, error) { if key == \"\" { return nil, errors.New(\"Foo needs a key\") } if value == \"\" { return nil," }, { "data": "needs a value\") } return &Foo{ Key: key, Value: value, }, nil } func testFoo() error { // BUG: Key not set, but nobody noticed! ;( badFoo := Foo{Value: \"value\"} // Ok - the constructor performs needed validation goodFoo, err := NewFoo(\"name\", \"value\") if err != nil { return err } return nil ``` Note: The above is just an example. The safest approach would be to move `NewFoo()` into a separate package and make `Foo` and it's elements private. The compiler would then enforce the use of the constructor to guarantee correctly defined objects. Consider if the code needs to create a new . Ensure any new trace spans added to the code are completed. Where possible, code changes should be accompanied by unit tests. Consider using the standard as it encourages you to make functions small and simple, and also allows you to think about what types of value to test. 
Raised a GitHub issue in the Kata Containers repository that explains what sort of test is required along with as much detail as possible. Ensure the original issue is referenced in the issue. Minimise the use of `unsafe` blocks in Rust code and since it is potentially dangerous always write for this code where possible. `expect()` and `unwrap()` will cause the code to panic on error. Prefer to return a `Result` on error rather than using these calls to allow the caller to deal with the error condition. The table below lists the small number of cases where use of `expect()` and `unwrap()` are permitted: | Area | Rationale for permitting | |-|-| | In test code (the `tests` module) | Panics will cause the test to fail, which is desirable. | | `lazy_static!()` | This magic macro cannot \"return\" a value as it runs before `main()`. | | `defer!()` | Similar to golang's `defer()` but doesn't allow the use of `?`. | | `tokio::spawn(async move {})` | Cannot currently return a `Result` from an `async move` closure. | | If an explicit test is performed before the `unwrap()` / `expect()` | \"Just about acceptable\", but not ideal `[*]` | | `Mutex.lock()` | Almost unrecoverable if failed in the lock acquisition | `[]` - There can lead to bad future* code: consider what would happen if the explicit test gets dropped in the future. This is easier to happen if the test and the extraction of the value are two separate operations. In summary, this strategy can introduce an insidious maintenance issue. All new features should be accompanied by documentation explaining: What the new feature does Why it is useful How to use the feature Any known issues or limitations Links should be provided to GitHub issues tracking the issues The explains how the project formats documentation. Run the on your documentation changes. Run the on your documentation changes. You may wish to read the documentation that the use to help review PRs: . ." } ]
{ "category": "Runtime", "file_name": "code-pr-advice.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 1 sidebar_label: \"Local Disk Manager\" Local Disk Manager (LDM) is one of the modules of HwameiStor. `LDM` is used to simplify the management of disks on nodes. It can abstract the disk on a node into a resource for monitoring and management purposes. It's a daemon that will be deployed on each node, then detect the disk on the node, abstract it into local disk (LD) resources and save it to kubernetes. At present, the LDM project is still in the `alpha` stage. LocalDisk (LD): LDM abstracts disk resources into objects in kubernetes. An `LD` resource object represents the disk resources on the host. LocalDiskClaim (LDC): This is a way to use disks. A user can add the disk description to select a disk for use. At present, LDC supports the following options to describe disk: NodeName Capacity DiskType (such as HDD/SSD/NVMe) Get the LocalDisk information. ```bash kubectl get localdisk NAME NODEMATCH PHASE 10-6-118-11-sda 10-6-118-11 Available 10-6-118-11-sdb 10-6-118-11 Available ``` Get locally discovered disk resource information with three columns displayed. NAME: represents how this disk is displayed in the cluster resources. NODEMATCH: indicates which host this disk is on. PHASE: represents the current state of the disk. Use `kubectl get localdisk <name> -o yaml` to view more information about disks. Claim available disks. Apply a LocalDiskClaim. ```bash cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: <localDiskClaimName> spec: description: diskType: <diskType> nodeName: <nodeName> owner: <ownerName> EOF ``` Allocate available disks by issuing a disk usage request. In the request description, you can add more requirements about the disk, such as disk type and capacity. Get the LocalDiskClaim information. ```bash kubectl get localdiskclaim <name> ``` Once the LDC is processed successfully, it will be cleanup by the system automatically. The result will be recorded in the `LocalStorageNode` if the owner is `local-storage`." } ]
{ "category": "Runtime", "file_name": "ldm.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "Pin logrus to 4b6ea73. Pin libcalico-go to v1.0.0-beta-rc2. Use 'glide up' to update other Go dependencies. Fix that \"nat-outgoing\" was not being honoured. Separate Felix into dataplane driver and dataplane-independent parts. (The initial dataplane driver is the one that uses Linux iptables and routing commands; this division will allow us to target other dataplane implementations.) Rewrite the dataplane-independent part of Felix in Go, for improved performance. Update calico-diags to collect Upstart logs. Improve usage reporting: extra stats, better version number. Improve endpoint status reporting. Support Kubernetes backend. Build system improvements. Add a retry for deleting conntrack entries. calico-diags: include DevStack logs, if present Make repo branch for coverage diff configurable Add 'this doc has moved' to relevant location in new docs site. Update coveralls badge. IP SAN support in pyinstaller build Add SemaphoreCI badge. Pin pycparser version. Support InterfacePrefix having multiple values, to allow hybrid Calico use by OpenStack and Kubernetes/Docker/Mesos at the same time. Use PyInstaller-based Felix in calico/felix container build. Update Debian and RPM packaging to stop requiring /etc/calico/felix.cfg, as Felix itself no longer requires this file to exist. Update URLs for the renaming of this repository from 'calico' to 'felix'. Add CircleCI config Fix for baremetal issue (#1071) Allow {inbound,outbound}_rules to be omitted, and handle as [] Add IgnoreLooseRPF config parameter Handle interface renaming Documentation improvements: Add EtcdEndpoints to Felix configuration reference. Improve overview documentation about Calico security. Update recommended RPM repo for Calico with Liberty or later Add Usage Reporting to Felix Allow customization of 'etcdctl' for calico-diags Add config option to disable IPv6 Reduce EtcdWatcher timeout to 10s Increase urllib3 log severity to avoid log spam from EtcdWatcher Fix example policy in bare metal docs to be valid json Use a different conntrack command to trigger module load. Missing conntrack requires conntrack, not iptables Allow missing or \"default\" for tier order. Updates for transition to Tigera. (#1055, #1049) specified coverage >=4.02,<4.1 to work around #1057 Fix hypothesis test for label validation. (#1060) Default to using system certificate store. Fix that conntrack rules only RETURNed packets rather than ACCEPTing. Fill in missing log substitution (#1066) Add tool to remove all felix iptables/ipsets changes. (#1048) Add option to override DROP rules for debugging policy. Add log action, and ability to log any rule. Add support for securing bare-metal host endpoints. This is a significant change that extends Calico's security model to hosts as well as the workloads running on them. InterfacePrefix now defaults to \"cali\", which is a safe default that happens to be the correct value for container systems. MAC address field in endpoint objects is now optional. If omitted, the MAC address is not policed in iptables. Add support for running Felix on RedHat 6.5+ and other distributions with glibc 2.12+ and kernel 2.6.32+ via creation of Python 2.7 PyInstaller bundle. Fix iptables programming for interfaces with atypically long names. Documentation fixes and updates. Add Xenial support (systemd configuration for Felix). Update CLA process and copyrights for new sponsor Tigera. Add Dockerfile metadata labels (as defined at label-schema.org). 
Check that conntrack and iptables are installed at" }, { "data": "Fix that a config section called [DEFAULT] was ignored. Simplify upstart job. (#1035) Add Timeout to socket.accept(). (#1045) Add negation to selector expressions (#1016). Add negated match criteria (#1003). Fix next-tier action, which incorrectly accepted packets (#1014). Update bird config generation scripts. Fix conntrack entry deletion (#987). Fix iptables retry on commit (#1010). Add floating IP support (via 1:1 NAT) in Felix. Add tiered security policy based on labels and selectors (PR #979). Allows for a rich, hierarchical security model. Felix now parses the etcd snapshot in parallel with the event stream; this dramatically increases scale when under load. Various performance and scale improvements. Removed support for Python 2.6. python-etcd no longer supports 2.6 as of 0.4.3. Add IpInIpTunnelAddr configuration parameter to allow the IP address of the IPIP tunnel device to be set. Add IptablesMarkMask configuration parameter to control which bits are used from the iptables forwarding mark. Increase default size of ipsets and make configurable via the MaxIpsetSize parameter. Bug fixes, including fixes to NAT when using IPIP mode. Don't report port deletion as an error status. Improve leader election performance after restart. Catch additional python-etcd exceptions. Reduce election refresh interval. Resolve \"Felix dies if interface missing\" on Alpine Linux. Rebase to latest 2015.1.2 and 2014.2.4 upstream Ubuntu packages. Fix Felix ipset exception when using IPIP. Use iptables protocol numbers not names. Fixes to diagnostics collection scripts. Pin networking-calico pip version. Really delete routes to ns-* devices in pre-Liberty OpenStack. Add liveness reporting to Felix. Felix now reports its liveness into etcd and the neutron driver copies that information to the Neutron DB. If Felix is down on a host, Neutron will not try to schedule a VM on that host. Add endpoint status reporting to Felix. Felix now reports the state of endpoints into etcd so that the OpenStack plugin can report this information into Neutron. If Felix fails to configure a port, this now causes VM creation to fail. Performance enhancements to ipset manipulation. Rev python-etcd dependency to 0.4.1. Our patched python-etcd version (which contains additional patches) is still required. Reduce occupancy of Felix's tag resolution index in the common case where IP addresses only have a single owner. Felix now sets the default.rp_filter sysctl to ensure that endpoints come up with the Kernel's RPF check enabled by default. Optimize Felix's actor framework to reduce message-passing overhead. Truncate long output from FailedSystemCall exception. Add instructions for use with OpenStack Liberty. Improve the documentation about upgrading a Calico/OpenStack system. Fix compatibility with latest OpenStack code (oslo_config). Use posix_spawn to improve Felix's performance under heavy load. Explicitly use and enable the kernel's reverse path filtering function, and remove our iptables anti-spoofing rules, which were not as robust. Add support for setting MTU on IP-in-IP device. Enhance BIRD configuration and documentation for graceful restart. Felix now restarts if its etcd configuration changes. Felix now periodically refreshes iptables to be robust to other processes corrupting its chains. More thorough resynchronization of etcd from the Neutron mechanism driver. 
Added process-specific information to the diagnostics dumps from" }, { "data": "Limit number of concurrent shell-outs in felix to prevent file descriptor exhaustion. Have felix periodically resync from etcd and force-refresh the dataplane. Stop restarting Felix on Ubuntu if it fails more than 5 times in 10 seconds. Move DHCP checksum calculation to Neutron. Get all fixed IPs for a port. Update and improve security model documentation. Streamline conntrack rules, move them to top-level chains to avoid duplication. Narrow focus of input iptables chain so that it only applies to Calico-handled traffic. Provide warning log when attempting to use Neutron networks that are not of type 'local' or 'flat' with Calico. Handle invalid JSON in IPAM key in etcd. Move all log rotation into logrotate and out of Felix, to prevent conflicts. Change log rotation strategy for logrotate to not rotate small log files. Delay starting the Neutron resynchronization thread until after all the necessary state has been configured, to avoid race conditions. Prevent systemd restarting Felix when it is killed by administrators. Remove stale conntrack entries when an endpoint's IP is removed. #672: Fix bug where profile chain was left empty instead of being stubbed out. Improve security between endpoint and host and simplify INPUT chain logic. Add Felix statistics logging on USR1 signal. Add support for routing over IP-in-IP interfaces in order to make it easier to evaluate Calico without reconfiguring underlying network. Reduce felix occupancy by replacing endpoint dictionaries by \"struct\" objects. Allow different hosts to have different interface prefixes for combined OpenStack and Docker systems. Add missing support for 0 as a TCP port. Add support for arbitrary IP protocols. Intern various IDs in felix to reduce occupancy. Fix bug where Calico may not propagate security group rule changes from OpenStack. Reduced logspam from Calico Mechanism Driver. Reset ARP configuration when endpoint MAC changes. Forget about profiles when they are deleted. Treat bad JSON as missing data. Add instructions for Kilo on RHEL7. Extend diagnostics script to collect etcd and RabbitMQ information. Improve BIRD config to prevent NETLINK: File Exists log spam. Reduce Felix logging volume. Updated Mechanism driver to specify fixed MAC address for Calico tap interfaces. Prevent the possibility of gevent context-switching during garbage collection in Felix. Increase the number of file descriptors available to Felix. Firewall input characters in profiles and tags. Implement tree-based dispatch chains to improve IPTables performance with many local endpoints. Neutron mechanism driver patches and docs for OpenStack Kilo release. Correct IPv6 documentation for Juno and Kilo. Support for running multiple neutron-server instances in OpenStack. Support for running neutron-server API workers in OpenStack. Calico Mechanism Driver now performs leader election to control state resynchronization. Extended data model to support multiple security profiles per endpoint. Calico Mechanism Driver now attempts to delete empty etcd directories. Felix no longer leaks memory when etcd directories it watches are deleted. Fix error on port creation where the Mechanism Driver would create, delete, and then recreate the port in etcd. 
Handle EtcdKeyNotFound from atomic delete methods Handle etcd cluster ID changes on API actions Fix ipsets cleanup to correctly iterate through stopping ipsets Ensure that metadata is not blocked by over-restrictive rules on outbound traffic Updates and clarifications to documentation" } ]
{ "category": "Runtime", "file_name": "CHANGES.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "There's a variety of ways to inspect how containers work. Linux provides APIs that expose information about namespaces (proc filesystem) and cgroups (cgroup filesystem). We also have tools like strace that allow us to see what system calls are used in processes. This document explains how to use those APIs and tools to give details on what rkt does under the hood. Note that this is not a comprehensive analysis of the inner workings of rkt, but a starting point for people interested in learning how containers work. Let's use to find out what system calls rkt uses to set up containers. We'll only trace a handful of syscalls since, by default, strace traces every syscall resulting in a lot of output. Also, we'll redirect its output to a file to make the analysis easier. ```bash $ sudo strace -f -s 512 -e unshare,clone,mount,chroot,execve -o out.txt rkt run coreos.com/etcd:v2.0.10 ... ^]^]Container rkt-e6d92625-aa3f-4449-bf5d-43ffed440de4 terminated by signal KILL. ``` We now have our trace in `out.txt`, let's go through some of its relevant parts. First, we see the actual execution of the rkt command: ``` 5710 execve(\"/home/iaguis/work/go/src/github.com/rkt/rkt/build-rkt/target/bin/rkt\", [\"rkt\", \"run\", \"coreos.com/etcd:v2.0.10\"], 0x7ffce2052be8 / 22 vars /) = 0 ``` Since the image was already fetched and we don't trace many system calls, nothing too exciting happens here except mounting the container filesystems. ``` 5710 mount(\"overlay\", \"/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs\", \"overlay\", 0, \"lowerdir=/var/lib/rkt/cas/tree/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/rootfs,upperdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/upper,workdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-cc076d6c508223cc3c13c24d09365d64b6d15e7915a165eab1d9e87f87be5015/work\") = 0 5710 mount(\"overlay\", \"/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/opt/stage2/etcd/rootfs\", \"overlay\", 0, \"lowerdir=/var/lib/rkt/cas/tree/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/rootfs,upperdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/upper/etcd,workdir=/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/overlay/deps-sha512-c0de11e9d504069810da931c94aece3bcc5430dc20f9a5177044eaef62f93fcc/work/etcd\") = 0 ``` We can see that rkt mounts the stage1 and stage2 filesystems with the as `lowerdir` on the directory rkt expects them to be. Note that the stage2 tree is mounted within the stage1 tree via `/opt/stage2`. You can read more about the tree structure in . This means that, for a same tree store, everything will be shared in a copy-on-write (COW) manner, except the bits that each container modifies, which will be in the `upperdir` and will appear magically in the mount destination. You can read more about this filesystem in the . 
This is where most of the interesting things happen, the first being executing the stage1 , which is `/init` by default: ``` 5710 execve(\"/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/init\", [\"/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs/init\", \"--net=default\", \"--local-config=/etc/rkt\", \"d5513d49-d14f-45d1-944b-39437798ddda\"], 0xc42009b040 / 25 vars / <unfinished ...> ``` init does a bunch of stuff, including creating the container's network namespace and mounting a reference to it on the host filesystem: ``` 5723 unshare(CLONE_NEWNET) = 0 5723 mount(\"/proc/5710/task/5723/ns/net\", \"/var/run/netns/cni-eee014d2-8268-39cc-c176-432bbbc9e959\", 0xc42017c6a8, MS_BIND, NULL) = 0 ``` After creating the network namespace, it will execute the relevant plugins from within that network namespace. The default network uses the with as IPAM: ``` 5725 execve(\"stage1/rootfs/usr/lib/rkt/plugins/net/ptp\", [\"stage1/rootfs/usr/lib/rkt/plugins/net/ptp\"], 0xc4201ac000 / 32 vars / <unfinished ...> 5730 execve(\"stage1/rootfs/usr/lib/rkt/plugins/net/host-local\", [\"stage1/rootfs/usr/lib/rkt/plugins/net/host-local\"], 0xc42008e240 / 32 vars / <unfinished" }, { "data": "``` In this case, the CNI plugins use come from rkt's stage1, but . The plugins will do some iptables magic to configure the network: ``` 5739 execve(\"/usr/bin/iptables\", [\"iptables\", \"--version\"], 0xc42013e000 / 32 vars / <unfinished ...> 5740 execve(\"/usr/bin/iptables\", [\"/usr/bin/iptables\", \"-t\", \"nat\", \"-N\", \"CNI-7a59ad232c32bcea94ee08d5\", \"--wait\"], 0xc4200b0a20 / 32 vars / <unfinished ...> ... ``` After the network is configured, rkt mounts the container cgroups instead of letting systemd-nspawn do it because we want to have control on how they're mounted. We also mount the host cgroups if they're not already mounted in the way systemd-nspawn expects them, like in old distributions or distributions that don't use systemd (e.g. ). We do this in a new mount namespace to avoid polluting the host mounts and to get automatic cleanup when the container exits (`CLONE_NEWNS` is the flag for for historical reasons: it was the first namespace implemented on Linux): ``` 5710 unshare(CLONE_NEWNS) = 0 ``` Here we mount the container hierarchies read-write so the pod can modify its cgroups but we mount the controllers read-only so the pod doesn't modify other cgroups: ``` 5710 mount(\"stage1/rootfs/sys/fs/cgroup/freezer/machine.slice/machine-rkt\\\\x2dd5513d49\\\\x2dd14f\\\\x2d45d1\\\\x2d944b\\\\x2d39437798ddda.scope/system.slice\", \"stage1/rootfs/sys/fs/cgroup/freezer/machine.slice/machine-rkt\\\\x2dd5513d49\\\\x2dd14f\\\\x2d45d1\\\\x2d944b\\\\x2d39437798ddda.scope/system.slice\", 0xc42027d2a8, MS_BIND, NULL) = 0 ... 
5710 mount(\"stage1/rootfs/sys/fs/cgroup/freezer\", \"stage1/rootfs/sys/fs/cgroup/freezer\", 0xc42027d2b8, MSRDONLY|MSNOSUID|MSNODEV|MSNOEXEC|MSREMOUNT|MSBIND, NULL) = 0 ``` Now is the time to start systemd-nspawn to create the pod itself: ``` 5710 execve(\"stage1/rootfs/usr/lib/ld-linux-x86-64.so.2\", [\"stage1/rootfs/usr/lib/ld-linux-x86-64.so.2\", \"stage1/rootfs/usr/bin/systemd-nspawn\", \"--boot\", \"--notify-ready=yes\", \"--register=true\", \"--link-journal=try-guest\", \"--quiet\", \"--uuid=d5513d49-d14f-45d1-944b-39437798ddda\", \"--machine=rkt-d5513d49-d14f-45d1-944b-39437798ddda\", \"--directory=stage1/rootfs\", \"--capability=CAPAUDITWRITE,CAPCHOWN,CAPDACOVERRIDE,CAPFSETID,CAPFOWNER,CAPKILL,CAPMKNOD,CAPNETRAW,CAPNETBINDSERVICE,CAPSETUID,CAPSETGID,CAPSETPCAP,CAPSETFCAP,CAPSYSCHROOT\", \"--\", \"--default-standard-output=tty\", \"--log-target=null\", \"--show-status=0\"], 0xc4202bc0f0 / 29 vars / <unfinished ...> ``` Note we don't need to pass the `--private-network` option because rkt already created and configured the network using CNI. Some interesting things systemd-nspawn does are moving the container filesystem tree to `/`: ``` 5747 mount(\"/var/lib/rkt/pods/run/d5513d49-d14f-45d1-944b-39437798ddda/stage1/rootfs\", \"/\", NULL, MS_MOVE, NULL) = 0 5747 chroot(\".\") = 0 ``` And creating all the other namespaces: mount, UTS, IPC, and PID. Check for more information. ``` 5747 clone(childstack=NULL, flags=CLONENEWNS|CLONENEWUTS|CLONENEWIPC|CLONE_NEWPID|SIGCHLD) = 5748 ``` Once it's done creating the container, it will execute the init process, which is systemd: ``` 5748 execve(\"/usr/lib/systemd/systemd\", [\"/usr/lib/systemd/systemd\", \"--default-standard-output=tty\", \"--log-target=null\", \"--show-status=0\"], 0x7f904604f250 / 8 vars /) = 0 ``` Which then will execute systemd-journald to handle logging: ``` 5749 execve(\"/usr/lib/systemd/systemd-journald\", [\"/usr/lib/systemd/systemd-journald\"], 0x5579c5d79d50 / 8 vars / <unfinished ...> ... ``` And at some point it will execute our application's service (in this example, etcd). But first, it needs to execute its companion `prepare-app` dependency: ``` 5751 execve(\"/prepare-app\", [\"/prepare-app\", \"/opt/stage2/etcd/rootfs\"], 0x5579c5d7d580 / 7 vars /) = 0 ``` `prepare-app` bind-mounts a lot of files from stage1 to stage2, so our app has access to a : ``` 5751 mount(\"/dev/null\", \"/opt/stage2/etcd/rootfs/dev/null\", 0x49006f, MS_BIND, NULL) = 0 5751 mount(\"/dev/zero\", \"/opt/stage2/etcd/rootfs/dev/zero\", 0x49006f, MS_BIND, NULL) = 0" }, { "data": "``` After it's finished, our etcd service is ready to start! Since we use some additional security directives in our service file (like or ), systemd will create an additional mount namespace per application in the pod and move the stage2 filesystem to `/`: ``` 5753 unshare(CLONE_NEWNS) = 0 ... 5753 mount(\"/opt/stage2/etcd/rootfs\", \"/\", NULL, MS_MOVE, NULL) = 0 5753 chroot(\".\") = 0 ``` Now we're ready to execute the etcd binary. ``` 5753 execve(\"/etcd\", [\"/etcd\"], 0x5579c5dbd660 / 9 vars /) = 0 ``` And that's it, etcd is running in a container! We'll now inspect a running container by using the . Let's start a new container limiting the CPU to 200 millicores and the memory to 100MB: ``` $ sudo rkt run --interactive kinvolk.io/aci/busybox --memory=100M --cpu=200m / # ``` First we'll need to find the PID of a process running inside the container. 
We can see the container PID by running `rkt status`: ``` $ sudo rkt status 567264dd state=running created=2018-01-03 17:17:39.653 +0100 CET started=2018-01-03 17:17:39.749 +0100 CET networks=default:ip4=172.16.28.37 pid=10985 exited=false ``` Now we need to find the sh process running inside the container: ``` $ ps auxf | grep [1]0985 -A 3 root 10985 0.0 0.0 54204 5040 pts/2 S+ 17:17 0:00 \\ stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=try-guest --quiet --uuid=567264dd-f28d-42fb-84a1-4714dde9e82c --machine=rkt-567264dd-f28d-42fb-84a1-4714dde9e82c --directory=stage1/rootfs --capability=CAPAUDITWRITE,CAPCHOWN,CAPDACOVERRIDE,CAPFSETID,CAPFOWNER,CAPKILL,CAPMKNOD,CAPNETRAW,CAPNETBINDSERVICE,CAPSETUID,CAPSETGID,CAPSETPCAP,CAPSETFCAP,CAPSYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0 root 11021 0.0 0.0 62280 7392 ? Ss 17:17 0:00 \\_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 root 11022 0.0 0.0 66408 8812 ? Ss 17:17 0:00 \\_ /usr/lib/systemd/systemd-journald root 11026 0.0" }, { "data": "1212 4 pts/0 Ss+ 17:17 0:00 \\_ /bin/sh ``` It's 11026! Let's start by having a look at its namespaces: ``` $ sudo ls -l /proc/11026/ns/ total 0 lrwxrwxrwx 1 root root 0 Jan 3 17:19 cgroup -> 'cgroup:[4026531835]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 ipc -> 'ipc:[4026532764]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 mnt -> 'mnt:[4026532761]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 net -> 'net:[4026532702]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 pid -> 'pid:[4026532765]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 pidforchildren -> 'pid:[4026532765]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 user -> 'user:[4026531837]' lrwxrwxrwx 1 root root 0 Jan 3 17:19 uts -> 'uts:[4026532763]' ``` We can compare it with the namespaces on the host ``` $ sudo ls -l /proc/1/ns/ total 0 lrwxrwxrwx 1 root root 0 Jan 3 17:20 cgroup -> 'cgroup:[4026531835]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 ipc -> 'ipc:[4026531839]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 mnt -> 'mnt:[4026531840]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 net -> 'net:[4026532009]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 pid -> 'pid:[4026531836]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 pidforchildren -> 'pid:[4026531836]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 user -> 'user:[4026531837]' lrwxrwxrwx 1 root root 0 Jan 3 17:20 uts -> 'uts:[4026531838]' ``` We can see that the cgroup and user namespace are the same, since rkt doesn't use cgroup namespaces and user namespaces weren't enabled for this execution. If, for example, we run rkt with `--net=host`, we'll see that the network namespace is the same as the host's. 
Running we can see this information too, along with the PID that created the namespace: ``` sudo lsns -p 11026 NS TYPE NPROCS PID USER COMMAND 4026531835 cgroup 231 1 root /sbin/init 4026531837 user 231 1 root /sbin/init 4026532702 net 4 10945 root stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal= 4026532761 mnt 1 11026 root /etcd 4026532763 uts 3 11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 4026532764 ipc 3 11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 4026532765 pid 3 11021 root /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 ``` We can also see some interesting data about the process: ``` $ sudo cat /proc/11026/status Name: sh Umask: 0022 State: S (sleeping) ... CapBnd: 00000000a80425fb ... NoNewPrivs: 0 Seccomp: 2 ... ``` This tells us the container is not using the `nonewprivs` feature, but it is using . We can also see what are in the bounding set of the process, let's decode them with `capsh`: ``` $ capsh --decode=00000000a80425fb 0x00000000a80425fb=capchown,capdacoverride,capfowner,capfsetid,capkill,capsetgid,capsetuid,capsetpcap,capnetbindservice,capnetraw,capsyschroot,capmknod,capauditwrite,capsetfcap ``` Another interesting thing is the environment variables of the process: ``` $ sudo cat /proc/11026/environ | tr '\\0' '\\n' PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOME=/root LOGNAME=root USER=root SHELL=/bin/sh INVOCATION_ID=d5d94569d482495c809c113fca55abd4 TERM=xterm ACAPPNAME=busybox ``` Finally, we can check in which cgroups the process is: ``` $ sudo cat /proc/11026/cgroup 11:freezer:/ 10:rdma:/ 9:cpu,cpuacct:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service 8:devices:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service 7:blkio:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice 6:memory:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service 5:perf_event:/ 4:pids:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service 3:netcls,netprio:/ 2:cpuset:/ 1:name=systemd:/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service 0::/machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope/system.slice/busybox.service ``` Let's explore the cgroups a bit more. systemd offers tools to easily inspect the cgroups of containers. We can use `systemd-cgls` to see the cgroup hierarchy of a container: ``` $ machinectl MACHINE CLASS SERVICE OS VERSION ADDRESSES rkt-97910fdc-13ec-4025-8f93-5ddea0089eff container rkt - - 172.16.28.25... 1 machines listed. $ systemd-cgls -M rkt-97910fdc-13ec-4025-8f93-5ddea0089eff Control group /machine.slice/machine-rkt\\x2d97910fdc\\x2d13ec\\x2d4025\\x2d8f93\\x2d5ddea0089eff.scope: -.slice init.scope 12474 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 system.slice busybox.service 12479 /bin/sh systemd-journald.service 12475 /usr/lib/systemd/systemd-journald ``` And we can use `systemd-cgtop` to see the resource consumption of the container. 
This is the output while running the `yes` command (which is basically an infinite loop that outputs the character `y`, so it takes all the CPU) in the container: ``` Control Group Tasks %CPU Memory Input/s Output/s" } ]
{ "category": "Runtime", "file_name": "inspect-containers.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "name: Feature request about: Suggest an idea/feature title: \"[FEATURE] \" labels: [\"kind/feature\", \"require/lep\", \"require/doc\", \"require/auto-e2e-test\", \"require/manual-test-plan\"] assignees: '' <!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]--> <!--A clear and concise description of what you want to happen--> <!--A clear and concise description of any alternative solutions or features you've considered.--> <!--Add any other context or screenshots about the feature request here.-->" } ]
{ "category": "Runtime", "file_name": "feature.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }