content (list, lengths 1-171) | tag (dict) |
---|---|
[
{
"data": "Longhorn has no boundary on the number of concurrent volume backup restoring. Having a new `concurrent-backup-restore-per-node-limit` setting allows the user to limit the concurring backup restoring. Setting this restriction lowers the potential risk of overloading the cluster when volumes restoring from backup concurrently. For ex: during the Longhorn system restore. https://github.com/longhorn/longhorn/issues/4558 Introduce a new `concurrent-backup-restore-per-node-limit` setting to define the boundary of the concurrent volume backup restoring. `None` Introduce a new `concurrent-backup-restore-per-node-limit` setting. Track the number of per-node volumes restoring from backup with atomic count (thread-safe) in the engine monitor. Allow the user to set the concurrent backup restore per node limit to control the risk of cluster overload when Longhorn volume is restoring from backup concurrently. Longhorn holds the engine backup restore when the number of volume backups restoring on a node reaches the `concurrent-backup-restore-per-node-limit`. The volume backup restore continues when the number of volume backups restoring on a node is below the limit. This setting controls how many engines on a node can restore the backup concurrently. Longhorn engine monitor backs off when the volume reaches the setting limit. Set the value to 0 to disable backup restore. ``` Category = SettingCategoryGeneral, Type = integer Default = 5 # same as the default replica rebuilding number ``` Create a new atomic counter in the engine controller. ``` type EngineController struct { restoringCounter util.Counter } ``` Pass the restoring counter to each of its engine monitors. ``` type EngineMonitor struct { restoringCounter util.Counter } ``` Increase the restoring counter before backup restore. > Ignore DR volumes (volume.Status.IsStandby). Decrease the restoring counter when the backup restore caller method ends Test the setting should block backup restore when creating multiple volumes from the backup at the same time. Test the setting should be per-node limited. Test the setting should not have effect on DR volumes. `None` `None`"
}
] | {
"category": "Runtime",
"file_name": "20221205-concurrent-backup-restore-limit.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
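To make the counting mechanism in the Longhorn record above concrete, here is a minimal, hypothetical Go sketch of a thread-safe per-node restoring counter and the back-off check it enables. The method names and the `canRestore` helper are illustrative assumptions, not the actual Longhorn `util.Counter` API.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// restoringCounter is a stand-in for the util.Counter the proposal passes
// from the engine controller to each of its engine monitors.
type restoringCounter struct {
	count int32
}

func (c *restoringCounter) Increase() { atomic.AddInt32(&c.count, 1) }
func (c *restoringCounter) Decrease() { atomic.AddInt32(&c.count, -1) }
func (c *restoringCounter) Get() int32 { return atomic.LoadInt32(&c.count) }

// canRestore mirrors the described behaviour: hold the restore while the
// per-node count has reached the limit, treat 0 as "restore disabled",
// and ignore DR (standby) volumes entirely.
func canRestore(c *restoringCounter, limit int32, isStandby bool) bool {
	if isStandby {
		return true
	}
	if limit == 0 {
		return false
	}
	return c.Get() < limit
}

func main() {
	c := &restoringCounter{}
	limit := int32(5) // default, same as the default replica rebuilding number
	if canRestore(c, limit, false) {
		c.Increase()
		defer c.Decrease() // decrease when the restore caller returns
		fmt.Println("restore allowed, in-flight restores:", c.Get())
	}
}
```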
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Resolve all of the configuration sources that apply to this node ``` cilium-dbg build-config --node-name $K8SNODENAME [flags] ``` ``` --allow-config-keys strings List of configuration keys that are allowed to be overridden (e.g. set from not the first source. Takes precedence over deny-config-keys --deny-config-keys strings List of configuration keys that are not allowed to be overridden (e.g. set from not the first source. If allow-config-keys is set, this field is ignored --dest string Destination directory to write the fully-resolved configuration. (default \"/tmp/cilium/config-map\") --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API -h, --help help for build-config --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --node-name string The name of the node on which we are running. Also set via K8SNODENAME environment. --source strings Ordered list of configuration sources. Supported values: config-map:<namespace>/name - a ConfigMap with <name>, optionally in namespace <namespace>. cilium-node-config:<NAMESPACE> - any CiliumNodeConfigs in namespace <NAMESPACE>. node:<NODENAME> - Annotations on the node. Namespace and nodename are optional (default [config-map:cilium-config,cilium-node-config:]) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_build-config.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This document describes how to import Kata Containers logs into , typically for importing into an Elastic/Fluentd/Kibana() or Elastic/Logstash/Kibana() stack. The majority of this document focusses on CRI-O based (classic) Kata runtime. Much of that information also applies to the Kata `shimv2` runtime. Differences pertaining to Kata `shimv2` can be found in their . Note: This document does not cover any aspect of \"log rotation\". It is expected that any production stack already has a method in place to control node log growth. Kata generates logs. The logs can come from numerous parts of the Kata stack (the runtime, proxy, shim and even the agent). By default the logs , but they can also be configured to be stored in files. The logs default format is in , but can be switched to be JSON with a command line option. Provided below are some examples of Kata log import and processing using . Some of the testing we can perform locally, but other times we really need a live stack for testing. We will use a stack with EFK enabled and Kata installed to do our tests. Some details such as specific paths and versions of components may need to be adapted to your specific installation. The was used to install `minikube` with Kata Containers enabled. The minikube EFK stack `addon` is then enabled: ```bash $ minikube addons enable efk ``` Note: Installing and booting EFK can take a little while - check progress with `kubectl get pods -n=kube-system` and wait for all the pods to get to the `Running` state. Kata offers us two choices to make when storing the logs: Do we store them to the system log, or to separate files? Do we store them in `logfmt` format, or `JSON`? We will start by examining the Kata default setup (`logfmt` stored in the system log), and then look at other options. Fluentd contains both a component that can read the `systemd` system journals, and a component that can parse `logfmt` entries. We will utilise these in two separate steps to evaluate how well the Kata logs import to the EFK stack. Note: Setting up, configuration and deployment of `minikube` is not covered in exacting detail in this guide. It is presumed the user has the abilities and their own Kubernetes/Fluentd stack they are able to utilise in order to modify and test as necessary. Minikube by default the `systemd-journald` with the option, which results in the journal being stored in `/run/log/journal`. Unfortunately, the Minikube EFK Fluentd install extracts most of its logs in `/var/log`, and therefore does not mount `/run/log` into the Fluentd pod by default. This prevents us from reading the system journal by default. This can be worked around by patching the Minikube EFK `addon` YAML to mount `/run/log` into the Fluentd container: ```patch diff --git a/deploy/addons/efk/fluentd-es-rc.yaml.tmpl b/deploy/addons/efk/fluentd-es-rc.yaml.tmpl index 75e386984..83bea48b9 100644 a/deploy/addons/efk/fluentd-es-rc.yaml.tmpl +++"
},
{
"data": "@@ -44,6 +44,8 @@ spec: volumeMounts: name: varlog mountPath: /var/log - name: runlog mountPath: /run/log name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true @@ -57,6 +59,9 @@ spec: name: varlog hostPath: path: /var/log - name: runlog hostPath: path: /run/log name: varlibdockercontainers hostPath: path: /var/lib/docker/containers ``` Note: After making this change you will need to build your own `minikube` to encapsulate and use this change, or fine another method to (re-)launch the Fluentd containers for the change to take effect. We will start with testing Fluentd pulling the Kata logs directly from the system journal with the Fluentd . We modify the Fluentd config file with the following fragment. For reference, the Minikube YAML can be found : Note: The below Fluentd config fragment is in the \"older style\" to match the Minikube version of Fluentd. If using a more up to date version of Fluentd, you may need to update some parameters, such as using `matches` rather than `filters` and placing `@` before `type`. Your Fluentd should warn you in its logs if such updates are necessary. ``` <source> type systemd tag kata-containers path /run/log/journal pos_file /run/log/journal/kata-journald.pos filters [{\"SYSLOGIDENTIFIER\": \"kata-runtime\"}, {\"SYSLOGIDENTIFIER\": \"kata-shim\"}] readfromhead true </source> ``` We then apply the new YAML, and restart the Fluentd pod (by killing it, and letting the `ReplicationController` start a new instance, which will pick up the new `ConfigurationMap`): ```bash $ kubectl apply -f new-fluentd-cm.yaml $ kubectl delete pod -n=kube-system fluentd-es-XXXXX ``` Now open the Kibana UI to the Minikube EFK `addon`, and launch a Kata QEMU based test pod in order to generate some Kata specific log entries: ```bash $ minikube addons open efk $ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy $ kubectl apply -f examples/nginx-deployment-qemu.yaml ``` Looking at the Kibana UI, we can now see that some `kata-runtime` tagged records have appeared: If we now filter on that tag, we can see just the Kata related entries If we expand one of those entries, we can see we have imported useful information. You can then sub-filter on, for instance, the `SYSLOG_IDENTIFIER` to differentiate the Kata components, and on the `PRIORITY` to filter out critical issues etc. Kata generates a significant amount of Kata specific information, which can be seen as . data contained in the `MESSAGE` field. Imported as-is, there is no easy way to filter on that data in Kibana: . We can however further sub-parse the Kata entries using the that will parse `logfmt` formatted data. We can utilise these to parse the sub-fields using a Fluentd filter section. At the same time, we will prefix the new fields with `kata_` to make it clear where they have come from: ``` <filter kata-containers> @type parser key_name MESSAGE format logfmt reserve_data true injectkeyprefix kata_ </filter> ``` The Minikube Fluentd version does not come with the `logfmt` parser installed, so we will run a local test to check the parsing works. 
The resulting output from Fluentd is: ``` 2020-02-21 10:31:27.810781647 +0000 kata-containers: {\"BOOTID\":\"590edceeef5545a784ec8c6181a10400\", \"MACHINEID\":\"3dd49df65a1b467bac8d51f2eaa17e92\", \"_HOSTNAME\":\"minikube\", \"PRIORITY\":\"6\", \"_UID\":\"0\", \"_GID\":\"0\", \"SYSTEMDSLICE\":\"system.slice\", \"SELINUXCONTEXT\":\"kernel\", \"CAPEFFECTIVE\":\"3fffffffff\", \"_TRANSPORT\":\"syslog\", \"SYSTEMDCGROUP\":\"/system.slice/crio.service\", \"SYSTEMDUNIT\":\"crio.service\", \"SYSTEMDINVOCATION_ID\":\"f2d99c784e6f406c87742f4bca16a4f6\", \"SYSLOG_IDENTIFIER\":\"kata-runtime\", \"_COMM\":\"kata-runtime\", \"_EXE\":\"/opt/kata/bin/kata-runtime\", \"SYSLOG_TIMESTAMP\":\"Feb 21 10:31:27 \", \"_CMDLINE\":\"/opt/kata/bin/kata-runtime --config /opt/kata/share/defaults/kata-containers/configuration-qemu.toml --root /run/runc state 7cdd31660d8705facdadeb8598d2c0bd008e8142c54e3b3069abd392c8d58997\", \"SYSLOG_PID\":\"14314\", \"_PID\":\"14314\", \"MESSAGE\":\"time=\\\"2020-02-21T10:31:27.810781647Z\\\" level=info msg=\\\"release sandbox\\\" arch=amd64 command=state container=7cdd31660d8705facdadeb8598d2c0bd008e8142c54e3b3069abd392c8d58997 name=kata-runtime pid=14314 sandbox=1c3e77cad66aa2b6d8cc846f818370f79cb0104c0b840f67d0f502fd6562b68c source=virtcontainers subsystem=sandbox\", \"SYSLOG_RAW\":\"<6>Feb 21 10:31:27 kata-runtime[14314]:"
},
{
"data": "level=info msg=\\\"release sandbox\\\" arch=amd64 command=state container=7cdd31660d8705facdadeb8598d2c0bd008e8142c54e3b3069abd392c8d58997 name=kata-runtime pid=14314 sandbox=1c3e77cad66aa2b6d8cc846f818370f79cb0104c0b840f67d0f502fd6562b68c source=virtcontainers subsystem=sandbox\\n\", \"SOURCEREALTIME_TIMESTAMP\":\"1582281087810805\", \"kata_level\":\"info\", \"kata_msg\":\"release sandbox\", \"kata_arch\":\"amd64\", \"kata_command\":\"state\", \"kata_container\":\"7cdd31660d8705facdadeb8598d2c0bd008e8142c54e3b3069abd392c8d58997\", \"kata_name\":\"kata-runtime\", \"kata_pid\":14314, \"kata_sandbox\":\"1c3e77cad66aa2b6d8cc846f818370f79cb0104c0b840f67d0f502fd6562b68c\", \"kata_source\":\"virtcontainers\", \"kata_subsystem\":\"sandbox\"} ``` Here we can see that the `MESSAGE` field has been parsed out and pre-pended into the `kata_*` fields, which contain usefully filterable fields such as `katalevel`, `katacommand` and `kata_subsystem` etc. We have managed to configure Fluentd to capture the Kata logs entries from the system journal, and further managed to then parse out the `logfmt` message into JSON to allow further analysis inside Elastic/Kibana. The underlying basic data format used by Fluentd and Elastic is JSON. If we output JSON directly from Kata, that should make overall import and processing of the log entries more efficient. There are potentially two things we can do with Kata here: Get Kata to rather than `logfmt`. Get Kata to log directly into a file, rather than via the system journal. This would allow us to not need to parse the systemd format files, and capture the Kata log lines directly. It would also avoid Fluentd having to potentially parse or skip over many non-Kata related systemd journal that it is not at all interested in. In theory we could get Kata to post its messages in JSON format to the systemd journal by adding the `--log-format=json` option to the Kata runtime, and then swapping the `logfmt` parser for the `json` parser, but we would still need to parse the systemd files. We will skip this setup in this document, and go directly to a full Kata specific JSON format logfile test. Kata runtime has the ability to generate JSON logs directly, rather than its default `logfmt` format. Passing the `--log-format=json` argument to the Kata runtime enables this. The easiest way to pass in this extra parameter from a installation is to edit the `/opt/kata/bin/kata-qemu` shell script. At the same time, we will add the `--log=/var/log/kata-runtime.log` argument to store the Kata logs in their own file (rather than into the system journal). ```bash /opt/kata/bin/kata-runtime --config \"/opt/kata/share/defaults/kata-containers/configuration-qemu.toml\" --log-format=json --log=/var/log/kata-runtime.log $@ ``` And then we'll add the Fluentd config section to parse that file. Note, we inform the parser that Kata is generating timestamps in `iso8601` format. Kata places these timestamps into a field called `time`, which is the default field the Fluentd parser looks for: ``` <source> type tail tag kata-containers path /var/log/kata-runtime.log pos_file /var/log/kata-runtime.pos format json time_format %iso8601 readfromhead true </source> ``` This imports the `kata-runtime` logs, with the resulting records looking like: Something to note here is that we seem to have gained an awful lot of fairly identical looking fields in the elastic database: In reality, they are not all identical, but do come out of one of the Kata log entries - from the `kill` command. 
A JSON fragment showing an example is below: ```json {"
},
{
"data": "\"EndpointProperties\": { \"Iface\": { \"Index\": 4, \"MTU\": 1460, \"TxQLen\": 0, \"Name\": \"eth0\", \"HardwareAddr\": \"ClgKAQAL\", \"Flags\": 19, \"RawFlags\": 69699, \"ParentIndex\": 15, \"MasterIndex\": 0, \"Namespace\": null, \"Alias\": \"\", \"Statistics\": { \"RxPackets\": 1, \"TxPackets\": 5, \"RxBytes\": 42, \"TxBytes\": 426, \"RxErrors\": 0, \"TxErrors\": 0, \"RxDropped\": 0, \"TxDropped\": 0, \"Multicast\": 0, \"Collisions\": 0, \"RxLengthErrors\": 0, \"RxOverErrors\": 0, \"RxCrcErrors\": 0, \"RxFrameErrors\": 0, \"RxFifoErrors\": 0, \"RxMissedErrors\": 0, \"TxAbortedErrors\": 0, \"TxCarrierErrors\": 0, \"TxFifoErrors\": 0, \"TxHeartbeatErrors\": 0, \"TxWindowErrors\": 0, \"RxCompressed\": 0, \"TxCompressed\": 0 ... ``` If these new fields are not required, then a Fluentd could be used to delete them before they are injected into Elastic. It may be noted above that all the fields are imported with their base native name, such as `arch` and `level`. It may be better for data storage and processing if all the fields were identifiable as having come from Kata, and avoid namespace clashes with other imports. This can be achieved by prefixing all the keys with, say, `kata_`. It appears `fluend` cannot do this directly in the input or match phases, but can in the filter/parse phase (as was done when processing `logfmt` data for instance). To achieve this, we can first input the Kata JSON data as a single line, and then add the prefix using a JSON filter section: ``` <source> @type tail path /var/log/kata-runtime.log pos_file /var/log/kata-runtime.pos readfromhead true tag kata-runtime <parse> @type none </parse> </source> <filter kata-runtime> @type parser key_name message reserve_data false injectkeyprefix kata_ <parse> @type json </parse> </filter> ``` When using the Kata `shimv2` runtime with `containerd`, as described in this , the Kata logs are routed differently, and some adjustments to the above methods will be necessary to filter them in Fluentd. The Kata `shimv2` logs are different in two primary ways: The Kata logs are directed via `containerd`, and will be captured along with the `containerd` logs, such as on the containerd stdout or in the system journal. In parallel, Kata `shimv2` places its logs into the system journal under the systemd name of `kata`. Below is an example Fluentd configuration fragment showing one possible method of extracting and separating the `containerd` and Kata logs from the system journal by filtering on the Kata `SYSLOG_IDENTIFIER` field, using the : ```yaml <source> type systemd path /path/to/journal filters [{ \"SYSTEMDUNIT\": \"containerd.service\" }] pos_file /tmp/systemd-containerd.pos readfromhead true tag containerdtmptag </source> <match containerdtmptag> @type rewritetagfilter <rule> key SYSLOG_IDENTIFIER pattern kata tag kata_tag </rule> <rule> key MESSAGE pattern /.+/ tag containerd_tag </rule> </match> ``` Warning: You should be aware of the following caveats, which may disrupt or change what and how you capture and process the Kata Containers logs. The following caveats should be noted: There is a whereby enabling full debug in Kata, particularly enabling agent kernel log messages, can result in corrupt log lines being generated by Kata (due to overlapping multiple output streams). Presently only the `kata-runtime` can generate JSON logs, and direct them to files. Other components such as the `proxy` and `shim` can only presently report to the system journal. 
Hopefully these components will be extended with extra functionality. We have shown how native Kata logs using the systemd journal and `logfmt` data can be imported, and also how Kata can be instructed to generate JSON logs directly and import those into Fluentd. We have detailed a few known caveats, and leave it to the implementer to choose the best method for their system."
}
] | {
"category": "Runtime",
"file_name": "how-to-import-kata-logs-with-fluentd.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
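As an aside on the logfmt-to-JSON step described in the Kata record above: the following Go sketch is purely illustrative of what the Fluentd `logfmt` parser combined with `inject_key_prefix` does to a message. It handles only simple, unquoted key=value pairs and is not part of Kata or Fluentd.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseLogfmt handles only simple key=value pairs separated by spaces;
// real logfmt values containing spaces need a proper parser.
func parseLogfmt(msg, prefix string) map[string]string {
	out := make(map[string]string)
	for _, field := range strings.Fields(msg) {
		kv := strings.SplitN(field, "=", 2)
		if len(kv) != 2 {
			continue
		}
		// Prefix each key so the Kata fields are identifiable in Elastic.
		out[prefix+kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	msg := `level=info arch=amd64 command=state source=virtcontainers subsystem=sandbox`
	record, _ := json.Marshal(parseLogfmt(msg, "kata_"))
	fmt.Println(string(record))
	// {"kata_arch":"amd64","kata_command":"state","kata_level":"info",...}
}
```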
[
{
"data": "(containers-and-vms)= Incus provides support for two different types of {ref}`instances <expl-instances>`: system containers and virtual machines. Incus uses features of the Linux kernel (such as `namespaces` and `cgroups`) in the implementation of system containers. These features provide a software-only way to isolate and restrict a running system container. A system container can only be based on the Linux kernel. When running a virtual machine, Incus uses hardware features of the the host system as a way to isolate and restrict a running virtual machine. Therefore, virtual machines can be used to run, for example, different operating systems than the host system. | Virtual Machines | Application Containers | System Containers | | :--: | :--: | :--: | | Uses a dedicated kernel | Uses the kernel of the host | Uses the kernel of the host | | Can host different types of OS | Can only host Linux | Can only host Linux | | Uses more resources | Uses less resources | Uses less resources | | Requires hardware virtualization | Software-only | Software-only | | Can host multiple applications | Can host a single app | Can host multiple applications | | Supported by Incus | Supported by Docker | Supported by Incus | Application containers (as provided by, for example, Docker) package a single process or application. System containers, on the other hand, simulate a full operating system similar to what you would be running on a host or in a virtual machine. You can run Docker in an Incus system container, but you would not run Incus in a Docker application container. Therefore, application containers are suitable to provide separate components, while system containers provide a full solution of libraries, applications, databases and so on. In addition, you can use system containers to create different user spaces and isolate all processes belonging to each user space, which is not what application containers are intended for. Virtual machines create a virtual version of a physical machine, using hardware features of the host system. The boundaries between the host system and virtual machines is enforced by those hardware features. System containers, on the other hand, use the already running OS kernel of the host system instead of launching their own kernel. If you run several system containers, they all share the same kernel, which makes them faster and more lightweight than virtual machines. With Incus, you can create both system containers and virtual machines. You should use a system container to leverage the smaller size and increased performance if all functionality you require is compatible with the kernel of your host operating system. If you need functionality that is not supported by the OS kernel of your host system or you want to run a completely different OS, use a virtual machine."
}
] | {
"category": "Runtime",
"file_name": "containers_and_vms.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "- - - - - - - - - Velero introduced the concept of Resource Modifiers in v1.12.0. This feature allows the user to specify a configmap with a set of rules to modify the resources during restore. The user can specify the filters to select the resources and then specify the JSON Patch to apply on the resource. This feature is currently limited to the operations supported by JSON Patch RFC. This proposal is to add support for JSON Merge Patch and Strategic Merge Patch in the Resource Modifiers. This will allow the user to use the same configmap to apply JSON Merge Patch and Strategic Merge Patch on the resources during restore. Allow the user to specify a JSON patch, JSON Merge Patch or Strategic Merge Patch for modification. Allow the user to specify multiple JSON Patch, JSON Merge Patch or Strategic Merge Patch. Allow the user to specify mixed JSON Patch, JSON Merge Patch and Strategic Merge Patch in the same configmap. Deprecating the existing RestoreItemAction plugins for standard substitutions(like changing the namespace, changing the storage class, etc.) Alice has some Pods and part of them have an annotation `{\"for\": \"bar\"}`. Alice wishes to restore these Pods to a different cluster without this annotation. Alice can use this feature to remove this annotation during restore. Bob has a Pod with several containers and one container with name nginx has an image `repo1/nginx`. Bob wishes to restore this Pod to a different cluster, but new cluster can not access repo1, so he pushes the image to repo2. Bob can use this feature to update the image of container nginx to `repo2/nginx` during restore. The design and approach is inspired by kubectl patch command and . New fields `MergePatches` and `StrategicPatches` will be added to the `ResourceModifierRule` struct to support all three patch types. Only one of the three patch types can be specified in a single `ResourceModifierRule`. Add wildcard support for `groupResource` in `conditions` struct. The workflow to create Resource Modifier ConfigMap and reference it in RestoreSpec will remain the same as described in document . is a naively simple format, with limited usability. Probably it is a good choice if you are building something small, with very simple JSON Schema. is a more complex format, but it is applicable to any JSON documents. For a comparison of JSON patch and JSON merge patch, see . Strategic Merge Patch is a Kubernetes defined patch type, mainly used to process resources of type list. You can replace/merge a list, add/remove items from a list by key, change the order of items in a list, etc. Strategic merge patch is not supported for custom resources. For more details, see . MergePatches is a list to specify the merge patches to be applied on the resource. The merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous"
},
{
"data": "Example of MergePatches in ResourceModifierRule ```yaml version: v1 resourceModifierRules: conditions: groupResource: pods namespaces: ns1 mergePatches: patchData: | { \"metadata\": { \"annotations\": { \"foo\": null } } } ``` The above configmap will apply the Merge Patch to all the pods in namespace ns1 and remove the annotation `foo` from the pods. Both json and yaml format are supported for the patchData. StrategicPatches is a list to specify the strategic merge patches to be applied on the resource. The strategic merge patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches. Example of StrategicPatches in ResourceModifierRule ```yaml version: v1 resourceModifierRules: conditions: groupResource: pods resourceNameRegex: \"^my-pod$\" namespaces: ns1 strategicPatches: patchData: | { \"spec\": { \"containers\": [ { \"name\": \"nginx\", \"image\": \"repo2/nginx\" } ] } } ``` The above configmap will apply the Strategic Merge Patch to the pod with name my-pod in namespace ns1 and update the image of container nginx to `repo2/nginx`. Both json and yaml format are supported for the patchData. Since JSON Merge Patch and Strategic Merge Patch do not support conditional patches, we will use the `test` operation of JSON Patch to support conditional patches in all patch types by adding it to `Conditions` struct in `ResourceModifierRule`. Example of test in conditions ```yaml version: v1 resourceModifierRules: conditions: groupResource: persistentvolumeclaims.storage.k8s.io matches: path: \"/spec/storageClassName\" value: \"premium\" mergePatches: patchData: | { \"metadata\": { \"annotations\": { \"foo\": null } } } ``` The above configmap will apply the Merge Patch to all the PVCs in all namespaces with storageClassName premium and remove the annotation `foo` from the PVCs. You can specify multiple rules in the `matches` list. The patch will be applied only if all the matches are satisfied. The user can specify a wildcard for groupResource in the conditions' struct. This will allow the user to apply the patches for all the resources of a particular group or all resources in all groups. For example, `.apps` will apply to all the resources in the `apps` group, `` will apply to all the resources in all groups. The patchData of Strategic Merge Patch is sometimes a bit complex for user to write. We can provide a helper command to generate the patchData for Strategic Merge Patch. The command will take the original resource and the modified resource as input and generate the patchData. It can also be used in JSON Merge Patch. Here is a sample code snippet to achieve this: ```go package main import ( \"fmt\" corev1 \"k8s.io/api/core/v1\" \"sigs.k8s.io/controller-runtime/pkg/client\" ) func main() { pod := &corev1.Pod{ Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Name: \"web\", Image: \"nginx\", }, }, }, } newPod := pod.DeepCopy() patch := client.StrategicMergeFrom(pod) newPod.Spec.Containers[0].Image = \"nginx1\" data, _ := patch.Data(newPod) fmt.Println(string(data)) // Output: // {\"spec\":{\"$setElementOrder/containers\":[{\"name\":\"web\"}],\"containers\":[{\"image\":\"nginx1\",\"name\":\"web\"}]}} } ``` No security impact. Compatible with current Resource Modifiers. Use \"github.com/evanphx/json-patch\" to support JSON Merge Patch. Use \"k8s.io/apimachinery/pkg/util/strategicpatch\" to support Strategic Merge Patch. 
Use glob to support wildcard for `groupResource` in `conditions` struct. Use `test` operation of JSON Patch to calculate the `matches` in `conditions` struct. add a Velero subcommand to generate/validate the patchData for Strategic Merge Patch and JSON Merge Patch. add jq support for more complex conditions or patches, to meet the situations that the current conditions or patches can not handle. like N/A"
}
] | {
"category": "Runtime",
"file_name": "merge-patch-and-strategic-in-resource-modifier.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
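Since the Velero design above names github.com/evanphx/json-patch as the library for JSON Merge Patch support, here is a small, hedged Go sketch of how an RFC 7386 merge patch such as the annotation-removal example behaves; the actual wiring inside Velero's restore flow will differ.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Abbreviated resource metadata as JSON.
	original := []byte(`{"metadata":{"annotations":{"foo":"bar","keep":"yes"}}}`)

	// RFC 7386 merge patch: a null value deletes the key, matching the
	// mergePatches example that removes the "foo" annotation.
	patch := []byte(`{"metadata":{"annotations":{"foo":null}}}`)

	merged, err := jsonpatch.MergePatch(original, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
	// {"metadata":{"annotations":{"keep":"yes"}}}
}
```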
[
{
"data": "Name | Type | Description | Notes | - | - | - Size | int64 | | HotplugSize | Pointer to int64 | | [optional] HotpluggedSize | Pointer to int64 | | [optional] Mergeable | Pointer to bool | | [optional] [default to false] HotplugMethod | Pointer to string | | [optional] [default to \"Acpi\"] Shared | Pointer to bool | | [optional] [default to false] Hugepages | Pointer to bool | | [optional] [default to false] HugepageSize | Pointer to int64 | | [optional] Prefault | Pointer to bool | | [optional] [default to false] Thp | Pointer to bool | | [optional] [default to true] Zones | Pointer to | | [optional] `func NewMemoryConfig(size int64, ) *MemoryConfig` NewMemoryConfig instantiates a new MemoryConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewMemoryConfigWithDefaults() *MemoryConfig` NewMemoryConfigWithDefaults instantiates a new MemoryConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *MemoryConfig) GetSize() int64` GetSize returns the Size field if non-nil, zero value otherwise. `func (o MemoryConfig) GetSizeOk() (int64, bool)` GetSizeOk returns a tuple with the Size field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetSize(v int64)` SetSize sets Size field to given value. `func (o *MemoryConfig) GetHotplugSize() int64` GetHotplugSize returns the HotplugSize field if non-nil, zero value otherwise. `func (o MemoryConfig) GetHotplugSizeOk() (int64, bool)` GetHotplugSizeOk returns a tuple with the HotplugSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetHotplugSize(v int64)` SetHotplugSize sets HotplugSize field to given value. `func (o *MemoryConfig) HasHotplugSize() bool` HasHotplugSize returns a boolean if a field has been set. `func (o *MemoryConfig) GetHotpluggedSize() int64` GetHotpluggedSize returns the HotpluggedSize field if non-nil, zero value otherwise. `func (o MemoryConfig) GetHotpluggedSizeOk() (int64, bool)` GetHotpluggedSizeOk returns a tuple with the HotpluggedSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetHotpluggedSize(v int64)` SetHotpluggedSize sets HotpluggedSize field to given value. `func (o *MemoryConfig) HasHotpluggedSize() bool` HasHotpluggedSize returns a boolean if a field has been set. `func (o *MemoryConfig) GetMergeable() bool` GetMergeable returns the Mergeable field if non-nil, zero value otherwise. `func (o MemoryConfig) GetMergeableOk() (bool, bool)` GetMergeableOk returns a tuple with the Mergeable field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetMergeable(v bool)` SetMergeable sets Mergeable field to given value. `func (o *MemoryConfig) HasMergeable() bool` HasMergeable returns a boolean if a field has been set. `func (o *MemoryConfig) GetHotplugMethod() string` GetHotplugMethod returns the HotplugMethod field if non-nil, zero value"
},
{
"data": "`func (o MemoryConfig) GetHotplugMethodOk() (string, bool)` GetHotplugMethodOk returns a tuple with the HotplugMethod field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetHotplugMethod(v string)` SetHotplugMethod sets HotplugMethod field to given value. `func (o *MemoryConfig) HasHotplugMethod() bool` HasHotplugMethod returns a boolean if a field has been set. `func (o *MemoryConfig) GetShared() bool` GetShared returns the Shared field if non-nil, zero value otherwise. `func (o MemoryConfig) GetSharedOk() (bool, bool)` GetSharedOk returns a tuple with the Shared field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetShared(v bool)` SetShared sets Shared field to given value. `func (o *MemoryConfig) HasShared() bool` HasShared returns a boolean if a field has been set. `func (o *MemoryConfig) GetHugepages() bool` GetHugepages returns the Hugepages field if non-nil, zero value otherwise. `func (o MemoryConfig) GetHugepagesOk() (bool, bool)` GetHugepagesOk returns a tuple with the Hugepages field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetHugepages(v bool)` SetHugepages sets Hugepages field to given value. `func (o *MemoryConfig) HasHugepages() bool` HasHugepages returns a boolean if a field has been set. `func (o *MemoryConfig) GetHugepageSize() int64` GetHugepageSize returns the HugepageSize field if non-nil, zero value otherwise. `func (o MemoryConfig) GetHugepageSizeOk() (int64, bool)` GetHugepageSizeOk returns a tuple with the HugepageSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetHugepageSize(v int64)` SetHugepageSize sets HugepageSize field to given value. `func (o *MemoryConfig) HasHugepageSize() bool` HasHugepageSize returns a boolean if a field has been set. `func (o *MemoryConfig) GetPrefault() bool` GetPrefault returns the Prefault field if non-nil, zero value otherwise. `func (o MemoryConfig) GetPrefaultOk() (bool, bool)` GetPrefaultOk returns a tuple with the Prefault field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetPrefault(v bool)` SetPrefault sets Prefault field to given value. `func (o *MemoryConfig) HasPrefault() bool` HasPrefault returns a boolean if a field has been set. `func (o *MemoryConfig) GetThp() bool` GetThp returns the Thp field if non-nil, zero value otherwise. `func (o MemoryConfig) GetThpOk() (bool, bool)` GetThpOk returns a tuple with the Thp field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetThp(v bool)` SetThp sets Thp field to given value. `func (o *MemoryConfig) HasThp() bool` HasThp returns a boolean if a field has been set. `func (o *MemoryConfig) GetZones() []MemoryZoneConfig` GetZones returns the Zones field if non-nil, zero value otherwise. `func (o MemoryConfig) GetZonesOk() ([]MemoryZoneConfig, bool)` GetZonesOk returns a tuple with the Zones field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryConfig) SetZones(v []MemoryZoneConfig)` SetZones sets Zones field to given value. `func (o *MemoryConfig) HasZones() bool` HasZones returns a boolean if a field has been set."
}
] | {
"category": "Runtime",
"file_name": "MemoryConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
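A brief, hedged usage sketch of the generated MemoryConfig API documented above. The import path is an assumption about where the generated Cloud Hypervisor client lives in the Kata Containers tree, and the values (1 GiB boot memory, 2 MiB hugepages) are arbitrary examples.

```go
package main

import (
	"fmt"

	// Assumed location of the generated Cloud Hypervisor client;
	// adjust to wherever the package is vendored in your tree.
	chclient "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cloud-hypervisor/client"
)

func main() {
	// NewMemoryConfig takes the one required field, Size; optional fields
	// keep their documented defaults until explicitly set.
	m := chclient.NewMemoryConfig(int64(1 << 30)) // 1 GiB boot memory

	m.SetHugepages(true)
	m.SetHugepageSize(int64(2 << 20)) // 2 MiB hugepages
	m.SetHotplugSize(int64(1 << 30))  // allow hotplugging up to 1 GiB more

	if m.HasHugepageSize() {
		fmt.Println("hugepage size:", m.GetHugepageSize())
	}
	fmt.Println("boot memory:", m.GetSize())
}
```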
[
{
"data": "The MIT License (MIT) Copyright (c) 2014 Brian Goff Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] | {
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "runc",
"subcategory": "Container Runtime"
} |
[
{
"data": "This document describes some security recommendations when deploying Antrea in a cluster, and in particular a [multi-tenancy cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview#whatismulti-tenancy). To report a vulnerability in Antrea, please refer to . For information about securing Antrea control-plane communications, refer to this . Like all other K8s Network Plugins, Antrea runs an agent (the Antrea Agent) on every Node on the cluster, using a K8s DaemonSet. And just like for other K8s Network Plugins, this agent requires a specific set of permissions which grant it access to the K8s API using . These permissions are required to implement the different features offered by Antrea. If any Node in the cluster happens to become compromised (e.g., by an escaped container) and the token for the `antrea-agent` ServiceAccount is harvested by the attacker, some of these permissions can be leveraged to negatively affect other workloads running on the cluster. In particular, the Antrea Agent is granted the following permissions: `patch` the `pods/status` resources: a successful attacker could abuse this permission to re-label Pods to facilitate [confused deputy attacks](https://en.wikipedia.org/wiki/Confuseddeputyproblem) against built-in controllers. For example, making a Pod match a Service selector in order to man-in-the-middle (MITM) the Service traffic, or making a Pod match a ReplicaSet selector so that the ReplicaSet controller deletes legitimate replicas. `patch` the `nodes/status` resources: a successful attacker could abuse this permission to affect scheduling by modifying Node fields like labels, capacity, and conditions. In both cases, the Antrea Agent only requires the ability to mutate the annotations field for all Pods and Nodes, but with K8s RBAC, the lowest permission level that we can grant the Antrea Agent to satisfy this requirement is the `patch` verb for the `status` subresource for Pods and Nodes (which also provides the ability to mutate labels). To mitigate the risk presented by these permissions in case of a compromised token, we suggest that you use , with the appropriate policy. We provide the following Gatekeeper policy, consisting of a `ConstraintTemplate` and the corresponding `Constraint`. When using this policy, it will no longer be possible for the `antrea-agent` ServiceAccount to mutate anything besides annotations for the Pods and Nodes resources. ```yaml apiVersion: templates.gatekeeper.sh/v1 kind: ConstraintTemplate metadata: name: antreaagentstatusupdates annotations: description: >- Disallows unauthorized updates to status subresource by Antrea Agent Only annotations can be mutated spec: crd: spec: names: kind: AntreaAgentStatusUpdates targets: target: admission.k8s.gatekeeper.sh rego: | package antreaagentstatusupdates username := object.get(input.review.userInfo, \"username\", \"\") targetUsername := \"system:serviceaccount:kube-system:antrea-agent\" allowed_mutation(object, oldObject) { object.status == oldObject.status object.metadata.labels == oldObject.metadata.labels } violation[{\"msg\": msg}] { username == targetUsername input.review.operation == \"UPDATE\" input.review.requestSubResource == \"status\" not allowed_mutation(input.review.object, input.review.oldObject) msg := \"Antrea Agent is not allowed to mutate this field\" } ``` ```yaml apiVersion:"
},
{
"data": "kind: AntreaAgentStatusUpdates metadata: name: antrea-agent-status-updates spec: match: kinds: apiGroups: [\"\"] kinds: [\"Pod\", \"Node\"] ``` *Please ensure that the `ValidatingWebhookConfiguration` for your Gatekeeper installation enables policies to be applied on the `pods/status` and `nodes/status` subresources, which may not be the case by default.* As a reference, the following `ValidatingWebhookConfiguration` rule will cause policies to be applied to all resources and their subresources: ```yaml apiGroups: '*' apiVersions: '*' operations: CREATE UPDATE resources: '/' scope: '*' ``` while the following rule will cause policies to be applied to all resources, but not their subresources: ```yaml apiGroups: '*' apiVersions: '*' operations: CREATE UPDATE resources: '*' scope: '*' ``` The Antrea Controller, which runs as a single-replica Deployment, enjoys higher level permissions than the Antrea Agent. We recommend for production clusters running Antrea to schedule the `antrea-controller` Pod on a \"secure\" Node, which could for example be the Node (or one of the Nodes) running the K8s control-plane. Antrea relies on persisting files on each K8s Node's filesystem, in order to minimize disruptions to network functions across Antrea Agent restarts, in particular during an upgrade. All these files are located under `/var/run/antrea/`. The most notable of these files is `/var/run/antrea/openvswitch/conf.db`, which stores the Open vSwitch database. Prior to Antrea v0.10, any user had read access to the file on the host (permissions were set to `0644`). Starting with v0.10, this is no longer the case (permissions are now set to `0640`). Starting with v0.13, we further remove access to the `/var/run/antrea/` directory for non-root users (permissions are set to `0750`). If a malicious Pod can gain read access to this file, or, prior to Antrea v0.10, if an attacker can gain access to the host, they can potentially access sensitive information stored in the database, most notably the Pre-Shared Key (PSK) used to configure , which is stored in plaintext in the database. If a PSK is leaked, an attacker can mount a man-in-the-middle attack and intercept tunnel traffic. If a malicious Pod can gain write access to this file, it can modify the contents of the database, and therefore impact network functions. Administrators of multi-tenancy clusters running Antrea should take steps to restrict the access of Pods to `/var/run/antrea/`. One way to achieve this is to use a and restrict the set of allowed to exclude `hostPath`. This guidance applies to all multi-tenancy clusters and is not specific to Antrea. To quote the K8s documentation: There are many ways a container with unrestricted access to the host filesystem can escalate privileges, including reading data from other containers, and abusing the credentials of system services, such as Kubelet. An alternative solution to K8s PodSecurityPolicies is to use to constrain usage of the host filesystem by Pods."
}
] | {
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "](https://travis-ci.org/kubernetes-sigs/yaml) kubernetes-sigs/yaml is a permanent fork of . A wrapper around designed to enable a better way of handling YAML when marshaling to and from structs. In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, . This package uses and therefore supports . Caveat #1: When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example: ``` BAD: exampleKey: !!binary gIGC GOOD: exampleKey: gIGC ... and decode the base64 data in your code. ``` Caveat #2: When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys. To install, run: ``` $ go get sigs.k8s.io/yaml ``` And import using: ``` import \"sigs.k8s.io/yaml\" ``` Usage is very similar to the JSON library: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) type Person struct { Name string `json:\"name\"` // Affects YAML field names too. Age int `json:\"age\"` } func main() { // Marshal a Person struct to YAML. p := Person{\"John\", 30} y, err := yaml.Marshal(p) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ // Unmarshal the YAML back into a Person struct. var p2 Person err = yaml.Unmarshal(y, &p2) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(p2) /* Output: {John 30} */ } ``` `yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) func main() { j := []byte(`{\"name\": \"John\", \"age\": 30}`) y, err := yaml.JSONToYAML(j) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ j2, err := yaml.YAMLToJSON(y) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(j2)) /* Output: {\"age\":30,\"name\":\"John\"} */ } ```"
}
] | {
"category": "Runtime",
"file_name": "README.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
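The first caveat in the sigs.k8s.io/yaml record above recommends storing binary data as plain base64 text (no `!!binary` tag) and decoding it in custom JSON methods. The sketch below shows one hedged way that could look, using the `exampleKey`/`gIGC` example from the README and a hypothetical `Secret` type.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	"sigs.k8s.io/yaml"
)

// Secret holds binary data; its custom JSON methods keep the data as plain
// base64 text in YAML/JSON (no !!binary tag), as the caveat recommends.
type Secret struct {
	Data []byte
}

func (s Secret) MarshalJSON() ([]byte, error) {
	return json.Marshal(map[string]string{
		"exampleKey": base64.StdEncoding.EncodeToString(s.Data),
	})
}

func (s *Secret) UnmarshalJSON(b []byte) error {
	var raw map[string]string
	if err := json.Unmarshal(b, &raw); err != nil {
		return err
	}
	decoded, err := base64.StdEncoding.DecodeString(raw["exampleKey"])
	if err != nil {
		return err
	}
	s.Data = decoded
	return nil
}

func main() {
	// YAML stores the raw base64 text without a !!binary tag.
	y := []byte("exampleKey: gIGC\n")

	var s Secret
	if err := yaml.Unmarshal(y, &s); err != nil {
		fmt.Println("err:", err)
		return
	}
	fmt.Printf("%x\n", s.Data) // 808182

	out, _ := yaml.Marshal(s)
	fmt.Print(string(out)) // exampleKey: gIGC
}
```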
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for bash Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. To load completions in your current shell session: source <(cilium-operator-aws completion bash) To load completions for every new session, execute once: cilium-operator-aws completion bash > /etc/bash_completion.d/cilium-operator-aws cilium-operator-aws completion bash > $(brew --prefix)/etc/bash_completion.d/cilium-operator-aws You will need to start a new shell for this setup to take effect. ``` cilium-operator-aws completion bash ``` ``` -h, --help help for bash --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] | {
"category": "Runtime",
"file_name": "cilium-operator-aws_completion_bash.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: \"How to use CSI Volume Snapshotting with Velero\" excerpt: In the Velero 1.4 release, we introduced support for CSI snapshotting v1beta1 APIs. This post provides step-by-step instructions on setting up a CSI environment in Azure, installing Velero 1.4 with the velero-plugin-for-csi, and a demo of this feature in action. author_name: Ashish Amarnath image: /img/posts/csi-announce-blog.jpg tags: ['Velero Team', 'Ashish Amarnath'] In the recent , we announced a new feature of supporting CSI snapshotting using the . With this capability of CSI volume snapshotting, Velero can now support any volume provider that has a CSI driver with snapshotting capability, without requiring a Velero-specific volume snapshotter plugin to be available. This post has the necessary instructions for you to start using this feature. Using the CSI volume snapshotting features in Velero involves the following steps. Set up a CSI environment with a driver supporting the Kubernetes CSI snapshot beta APIs. Install Velero with CSI snapshotting feature enabled. Deploy `csi-app`: a stateful application that uses CSI backed volumes that we will backup and restore. Use Velero to backup and restore the `csi-app`. As the is available starting from Kubernetes `1.17`, you need to run Kubernetes `1.17` or later. This post uses an AKS cluster running Kubernetes `1.17`, with Azure disk CSI driver as an example. Following instructions to install the Azure disk CSI driver from run the below command ```bash curl -skSL https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/install-driver.sh | bash -s master snapshot -- ``` This script will deploy the following CSI components, CRDs, and necessary RBAC: The CSI volume snapshot capability is currently, as of Velero 1.4, a beta feature behind the `EnableCSI` feature flag and is not enabled by default. Following instructions from our , install Velero with the and using the Azure Blob Store as our BackupStorageLocation. Please refer to our for instructions on setting up the BackupStorageLocation. Please note that the BackupStorageLocation should be set up before installing Velero. Install Velero by running the below command ```bash velero install \\ --provider azure \\ --plugins velero/velero-plugin-for-microsoft-azure:v1.1.0,velero/velero-plugin-for-csi:v0.1.1 \\ --bucket $BLOB_CONTAINER \\ --secret-file <PATHTOCREDS_FILE>/aks-creds \\ --backup-location-config resourceGroup=$AZUREBACKUPRESOURCEGROUP,storageAccount=$AZURESTORAGEACCOUNTID,subscriptionId=$AZUREBACKUPSUBSCRIPTION_ID \\ --snapshot-location-config apiTimeout=5m,resourceGroup=$AZUREBACKUPRESOURCEGROUP,subscriptionId=$AZUREBACKUPSUBSCRIPTIONID \\ --image velero/velero:v1.4.0 \\ --features=EnableCSI ``` Before installing the stateful application with CSI backed volumes, install the storage class and the volume snapshot class for the Azure disk CSI driver by applying the below `yaml` to our cluster. ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: disk.csi.azure.com provisioner: disk.csi.azure.com parameters: skuname: StandardSSD_LRS reclaimPolicy: Delete volumeBindingMode: Immediate allowVolumeExpansion: true apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshotClass metadata: name: csi-azuredisk-vsc driver: disk.csi.azure.com deletionPolicy: Retain parameters: tags: 'foo=aaa,bar=bbb' ``` NOTE: The above `yaml` was sourced from and . Deploy the stateful application that is using CSI backed PVCs, in the `csi-app` namespace by applying the below"
},
{
"data": "```yaml apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: csi-app kind: Pod apiVersion: v1 metadata: namespace: csi-app name: csi-nginx spec: nodeSelector: kubernetes.io/os: linux containers: image: nginx name: nginx command: [ \"sleep\", \"1000000\" ] volumeMounts: name: azuredisk01 mountPath: \"/mnt/azuredisk\" volumes: name: azuredisk01 persistentVolumeClaim: claimName: pvc-azuredisk apiVersion: v1 kind: PersistentVolumeClaim metadata: namespace: csi-app name: pvc-azuredisk spec: accessModes: ReadWriteOnce resources: requests: storage: 1Gi storageClassName: disk.csi.azure.com ``` For demonstration purposes, instead of relying on the application writing data to the mounted CSI volume, exec into the pod running the stateful application to write data into `/mnt/azuredisk`, where the CSI volume is mounted. This is to let us get a consistent checksum value of the data and verify that the data on restore is exactly same as that in the backup. ```bash $ kubectl -n csi-app exec -ti csi-nginx bash root@csi-nginx:/# while true; do echo -n \"FOOBARBAZ \" >> /mnt/azuredisk/foobar; done ^C root@csi-nginx:/# cksum /mnt/azuredisk/foobar 2279846381 1726530 /mnt/azuredisk/foobar ``` Back up the `csi-app` namespace by running the below command ```bash $ velero backup create csi-b2 --include-namespaces csi-app --wait Backup request \"csi-b2\" submitted successfully. Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. .................. Backup completed with status: Completed. You may check for more information using the commands `velero backup describe csi-b2` and `velero backup logs csi-b2`. ``` Before restoring from the backup simulate a disaster by running ```bash kubectl delete ns csi-app ``` Once the namespace has been deleted, restore the `csi-app` from the backup `csi-b2`. ```bash $ velero create restore --from-backup csi-b2 --wait Restore request \"csi-b2-20200518085136\" submitted successfully. Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background. .... Restore completed with status: Completed. You may check for more information using the commands `velero restore describe csi-b2-20200518085136` and `velero restore logs csi-b2-20200518085136`. ``` Now that the restore has completed and our `csi-nginx` pod is `Running`, confirm that contents of `/mnt/azuredisk/foobar` have been correctly restored. ```bash $ kubectl -n csi-app exec -ti csi-nginx bash root@csi-nginx:/# cksum /mnt/azuredisk/foobar 2279846381 1726530 /mnt/azuredisk/foobar root@csi-nginx:/# ``` The stateful application that we deployed has been successfully restored with its data intact. And that's all it takes to backup and restore a stateful application that uses CSI backed volumes! Please try out the CSI support in Velero 1.4. Feature requests, suggestions, bug reports, PRs are all welcome. Get in touch with us on , , or More details about CSI volume snapshotting and its support in Velero may be found in the following links: for more information on the CSI beta snapshot APIs. Prerequisites to use this feature is available . : To understand components in a CSI environment for implementation details of the CSI plugin."
}
] | {
"category": "Runtime",
"file_name": "2020-05-27-CSI-integration.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "This document describes the process for adding or improving the existing examples of applications using OpenEBS Volumes. Each application example should comprise of the following: K8s YAML file(s) for starting the application and its associated components. The volumes should point to the OpenEBS Storage Class. If the existing storage-classes do not suit the need, you may create a new storage class. Refer to our for examples. K8s YAML file(s) for starting a client that accesses the application. This is optional, in case the application itself provides a mechanism like in Jupyter, Wordpress, etc. When demonstrating a database-like application like Apache Cassandra, Redis, and so on, it is recommended to have such a mechanism to test that the application has been launched. An instruction guide, that will help in launch and verification of the application. At a very high level, the process to contribute and improve is pretty simple: Submit an Issue describing your proposed change. Or pick an existing issue tagged as . Create your development branch. Commit your changes. Submit your Pull Request. Submit an issue to update the User Documentation. *Note: You could also help by just adding new issues for applications that are currently missing in the examples or by raising issues on existing applications for further enhancements.* The following sections describe some guidelines that are helpful with the above process. Following the guidelines, here is a with frequently used git commands. Some general guidelines when submitting issues for example applications: If the proposed change requires an update to the existing example, please provide a link to the example in the issue. Fork the openebs repository and if you had previously forked, rebase with master to fetch latest changes. Create a new development branch in your forked repository with the following naming convention: \"task description-#issue\" Example: OpenEBS-Support-Kafka-Application-#538 Reference the issue number along with a brief description in your commits. Set your commit.template to the `COMMIT_TEMPLATE` given in the `.github` directory. `git config --local commit.template $GOPATH/src/github.com/openebs/openebs/.github` Rebase your development branch. Submit the PR from the development branch to the openebs/openebs:master Incorporate review comments, if any, in the development branch. Once the PR is accepted, close the branch. After the PR is merged the development branch in the forked repository can be deleted. If you need any help with git, refer to this and go back to the guide to proceed."
}
] | {
"category": "Runtime",
"file_name": "CONTRIBUTING-TO-K8S-DEMO.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Write your description of the PR here. Be sure to include as much background, and details necessary for the reviewers to understand exactly what this is fixing or enhancing. Fixes # Read the , and this PR conforms to the stated requirements. Added changes to the if necessary according to the Added tests to validate this PR, linted with `make check` and tested this PR locally with a `make test`, and `make testall` if possible (see CONTRIBUTING.md). Based this PR against the appropriate branch according to the Added myself as a contributor to the"
}
] | {
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
} |
[
{
"data": "Support check flag in gxz command. Review encoder and check for lzma improvements under xz. Fix binary tree matcher. Compare compression ratio with xz tool using comparable parameters and optimize parameters rename operation action and make it a simple type of size 8 make maxMatches, wordSize parameters stop searching after a certain length is found (parameter sweetLen) Optimize code Do statistical analysis to get linear presets. Test sync.Pool compatability for xz and lzma Writer and Reader Fuzz optimized code. Support parallel go routines for writing and reading xz files. Support a ReaderAt interface for xz files with small block sizes. Improve compatibility between gxz and xz Provide manual page for gxz Improve documentation Fuzz again Full functioning gxz Add godoc URL to README.md (godoc.org) Resolve all issues. Define release candidates. Public announcement. Rewrite Encoder into a simple greedy one-op-at-a-time encoder including simple scan at the dictionary head for the same byte use the killer byte (requiring matches to get longer, the first test should be the byte that would make the match longer) There may be a lot of false sharing in lzma. State; check whether this can be improved by reorganizing the internal structure of it. Check whether batching encoding and decoding improves speed. Use full buffer to create minimal bit-length above range encoder. Might be too slow (see v0.4) hashes with 2, 3 characters additional to 4 characters binary trees with 2-7 characters (uint64 as key, use uint32 as pointers into a an array) rb-trees with 2-7 characters (uint64 as key, use uint32 as pointers into an array with bit-steeling for the colors) execute goch -l for all packages; probably with lower param like 0.5. check orthography with gospell Write release notes in doc/relnotes. Update README.md xb copyright . in xz directory to ensure all new files have Copyright header `VERSION=<version> go generate github.com/ulikunitz/xz/...` to update version files Execute test for Linux/amd64, Linux/x86 and Windows/amd64. Update TODO.md - write short log entry `git checkout master && git merge dev` `git tag -a <version>` `git push` Release v0.5.12 updates README.md and SECURITY.md to address the supply chain attack on the original xz implementation. Matt Dantay (@bodgit) reported an issue with the LZMA reader. The implementation returned an error if the dictionary size was less than 4096 byte, but the recommendation stated the actual used window size should be set to 4096 byte in that case. It actually was the pull request . The new patch v0.5.11 will fix it. Mituo Heijo has fuzzed xz and found a bug in the function readIndexBody. The function allocated a slice of records immediately after reading the value without further checks. Since the number has been too large the make function did panic. The fix is to check the number against the expected number of records before allocating the records. Release v0.5.9 fixes warnings, a typo and adds SECURITY.md. One fix is interesting. ```go const ( a byte = 0x1 b = 0x2 ) ``` The constants a and b don't have the same type. Correct is ```go const ( a byte = 0x1 b byte = 0x2 ) ``` Release v0.5.8 fixes issue . Release v0.5.7 supports the check-ID None and fixes . Release v0.5.6 supports the go.mod file. Release v0.5.5 fixes issues #19 observing ErrLimit outputs. Release v0.5.4 fixes"
},
{
"data": "#15 of another problem with the padding size check for the xz block header. I removed the check completely. Release v0.5.3 fixes issue #12 regarding the decompression of an empty XZ stream. Many thanks to Tomasz Kak, who reported the issue. Release v0.5.2 became necessary to allow the decoding of xz files with 4-byte padding in the block header. Many thanks to Greg, who reported the issue. Release v0.5.1 became necessary to fix problems with 32-bit platforms. Many thanks to Bruno Brigas, who reported the issue. Release v0.5 provides improvements to the compressor and provides support for the decompression of xz files with multiple xz streams. Another compression rate increase by checking the byte at length of the best match first, before checking the whole prefix. This makes the compressor even faster. We have now a large time budget to beat the compression ratio of the xz tool. For enwik8 we have now over 40 seconds to reduce the compressed file size for another 7 MiB. I simplified the encoder. Speed and compression rate increased dramatically. A high compression rate affects also the decompression speed. The approach with the buffer and optimizing for operation compression rate has not been successful. Going for the maximum length appears to be the best approach. The release v0.4 is ready. It provides a working xz implementation, which is rather slow, but works and is interoperable with the xz tool. It is an important milestone. I have the first working implementation of an xz reader and writer. I'm happy about reaching this milestone. I'm now ready to implement xz because, I have a working LZMA2 implementation. I decided today that v0.4 will use the slow encoder using the operations buffer to be able to go back, if I intend to do so. I have restarted the work on the library. While trying to implement LZMA2, I discovered that I need to resimplify the encoder and decoder functions. The option approach is too complicated. Using a limited byte writer and not caring for written bytes at all and not to try to handle uncompressed data simplifies the LZMA encoder and decoder much. Processing uncompressed data and handling limits is a feature of the LZMA2 format not of LZMA. I learned an interesting method from the LZO format. If the last copy is too far away they are moving the head one 2 bytes and not 1 byte to reduce processing times. I have now reimplemented the lzma package. The code is reasonably fast, but can still be optimized. The next step is to implement LZMA2 and then xz. Created release v0.3. The version is the foundation for a full xz implementation that is the target of v0.4. The gflag package has been developed because I couldn't use flag and pflag for a fully compatible support of gzip's and lzma's options. It seems to work now quite nicely. The overflow issue was interesting to research, however Henry S. Warren Jr. Hacker's Delight book was very helpful as usual and had the issue explained perfectly. Fefe's information on his website was based on the C FAQ and quite bad, because it didn't address the issue of -MININT == MININT. It has been a productive day. I improved the interface of lzma. Reader and lzma. Writer and fixed the error handling. By computing the bit length of the LZMA operations I was able to improve the greedy algorithm"
},
{
"data": "By using an 8 MByte buffer the compression rate was not as good as for xz but already better then gzip default. Compression is currently slow, but this is something we will be able to improve over time. Checked the license of ogier/pflag. The binary lzmago binary should include the license terms for the pflag library. I added the endorsement clause as used by Google for the Go sources the LICENSE file. The package lzb contains now the basic implementation for creating or reading LZMA byte streams. It allows the support for the implementation of the DAG-shortest-path algorithm for the compression function. Completed yesterday the lzbase classes. I'm a little bit concerned that using the components may require too much code, but on the other hand there is a lot of flexibility. Implemented Reader and Writer during the Bayern game against Porto. The second half gave me enough time. While showering today morning I discovered that the design for OpEncoder and OpDecoder doesn't work, because encoding/decoding might depend on the current status of the dictionary. This is not exactly the right way to start the day. Therefore we need to keep the Reader and Writer design. This time around we simplify it by ignoring size limits. These can be added by wrappers around the Reader and Writer interfaces. The Parameters type isn't needed anymore. However I will implement a ReaderState and WriterState type to use static typing to ensure the right State object is combined with the right lzbase. Reader and lzbase. Writer. As a start I have implemented ReaderState and WriterState to ensure that the state for reading is only used by readers and WriterState only used by Writers. Today I implemented the OpDecoder and tested OpEncoder and OpDecoder. Came up with a new simplified design for lzbase. I implemented already the type State that replaces OpCodec. The new lzma package is now fully usable and lzmago is using it now. The old lzma package has been completely removed. Implemented lzma. Reader and tested it. Implemented baseReader by adapting code form lzma. Reader. The opCodec has been copied yesterday to lzma2. opCodec has a high number of dependencies on other files in lzma2. Therefore I had to copy almost all files from lzma. Removed only a TODO item. However in Francesco Campoy's presentation \"Go for Javaneros (Javastes?)\" is the the idea that using an embedded field E, all the methods of E will be defined on T. If E is an interface T satisfies E. <https://talks.golang.org/2014/go4java.slide#51> I have never used this, but it seems to be a cool idea. Finished the type writerDict and wrote a simple test. I started to implement the writerDict. After thinking long about the LZMA2 code and several false starts, I have now a plan to create a self-sufficient lzma2 package that supports the classic LZMA format as well as LZMA2. The core idea is to support a baseReader and baseWriter type that support the basic LZMA stream without any headers. Both types must support the reuse of dictionaries and the opCodec. Implemented simple lzmago tool Tested tool against large 4.4G file compression worked correctly; tested decompression with lzma decompression hits a full buffer condition Fixed a bug in the compressor and wrote a test for it Executed full cycle for 4.4 GB file; performance can be improved ;-) Release v0.2 because of the working LZMA encoder and decoder"
}
] | {
"category": "Runtime",
"file_name": "TODO.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
} |
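The log above tracks the development of a Go LZMA/xz reader and writer. Assuming the package's public `xz.NewWriter`/`xz.NewReader` API (a sketch only — check the package documentation for the exact signatures in your version), a minimal compress/decompress round trip looks roughly like this:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	"github.com/ulikunitz/xz"
)

func main() {
	var buf bytes.Buffer

	// Compress: xz.NewWriter wraps any io.Writer.
	w, err := xz.NewWriter(&buf)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.WriteString(w, "hello, xz round trip"); err != nil {
		log.Fatal(err)
	}
	// Close flushes the final xz block and index.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompress: xz.NewReader wraps any io.Reader.
	r, err := xz.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	out, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```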
[
{
"data": "name: Bug Report about: Create a report to help us improve this project <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! --> What happened: What you expected to happen: How to reproduce it: Anything else we need to know?: Environment: CSI Driver version: Kubernetes version (use `kubectl version`): OS (e.g. from /etc/os-release): Kernel (e.g. `uname -a`): Install tools: Others:"
}
] | {
"category": "Runtime",
"file_name": "bug-report.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "If you administrate an you usually need to edit multiple config files and set some environment variables. can help here especially if you have multiple Tomcats in a cluster. Configuring Tomcat is an interesting sample, because it needs multiple config files and some environment variables. Important configuration files of Tomcat are server.xml: e.g. configure here jvmRoute for load balancing, ports etc. tomcat-users.xml: configure access rights for manager webapp A frequently used environment variable is CATALINA_OPTS: Here you can define memory settings In this case we simply want to use the hostname for jvmRoute. Because it is not possible to execute a Unix command within a toml template, we use an environment variable \"HOSTNAME\" instead. Copy the file conf/server.xml from your Tomcat installation to /etc/confd/templates/server.xml.tmpl and edit \"Engine tag\" and add jvmRoute ``` <Engine name=\"Catalina\" defaultHost=\"localhost\" jvmRoute=\"{{getenv \"HOSTNAME\"}}\"> ``` Create /etc/confd/conf.d/server.xml.toml ``` [template] src = \"server.xml.tmpl\" dest = \"/usr/local/tomcat/conf/server.xml\" check_cmd = \"/usr/local/tomcat/bin/catalina.sh configtest\" reload_cmd = \"/usr/local/tomcat/bin/catalina.sh stop -force && /usr/local/tomcat/bin/catalina.sh start\" ``` We want to get the Tomcat-manager's credentials from a central configuration repository. Copy the file conf/tomcat-users.xml from your Tomcat installation to /etc/confd/templates/tomcat-users.xml.tmpl Add the following lines: ``` <role rolename=\"tomcat\"/> <role rolename=\"manager-gui\"/> <user username=\"{{getv \"/user\"}}\" password=\"{{getv \"/password\"}}\" roles=\"tomcat,manager-gui\"/> ``` Create a /etc/confd/conf.d/tomcat-users.xml.toml ``` [template] prefix = \"tomcat\" keys = [ \"user\", \"password\" ] src = \"tomcat-users.xml.tmpl\" dest = \"/usr/local/tomcat/conf/tomcat-users.xml\" reload_cmd = \"/usr/local/tomcat/bin/catalina.sh stop -force && /usr/local/tomcat/bin/catalina.sh start\" ``` File catalina.sh is the startscript for Tomcat. If confd should set memory settings like Xmx or Xms, we could either create a catalina.sh.tmpl and proceed like above or we can try to use environment variables and leave catalina.sh untouched. Leaving catalina.sh untouched is preferred here. Because it is not possible to use environment variables within toml files, we need to write a minimal shell script that passes CATALINA_OPTS variable. Create the file /etc/confd/conf.d/catalina_start.sh.toml ``` CATALINA_OPTS=\"-Xms{{getv \"/Xms\"}} -Xmx{{getv \"/Xmx\"}}\" /usr/local/tomcat/bin/catalina.sh start ``` Create the file /etc/confd/conf.d/catalina_start.sh.toml ``` [template] prefix = \"tomcat\" keys = [ \"Xmx\", \"Xms\" ] src = \"catalina_start.sh.tmpl\" dest = \"/usr/local/tomcat/bin/catalina_start.sh\" mode = \"0775\" reloadcmd = \"/usr/local/tomcat/bin/catalina.sh stop -force && /usr/local/tomcat/bin/catalinastart.sh start\" ``` Finally we need to replace in all above templates ```catalina.sh start``` by ```catalina_start.sh start``` Follow conf documentation and test it calling ```confd -onetime``` or you try the complete sample in a Docker and/or Vagrant environment"
}
] | {
"category": "Runtime",
"file_name": "tomcat-sample.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
} |
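To illustrate what confd does with the catalina_start.sh template above — look up values and render them into a destination file — here is a hedged, stand-alone Go sketch using `text/template`. The `TOMCAT_XMS`/`TOMCAT_XMX` environment variables are placeholders introduced for this sketch, not part of the original sample, which reads its keys from a configuration backend instead.

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// A stripped-down stand-in for what confd does with catalina_start.sh.tmpl:
// look up values (here, from the environment) and render the template.
const tmpl = `CATALINA_OPTS="-Xms{{.Xms}} -Xmx{{.Xmx}}" /usr/local/tomcat/bin/catalina.sh start
`

func main() {
	data := map[string]string{
		"Xms": os.Getenv("TOMCAT_XMS"), // e.g. 512m  (placeholder variable)
		"Xmx": os.Getenv("TOMCAT_XMX"), // e.g. 2048m (placeholder variable)
	}
	t := template.Must(template.New("catalina_start.sh").Parse(tmpl))
	// confd would write to the dest path and run reload_cmd; we just print.
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
```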
[
{
"data": "`curvefs.py` and `curvefs_wrap.cxx` are generated by command `swig -c++ -python curvefs.i`. Revert `ListDir` and Delete `Opendir`/`Closedir` functions in `curvefs.py` if you don intend to make changes about these functions. Revert `wrapRead`/`wrapListdir`/`wrapGetClusterId`/`wrapCBDClientRead`/`wrapCBDClientListdir` if you don intend to make changes about these functions. :exclamation: C functions in `libcurvefs.h` are not recommended for use anymore. :exclamation: Types in `curve_type.h` are different from `include/client/*.h` even they have the similar name."
}
] | {
"category": "Runtime",
"file_name": "HOWTO.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Prerequisite: Go version `>=1.16` `export GO111MODULE=off` ``` $ git clone http://github.com/cubefs/cubefs.git $ cd cubefs $ make ``` For example,the current cubefs directory is /root/arm64/cubefs,build.sh will auto download follow source codes to vendor/dep directory : bzip2-1.0.6 lz4-1.9.2 zlib-1.2.11 zstd-1.4.5 gcc version as v4 or v5: ``` cd /root/arm64/cubefs export CPUTYPE=arm64_gcc4 && bash ./build.sh ``` gcc version as v9 : ``` export CPUTYPE=arm64_gcc9 && bash ./build.sh ``` gcc version as v4, support Ububtu 14.04 and up version,CentOS7.6 and up version. Check libstdc++.so.6 version must more than `GLIBCXX_3.4.19',if fail please update libstdc++. ``` cd /root/arm64/cubefs docker build --rm --tag arm64gcc4golang113ubuntu1404_cubefs ./build/compile/arm64/gcc4 make dist-clean docker run -v /root/arm64/cubefs:/root/cubefs arm64gcc4golang113ubuntu1404_cubefs /root/buildcfs.sh ``` Remove image: ``` docker image remove -f arm64gcc4golang113ubuntu1404_cubefs ``` The list of RPM packages dependencies can be installed with: ``` $ yum install https://ocs-cn-north1.heytapcs.com/cubefs/rpm/3.2.0/cfs-install-3.2.0-el7.x86_64.rpm $ cd /cfs/install $ tree -L 2 . install_cfs.yml install.sh iplist src template client.json.j2 create_vol.sh.j2 datanode.json.j2 grafana master.json.j2 metanode.json.j2 ``` Set parameters of the CubeFS cluster in `iplist`. `[master]`, `[datanode]`, `[metanode]`, `[monitor]`, `[client]` modules define IP addresses of each role. `#datanode config` module defines parameters of DataNodes. `datanode_disks` defines `path` and `reserved space` separated by \":\". The `path` is where the data store in, so make sure it exists and has at least 30GB of space; `reserved space` is the minimum free space(Bytes) reserved for the path. `[cfs:vars]` module defines parameters for SSH connection. So make sure the port, username and password for SSH connection is unified before start. `#metanode config` module defines parameters of MetaNodes. `metanode_totalMem` defines the maximum memory(Bytes) can be use by MetaNode process. ```yaml [master] 10.196.0.1 10.196.0.2 10.196.0.3 [datanode] ... [cfs:vars] ansiblesshport=22 ansiblesshuser=root ansiblesshpass=\"password\" ... ... datanode_disks = '\"/data0:10737418240\",\"/data1:10737418240\"' ... ... metanode_totalMem = \"28589934592\" ... ``` For more configurations please refer to . Start the resources of CubeFS cluster with script `install.sh`. (make sure the Master is started first) ``` $ bash install.sh -h Usage: install.sh -r | --role [datanode | metanode | master | objectnode | client | all | createvol ] [2.1.0 or latest] $ bash install.sh -r master $ bash install.sh -r metanode $ bash install.sh -r datanode $ bash install.sh -r client ``` Check mount point at `/cfs/mountpoint` on `client` node defined in `iplist`. A helper tool called `run_docker.sh` (under the `docker` directory) has been provided to run CubeFS with . ``` $ docker/run_docker.sh -r -d /data/disk ``` Note that /data/disk can be any directory but please make sure it has at least 10G available space. To check the mount status, use the `mount` command in the client docker shell: ``` $ mount | grep cubefs ``` To view grafana monitor metrics, open http://127.0.0.1:3000 in browser and login with `admin/123456`. To run server and client separately, use the following commands: ``` $ docker/run_docker.sh -b $ docker/run_docker.sh -s -d /data/disk $ docker/run_docker.sh -c $ docker/run_docker.sh -m ``` For more usage: ``` $ docker/run_docker.sh -h ```"
}
] | {
"category": "Runtime",
"file_name": "INSTALL.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
} |
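The final verification step above checks the client mount point with `mount | grep cubefs`. As a rough convenience illustration only (it makes no assumptions about the CubeFS tooling itself), the Go sketch below scans `/proc/mounts` for a CubeFS entry:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// Checks /proc/mounts for a CubeFS mount, similar to `mount | grep cubefs`.
func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "cubefs") {
			fmt.Println("found CubeFS mount:", line)
			return
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("no CubeFS mount found")
}
```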
[
{
"data": "title: Fast Datapath & Weave Net menu_order: 30 search_type: Documentation Weave Net implements an overlay network between Docker hosts. Without fast datapath enabled, each packet is encapsulated in a tunnel protocol header and sent to the destination host, where the header is removed. The Weave router is a user space process, which means that the packet follows a winding path in and out of the Linux kernel: The fast datapath in Weave Net uses the Linux kernel's . This module enables the Weave Net router to tell the kernel how to process packets: Because Weave Net issues instructions directly to the kernel, context switches are decreased, and so by using `fast datapath` CPU overhead and latency is reduced. The packet goes straight from your application to the kernel, where the Virtual Extensible Lan (VXLAN) header is added (the NIC does this if it offers VXLAN acceleration). VXLAN is an IETF standard UDP-based tunneling protocol that enable you to use common networking tools like to inspect the tunneled packets. Prior to version 1.2, Weave Net used a custom encapsulation format. Fast datapath uses VXLAN, and like Weave Net's custom encapsulation format, VXLAN is UDP-based, and therefore needs no special configuration with network infrastructure. Note: The required open vSwitch datapath (ODP) and VXLAN features are present in Linux kernel versions 3.12 and greater. If your kernel was built without the necessary modules Weave Net will fall back to the \"user mode\" packet path. See Also *"
}
] | {
"category": "Runtime",
"file_name": "fastdp-how-it-works.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
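Because fast datapath encapsulates traffic in VXLAN, which has a fixed 8-byte header (a flags byte plus a 24-bit VNI, per RFC 7348), standard tools can decode the tunneled packets. The Go sketch below builds and parses that header purely for illustration; it is not part of Weave Net's code.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildVXLANHeader returns the 8-byte VXLAN header (RFC 7348):
// 1 byte of flags (0x08 = "VNI present"), 3 reserved bytes,
// a 24-bit VNI, and one final reserved byte.
func buildVXLANHeader(vni uint32) []byte {
	h := make([]byte, 8)
	h[0] = 0x08 // I flag: VNI is valid
	// Place the 24-bit VNI into bytes 4..6 (byte 7 stays reserved/zero).
	binary.BigEndian.PutUint32(h[4:8], vni<<8)
	return h
}

// parseVNI extracts the 24-bit VNI from an 8-byte VXLAN header.
func parseVNI(h []byte) uint32 {
	return binary.BigEndian.Uint32(h[4:8]) >> 8
}

func main() {
	h := buildVXLANHeader(42)
	fmt.Printf("header: % x, vni: %d\n", h, parseVNI(h))
}
```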
[
{
"data": "The `klog` is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] | {
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "title: Amazon Web Services (AWS) link: https://github.com/vmware-tanzu/velero-plugin-for-aws objectStorage: true volumesnapshotter: true supportedByVeleroTeam: true This repository contains an object store plugin and a volume snapshotter plugin to support running Velero on Amazon Web Services."
}
] | {
"category": "Runtime",
"file_name": "01-amazon-web-services.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Docker is a platform designed to help developers build, share, and run container applications. In gVisor, all basic docker commands should function as expected. However, it's important to note that, currently, only the host network driver is supported. This means that both 'docker run' and 'docker build' commands must be executed with the `--network=host` option. First, install a GKE cluster (1.29.0 or higher) and deploy a node pool with gVisor enabled. You can view the full documentation . Prepare a container image with pre-installed Docker: ```shell $ cd g3doc/user_guide/tutorials/docker-in-gke-sandbox/ $ docker build -t {registry_url}/docker-in-gvisor:latest . $ docker push {registry_url}/docker-in-gvisor:latest ``` Create a Kubernetes pod YAML file (docker.yaml) with the following content: ```yaml apiVersion: v1 kind: Pod metadata: name: docker-in-gvisor spec: runtimeClassName: gvisor containers: name: docker-in-gvisor image: {registry_url}/docker-in-gvisor:latest securityContext: capabilities: add: [\"all\"] volumeMounts: name: docker mountPath: /var/lib/docker volumes: name: docker emptyDir: {} ``` This YAML file defines a Kubernetes Pod named docker-in-gvisor that will run a single container from the avagin/docker-in-gvisor:0.1 image. Apply the pod YAML to your GKE cluster using the kubectl apply command: ```shell $ kubectl apply -f docker.yaml ``` Verify that the docker-in-gvisor pid is running successfully: `shell $ kubectl get pods | grep docker-in-gvisor` You can access the container by executing a shell inside it. Use the following command: ```shell kubectl exec -it docker-in-gvisor -- bash ``` Now, we can build and run Docker containers. ```shell $ mkdir whalesay && cd whalesay $ cat > Dockerfile <<EOF FROM ubuntu RUN apt-get update && apt-get install -y cowsay curl RUN mkdir -p /usr/share/cowsay/cows/ RUN curl -o /usr/share/cowsay/cows/docker.cow https://raw.githubusercontent.com/docker/whalesay/master/docker.cow ENTRYPOINT [\"/usr/games/cowsay\", \"-f\", \"docker.cow\"] EOF $ docker build --network=host -t whalesay . .... Successfully tagged whalesay:latest $ docker run --network host -it --rm whalesay \"Containers do not contain, but gVisor-s do!\" _ / Containers do not contain, but gVisor-s \\ \\ do! / -- \\ ## . \\ ## ## ## == /\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\\_/ === ~ { ~ ~ / ===- ~~~ \\ o / \\ \\ / \\\\/ ```"
}
] | {
"category": "Runtime",
"file_name": "docker-in-gke-sandbox.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
} |
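The tutorial above applies the pod YAML with kubectl. For completeness, here is a hedged client-go sketch that creates an equivalent pod programmatically; the kubeconfig path, the `default` namespace, and the `REGISTRY_URL` placeholder are assumptions for this sketch, and exact client-go call signatures can vary slightly between versions.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; the default path is an assumption here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	runtimeClass := "gvisor"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "docker-in-gvisor"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClass, // schedule onto the gVisor runtime class
			Containers: []corev1.Container{{
				Name:  "docker-in-gvisor",
				Image: "REGISTRY_URL/docker-in-gvisor:latest", // placeholder registry
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("pod created")
}
```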
[
{
"data": "Decrease log verbosity value for antrea-agent specified in the Windows manifest for containerd from 4 to 0. (, [@XinShuYang]) Ensure cni folders are created when starting antrea-agent with containerd on Windows. (, [@XinShuYang]) Document the limit of maximum receiver group number on a Linux Node for multicast. (, [@ceclinux]) Update Open vSwitch to 2.17.6 (, [@tnqn]) Bump up whereabouts to v0.6.1. (, [@hjiajing]) Ensure NO_FLOOD is always set for IPsec tunnel ports and TrafficControl ports. ( , [@xliuxu] [@tnqn]) Fix Service routes being deleted on Agent startup on Windows. (, [@hongliangl]) Fix route deletion for Service ClusterIP and LoadBalancerIP when AntreaProxy is enabled. (, [@tnqn]) Fix OpenFlow Group being reused with wrong type because groupDb cache was not cleaned up. (, [@ceclinux]) Fix antctl not being able to talk with GCP kube-apiserver due to missing platforms specific imports. (, [@luolanzone]) Fix Agent crash in dual-stack clusters when any Node is not configured with an IP address for each address family. (, [@hongliangl]) Fix Service not being updated correctly when stickyMaxAgeSeconds or InternalTrafficPolicy is updated. (, [@tnqn]) Fix the Antrea Agent crash issue when large amount of multicast receivers with different multicast IPs on one Node start together. (, [@ceclinux]) Fix the Antrea Agent crash issue which is caused by a concurrency bug in Multicast feature with encap mode. (, [@ceclinux]) Fix the Antrea Agent crash issue on Windows by running modules that rely on Services after AntreaProxy is ready. (, [@tnqn]) Make FQDN NetworkPolicy work for upper case DNS. (, [@GraysonWu]) Fix a bug that a deleted NetworkPolicy is still enforced when a new NetworkPolicy with the same name exists. (, [@tnqn]) Fix a race condition between stale controller and ResourceImport reconcilers in Antrea Multi-cluster controller. (, [@Dyanngg]) Recover ovsdb-server and ovs-vswitched service if they do not exist when running the Windows cleanup script. (, [@wenyingd]) Add L7NetworkPolicy feature which enables users to protect their applications by specifying how they are allowed to communicate with others, taking into account application"
},
{
"data": "( , [@hongliangl] [@qiyueyao] [@tnqn]) Layer 7 NetworkPolicy can be configured through the `l7Protocols` field of Antrea-native policies. Refer to for more information about this feature. Add SupportBundleCollection feature which enables a CRD API for Antrea to collect support bundle files on any K8s Node or ExternalNode, and upload to a user-defined file server. ( , [@wenyingd] [@mengdie-song] [@ceclinux]) Refer to for more information about this feature. Add support for NetworkPolicy for cross-cluster traffic. ( , [@Dyanngg] [@GraysonWu]) Setting `scope` of an ingress peer to `clusterSet` expands the scope of the `podSelector` or `namespaceSelector` to the entire ClusterSet. Setting `scope` of `toServices` to `clusterSet` selects a Multi-cluster Service. (, [@Dyanngg]) Refer to for more information about this feature. Add the following capabilities to the ExternalNode feature: Containerized option for antrea-agent installation on Linux VMs. (, [@Nithish555]) Support for RHEL 8.4. (, [@Nithish555]) Add support for running antrea-agent as DaemonSet when using containerd as the runtime on Windows. (, [@XinShuYang]) Add for Antrea Multicast. (, [@ceclinux]) Extend `antctl mc get joinconfig` to print member token Secret. (, [@jianjuns]) Improve support for Egress in Traceflow. (, [@Atish-iaf]) Add NodePortLocalPortRange field for AntreaAgentInfo. (, [@wenqiq]) Use format \"namespace/name\" as the key for ExternalNode span calculation. (, [@wenyingd]) Enclose Pod labels with single quotes when uploading CSV record to S3 in the FlowAggregator. (, [@dreamtalen]) Upgrade Antrea base image to ubuntu 22.04. ( , [@antoninbas]) Update OVS to 2.17.3. (, [@mnaser]) Reduce confusion caused by transient error encountered when creating static Tiers. (, [@tnqn]) Add a periodic job to rejoin dead Nodes, to fix Egress not working properly after long network downtime. (, [@tnqn]) Fix potential deadlocks and memory leaks of memberlist maintenance in large-scale clusters. (, [@wenyingd]) Fix connectivity issues caused by MAC address changes with systemd v242 and later. (, [@wenyingd]) Fix error handling when S3Uploader partially succeeds. (, [@heanlan]) Fix a ClusterInfo export bug when Multi-cluster Gateway changes. (, [@luolanzone]) Fix OpenFlow rules not being updated when Multi-cluster Gateway updates. (, [@luolanzone]) Delete Pod specific VF resource cache when a Pod gets deleted. (, [@arunvelayutham]) Fix OpenAPI descriptions for AntreaAgentInfo and AntreaControllerInfo. (, [@tnqn])"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG-1.10.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "WSO2 is an Identity Server open source and is released under Apache Software License Version 2.0, this document covers configuring WSO2 to be used as an identity provider for MinIO server STS API. JAVA 1.8 and above installed already and JAVA_HOME points to JAVA 1.8 installation. Download WSO2 follow their . Once WSO2 is up and running, configure WSO2 to generate Self contained idtokens. In OAuth 2.0 specification there are primarily two ways to provide idtokens The id_token is an identifier that is hard to guess. For example, a randomly generated string of sufficient length, that the server handling the protected resource can use to lookup the associated authorization information. The id_token self-contains the authorization information in a manner that can be verified. For example, by encoding authorization information along with a signature into the token. WSO2 generates tokens in first style by default, but if to be used with MinIO we should configure WSO2 to provide JWT tokens instead. By default, a UUID is issued as an idtoken in WSO2 Identity Server, which is of the first type above. But, it also can be configured to issue a self-contained idtoken (JWT), which is of the second type above. Open the `<IS_HOME>/repository/conf/identity/identity.xml` file and uncomment the following entry under `<OAuth>` element. ``` <IdentityOAuthTokenGenerator>org.wso2.carbon.identity.oauth2.token.JWTTokenIssuer</IdentityOAuthTokenGenerator> ``` Restart the server. Configure an . Initiate an idtoken request to the WSO2 Identity Server, over a known . For example, the following cURL command illustrates the syntax of an idtoken request that can be initiated over the grant type. Navigate to service provider section, expand Inbound Authentication Configurations and expand OAuth/OpenID Connect Configuration. Copy the OAuth Client Key as the value for `<CLIENT_ID>`. Copy the OAuth Client Secret as the value for `<CLIENT_SECRET>`. By default, `<IS_HOST>` is localhost. However, if using a public IP, the respective IP address or domain needs to be specified. By default, `<ISHTTPSPORT>` has been set to 9443. However, if the port offset has been incremented by n, the default port value needs to be incremented by"
},
{
"data": "Request ``` curl -u <CLIENTID>:<CLIENTSECRET> -k -d \"granttype=clientcredentials\" -H \"Content-Type:application/x-www-form-urlencoded\" https://<ISHOST>:<ISHTTPS_PORT>/oauth2/token ``` Example: ``` curl -u PoEgXP6uVO45IsENRngDXj5Au5Ya:eKsw6z8CtOJVBtrOWvhRWL4TUCga -k -d \"granttype=clientcredentials\" -H \"Content-Type:application/x-www-form-urlencoded\" https://localhost:9443/oauth2/token ``` In response, the self-contained JWT id_token will be returned as shown below. ``` { \"idtoken\": \"eyJ4NXQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJraWQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJhbGciOiJSUzI1NiJ9.eyJhdWQiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiYXpwIjoiUG9FZ1hQNnVWTzQ1SXNFTlJuZ0RYajVBdTVZYSIsImlzcyI6Imh0dHBzOlwvXC9sb2NhbGhvc3Q6OTQ0M1wvb2F1dGgyXC90b2tlbiIsImV4cCI6MTUzNDg5MTc3OCwiaWF0IjoxNTM0ODg4MTc4LCJqdGkiOiIxODQ0MzI5Yy1kNjVhLTQ4YTMtODIyOC05ZGY3M2ZlODNkNTYifQ.ELZ8ujk2Xp9xTGgMqnCa5ehuimaAPXWlSCW5QeBbTJIT4M5OB2XEVIV6p89kftjUdKu50oiYe4SbfrxmLm6NGSGd2qxkjzJK3SRKqsrmVWEn19juj8fz1neKtUdXVHuSZu6wsbMDy4f9hN2Jv9dFnkoyeNT54r4jSTJ4A2FzN2rkiURheVVsc8qlm8O7g64Az-5h4UGryyXU4zsnjDCBKYk9jdbEpcUskrFMYhuUlj1RWSASiGhHHHDU5dTRqHkVLIItfG48kfb-ehU60T7EFWH1JBdNjOxM9oNyb0hGwOjLUyCUJO_Y7xcd5F4dZzrBg8LffFmvJ09wzHNtQ\", \"token_type\": \"Bearer\", \"expires_in\": 3600 } ``` The idtoken received is a signed JSON Web Token (JWT). Use a JWT decoder to decode the idtoken to access the payload of the token that includes following JWT claims: | Claim Name | Type | Claim Value | |:-:|:--:|::| | iss | string | The issuer of the JWT. The '> Identity Provider Entity Id ' value of the OAuth2/OpenID Connect Inbound Authentication configuration of the Resident Identity Provider is returned here. | | aud | string array | The token audience list. The client identifier of the OAuth clients that the JWT is intended for, is sent herewith. | | azp | string | The authorized party for which the token is issued to. The client identifier of the OAuth client that the token is issued for, is sent herewith. | | iat | integer | The token issue time. | | exp | integer | The token expiration time. | | jti | string | Unique identifier for the JWT token. | | policy | string | Canned policy name to be applied for STS credentials. (Recommended) | Using the above `id_token` we can perform an STS request to MinIO to get temporary credentials for MinIO API operations. MinIO STS API uses to validate if JWT is valid and is properly signed. We recommend setting `policy` as a custom claim for the JWT service provider follow and for relevant docs on how to configure claims for a service provider. MinIO server expects environment variable for OpenID configuration url as `MINIOIDENTITYOPENIDCONFIGURL`, this environment variable takes a single entry. ``` export MINIOIDENTITYOPENIDCONFIGURL=https://localhost:9443/oauth2/oidcdiscovery/.well-known/openid-configuration export MINIOIDENTITYOPENIDCLIENTID=\"843351d4-1080-11ea-aa20-271ecba3924a\" minio server /mnt/data ``` Assuming that MinIO server is configured to support STS API by following the doc , execute the following command to temporary credentials from MinIO server. 
``` go run client-grants.go -cid PoEgXP6uVO45IsENRngDXj5Au5Ya -csec eKsw6z8CtOJVBtrOWvhRWL4TUCga { \"accessKey\": \"IRBLVDGN5QGMDCMO1X8V\", \"secretKey\": \"KzS3UZKE7xqNdtRbKyfcWgxBS6P1G4kwZn4DXKuY\", \"expiration\": \"2018-08-21T15:49:38-07:00\", \"sessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJJUkJMVkRHTjVRR01EQ01PMVg4ViIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTM0ODkxNzc4LCJpYXQiOjE1MzQ4ODgxNzgsImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiMTg0NDMyOWMtZDY1YS00OGEzLTgyMjgtOWRmNzNmZTgzZDU2In0.4rKsZ8VkZnIS_ALzfTJ9UbEKPFlQVvIyuHw6AWTJcDFDVgQA2ooQHmH9wUDnhXBi1M7o8yWJ47DXP-TLPhwCgQ\" } ``` These credentials can now be used to perform MinIO API operations, these credentials automatically expire in 1hr. To understand more about credential expiry duration and client grants STS API read further ."
}
] | {
"category": "Runtime",
"file_name": "wso2.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
} |
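The guide above suggests decoding the `id_token` with a JWT decoder to inspect claims such as `iss`, `aud`, `exp`, and the custom `policy` claim. A small Go sketch that does this (without verifying the signature — fine for inspection, never for authorization decisions) might look like the following:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

// Decodes (without verifying!) the payload segment of a JWT id_token so you
// can inspect claims such as iss, aud, exp and the custom "policy" claim.
func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: decode-jwt <id_token>")
	}
	parts := strings.Split(os.Args[1], ".")
	if len(parts) != 3 {
		log.Fatal("not a JWT: expected header.payload.signature")
	}
	// JWTs use unpadded base64url encoding for each segment.
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		log.Fatal(err)
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		log.Fatal(err)
	}
	for k, v := range claims {
		fmt.Printf("%s: %v\n", k, v)
	}
}
```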
[
{
"data": "You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code (\"Enhancements\") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to the project, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such enhancements or derivative works thereof, in binary and source code form. When contributing to Singularity, it is important to properly communicate the gist of the contribution. If it is a simple code or editorial fix, simply explaining this within the GitHub Pull Request (PR) will suffice. But if this is a larger fix or Enhancement, you are advised to first discuss the change with the project leader or developers. Please note we have a . Please follow it in all your interactions with the project members and users. Essential bug fix PRs should be sent to both master and release branches. Small bug fix and feature enhancement PRs should be sent to master only. Follow the existing code style precedent, especially for C. For Go, you will mostly conform to the style and form enforced by the \"go fmt\" and \"golint\" tools for proper formatting. For any new functionality, please write appropriate go tests that will run as part of the Continuous Integration (github workflow actions) system. Make sure that the project's default copyright and header have been included in any new source files. Make sure your code passes linting, by running `make check` before submitting the PR. We use `golangci-lint` as our linter. You may need to address linting errors by: Running `gofumpt -w .` to format all `.go` files. We use instead of `gofmt` as it adds additional formatting rules which are helpful for clarity. Leaving a function comment on every new exported function and package that your PR has"
},
{
"data": "To learn about how to properly comment Go code, read Make sure you have locally tested using `make -C builddir test` and that all tests succeed before submitting the PR. If possible, run `make -C builddir testall` locally, after setting the environment variables `E2EDOCKERUSERNAME` and `E2EDOCKERPASSWORD` appropriately for an authorized Docker Hub account. This is required as Singularity's end-to-end tests perform many tests that build from or execute docker images. Our CI is authorized to run these tests if you cannot. Ask yourself is the code human understandable? This can be accomplished via a clear code style as well as documentation and/or comments. The pull request will be reviewed by others, and finally merged when all requirements are met. The `CHANGELOG.md` must be updated for any of the following changes: Renamed commands Deprecated / removed commands Changed defaults / behaviors Backwards incompatible changes New features / functionalities PRs which introduce a new Go dependency to the project via `go get` and additions to `go.mod` should explain why the dependency is required. There are a few places where documentation for the Singularity project lives. The is where PRs should include documentation if necessary. When a new release is tagged, the and will be updated using the contents of the `CHANGELOG.md` file as reference. The is a place to document functional differences between versions of Singularity. PRs which require documentation must update this file. This should be a document which can be used to explain what the new features of each version of Singularity are, and should not read like a commit log. Once a release is tagged (*e.g. v3.0.0), a new top level section will be made titled *Changes Since vX.Y.Z (e.g. Changes Since v3.0.0) where new changes will now be documented, leaving the previous section immutable. The is a place to document critical information for new users of Singularity. It should typically not change, but in the case where a change is necessary a PR may update it. The should document anything pertinent to the usage of Singularity. The document anything that is pertinent to a system administrator who manages a system with Singularity installed. If necessary, changes to the message displayed when running `singularity help *` can be made by editing `docs/content.go`."
}
] | {
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
} |
[
{
"data": "Name | Type | Description | Notes | - | - | - Size | int64 | The total number of tokens this bucket can hold. | OneTimeBurst | Pointer to int64 | The initial size of a token bucket. | [optional] RefillTime | int64 | The amount of milliseconds it takes for the bucket to refill. | `func NewTokenBucket(size int64, refillTime int64, ) *TokenBucket` NewTokenBucket instantiates a new TokenBucket object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewTokenBucketWithDefaults() *TokenBucket` NewTokenBucketWithDefaults instantiates a new TokenBucket object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *TokenBucket) GetSize() int64` GetSize returns the Size field if non-nil, zero value otherwise. `func (o TokenBucket) GetSizeOk() (int64, bool)` GetSizeOk returns a tuple with the Size field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *TokenBucket) SetSize(v int64)` SetSize sets Size field to given value. `func (o *TokenBucket) GetOneTimeBurst() int64` GetOneTimeBurst returns the OneTimeBurst field if non-nil, zero value otherwise. `func (o TokenBucket) GetOneTimeBurstOk() (int64, bool)` GetOneTimeBurstOk returns a tuple with the OneTimeBurst field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *TokenBucket) SetOneTimeBurst(v int64)` SetOneTimeBurst sets OneTimeBurst field to given value. `func (o *TokenBucket) HasOneTimeBurst() bool` HasOneTimeBurst returns a boolean if a field has been set. `func (o *TokenBucket) GetRefillTime() int64` GetRefillTime returns the RefillTime field if non-nil, zero value otherwise. `func (o TokenBucket) GetRefillTimeOk() (int64, bool)` GetRefillTimeOk returns a tuple with the RefillTime field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *TokenBucket) SetRefillTime(v int64)` SetRefillTime sets RefillTime field to given value."
}
] | {
"category": "Runtime",
"file_name": "TokenBucket.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
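To make the semantics of the three fields above concrete — `Size` as the bucket capacity, `OneTimeBurst` as the initial fill, and `RefillTime` as the milliseconds needed to refill from empty to full — here is a simplified Go sketch of the refill arithmetic. It is an illustration only, not the actual rate limiter used by the runtime (in particular, the real one-time burst has more specific semantics).

```go
package main

import (
	"fmt"
	"time"
)

// bucket mirrors the three fields documented above: a capacity (Size),
// an initial burst (OneTimeBurst) and the time to refill from empty
// to full (RefillTime, in milliseconds).
type bucket struct {
	size         int64
	oneTimeBurst int64
	refillTimeMs int64
	tokens       int64
	lastRefill   time.Time
}

func newBucket(size, burst, refillMs int64) *bucket {
	return &bucket{size: size, oneTimeBurst: burst, refillTimeMs: refillMs,
		tokens: burst, lastRefill: time.Now()}
}

// take refills proportionally to the elapsed time, then consumes n tokens
// if enough are available.
func (b *bucket) take(n int64) bool {
	elapsed := time.Since(b.lastRefill).Milliseconds()
	b.lastRefill = time.Now()
	b.tokens += b.size * elapsed / b.refillTimeMs
	if b.tokens > b.size {
		b.tokens = b.size
	}
	if b.tokens < n {
		return false
	}
	b.tokens -= n
	return true
}

func main() {
	b := newBucket(1000, 100, 1000)      // refills 1000 tokens/s; starts with a 100-token burst
	fmt.Println(b.take(50), b.take(100)) // true false: only the burst is available at t=0
}
```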
[
{
"data": "Piraeus Datastore offers integration with the [Prometheus monitoring stack]. The integration configures: Metrics scraping for the LINSTOR and DRBD state. Alerts based on the cluster state A Grafana dashboard To complete this guide, you should be familiar with: Deploying workloads in Kubernetes using Deploying resources using NOTE: If you already have a working Prometheus Operator deployment, skip this step. First deploy the . A simple way to deploy it, is to use the helm chart provided by the Prometheus Community. First, add the helm chart repository to your local helm configuration: ``` $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts ``` Then, deploy the chart. This chart will set up Prometheus, AlertManager and Grafana for your cluster. Configure it to search for monitoring and alerting rules in all namespaces: ``` $ helm install --create-namespace -n monitoring prometheus prometheus-community/kube-prometheus-stack \\ --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false \\ --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \\ --set prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false ``` NOTE: By default, the deployment will only monitor resources in the `kube-system` and its own namespace. Piraeus Datastore is usually deployed in a different namespace, so Prometheus needs to be configured to watch this namespace. In the example above, this is achieved by setting the various `*NilUsesHelmValues` parameters to `false`. After creating a Prometheus Operator deployment and configuring it to watch all namespaces, apply the monitoring and alerting resources for Piraeus Datastore: ``` $ kubectl apply --server-side -n piraeus-datastore -k \"https://github.com/piraeusdatastore/piraeus-operator//config/extras/monitoring?ref=v2\" ``` Verify that the monitoring configuration is working by checking the prometheus console. First, get access to the prometheus console from your local browser by forwarding it to local port 9090: ``` $ kubectl port-forward -n monitoring services/prometheus-kube-prometheus-prometheus 9090:9090 ``` Now, open http://localhost:9090/graph and display the `linstorinfo` and `drbdversion` metrics: To view the dashboard, forward the grafana service to local port 3000: ``` $ kubectl port-forward -n monitoring services/prometheus-grafana 3000:http-web ``` Now, open http://localhost:3000 and log in. If using the example deployment from above, use username `admin` and password `prom-operator` to gain access. Then, select \"Piraeus Datastore\" from the available dashboards:"
}
] | {
"category": "Runtime",
"file_name": "monitoring.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
} |
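Instead of checking the `linstor_info` metric in the Prometheus web console, you can query the Prometheus HTTP API through the same port-forward. The Go sketch below assumes the forward from the guide above is active on localhost:9090 and that the metric name is `linstor_info`; adjust both if your setup differs.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

// Queries the Prometheus HTTP API through the local port-forward set up
// above (kubectl port-forward ... 9090:9090) for the linstor_info metric.
func main() {
	q := url.Values{}
	q.Set("query", "linstor_info")

	resp, err := http.Get("http://localhost:9090/api/v1/query?" + q.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // raw JSON: {"status":"success","data":{...}}
}
```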
[
{
"data": "In this article, I will demonstrate how to use WasmEdge as a sidecar application runtime for Dapr. There are two ways to do this: Standalone WasmEdge is the recommended approach* is to write a microservice using or , and run it in WasmEdge. The WasmEdge application serves web requests and communicates with the sidecar via sockets using the Dapr API. In this case, we can . Alternatively, Embedded WasmEdge is to create a simple microservice in Rust or Go to listen for web requests and communicate with the Dapr sidecar. It passes the request data to a WasmEdge runtime for processing. The business logic of the microservice is a WebAssembly function created and deployed by an application developer. While the first approach (running the entire microservice in WasmEdge) is much preferred, we are still working on a fully fledged Dapr SDKs for WasmEdge. You can track their progress in GitHub issues -- and . First you need to install and . and are optional for the standalone WasmEdge approach. However, they are required for the demo app since it showcases both standalone and embedded WasmEdge approaches. Fork or clone the demo application from Github. You can use this repo as your own application template. ```bash git clone https://github.com/second-state/dapr-wasm ```` The demo has 4 Dapr sidecar applications. The project provides a public web service for a static HTML page. This is the applications UI. From the static HTML page, the user can select a microservice to turn an input image into grayscale. All 3 microsoervices below perform the same function. They are just implemented using different approaches. Standalone WasmEdge approach:* The project provides a standalone WasmEdge sidecar microservice that takes the input image and returns the grayscale image. The microservice is written in Rust and compiled into WebAssembly bytecode to run in WasmEdge. Embedded WasmEdge approach #1: The project provides a simple Rust-based microservice. It embeds a to turn an input image into a grayscale image. Embedded WasmEdge approach #2: The project provides a simple Go-based microservice. It embeds a to turn an input image into a grayscale image. You can follow the instructions in the to start the sidecar services. Here are commands to build the WebAssembly functions and start the sidecar services. The first set of commands deploy the static web page service and the standalone WasmEdge service written in Rust. It forms a complete application to turn an input image into grayscale. ```bash cd web-port go build ./run_web.sh cd ../ cd image-api-wasi-socket-rs cargo build --target wasm32-wasi cd ../ cd image-api-wasi-socket-rs ./runapiwasisocketrs.sh cd ../ ``` The second set of commands create the alternative microservices for the embedded WasmEdge function. ```bash cd functions/grayscale ./build.sh cd ../../ cd image-api-rs cargo build --release ./runapirs.sh cd ../ cd image-api-go go build ./runapigo.sh cd ../ ``` Finally, you should be able to see the web UI in your browser. The starts a non-blocking TCP server inside WasmEdge. The TCP server passes incoming requests to `handleclient()`, which passes HTTP requests to `handlehttp()`, which calls `grayscale()` to process the image data in the request. 
```rust fn main() -> std::io::Result<()> { let port = std::env::var(\"PORT\").unwrapor(9005.tostring()); println!(\"new connection at {}\", port); let listener = TcpListener::bind(format!(\"127.0.0.1:{}\", port))?; loop { let = handleclient(listener.accept()?.0); } } fn handle_client(mut stream: TcpStream) -> std::io::Result<()> {"
},
{
"data": "... } fn handle_http(req: Request<Vec<u8>>) -> bytecodec::Result<Response<String>> { ... ... } fn grayscale(image: &[u8]) -> Vec<u8> { let detected = image::guess_format(&image); let mut buf = vec![]; if detected.is_err() { return buf; } let imageformatdetected = detected.unwrap(); let img = image::loadfrommemory(&image).unwrap(); let filtered = img.grayscale(); match imageformatdetected { ImageFormat::Gif => { filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap(); } _ => { filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap(); } }; return buf; } ``` Work in progress: It will soon interact with the Dapr sidecar through the . Now, you can build the microservice. It is a simple matter of compiling from Rust to WebAssembly. ```bash cd image-api-wasi-socket-rs cargo build --target wasm32-wasi ``` Deploy the WasmEdge microservice in Dapr as follows. ```bash dapr run --app-id image-api-wasi-socket-rs \\ --app-protocol http \\ --app-port 9005 \\ --dapr-http-port 3503 \\ --components-path ../config \\ --log-level debug \\ wasmedge ./target/wasm32-wasi/debug/image-api-wasi-socket-rs.wasm ``` The embedded WasmEdge approach requires us to create a WebAssembly function for the business logic (image processing) first, and then embed it into simple Dapr microservices. The is simple. It uses the macro to makes it easy to call the function from a Go or Rust host embedding the WebAssembly function. It takes and returns base64 encoded image data for the web. ```rust pub fn grayscale(image_data: String) -> String { let imagebytes = imagedata.split(\",\").map(|x| x.parse::<u8>().unwrap()).collect::<Vec<u8>>(); return grayscale::grayscaleinternal(&imagebytes); } ``` The Rust function that actually performs the task is as follows. ```rust pub fn grayscaleinternal(imagedata: &[u8]) -> String { let imageformatdetected: ImageFormat = image::guessformat(&imagedata).unwrap(); let img = image::loadfrommemory(&image_data).unwrap(); let filtered = img.grayscale(); let mut buf = vec![]; match imageformatdetected { ImageFormat::Gif => { filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap(); } _ => { filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap(); } }; let mut base64_encoded = String::new(); base64::encodeconfigbuf(&buf, base64::STANDARD, &mut base64_encoded); return base64encoded.tostring(); } ``` The embeds the above imaging processing function in WasmEdge. The is a web server and utilizes the Dapr Go SDK. ```go func main() { s := daprd.NewService(\":9003\") if err := s.AddServiceInvocationHandler(\"/api/image\", imageHandlerWASI); err != nil { log.Fatalf(\"error adding invocation handler: %v\", err) } if err := s.Start(); err != nil && err != http.ErrServerClosed { log.Fatalf(\"error listening: %v\", err) } } ``` The `imageHandlerWASI()` function and calls the image processing (grayscale) function in it via . Build and deploy the Go microservice to Dapr as follows. ```bash cd image-api-go go build dapr run --app-id image-api-go \\ --app-protocol http \\ --app-port 9003 \\ --dapr-http-port 3501 \\ --log-level debug \\ --components-path ../config \\ ./image-api-go ``` The embeds the above imaging processing function in WasmEdge. The is a Tokio and Warp based web server. 
```rust pub async fn run_server(port: u16) { prettyenvlogger::init(); let home = warp::get().map(warp::reply); let image = warp::post() .and(warp::path(\"api\")) .and(warp::path(\"image\")) .and(warp::body::bytes()) .map(|bytes: bytes::Bytes| { let v: Vec<u8> = bytes.iter().map(|&x| x).collect(); let res = imageprocesswasmedge_sys(&v); let _encoded = base64::encode(&res); Response::builder() .header(\"content-type\", \"image/png\") .body(res) }); let routes = home.or(image); let routes = routes.with(warp::cors().allowanyorigin()); let log = warp::log(\"dapr_wasm\"); let routes = routes.with(log); warp::serve(routes).run((Ipv4Addr::UNSPECIFIED, port)).await } ``` The `imageprocesswasmedge_sys()` function and calls the image processing (grayscale) function in it via . Build and deploy the Rust microservice to Dapr as follows. ```bash cd image-api-rs cargo build --release dapr stop image-api-rs export LDLIBRARYPATH=/home/coder/.wasmedge/lib64/ dapr run --app-id image-api-rs \\ --app-protocol http \\ --app-port 9004 \\ --dapr-http-port 3502 \\ --components-path ../config \\ --log-level debug \\ ./target/release/image-api-rs ``` That's it! your cool Dapr microservices in WebAssembly!"
}
] | {
"category": "Runtime",
"file_name": "dapr.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
} |
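Once the sidecars above are running, a client can reach the image services through Dapr's HTTP service-invocation endpoint (`/v1.0/invoke/<app-id>/method/<method>`). The hedged Go sketch below posts an image to the `image-api-go` service via its sidecar on port 3501; the request body encoding the demo handlers actually expect may differ (the standalone Rust service, for example, parses comma-separated byte values), so treat this as a sketch of the invocation pattern rather than a drop-in client.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// Sends an image to the image-api-go service through its Dapr sidecar
// (started above with --dapr-http-port 3501) using Dapr's HTTP
// service-invocation endpoint, and writes the result to a file.
func main() {
	img, err := os.ReadFile("input.png") // any local test image
	if err != nil {
		log.Fatal(err)
	}

	url := "http://localhost:3501/v1.0/invoke/image-api-go/method/api/image"
	resp, err := http.Post(url, "application/octet-stream", bytes.NewReader(img))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("output.png", out, 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote output.png, status:", resp.Status)
}
```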
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Clean fqdn cache ``` cilium-dbg fqdn cache clean [flags] ``` ``` -f, --force Skip confirmation -h, --help help for clean -p, --matchpattern string Delete cache entries with FQDNs that match matchpattern ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage fqdn proxy cache"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_fqdn_cache_clean.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This guide is intended as a way to get you off the ground, using Multus CNI to create Kubernetes pods with multiple interfaces. If you're already using Multus and need more detail, see the . This document is a quickstart and a getting started guide in one, intended for your first run-through of Multus CNI. We'll first install Multus CNI, and then we'll setup some configurations so that you can see how multiple interfaces are created for pods. Two things we'll refer to a number of times through this document are: \"Default network\" -- This is your pod-to-pod network. This is how pods communicate among one another in your cluster, how they have connectivity. Generally speaking, this is presented as the interface named `eth0`. This interface is always attached to your pods, so that they can have connectivity among themselves. We'll add interfaces in addition to this. \"CRDs\" -- Custom Resource Definitions. Custom Resources are a way that the Kubernetes API is extended. We use these here to store some information that Multus can read. Primarily, we use these to store the configurations for each of the additional interfaces that are attached to your pods. Our installation method requires that you first have installed Kubernetes and have configured a default network -- that is, a CNI plugin that's used for your pod-to-pod connectivity. We support Kubernetes versions that Kubernetes community supports. Please see in Kubernetes document. To install Kubernetes, you may decide to use , or potentially . After installing Kubernetes, you must install a default network CNI plugin. If you're using kubeadm, refer to the \"\" section in the kubeadm documentation. If it's your first time, we generally recommend using Flannel for the sake of simplicity. Alternatively, for advanced use cases, for installing Multus and a default network plugin at the same time, you may refer to the . To verify that you default network is ready, you may list your Kubernetes nodes with: ``` kubectl get nodes ``` In the case that your default network is ready you will see the `STATUS` column also switch to `Ready` for each node. ``` NAME STATUS ROLES AGE VERSION master-0 Ready master 1h v1.17.1 master-1 Ready master 1h v1.17.1 master-2 Ready master 1h v1.17.1 ``` Our recommended quickstart method to deploy Multus is to deploy using a Daemonset (a method of running pods on each nodes in your cluster), this spins up pods which install a Multus binary and configure Multus for usage. We'll apply a YAML file with `kubectl` from this repo, which installs the Multus components. Recommended installation: ``` kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml ``` See the for more information about this architecture. Alternatively, you may install the thin-plugin with: ``` kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml ``` Starts a Multus daemonset, this runs a pod on each node which places a Multus binary on each node in `/opt/cni/bin` Reads the lexicographically (alphabetically) first configuration file in `/etc/cni/net.d`, and creates a new configuration file for Multus on each node as"
},
{
"data": "this configuration is auto-generated and is based on the default network configuration (which is assumed to be the alphabetically first configuration) Creates a `/etc/cni/net.d/multus.d` directory on each node with authentication information for Multus to access the Kubernetes API. Generally, the first step in validating your installation is to ensure that the Multus pods have run without error, you may see an overview of those by looking at: ``` kubectl get pods --all-namespaces | grep -i multus ``` You may further validate that it has ran by looking at the `/etc/cni/net.d/` directory and ensure that the auto-generated `/etc/cni/net.d/00-multus.conf` exists corresponding to the alphabetically first configuration file. The first thing we'll do is create configurations for each of the additional interfaces that we attach to pods. We'll do this by creating Custom Resources. Part of the quickstart installation creates a \"CRD\" -- a custom resource definition that is the home where we keep these custom resources -- we'll store our configurations for each interface in these. Each configuration we'll add is a CNI configuration. If you're not familiar with them, let's break them down quickly. Here's an example CNI configuration: ``` { \"cniVersion\": \"0.3.0\", \"type\": \"loopback\", \"additional\": \"information\" } ``` CNI configurations are JSON, and we have a structure here that has a few things we're interested in: `cniVersion`: Tells each CNI plugin which version is being used and can give the plugin information if it's using a too late (or too early) version. `type`: This tells CNI which binary to call on disk. Each CNI plugin is a binary that's called. Typically, these binaries are stored in `/opt/cni/bin` on each node, and CNI executes this binary. In this case we've specified the `loopback` binary (which create a loopback-type network interface). If this is your first time installing Multus, you might want to verify that the plugins that are in the \"type\" field are actually on disk in the `/opt/cni/bin` directory. `additional`: This field is put here as an example, each CNI plugin can specify whatever configuration parameters they'd like in JSON. These are specific to the binary you're calling in the `type` field. For an even further example -- take a look at the which shows additional details. If you'd like more information about CNI configuration, you can read . It might also be useful to look at the and see how they're configured. You do not need to reload or refresh the Kubelets when CNI configurations change. These are read on each creation & deletion of pods. So if you change a configuration, it'll apply the next time a pod is created. Existing pods may need to be restarted if they need the new configuration. So, we want to create an additional interface. Let's create a macvlan interface for pods to use. We'll create a custom resource that defines the CNI configuration for interfaces. Note in the following command that there's a `kind: NetworkAttachmentDefinition`. This is our fancy name for our configuration -- it's a custom extension of Kubernetes that defines how we attach networks to our pods. Secondarily, note the `config` field. You'll see that this is a CNI configuration just like we explained"
},
{
"data": "Lastly but very importantly, note under `metadata` the `name` field -- here's where we give this configuration a name, and it's how we tell pods to use this configuration. The name here is `macvlan-conf` -- as we're creating a configuration for macvlan. Here's the command to create this example configuration: ``` cat <<EOF | kubectl create -f - apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf spec: config: '{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"192.168.1.0/24\", \"rangeStart\": \"192.168.1.200\", \"rangeEnd\": \"192.168.1.216\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"gateway\": \"192.168.1.1\" } }' EOF ``` NOTE: This example uses `eth0` as the `master` parameter, this master parameter should match the interface name on the hosts in your cluster. You can see which configurations you've created using `kubectl` here's how you can do that: ``` kubectl get network-attachment-definitions ``` You can get more detail by describing them: ``` kubectl describe network-attachment-definitions macvlan-conf ``` We're going to create a pod. This will look familiar as any pod you might have created before, but, we'll have a special `annotations` field -- in this case we'll have an annotation called `k8s.v1.cni.cncf.io/networks`. This field takes a comma delimited list of the names of your `NetworkAttachmentDefinition`s as we created above. Note in the command below that we have the annotation of `k8s.v1.cni.cncf.io/networks: macvlan-conf` where `macvlan-conf` is the name we used above when we created our configuration. Let's go ahead and create a pod (that just sleeps for a really long time) with this command: ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: samplepod annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf spec: containers: name: samplepod command: [\"/bin/ash\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] image: alpine EOF ``` You may now inspect the pod and see what interfaces are attached, like so: ``` kubectl exec -it samplepod -- ip a ``` You should note that there are 3 interfaces: `lo` a loopback interface `eth0` our default network `net1` the new interface we created with the macvlan configuration. For additional confirmation, use `kubectl describe pod samplepod` and there will be an annotations section, similar to the following: ``` Annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"cbr0\", \"ips\": [ \"10.244.1.73\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-conf\", \"interface\": \"net1\", \"ips\": [ \"192.168.1.205\" ], \"mac\": \"86:1d:96:ff:55:0d\", \"dns\": {} }] ``` This metadata tells us that we have two CNI plugins running successfully. You can add more interfaces to a pod by creating more custom resources and then referring to them in pod's annotation. You can also reuse configurations, so for example, to attach two macvlan interfaces to a pod, you could create a pod like so: ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: samplepod annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf,macvlan-conf spec: containers: name: samplepod command: [\"/bin/ash\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] image: alpine EOF ``` Note that the annotation now reads `k8s.v1.cni.cncf.io/networks: macvlan-conf,macvlan-conf`. 
Here the same configuration is used twice, separated by a comma. If you created another custom resource named `foo`, you could reference it in the same way -- for example, `k8s.v1.cni.cncf.io/networks: foo,macvlan-conf` -- and attach any number of networks."
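As a quick sanity check (a sketch reusing the verification step shown earlier in this guide), you can confirm that both additional interfaces were attached to the pod:
```
# With two macvlan-conf attachments you should see lo, eth0, net1 and net2
kubectl exec -it samplepod -- ip a
```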
}
] | {
"category": "Runtime",
"file_name": "quickstart.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "| Name | GitHub ID (not Twitter ID) | GPG fingerprint | |--||| | Akihiro Suda | | | | Jan Dubois | | | | Anders F Bjrklund | | | | Balaji Vijayakumar | | | See https://github.com/lima-vm/.github/blob/main/SECURITY.md for how to report security issues."
}
] | {
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Lima",
"subcategory": "Container Runtime"
} |
[
{
"data": "containerd-config.toml - configuration file for containerd The config.toml file is a configuration file for the containerd daemon. The file must be placed at /etc/containerd/config.toml or specified with the --config option of containerd to be used by the daemon. If the file does not exist at the appropriate location or is not provided via the --config option containerd uses its default configuration settings, which can be displayed with the containerd config(1) command. The TOML file used to configure the containerd daemon settings has a short list of global settings followed by a series of sections for specific areas of daemon configuration. There is also a section for plugins that allows each containerd plugin to have an area for plugin-specific configuration and settings. version : The version field in the config file specifies the configs version. If no version number is specified inside the config file then it is assumed to be a version 1 config and parsed as such. Please use version = 2 to enable version 2 config as version 1 has been deprecated. root : The root directory for containerd metadata. (Default: \"/var/lib/containerd\") state : The state directory for containerd (Default: \"/run/containerd\") plugin_dir : The directory for dynamic plugins to be stored [grpc] : Section for gRPC socket listener settings. Contains the following properties: address (Default: \"/run/containerd/containerd.sock\") tcp_address tcp_tls_cert tcp_tls_key uid (Default: 0) gid (Default: 0) max_recv_message_size max_send_message_size [ttrpc] : Section for TTRPC settings. Contains properties: address (Default: \"\") uid (Default: 0) gid (Default: 0) [debug] : Section to enable and configure a debug socket listener. Contains four properties: address (Default: \"/run/containerd/debug.sock\") uid (Default: 0) gid (Default: 0) level (Default: \"info\") sets the debug log level. Supported levels are: \"trace\", \"debug\", \"info\", \"warn\", \"error\", \"fatal\", \"panic\" format (Default: \"text\") sets log format. Supported formats are \"text\" and \"json\" [metrics] : Section to enable and configure a metrics listener. Contains two properties: address (Default: \"\") Metrics endpoint does not listen by default grpc_histogram (Default: false) Turn on or off gRPC histogram metrics disabled_plugins : Disabled plugins are IDs of plugins to disable. Disabled plugins won't be initialized and started. required_plugins : Required plugins are IDs of required plugins. Containerd exits if any required plugin doesn't exist or fails to be initialized or started. [plugins] : The plugins section contains configuration options exposed from installed plugins. The following plugins are enabled by default and their settings are shown below. Plugins that are not enabled by default will provide their own configuration values documentation. 
[plugins.\"io.containerd.monitor.v1.cgroups\"] has one option no_prometheus (Default: false) [plugins.\"io.containerd.service.v1.diff-service\"] has one option default, a list by default set to [\"walking\"] [plugins.\"io.containerd.gc.v1.scheduler\"] has several options that perform advanced tuning for the scheduler: pause_threshold is the maximum amount of time GC should be scheduled (Default: 0.02), deletion_threshold guarantees GC is scheduled after n number of deletions (Default: 0 [not triggered]), mutation_threshold guarantees GC is scheduled after n number of database mutations (Default: 100), schedule_delay defines the delay after trigger event before scheduling a GC (Default \"0ms\" [immediate]), startup_delay defines the delay after startup before scheduling a GC (Default \"100ms\") [plugins.\"io.containerd.runtime.v2.task\"] specifies options for configuring the runtime shim: platforms specifies the list of supported platforms sched_core Core scheduling is a feature that allows only trusted tasks to run concurrently on cpus sharing compute resources (eg: hyperthreads on a core). (Default: false) [plugins.\"io.containerd.service.v1.tasks-service\"] has performance options: blockio_config_file (Linux only) specifies path to blockio class definitions (Default: \"\"). Controls I/O scheduler priority and bandwidth throttling. See for details of the file format. rdt_config_file (Linux only) specifies path to a configuration used for configuring RDT (Default:"
},
{
"data": "Enables support for Intel RDT, a technology for cache and memory bandwidth management. See for details of the file format. [plugins.\"io.containerd.grpc.v1.cri\".containerd] contains options for the CRI plugin, and child nodes for CRI options: default_runtime_name (Default: \"runc\") specifies the default runtime name [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] one or more container runtimes, each with a unique name [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.<runtime>] a runtime named `<runtime>` [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.<runtime>.options] options for the named `<runtime>`, most important: BinaryName specifies the path to the actual runtime to be invoked by the shim, e.g. `\"/usr/bin/runc\"` oom_score : The out of memory (OOM) score applied to the containerd daemon process (Default: 0) [cgroup] : Section for Linux cgroup specific settings path (Default: \"\") Specify a custom cgroup path for created containers [proxy_plugins] : Proxy plugins configures plugins which are communicated to over gRPC type (Default: \"\") address (Default: \"\") timeouts : Timeouts specified as a duration <!-- [timeouts] \"io.containerd.timeout.shim.cleanup\" = \"5s\" \"io.containerd.timeout.shim.load\" = \"5s\" \"io.containerd.timeout.shim.shutdown\" = \"3s\" \"io.containerd.timeout.task.state\" = \"2s\" --> imports : Imports is a list of additional configuration files to include. This allows to split the main configuration file and keep some sections separately (for example vendors may keep a custom runtime configuration in a separate file without modifying the main `config.toml`). Imported files will overwrite simple fields like `int` or `string` (if not empty) and will append `array` and `map` fields. Imported files are also versioned, and the version can't be higher than the main config. 
stream_processors accepts (Default: \"[]\") Accepts specific media-types returns (Default: \"\") Returns the media-type path (Default: \"\") Path or name of the binary args (Default: \"[]\") Args to the binary The following is a complete config.toml default configuration example: ```toml version = 2 root = \"/var/lib/containerd\" state = \"/run/containerd\" oom_score = 0 imports = [\"/etc/containerd/runtime_*.toml\", \"./debug.toml\"] [grpc] address = \"/run/containerd/containerd.sock\" uid = 0 gid = 0 [debug] address = \"/run/containerd/debug.sock\" uid = 0 gid = 0 level = \"info\" [metrics] address = \"\" grpc_histogram = false [cgroup] path = \"\" [plugins] [plugins.\"io.containerd.monitor.v1.cgroups\"] no_prometheus = false [plugins.\"io.containerd.service.v1.diff-service\"] default = [\"walking\"] [plugins.\"io.containerd.gc.v1.scheduler\"] pause_threshold = 0.02 deletion_threshold = 0 mutation_threshold = 100 schedule_delay = 0 startup_delay = \"100ms\" [plugins.\"io.containerd.runtime.v2.task\"] platforms = [\"linux/amd64\"] sched_core = true [plugins.\"io.containerd.service.v1.tasks-service\"] blockioconfigfile = \"\" rdtconfigfile = \"\" ``` The following is an example partial configuration with two runtimes: ```toml [plugins] [plugins.\"io.containerd.grpc.v1.cri\"] [plugins.\"io.containerd.grpc.v1.cri\".containerd] defaultruntimename = \"runc\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] privilegedwithouthost_devices = false runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] BinaryName = \"/usr/bin/runc\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.other] privilegedwithouthost_devices = false runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.other.options] BinaryName = \"/usr/bin/path-to-runtime\" ``` The above creates two named runtime configurations - named `runc` and `other` - and sets the default runtime to `runc`. The above are used solely for runtimes invoked via CRI. To use the non-default \"other\" runtime in this example, a spec will include the runtime handler named \"other\" to specify the desire to use the named runtime config. The CRI specification includes a , which will reference the named runtime. It is important to note the naming convention. Runtimes are under `[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes]`, with each runtime given a unique name, e.g. `[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc]`. In addition, each runtime can have shim-specific options under `[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.<runtime>.options]`, for example, `[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]`. The `io.containerd.runc.v2` runtime is used to run OCI-compatible runtimes on Linux, such as runc. In the example above, the `runtime_type` field specifies the shim to use (`io.containerd.runc.v2`) while the `BinaryName` field is a shim-specific option which specifies the path to the OCI runtime. For the example configuration named \"runc\", the shim will launch `/usr/bin/runc` as the OCI runtime. For the example configuration named \"other\", the shim will launch `/usr/bin/path-to-runtime` instead. Please file any specific issues that you encounter at https://github.com/containerd/containerd. Phil Estes <estesp@gmail.com> ctr(8), containerd-config(8), containerd(8)"
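As a quick way to sanity-check edits such as the runtime examples above (a sketch, not part of the man page itself), the containerd config subcommand mentioned earlier can print both the built-in defaults and the fully merged result:
```
containerd config default   # show the default configuration for reference
containerd config dump      # show the final configuration after imports and merges are applied
```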
}
] | {
"category": "Runtime",
"file_name": "containerd-config.toml.5.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: \"Restore Reference\" layout: docs The page outlines how to use the `velero restore` command, configuration options for restores, and describes the main process Velero uses to perform restores. To see all commands for restores, run `velero restore --help`. To see all options associated with a specific command, provide the `--help` flag to that command. For example, `velero restore create --help` shows all options associated with the `create` command. ```Usage: velero restore [command] Available Commands: create Create a restore delete Delete restores describe Describe restores get Get restores logs Get restore logs ``` The following is an overview of Velero's restore process that starts after you run `velero restore create`. The Velero client makes a call to the Kubernetes API server to create a object. The `RestoreController` notices the new Restore object and performs validation. The `RestoreController` fetches basic information about the backup being restored, like the (BSL). It also fetches a tarball of the cluster resources in the backup, any volumes that will be restored using Restic, and any volume snapshots to be restored. The `RestoreController` then extracts the tarball of backup cluster resources to the /tmp folder and performs some pre-processing on the resources, including: Sorting the resources to help Velero decide the to use. Attempting to discover the resources by their Kubernetes . If a resource is not discoverable, Velero will exclude it from the restore. See more about how . Applying any configured . Verify the target namespace, if you have configured restore option. The `RestoreController` begins restoring the eligible resources one at a time. Velero extracts the current resource into a Kubernetes resource object. Depending on the type of resource and restore options you specified, Velero will make the following modifications to the resource or preparations to the target cluster before attempting to create the resource: The `RestoreController` makes sure the target namespace exists. If the target namespace does not exist, then the `RestoreController` will create a new one on the cluster. If the resource is a Persistent Volume (PV), the `RestoreController` will the PV and its namespace. If the resource is a Persistent Volume Claim (PVC), the `RestoreController` will modify the . Execute the resources `RestoreItemAction` , if you have configured one. Update the resource objects namespace if you've configured . The `RestoreController` adds a `velero.io/backup-name` label with the backup name and a `velero.io/restore-name` with the restore name to the resource. This can help you easily identify restored resources and which backup they were restored from. The `RestoreController` creates the resource object on the target cluster. If the resource is a PV then the `RestoreController` will restore the PV data from the , , or depending on how the PV was backed up. If the resource already exists in the target cluster, which is determined by the Kubernetes API during resource creation, the `RestoreController` will skip the"
},
{
"data": "The only are Service Accounts, which Velero will attempt to merge differences between the backed up ServiceAccount into the ServiceAccount on the target cluster. You can to update resources instead of skipping them using the `--existing-resource-policy`. Once the resource is created on the target cluster, Velero may take some additional steps or wait for additional processes to complete before moving onto the next resource to restore. If the resource is a Pod, the `RestoreController` will execute any and wait for the hook to finish. If the resource is a PV restored by Restic, the `RestoreController` waits for Restics restore to complete. The `RestoreController` sets a timeout for any resources restored with Restic during a restore. The default timeout is 4 hours, but you can configure this be setting using `--restic-timeout` restore option. If the resource is a Custom Resource Definition, the `RestoreController` waits for its availability in the cluster. The timeout is 1 minute. If any failures happen finishing these steps, the `RestoreController` will log an error in the restore result and will continue restoring. By default, Velero will restore resources in the following order: Custom Resource Definitions Namespaces StorageClasses VolumeSnapshotClass VolumeSnapshotContents VolumeSnapshots PersistentVolumes PersistentVolumeClaims Secrets ConfigMaps ServiceAccounts LimitRanges Pods ReplicaSets Clusters ClusterResourceSets It's recommended that you use the default order for your restores. You are able to customize this order if you need to by setting the `--restore-resource-priorities` flag on the Velero server and specifying a different resource order. This customized order will apply to all future restores. You don't have to specify all resources in the `--restore-resource-priorities` flag. Velero will append resources not listed to the end of your customized list in alphabetical order. ```shell velero server \\ --restore-resource-priorities=customresourcedefinitions,namespaces,storageclasses,\\ volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,\\ volumesnapshots.snapshot.storage.k8s.io,persistentvolumes,persistentvolumeclaims,secrets,\\ configmaps,serviceaccounts,limitranges,pods,replicasets.apps,clusters.cluster.x-k8s.io,\\ clusterresourcesets.addons.cluster.x-k8s.io ``` Velero has three approaches when restoring a PV, depending on how the backup was taken. When restoring a snapshot, Velero statically creates the PV and then binds it to a restored PVC. Velero's PV rename and remap process is used only in this case because this is the only case where Velero creates the PV resource directly. When restoring with Restic, Velero uses Kubernetes to provision the PV after creating the PVC. In this case, the PV object is not actually created by Velero. When restoring with the , the PV is created from a CSI snapshot by the CSI driver. Velero doesnt create the PV directly. Instead Velero creates a PVC with its DataSource referring to the CSI VolumeSnapshot object. PV data backed up by durable snapshots is restored by VolumeSnapshot plugins. Velero calls the plugins interface to create a volume from a snapshot. The plugin returns the volumes `volumeID`. This ID is created by storage vendors and will be updated in the PV object created by Velero, so that the PV object is connected to the volume restored from a snapshot. For more information on Restic restores, see the page. A PV backed up by CSI snapshots is restored by the . 
This happens when restoring the PVC object that has been snapshotted by"
},
{
"data": "The CSI VolumeSnapshot object name is specified with the PVC during backup as the annotation `velero.io/volume-snapshot-name`. After validating the VolumeSnapshot object, Velero updates the PVC by adding a `DataSource` field and setting its value to the VolumeSnapshot name. When restoring PVs, if the PV being restored does not exist on the target cluster, Velero will create the PV using the name from the backup. Velero will rename a PV before restoring if both of the following conditions are met: The PV already exists on the target cluster. The PVs claim namespace has been . If both conditions are met, Velero will create the PV with a new name. The new name is the prefix `velero-clone-` and a random UUID. Velero also preserves the original name of the PV by adding an annotation `velero.io/original-pv-name` to the restored PV object. If you attempt to restore the PV's referenced PVC into its original namespace without remapping the namespace, Velero will not rename the PV. If a PV's referenced PVC exists already for that namespace, the restored PV creation attempt will fail, with an `Already Exist` error from the Kubernetes API Server. PVC objects are created the same way as other Kubernetes resources during a restore, with some specific changes: For a dynamic binding PVCs, Velero removes the fields related to bindings from the PVC object. This enables the default Kubernetes to be used for this PVC. The fields include: volumeName pv.kubernetes.io/bind-completed annotation pv.kubernetes.io/bound-by-controller annotation For a PVC that is bound by Velero Restore, if the target PV has been renamed by the , the RestoreController renames the `volumeName` field of the PVC object. Velero can change the storage class of persistent volumes and persistent volume claims during restores. To configure a storage class mapping, create a config map in the Velero namespace like the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: velero labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: <old-storage-class>: <new-storage-class> ``` Velero can update the selected-node annotation of persistent volume claim during restores, if selected-node doesn't exist in the cluster then it will remove the selected-node annotation from PersistentVolumeClaim. To configure a node mapping, create a config map in the Velero namespace like the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: change-pvc-node-selector-config namespace: velero labels: velero.io/plugin-config: \"\" velero.io/change-pvc-node-selector: RestoreItemAction data: <old-node-name>: <new-node-name> ``` Velero can restore resources into a different namespace than the one they were backed up from. To do this, use the `--namespace-mappings` flag: ```bash velero restore create <RESTORE_NAME> \\ --from-backup <BACKUP_NAME> \\ --namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2 ``` For example, A Persistent Volume object has a reference to the Persistent Volume Claims namespace in the field `Spec.ClaimRef.Namespace`. If you specify that Velero should remap the target namespace during the restore, Velero will change the `Spec.ClaimRef.Namespace` field on the PV object from `old-ns-1` to `new-ns-1`. By default, Velero is configured to be non-destructive during a restore. This means that it will never overwrite data that already exists in your"
},
{
"data": "When Velero attempts to create a resource during a restore, the resource being restored is compared to the existing resources on the target cluster by the Kubernetes API Server. If the resource already exists in the target cluster, Velero skips restoring the current resource and moves onto the next resource to restore, without making any changes to the target cluster. An exception to the default restore policy is ServiceAccounts. When restoring a ServiceAccount that already exists on the target cluster, Velero will attempt to merge the fields of the ServiceAccount from the backup into the existing ServiceAccount. Secrets and ImagePullSecrets are appended from the backed-up ServiceAccount. Velero adds any non-existing labels and annotations from the backed-up ServiceAccount to the existing resource, leaving the existing labels and annotations in place. You can change this policy for a restore by using the `--existing-resource-policy` restore flag. The available options are `none` (default) and `update`. If you choose to `update` existing resources during a restore (`--existing-resource-policy=update`), Velero will attempt to update an existing resource to match the resource being restored: If the existing resource in the target cluster is the same as the resource Velero is attempting to restore, Velero will add a `velero.io/backup-name` label with the backup name and a `velero.io/restore-name` label with the restore name to the existing resource. If patching the labels fails, Velero adds a restore error and continues restoring the next resource. If the existing resource in the target cluster is different from the backup, Velero will first try to patch the existing resource to match the backup resource. If the patch is successful, Velero will add a `velero.io/backup-name` label with the backup name and a `velero.io/restore-name` label with the restore name to the existing resource. If the patch fails, Velero adds a restore warning and tries to add the `velero.io/backup-name` and `velero.io/restore-name` labels on the resource. If the labels patch also fails, then Velero logs a restore error and continues restoring the next resource. You can also configure the existing resource policy in a object. NOTE: Update of a resource only applies to the Kubernetes resource data such as its spec. It may not work as expected for certain resource types such as PVCs and Pods. In case of PVCs for example, data in the PV is not restored or overwritten in any way. `update` existing resource policy works in a best-effort way, which means when restore's `--existing-resource-policy` is set to `update`, Velero will try to update the resource if the resource already exists, if the update fails, Velero will fall back to the default non-destructive way in the restore, and just logs a warning without failing the restore. There are two ways to delete a Restore object: Deleting with `velero restore delete` will delete the Custom Resource representing the restore, along with its individual log and results files. It will not delete any objects that were created by the restore in your"
},
{
"data": "Deleting with `kubectl -n velero delete restore` will delete the Custom Resource representing the restore. It will not delete restore log or results files from object storage, or any objects that were created during the restore in your cluster. During a restore, Velero deletes Auto assigned NodePorts by default and Services get new auto assigned nodePorts after restore. Velero auto detects explicitly specified NodePorts using `last-applied-config` annotation and they are preserved after restore. NodePorts can be explicitly specified as `.spec.ports[*].nodePort` field on Service definition. It is not always possible to set nodePorts explicitly on some big clusters because of operational complexity. As the Kubernetes states, \"if you want a specific port number, you can specify a value in the `nodePort` field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.\"\" The clusters which are not explicitly specifying nodePorts may still need to restore original NodePorts in the event of a disaster. Auto assigned nodePorts are typically defined on Load Balancers located in front of cluster. Changing all these nodePorts on Load Balancers is another operation complexity you are responsible for updating after disaster if nodePorts are changed. Use the `velero restore create ` command's `--preserve-nodeports` flag to preserve Service nodePorts always, regardless of whether nodePorts are explicitly specified or not. This flag is used for preserving the original nodePorts from a backup and can be used as `--preserve-nodeports` or `--preserve-nodeports=true`. If this flag is present, Velero will not remove the nodePorts when restoring a Service, but will try to use the nodePorts from the backup. Trying to preserve nodePorts may cause port conflicts when restoring on situations below: If the nodePort from the backup is already allocated on the target cluster then Velero prints error log as shown below and continues the restore operation. ``` time=\"2020-11-23T12:58:31+03:00\" level=info msg=\"Executing item action for services\" logSource=\"pkg/restore/restore.go:1002\" restore=velero/test-with-3-svc-20201123125825 time=\"2020-11-23T12:58:31+03:00\" level=info msg=\"Restoring Services with original NodePort(s)\" cmd=output/bin/linux/amd64/velero logSource=\"pkg/restore/serviceaction.go:61\" pluginName=velero restore=velero/test-with-3-svc-20201123125825 time=\"2020-11-23T12:58:31+03:00\" level=info msg=\"Attempting to restore Service: hello-service\" logSource=\"pkg/restore/restore.go:1107\" restore=velero/test-with-3-svc-20201123125825 time=\"2020-11-23T12:58:31+03:00\" level=error msg=\"error restoring hello-service: Service \\\"hello-service\\\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is already allocated\" logSource=\"pkg/restore/restore.go:1170\" restore=velero/test-with-3-svc-20201123125825 ``` If the nodePort from the backup is not in the nodePort range of target cluster then Velero prints error log as below and continues with the restore operation. Kubernetes default nodePort range is 30000-32767 but on the example cluster nodePort range is 20000-22767 and tried to restore Service with nodePort 31536. 
``` time=\"2020-11-23T13:09:17+03:00\" level=info msg=\"Executing item action for services\" logSource=\"pkg/restore/restore.go:1002\" restore=velero/test-with-3-svc-20201123130915 time=\"2020-11-23T13:09:17+03:00\" level=info msg=\"Restoring Services with original NodePort(s)\" cmd=output/bin/linux/amd64/velero logSource=\"pkg/restore/serviceaction.go:61\" pluginName=velero restore=velero/test-with-3-svc-20201123130915 time=\"2020-11-23T13:09:17+03:00\" level=info msg=\"Attempting to restore Service: hello-service\" logSource=\"pkg/restore/restore.go:1107\" restore=velero/test-with-3-svc-20201123130915 time=\"2020-11-23T13:09:17+03:00\" level=error msg=\"error restoring hello-service: Service \\\"hello-service\\\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is not in the valid range. The range of valid ports is 20000-22767\" logSource=\"pkg/restore/restore.go:1170\" restore=velero/test-with-3-svc-20201123130915 ```"
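For example, a restore that keeps the original NodePort allocations as described above could be created as follows (a sketch; the backup name is illustrative):
```
velero restore create --from-backup my-backup --preserve-nodeports
```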
}
] | {
"category": "Runtime",
"file_name": "restore-reference.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "The edge cloud can run application logic microservices very close to the client device. The microservices could be stateless computational tasks, such as and , which offload computation from the client. The microservices could also that sync with backend databases. The edge cloud has advantages such as low latency, high security, and high performance. Operationally, WasmEdge can be embedded into cloud-native infrastructure via its SDKs in , and . It is also an OCI compliant runtime that can be directly as a lightweight and high-performance alternative to Linux containers. The following application frameworks have been tested to work with WasmEdge-based microservices. Linkerd MOSN Envoy"
}
] | {
"category": "Runtime",
"file_name": "microservice.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
} |
[
{
"data": "The best way to get started is to deploy Kubernetes with Kube-router is with a cluster installer. Please see the to deploy Kubernetes cluster with Kube-router using Please see the to deploy Kubernetes cluster with Kube-router using k0s by default uses kube-router as a CNI option. Please see the to deploy Kubernetes cluster with Kube-router using by default uses for its NetworkPolicy enforcement. Please see the to deploy kube-router on manually installed clusters When running in an AWS environment that requires an explicit proxy you need to inject the proxy server as a in your kube-router deployment Example: ```yaml env: name: HTTP_PROXY value: \"http://proxy.example.com:80\" ``` Azure does not support IPIP packet encapsulation which is the default packet encapsulation that kube-router uses. If you need to use an overlay network in an Azure environment with kube-router, please ensure that you set `--overlay-encap=fou`. See for more information. Depending on what functionality of kube-router you want to use, multiple deployment options are possible. You can use the flags `--run-firewall`, `--run-router`, `--run-service-proxy`, `--run-loadbalancer` to selectively enable only required functionality of kube-router. Also you can choose to run kube-router as agent running on each cluster node. Alternativley you can run kube-router as pod on each node through daemonset. ```sh Usage of kube-router: --advertise-cluster-ip Add Cluster IP of the service to the RIB so that it gets advertises to the BGP peers. --advertise-external-ip Add External IP of service to the RIB so that it gets advertised to the BGP peers. --advertise-loadbalancer-ip Add LoadbBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers. --advertise-pod-cidr Add Node's POD cidr to the RIB so that it gets advertised to the BGP peers. (default true) --auto-mtu Auto detect and set the largest possible MTU for kube-bridge and pod interfaces (also accounts for IPIP overlay network when enabled). (default true) --bgp-graceful-restart Enables the BGP Graceful Restart capability so that routes are preserved on unexpected restarts --bgp-graceful-restart-deferral-time duration BGP Graceful restart deferral time according to RFC4724 4.1, maximum 18h. (default 6m0s) --bgp-graceful-restart-time duration BGP Graceful restart time according to RFC4724 3, maximum 4095s. (default 1m30s) --bgp-holdtime duration This parameter is mainly used to modify the holdtime declared to BGP peer. When Kube-router goes down abnormally, the local saving time of BGP route will be affected. Holdtime must be in the range 3s to 18h12m16s. (default 1m30s) --bgp-port uint32 The port open for incoming BGP connections and to use for connecting with other BGP peers. (default 179) --cache-sync-timeout duration The timeout for cache synchronization (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s) --cleanup-config Cleanup iptables rules, ipvs, ipset configuration and exit. --cluster-asn uint ASN number under which cluster nodes will run iBGP. --disable-source-dest-check Disable the source-dest-check attribute for AWS EC2 instances. When this option is false, it must be set some other way. (default true) --enable-cni Enable CNI plugin. Disable if you want to use kube-router features alongside another CNI plugin. 
(default true) --enable-ibgp Enables peering with nodes with the same ASN, if disabled will only peer with external BGP peers (default true) --enable-ipv4 Enables IPv4 support (default true) --enable-ipv6 Enables IPv6 support --enable-overlay When enable-overlay is set to true, IP-in-IP tunneling is used for pod-to-pod networking across nodes in different subnets. When set to false no tunneling is used and routing infrastructure is expected to route traffic for pod-to-pod networking across nodes in different subnets (default true) --enable-pod-egress SNAT traffic from Pods to destinations outside the"
},
{
"data": "(default true) --enable-pprof Enables pprof for debugging performance and memory leak issues. --excluded-cidrs strings Excluded CIDRs are used to exclude IPVS rules from deletion. --hairpin-mode Add iptables rules for every Service Endpoint to support hairpin traffic. --health-port uint16 Health check port, 0 = Disabled (default 20244) -h, --help Print usage information. --hostname-override string Overrides the NodeName of the node. Set this if kube-router is unable to determine your NodeName automatically. --injected-routes-sync-period duration The delay between route table synchronizations (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 1m0s) --iptables-sync-period duration The delay between iptables rule synchronizations (e.g. '5s', '1m'). Must be greater than 0. (default 5m0s) --ipvs-graceful-period duration The graceful period before removing destinations from IPVS services (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 30s) --ipvs-graceful-termination Enables the experimental IPVS graceful terminaton capability --ipvs-permit-all Enables rule to accept all incoming traffic to service VIP's on the node. (default true) --ipvs-sync-period duration The delay between ipvs config synchronizations (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 5m0s) --kubeconfig string Path to kubeconfig file with authorization information (the master location is set by the master flag). --loadbalancer-default-class Handle loadbalancer services without a class (default true) --loadbalancer-ip-range strings CIDR values from which loadbalancer services addresses are assigned (can be specified multiple times) --loadbalancer-sync-period duration The delay between checking for missed services (e.g. '5s', '1m'). Must be greater than 0. (default 1m0s) --masquerade-all SNAT all traffic to cluster IP/node port. --master string The address of the Kubernetes API server (overrides any value in kubeconfig). --metrics-addr string Prometheus metrics address to listen on, (Default: all interfaces) --metrics-path string Prometheus metrics path (default \"/metrics\") --metrics-port uint16 Prometheus metrics port, (Default 0, Disabled) --nodeport-bindon-all-ip For service of NodePort type create IPVS service that listens on all IP's of the node. --nodes-full-mesh Each node in the cluster will setup BGP peering with rest of the nodes. (default true) --overlay-encap string Valid encapsulation types are \"ipip\" or \"fou\" (if set to \"fou\", the udp port can be specified via \"overlay-encap-port\") (default \"ipip\") --overlay-encap-port uint16 Overlay tunnel encapsulation port (only used for \"fou\" encapsulation) (default 5555) --overlay-type string Possible values: subnet,full - When set to \"subnet\", the default, default \"--enable-overlay=true\" behavior is used. When set to \"full\", it changes \"--enable-overlay=true\" default behavior so that IP-in-IP tunneling is used for pod-to-pod networking across nodes regardless of the subnet the nodes are in. (default \"subnet\") --override-nexthop Override the next-hop in bgp routes sent to peers with the local ip. --peer-router-asns uints ASN numbers of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr. (default []) --peer-router-ips ipSlice The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's. (default []) --peer-router-multihop-ttl uint8 Enable eBGP multihop supports -- sets multihop-ttl. 
(Relevant only if ttl >= 2) --peer-router-passwords strings Password for authenticating against the BGP peer defined with \"--peer-router-ips\". --peer-router-passwords-file string Path to file containing password for authenticating against the BGP peer defined with \"--peer-router-ips\". --peer-router-passwords will be preferred if both are set. --peer-router-ports uints The remote port of the external BGP to which all nodes will peer. If not set, default BGP port (179) will be used. (default []) --router-id string BGP router-id. Must be specified in a ipv6 only cluster, \"generate\" can be specified to generate the router id. --routes-sync-period duration The delay between route updates and advertisements (e.g. '5s', '1m', '2h22m'). Must be greater than 0. (default 5m0s) --run-firewall Enables Network Policy -- sets up iptables to provide ingress firewall for"
},
{
"data": "(default true) --run-loadbalancer Enable loadbalancer address allocator --run-router Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP. (default true) --run-service-proxy Enables Service Proxy -- sets up IPVS for Kubernetes Services. (default true) --runtime-endpoint string Path to CRI compatible container runtime socket (used for DSR mode). Currently known working with containerd. --service-cluster-ip-range strings CIDR values from which service cluster IPs are assigned (can be specified up to 2 times) (default [10.96.0.0/12]) --service-external-ip-range strings Specify external IP CIDRs that are used for inter-cluster communication (can be specified multiple times) --service-node-port-range string NodePort range specified with either a hyphen or colon (default \"30000-32767\") --service-tcp-timeout duration Specify TCP timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) --service-tcpfin-timeout duration Specify TCP FIN timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) --service-udp-timeout duration Specify UDP timeout for IPVS services in standard duration syntax (e.g. '5s', '1m'), default 0s preserves default system value (default: 0s) -v, --v string log level for V logs (default \"0\") -V, --version Print version information. ``` Kube-router need to access kubernetes API server to get information on pods, services, endpoints, network policies etc. The very minimum information it requires is the details on where to access the kubernetes API server. This information can be passed as: ```sh kube-router --master=http://192.168.1.99:8080/` or `kube-router --kubeconfig=<path to kubeconfig file> ``` If you run kube-router as agent on the node, ipset package must be installed on each of the nodes (when run as daemonset, container image is prepackaged with ipset) If you choose to use kube-router for pod-to-pod network connectivity then Kubernetes controller manager need to be configured to allocate pod CIDRs by passing `--allocate-node-cidrs=true` flag and providing a `cluster-cidr` (i.e. by passing --cluster-cidr=10.1.0.0/16 for e.g.) If you choose to run kube-router as daemonset in Kubernetes version below v1.15, both kube-apiserver and kubelet must be run with `--allow-privileged=true` option. In later Kubernetes versions, only kube-apiserver must be run with `--allow-privileged=true` option and if PodSecurityPolicy admission controller is enabled, you should create PodSecurityPolicy, allowing privileged kube-router pods. Additionally, when run in daemonset mode, it is highly recommended that you keep netfilter related userspace host tooling like `iptables`, `ipset`, and `ipvsadm` in sync with the versions that are distributed by Alpine inside the kube-router container. This will help avoid conflicts that can potentially arise when both the host's userspace and kube-router's userspace tooling modifies netfilter kernel definitions. See: for more information. If you choose to use kube-router for pod-to-pod network connecitvity then Kubernetes cluster must be configured to use CNI network plugins. On each node CNI conf file is expected to be present as /etc/cni/net.d/10-kuberouter.conf `bridge` CNI plugin and `host-local` for IPAM should be used. 
A sample conf file that can be downloaded as ```sh wget -O /etc/cni/net.d/10-kuberouter.conf https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/cni/10-kuberouter.conf` ``` Additionally, the aforementioned `bridge` and `host-local` CNI plugins need to exist for the container runtime to reference if you have kube-router manage the pod-to-pod network. Additionally, if you use `hostPort`'s on any of your pods, you'll need to install the `hostport` plugin. As of kube-router v2.1.X, these plugins will be installed to `/opt/cni/bin` for you during the `initContainer` phase if kube-router finds them missing. Most container runtimes will know to look for your plugins there by default, however, you may have to configure them if you are having problems with your pods coming up. - This is quickest way to deploy kube-router in Kubernetes (dont forget to ensure the requirements"
},
{
"data": "Just run: ```sh kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kube-router-all-service-daemonset.yaml ``` Above will run kube-router as pod on each node automatically. You can change the arguments in the daemonset definition as required to suit your needs. Some sample deployment configuration can be found with different arguments used to select a set of the services kube-router should run. You can choose to run kube-router as agent runnng on each node. For e.g if you just want kube-router to provide ingress firewall for the pods then you can start kube-router as: ```sh kube-router --master=http://192.168.1.99:8080/ --run-firewall=true --run-service-proxy=false --run-router=false ``` Please delete kube-router daemonset and then clean up all the configurations done (to ipvs, iptables, ipset, ip routes etc) by kube-router on the node by running below command. ```sh docker run --privileged --net=host \\ --mount type=bind,source=/lib/modules,target=/lib/modules,readonly \\ --mount type=bind,source=/run/xtables.lock,target=/run/xtables.lock,bind-propagation=rshared \\ cloudnativelabs/kube-router /usr/local/bin/kube-router --cleanup-config ``` ```sh $ ctr image pull docker.io/cloudnativelabs/kube-router:latest $ ctr run --privileged -t --net-host \\ --mount type=bind,src=/lib/modules,dst=/lib/modules,options=rbind:ro \\ --mount type=bind,src=/run/xtables.lock,dst=/run/xtables.lock,options=rbind:rw \\ docker.io/cloudnativelabs/kube-router:latest kube-router-cleanup /usr/local/bin/kube-router --cleanup-config ``` If you have a kube-proxy in use, and want to try kube-router just for service proxy you can do ```sh kube-proxy --cleanup-iptables ``` followed by ```sh kube-router --master=http://192.168.1.99:8080/ --run-service-proxy=true --run-firewall=false --run-router=false ``` and if you want to move back to kube-proxy then clean up config done by kube-router by running ```sh kube-router --cleanup-config ``` and run kube-proxy with the configuration you have. kube-router can advertise Cluster, External and LoadBalancer IPs to BGP peers. It does this by: locally adding the advertised IPs to the nodes' `kube-dummy-if` network interface advertising the IPs to its BGP peers To set the default for all services use the `--advertise-cluster-ip`, `--advertise-external-ip` and `--advertise-loadbalancer-ip` flags. To selectively enable or disable this feature per-service use the `kube-router.io/service.advertise.clusterip`, `kube-router.io/service.advertise.externalip` and `kube-router.io/service.advertise.loadbalancerip` annotations. e.g.: `$ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.clusterip=true\"` `$ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.externalip=true\"` `$ kubectl annotate service my-advertised-service \"kube-router.io/service.advertise.loadbalancerip=true\"` `$ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.clusterip=false\"` `$ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.externalip=false\"` `$ kubectl annotate service my-non-advertised-service \"kube-router.io/service.advertise.loadbalancerip=false\"` By combining the flags with the per-service annotations you can choose either a opt-in or opt-out strategy for advertising IPs. 
Advertising LoadBalancer IPs works by inspecting the services `status.loadBalancer.ingress` IPs that are set by external LoadBalancers like for example MetalLb. This has been successfully tested together with in ARP mode. Service availability both externally and locally (within the cluster) can be controlled via the Kubernetes standard and via the custom kube-router service annotation: `kube-router.io/service.local: true`. Refer to the previously linked upstream Kubernetes documentation for more information on `spec.internalTrafficPolicy` and `spec.externalTrafficPolicy`. In order to keep backwards compatibility the `kube-router.io/service.local: true` annotation effectively overrides `spec.internalTrafficPolicy` and `spec.externalTrafficPolicy` and forces kube-router to behave as if both were set to `Local`. Communication from a Pod that is behind a Service to its own ClusterIP:Port is not supported by default. However, it can be enabled per-service by adding the `kube-router.io/service.hairpin=` annotation, or for all Services in a cluster by passing the flag `--hairpin-mode=true` to kube-router. Additionally, the `hairpin_mode` sysctl option must be set to `1` for all veth interfaces on each node. This can be done by adding the `\"hairpinMode\": true` option to your CNI configuration and rebooting all cluster nodes if they are already running kubernetes. Hairpin traffic will be seen by the pod it originated from as coming from the Service ClusterIP if it is logging the source IP. 10-kuberouter.conf ```json { \"name\":\"mynet\", \"type\":\"bridge\", \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"hairpinMode\":true, \"ipam\": { \"type\":\"host-local\" } } ``` To enable hairpin traffic for Service `my-service`: ```sh kubectl annotate service my-service"
},
{
"data": "``` If you want to also hairpin externalIPs declared for Service `my-service` (note, you must also either enable global hairpin or service hairpin (see above ^^^) for this to have an effect): ```sh kubectl annotate service my-service \"kube-router.io/service.hairpin.externalips=\" ``` By default, as traffic ingresses into the cluster, kube-router will source nat the traffic to ensure symmetric routing if it needs to proxy that traffic to ensure it gets to a node that has a service pod that is capable of servicing the traffic. This has a potential to cause issues when network policies are applied to that service since now the traffic will appear to be coming from a node in your cluster instead of the traffic originator. This is an issue that is common to all proxy's and all Kubernetes service proxies in general. You can read more information about this issue at: In addition to the fix mentioned in the linked upstream documentation (using `service.spec.externalTrafficPolicy`), kube-router also provides , which by its nature preserves the source IP, to solve this problem. For more information see the section above. Kube-router uses LVS for service proxy. LVS support rich set of . You can annotate the service to choose one of the scheduling alogirthms. When a service is not annotated `round-robin` scheduler is selected by default ```sh $ kubectl annotate service my-service \"kube-router.io/service.scheduler=lc\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=rr\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=sh\" $ kubectl annotate service my-service \"kube-router.io/service.scheduler=dh\" ``` If you would like to use `HostPort` functionality below changes are required in the manifest. By default kube-router assumes CNI conf file to be `/etc/cni/net.d/10-kuberouter.conf`. Add an environment variable `KUBEROUTERCNICONFFILE` to kube-router manifest and set it to `/etc/cni/net.d/10-kuberouter.conflist` Modify `kube-router-cfg` ConfigMap with CNI config that supports `portmap` as additional plug-in ```json { \"cniVersion\":\"0.3.0\", \"name\":\"mynet\", \"plugins\":[ { \"name\":\"kubernetes\", \"type\":\"bridge\", \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"ipam\":{ \"type\":\"host-local\" } }, { \"type\":\"portmap\", \"capabilities\":{ \"snat\":true, \"portMappings\":true } } ] } ``` Update init container command to create `/etc/cni/net.d/10-kuberouter.conflist` file Restart the container runtime For an e.g manifest please look at with necessary changes required for `HostPort` functionality. As of 0.2.6 we support experimental graceful termination of IPVS destinations. When possible the pods's TerminationGracePeriodSeconds is used, if it cannot be retrived for some reason the fallback period is 30 seconds and can be adjusted with `--ipvs-graceful-period` cli-opt graceful termination works in such a way that when kube-router receives a delete endpoint notification for a service it's weight is adjusted to 0 before getting deleted after he termination grace period has passed or the Active & Inactive connections goes down to 0. The maximum transmission unit (MTU) determines the largest packet size that can be transmitted through your network. MTU for the pod interfaces should be set appropriately to prevent fragmentation and packet drops thereby achieving maximum performance. 
If `auto-mtu` is set to true (`auto-mtu` is set to true by default as of kube-router 1.1), kube-router will determine the right MTU for both `kube-bridge` and pod interfaces. If you set `auto-mtu` to false, kube-router will not attempt to configure the MTU. However, you can choose the right MTU and set it in the `cni-conf.json` section of the `10-kuberouter.conflist` in the kube-router . For example: ```json cni-conf.json: | { \"cniVersion\":\"0.3.0\", \"name\":\"mynet\", \"plugins\":[ { \"name\":\"kubernetes\", \"type\":\"bridge\", \"mtu\": 1400, \"bridge\":\"kube-bridge\", \"isDefaultGateway\":true, \"ipam\":{ \"type\":\"host-local\" } } ] } ``` If you set the MTU yourself via the CNI config, you'll also need to set the MTU of `kube-bridge` manually to the right value to avoid packet fragmentation in the case of existing nodes on which `kube-bridge` is already created. On node reboot or in case of new nodes joining the cluster both the pod's interface and `kube-bridge`"
}
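The scheduler annotation described above maps a single string value onto an IPVS scheduling algorithm, falling back to round-robin when the annotation is absent or unrecognized. The Go sketch below illustrates that selection rule only; it is not kube-router's actual implementation, and the `pickScheduler` helper name is invented for this example.

```go
package main

import "fmt"

// schedulerAnnotation is the service annotation documented above.
const schedulerAnnotation = "kube-router.io/service.scheduler"

// pickScheduler returns the IPVS scheduler to use for a service, given its
// annotations. Unknown or missing values fall back to round-robin ("rr"),
// matching the default behaviour described in this guide.
func pickScheduler(annotations map[string]string) string {
	switch annotations[schedulerAnnotation] {
	case "lc": // least connection
		return "lc"
	case "sh": // source hashing
		return "sh"
	case "dh": // destination hashing
		return "dh"
	case "rr": // explicit round-robin
		return "rr"
	default: // unannotated or unrecognized value
		return "rr"
	}
}

func main() {
	svc := map[string]string{schedulerAnnotation: "lc"}
	fmt.Println(pickScheduler(svc))                 // lc
	fmt.Println(pickScheduler(map[string]string{})) // rr (default)
}
```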
] | {
"category": "Runtime",
"file_name": "user-guide.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
} |
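As a rough illustration of the auto-MTU idea above, the sketch below derives a pod MTU from a host interface's MTU by subtracting a fixed encapsulation overhead. Both the 20-byte overhead and the `eth0` interface name are assumptions made only for this example; kube-router's real detection logic is more involved.

```go
package main

import (
	"fmt"
	"net"
)

// overlayOverhead is an assumed per-packet encapsulation overhead in bytes.
// The real value depends on the tunnel mode in use.
const overlayOverhead = 20

// podMTU returns the MTU to configure for pod interfaces, derived from the
// MTU of the given host interface minus the assumed overlay overhead.
func podMTU(ifaceName string) (int, error) {
	iface, err := net.InterfaceByName(ifaceName)
	if err != nil {
		return 0, fmt.Errorf("looking up %s: %w", ifaceName, err)
	}
	return iface.MTU - overlayOverhead, nil
}

func main() {
	// "eth0" is a placeholder; use the node's actual uplink interface.
	mtu, err := podMTU("eth0")
	if err != nil {
		fmt.Println("could not determine MTU:", err)
		return
	}
	fmt.Println("pod/kube-bridge MTU:", mtu)
}
```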
[
{
"data": "Reference: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md Currently, Ceph Cluster stores the state of the system in `Status.State`. But, we want to implement the usage of `Status.Conditions` instead of using `Status.State`. The usage of `Status.Phase` is deprecated over time because it contradicts the system design principle and hampered evolution. So rather than encouraging clients to infer the implicit properties from phases, the usage of `Status.Condition` is preferred. Conditions are more extensible since the addition of new conditions doesn't invalidate decisions based on existing conditions, and are also better suited to reflect conditions that may toggle back and forth and/or that may not be mutually exclusive. Conditions simply represent the latest available observation of an object's state. They are an extension mechanism intended to be used when the details of observation are not a priori known or would not apply to all instances of a given Kind. Objects can have multiple Conditions, and new types of Conditions can also be added in the future by the third-party controllers. Thus, Conditions are thereby represented using list/slice, where each having the similar structure. The necessary system states for the rook-ceph can be portrayed as follows: Ignored : If any of the resources gets ignored for multiple reasons Progressing : Marks the start of reconcile of Ceph Cluster Ready : When Reconcile completes successfully Not Ready : Either when cluster is Updated or Updating is blocked Connecting : When the Ceph Cluster is in the state of Connecting Connected : When the Ceph Cluster gets connected Available : The Ceph Cluster is healthy and is ready to use Failure : If any failure occurs in the Ceph Cluster Cluster Expanding : If the Cluster is Expanding Upgrading : When the Cluster gets an Upgrade Reference: https://github.com/openshift/custom-resource-status/: The `Status` of the Condition can be toggled between True or False according to the state of the cluster which it goes through. This can be shown to the user in the clusterCR with along with the information about the Conditions like the `Reason`, `Message` etc. Also a readable status, which basically states the final condition of the cluster along with the Message, which gives out some detail about the Condition like whether the Cluster is 'ReadytoUse' or if there is an Update available, we can update the MESSAGE as 'UpdateAvailable'. This could make it more understandable of the state of cluster to the user. Also, a Condition which states that the Cluster is undergoing an Upgrading can be added. Cluster Upgrade happens when there is a new version is available and changes the current Cluster"
},
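The condition catalogue listed above translates naturally into a small set of typed constants. The sketch below is one way to declare them in Go; the type and constant names are illustrative only and are not necessarily the identifiers Rook ultimately uses.

```go
package main

import "fmt"

// RookConditionType enumerates the cluster states described above.
// The names here are illustrative; the actual Rook code may differ.
type RookConditionType string

const (
	ConditionIgnored          RookConditionType = "Ignored"
	ConditionProgressing      RookConditionType = "Progressing"
	ConditionReady            RookConditionType = "Ready"
	ConditionNotReady         RookConditionType = "NotReady"
	ConditionConnecting       RookConditionType = "Connecting"
	ConditionConnected        RookConditionType = "Connected"
	ConditionAvailable        RookConditionType = "Available"
	ConditionFailure          RookConditionType = "Failure"
	ConditionClusterExpanding RookConditionType = "ClusterExpanding"
	ConditionUpgrading        RookConditionType = "Upgrading"
)

func main() {
	fmt.Println("example condition type:", ConditionAvailable)
}
```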
{
"data": "This will help the user to know the status of the Cluster Upgrade in progress. NAME DATADIRHOSTPATH MONCOUNT AGE CONDITION MESSAGE HEALTH rook-ceph /var/lib/rook 3 114s Available ReadyToUse HEALTH_OK We can add Conditions simply in the Custom Resource struct as: type ClusterStatus struct{ FinalCondition ConditionType `json:\"finalcondition,omitempty\"` Message string `json:\"message,omitempty\"` Condition []RookConditions `json:\"conditions,omitempty\"` CephStatus *CephStatus `json:\"ceph,omitempty\"` } After that we can just make changes inside rook ceph codebase as necessary. The `setStatusCondition()` field will be fed with the `newCondition` variable which holds the entries for the new Conditions. The `FindStatusCondition` will return the Condition if it is having the same `ConditionType` as the `newCondition` otherwise, it will return `nil`. If `nil` is returned then `LastHeartbeatTime` and `LastTransitionTime` is updated and gets appended to the `Condition`. The `Condition.Status` gets updated if the value is different from the `existingCondition.Status`. Rest of the fields of the `Status.Condition` are also updated. The `FinalCondition` will be holding the final condition the cluster is in. This will be displayed into the readable status along with a message, which is an extra useful information for the users. The definition of the type Conditions can have the following details: Type RookConditionType `json:\"type\" description:\"type of Rook condition\"` Status ConditionStatus `json:\"status\" description:\"status of the condition, one of True, False, Unknown\"` Reason *string `json:\"reason,omitempty\" description:\"one-word CamelCase reason for the condition's last transition\"` Message *string `json:\"message,omitempty\" description:\"human-readable message indicating details about last transition\"` LastHeartbeatTime *unversioned.Time `json:\"lastHeartbeatTime,omitempty\" description:\"last time we got an update on a given condition\"` LastTransitionTime *unversioned.Time `json:\"lastTransitionTime,omitempty\" description:\"last time the condition transition from one status to another\"` The fields `Reason`, `Message`, `LastHeartbeatTime`, `LastTransitionTime` are optional field. Though the use of `Reason` field is encouraged. Condition Types field specifies the current state of the system. Condition status values may be `True`, `False`, or `Unknown`. The absence of a condition should be interpreted the same as Unknown. How controllers handle Unknown depends on the Condition in question. `Reason` is intended to be a one-word, CamelCase representation of the category of cause of the current status, and `Message` is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. `Reason` is intended to be used in concise output, such as one-line kubectl get output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe output`. In the CephClusterStatus, we can either remove the `Status.State` and"
},
{
"data": "fields and call the `Conditions` structure from inside the `ClusterStatus`, or we can just add the `Conditions` structure by keeping the already included fields.The first method is preferred because the `Conditions` structure contains the `Conditions.Type` and `Conditions.Message` which is similar to the `Status.State` and `Status.Message`. According to the above changes, necessary changes are to be made everywhere `ClusterStatus` or one of its fields are referred. Consider a cluster is being created. The RookConditions is an array that can store multiple Conditions. So the progression of the cluster being created can be seen in the RookConditions as shown in the example below. The Ceph Cluster gets created after it establishes a successful Connection. The `RookCondition` will show in the slice that the `Connecting` Condition will be in `Condition.Status` False. The `Connected` and `Progressing` Types will be set to True. Before: ClusterStatus{ State : Creating, Message : The Cluster is getting created, } After: ClusterStatus{ RookConditions{ { Type : Connecting, Status : False, Reason : ClusterConnecting, Message : The Cluster is Connecting, }, { Type : Connected, Status : True, Reason : ClusterConnected, Message : The Cluster is Connected, }, { Type : Progressing, Status : True, Reason : ClusterCreating, Message : The Cluster is getting created, }, }, } When a Cluster is getting updated, the `NotReady` Condition will be set to `True` and the `Ready` Condition will be set to `False`. Before: ClusterStatus{ State : Updating, Message : The Cluster is getting updated, } After: ClusterStatus{ RookConditions{ { Type : Connecting, Status : False, Reason : ClusterConnecting, Message : The Cluster is Connecting, }, { Type : Connected, Status : True, Reason : ClusterConnected, Message : The Cluster is Connected, }, { Type : Progressing, Status : False, Reason : ClusterCreating, Message : The Cluster is getting created, }, { Type : Ready, Status : False, Reason : ClusterReady, Message : The Cluster is ready, }, { Type : Available, Status : True, Reason : ClusterAvailable, Message : The Cluster is healthy and available to use, }, { Type : NotReady, Status : True, Reason : ClusterUpdating, Message : The Cluster is getting Updated, }, }, } In the examples mentioned above, the `LastTransitionTime` and `LastHeartbeatTime` is not added. These fields will also be included in the actual implementation and works in way such that when there is any change in the `Condition.Status` of a Condition, then the `LastTransitionTime` of that particular `Condition` will gets updated. For eg. in the second example indicated above, the `Condition.Status` of the `Condition` is shifted from `True` to `False` while cluster is Updating. So the `LastTranisitionTime` will gets updated when the shifting happens. `LastHeartbeatTime` gets updated whenever the `Condition` is getting updated."
}
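The update rules described in this design (append when the condition type is not yet present, bump LastTransitionTime only when the Status flips, refresh LastHeartbeatTime on every update) can be captured in a few lines. The sketch below is a minimal, self-contained interpretation of those rules, not Rook's actual implementation; `metav1`-style time types are replaced by plain `time.Time` to keep it dependency-free.

```go
package main

import (
	"fmt"
	"time"
)

type ConditionStatus string

type Condition struct {
	Type               string
	Status             ConditionStatus
	Reason             string
	Message            string
	LastHeartbeatTime  time.Time
	LastTransitionTime time.Time
}

// setStatusCondition applies the update semantics described in the design:
// a new type is appended with both timestamps set; an existing type always
// refreshes LastHeartbeatTime, but only moves LastTransitionTime when the
// Status value actually changes.
func setStatusCondition(conditions []Condition, newCond Condition) []Condition {
	now := time.Now()
	for i := range conditions {
		existing := &conditions[i]
		if existing.Type != newCond.Type {
			continue
		}
		if existing.Status != newCond.Status {
			existing.Status = newCond.Status
			existing.LastTransitionTime = now
		}
		existing.Reason = newCond.Reason
		existing.Message = newCond.Message
		existing.LastHeartbeatTime = now
		return conditions
	}
	newCond.LastHeartbeatTime = now
	newCond.LastTransitionTime = now
	return append(conditions, newCond)
}

func main() {
	conds := setStatusCondition(nil, Condition{Type: "Progressing", Status: "True", Reason: "ClusterCreating"})
	conds = setStatusCondition(conds, Condition{Type: "Progressing", Status: "False", Reason: "ClusterCreated"})
	fmt.Printf("%+v\n", conds[0])
}
```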
] | {
"category": "Runtime",
"file_name": "rook-ceph-status-conditions.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "v1.4.1 released! This release introduces improvements and bug fixes as described below about stability, performance, space efficiency, resilience, and so on. Please try it and feedback. Thanks for all the contributions! Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.1. Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.1 from v1.3.x/v1.4.0, which are only supported source versions. Follow the upgrade instructions . N/A Please follow up on about any outstanding issues found after this release. ) - @c3y1huang @chriscchien ) - @yangchiu @mantissahz ) - @PhanLe1010 @roger-ryao ) - @c3y1huang @smallteeths @roger-ryao ) - @yangchiu @PhanLe1010 ) - @derekbit @roger-ryao ) - @derekbit @chriscchien ) - @derekbit @roger-ryao ) - @yangchiu @derekbit ) - @mantissahz @chriscchien ) - @derekbit @chriscchien ) - @mantissahz @roger-ryao ) - @shuo-wu @roger-ryao ) - @mantissahz @chriscchien ) - @c3y1huang @chriscchien ) - @yangchiu @weizhe0422 ) - @achims311 @roger-ryao ) - @derekbit @roger-ryao ) - @derekbit @chriscchien ) - @c3y1huang @chriscchien ) - @yangchiu @smallteeths ) - @derekbit ) - @yangchiu @derekbit ) - @derekbit @weizhe0422 @chriscchien ) - @yangchiu @c3y1huang ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @ChanYiLin @roger-ryao ) - @hedefalk @shuo-wu @chriscchien ) - @yangchiu @derekbit ) - @smallteeths @chriscchien @ChanYiLin @PhanLe1010 @achims311 @c3y1huang @chriscchien @derekbit @hedefalk @innobead @mantissahz @roger-ryao @shuo-wu @smallteeths @weizhe0422 @yangchiu"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG-1.4.1.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "sidebar_position: 7 Below are theoretical limits for JuiceFS, in real use, performance and file system size will be limited by the metadata engine and object storage of your choice. Directory tree depth: unlimited File name length: 255 Bytes Symbolic link length: 4096 Bytes Number of hard links: 2^31 Number of files in single directory: 2^31 Number of files in a single volume: unlimited Single file size: 2^(26+31) Total file size: 4EiB"
}
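For a sense of scale, the single-file limit of 2^(26+31) bytes works out to 128 PiB, and the 4 EiB volume ceiling is 2^62 bytes. The short program below just does that arithmetic and also shows a simple client-side check against the 255-byte file-name limit; it is illustrative only.

```go
package main

import "fmt"

func main() {
	// Single file size limit: 2^(26+31) bytes.
	singleFile := uint64(1) << (26 + 31)
	fmt.Printf("single file limit: %d bytes (= %d PiB)\n", singleFile, singleFile>>50)

	// Total file size limit: 4 EiB = 2^62 bytes.
	total := uint64(4) << 60
	fmt.Printf("total size limit:  %d bytes\n", total)

	// File names are limited to 255 bytes (not characters).
	name := "some-file-name.txt"
	fmt.Println("name within limit:", len(name) <= 255)
}
```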
] | {
"category": "Runtime",
"file_name": "spec-limits.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Name | Type | Description | Notes | - | - | - Id | string | | Size | int64 | | Prefault | Pointer to bool | | [optional] [default to false] `func NewSgxEpcConfig(id string, size int64, ) *SgxEpcConfig` NewSgxEpcConfig instantiates a new SgxEpcConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewSgxEpcConfigWithDefaults() *SgxEpcConfig` NewSgxEpcConfigWithDefaults instantiates a new SgxEpcConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *SgxEpcConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o SgxEpcConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *SgxEpcConfig) SetId(v string)` SetId sets Id field to given value. `func (o *SgxEpcConfig) GetSize() int64` GetSize returns the Size field if non-nil, zero value otherwise. `func (o SgxEpcConfig) GetSizeOk() (int64, bool)` GetSizeOk returns a tuple with the Size field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *SgxEpcConfig) SetSize(v int64)` SetSize sets Size field to given value. `func (o *SgxEpcConfig) GetPrefault() bool` GetPrefault returns the Prefault field if non-nil, zero value otherwise. `func (o SgxEpcConfig) GetPrefaultOk() (bool, bool)` GetPrefaultOk returns a tuple with the Prefault field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *SgxEpcConfig) SetPrefault(v bool)` SetPrefault sets Prefault field to given value. `func (o *SgxEpcConfig) HasPrefault() bool` HasPrefault returns a boolean if a field has been set."
}
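To make the optional-field pattern above concrete, here is a self-contained Go sketch that mirrors the documented shape of `SgxEpcConfig` (an `Id`, a `Size`, and a pointer-typed `Prefault` defaulting to false). It re-declares a minimal struct for illustration rather than importing the generated client package, so the type and helper definitions here are local to the example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SgxEpcConfig mirrors the fields documented above; this is a local
// illustration, not the generated client type itself.
type SgxEpcConfig struct {
	Id       string `json:"id"`
	Size     int64  `json:"size"`
	Prefault *bool  `json:"prefault,omitempty"`
}

// NewSgxEpcConfig sets the required properties, leaving Prefault unset so the
// documented default (false) applies.
func NewSgxEpcConfig(id string, size int64) *SgxEpcConfig {
	return &SgxEpcConfig{Id: id, Size: size}
}

func (o *SgxEpcConfig) HasPrefault() bool { return o.Prefault != nil }

func (o *SgxEpcConfig) SetPrefault(v bool) { o.Prefault = &v }

func main() {
	cfg := NewSgxEpcConfig("epc0", 64<<20) // 64 MiB EPC section
	fmt.Println("prefault set:", cfg.HasPrefault())

	cfg.SetPrefault(true)
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out))
}
```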
] | {
"category": "Runtime",
"file_name": "SgxEpcConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "This guide shows you how to configure Piraeus Datastore when using . To complete this guide, you should be familiar with: editing `LinstorCluster` resources. MicroK8s is distributed as a . Because Snaps store their state in a separate directory (`/var/snap`) the LINSTOR CSI Driver needs to be updated to use a new path for mounting volumes. To change the LINSTOR CSI Driver, so that it uses the MicroK8s state paths, apply the following `LinstorCluster`: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: patches: target: name: linstor-csi-node kind: DaemonSet patch: | apiVersion: apps/v1 kind: DaemonSet metadata: name: linstor-csi-node spec: template: spec: containers: name: linstor-csi volumeMounts: mountPath: /var/lib/kubelet name: publish-dir $patch: delete mountPath: /var/snap/microk8s/common/var/lib/kubelet name: publish-dir mountPropagation: Bidirectional ```"
}
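The only thing the patch above really changes is the kubelet root that the CSI node driver mounts. A small sketch of that decision — check whether the MicroK8s snap state directory exists and fall back to the stock path otherwise — is shown below; the detection heuristic is an assumption for illustration and is not part of Piraeus itself.

```go
package main

import (
	"fmt"
	"os"
)

const (
	defaultKubeletDir  = "/var/lib/kubelet"
	microk8sKubeletDir = "/var/snap/microk8s/common/var/lib/kubelet"
)

// kubeletPublishDir returns the directory a CSI node driver should use for
// publishing volumes: the MicroK8s snap path when present, the default otherwise.
func kubeletPublishDir() string {
	if info, err := os.Stat(microk8sKubeletDir); err == nil && info.IsDir() {
		return microk8sKubeletDir
	}
	return defaultKubeletDir
}

func main() {
	fmt.Println("publish-dir:", kubeletPublishDir())
}
```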
] | {
"category": "Runtime",
"file_name": "microk8s.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "The information below, as well as information about past vulnerabilities, can be found at https://docs.ceph.com/en/latest/security/ A new major Ceph release is made every year, and security and bug fixes are backported to the last two releases. For the current active releases and the estimated end-of-life for each, please refer to https://docs.ceph.com/en/latest/releases/ To report a vulnerability, please send email to security@ceph.io Please do not file a public ceph tracker issue for a vulnerability. We urge reporters to provide as much information as is practical (a reproducer, versions affected, fix if available, etc.), as this can speed up the process considerably. Please let us know to whom credit should be given and with what affiliations. If this issue is not yet disclosed publicly and you have any disclosure date in mind, please share the same along with the report. Although you are not required to, you may encrypt your message using the following GPG key: 6EEF26FFD4093B99: Ceph Security Team (security@ceph.io) Download: Fingerprint: A527 D019 21F9 7178 C232 66C1 6EEF 26FF D409 3B99 The report will be acknowledged within three business days or less. The team will investigate and update the email thread with relevant information and may ask for additional information or guidance surrounding the reported issue. If the team does not confirm the report, no further action will be taken and the issue will be closed. If the team confirms the report, a unique CVE identifier will be assigned and shared with the reporter. The team will take action to fix the issue. If a reporter has no disclosure date in mind, a Ceph security team member will coordinate a release date (CRD) with the list members and share the mutually agreed disclosure date with the reporter. The vulnerability disclosure / release date is set excluding Friday and holiday periods. Embargoes are preferred for Critical and High impact issues. Embargo should not be held for more than 90 days from the date of vulnerability confirmation, except under unusual circumstances. For Low and Moderate issues with limited impact and an easy workaround or where an issue that is already public, a standard patch release process will be followed to fix the vulnerability once CVE is assigned. Medium and Low severity issues will be released as part of the next standard release cycle, with at least a 7 days advanced notification to the list members prior to the release date. The CVE fix details will be included in the release notes, which will be linked in the public announcement. Commits will be handled in a private repository for review and testing and a new patch version will be released from this private repository. If a vulnerability is unintentionally already fixed in the public repository, a few days are given to downstream stakeholders/vendors to prepare for updating before the public disclosure. An announcement will be made disclosing the vulnerability. The fastest place to receive security announcements is via the ceph-announce@ceph.io or oss-security@lists.openwall.com mailing lists. (These lists are low-traffic). If the report is considered embargoed, we ask you to not disclose the vulnerability before it has been fixed and announced, unless you received a response from the Ceph security team that you can do so. This holds true until the public disclosure date that was agreed upon by the list. Thank you for improving the security of Ceph and its ecosystem. 
Your efforts and responsible disclosure are greatly appreciated and will be acknowledged."
}
] | {
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Extend CSI snapshot to support Longhorn BackingImage In Longhorn, we have BackingImage for VM usage. We would like to extend the CSI Snapshotter to support BackingImage management. Extend the CSI snapshotter to support: Create Longhorn BackingImage Delete Longhorn BackingImage Creating a new PVC from CSI snapshot that is associated with a Longhorn BackingImage Can support COW over each relative base image for delta data transfer for better space efficiency. (Will be in next improvement) User can backup a BackingImage based volume and restore it in another cluster without manually preparing BackingImage in a new cluster. With this improvement, users can use standard CSI VolumeSnapshot as the unified interface for BackingImage creation, deletion and restoration of a Volume. To use this feature, users need to deploy the CSI snapshot CRDs and related Controller The instructions are already on our document: https://longhorn.io/docs/1.4.1/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/ Create a VolumeSnapshotClass with type `bi` which refers to BackingImage ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 metadata: name: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete parameters: type: bi export-type: qcow2 # default to raw if it is not provided ``` Users can create a BackingImage of a Volume by creation of VolumeSnapshot. Example below for a Volume named `test-vol` ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: persistentVolumeClaimName: test-vol ``` Longhorn will create a BackingImage exported from this Volume. Users can create a volume based on a prior created VolumeSnapshot. Example below for a Volume named `test-vol-restore` ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-vol-restore spec: storageClassName: longhorn dataSource: name: test-snapshot-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Longhorn will create a Volume based on the BackingImage associated with the VolumeSnapshot. Users can request the creation of a Volume based on a prior BackingImage which was not created via the CSI VolumeSnapshot. With the BackingImage already existing, users need to create the VolumeSnapshotContent with an associated VolumeSnapshot. The `snapshotHandle` of the VolumeSnapshotContent needs to point to an existing BackingImage. Example below for a Volume named `test-restore-existing-backing` and an existing BackingImage `test-bi` For pre-provisioning, users need to provide following query parameters: `backingImageDataSourceType`: `sourceType` of existing BackingImage, e.g. `export-from-volume`, `download` `backingImage`: Name of the BackingImage you should also provide the `sourceParameters` of existing BackingImage in the `snapshotHandle` for validation. 
`export-from-volume`: you should provide `volume-name` and `export-type` `download`: you should provide `url` (`checksum` is optional) ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-existing-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: bi://backing?backingImageDataSourceType=export-from-volume&backingImage=test-bi&volume-name=vol-export-src&export-type=qcow2 volumeSnapshotRef: name: test-snapshot-existing-backing namespace: default ``` ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-existing-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: volumeSnapshotContentName: test-existing-backing ``` ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-existing-backing spec: storageClassName: longhorn dataSource: name: test-snapshot-existing-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Longhorn will create a Volume based on the BackingImage associated with the VolumeSnapshot and the VolumeSnapshotContent. Users can request the creation of a Volume based on a BackingImage which was not created yet with the following 2 kinds of data sources. `download`: Download a file from a URL as a BackingImage. `export-from-volume`: Export an existing in-cluster volume as a backing"
},
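The pre-provisioning handle above is just a `bi://` URI whose query string carries the BackingImage parameters. The sketch below assembles one with the Go standard library; the `buildSnapshotHandle` helper name is invented here, and the parameter set is taken from the examples in this proposal, so treat it as illustrative rather than Longhorn's actual code.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildSnapshotHandle assembles a `bi://backing?...` handle for a
// pre-provisioned BackingImage, as used in the VolumeSnapshotContent examples.
func buildSnapshotHandle(params map[string]string) string {
	q := url.Values{}
	for k, v := range params {
		q.Set(k, v)
	}
	return "bi://backing?" + q.Encode()
}

func main() {
	handle := buildSnapshotHandle(map[string]string{
		"backingImageDataSourceType": "export-from-volume",
		"backingImage":               "test-bi",
		"volume-name":                "vol-export-src",
		"export-type":                "qcow2",
	})
	fmt.Println(handle)
}
```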
{
"data": "Users need to create the VolumeSnapshotContent with an associated VolumeSnapshot. The `snapshotHandle` of the VolumeSnapshotContent needs to provide the parameters for the data source. Example below for a volume named `test-on-demand-backing` and an non-existing BackingImage `test-bi` with two different data sources. `download`: Users need to provide following parameters `backingImageDataSourceType`: `download` for on-demand download. `backingImage`: Name of the BackingImage `url`: The file from a URL as a BackingImage. `backingImageChecksum`: Optional. Used for checking the checksum of the file. example yaml: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: bi://backing?backingImageDataSourceType=download&backingImage=test-bi&url=https%3A%2F%2Flonghorn-backing-image.s3-us-west-1.amazonaws.com%2Fparrot.qcow2&backingImageChecksum=bd79ab9e6d45abf4f3f0adf552a868074dd235c4698ce7258d521160e0ad79ffe555b94e7d4007add6e1a25f4526885eb25c53ce38f7d344dd4925b9f2cb5d3b volumeSnapshotRef: name: test-snapshot-on-demand-backing namespace: default ``` `export-from-volume`: Users need to provide following parameters `backingImageDataSourceType`: `export-form-volume` for on-demand export. `backingImage`: Name of the BackingImage `volume-name`: Volume to be exported for the BackingImage `export-type`: Currently Longhorn supports `raw` or `qcow2` example yaml: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: bi://backing?backingImageDataSourceType=export-from-volume&backingImage=test-bi&volume-name=vol-export-src&export-type=qcow2 volumeSnapshotRef: name: test-snapshot-on-demand-backing namespace: default ``` Users then can create corresponding VolumeSnapshot and PVC ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: volumeSnapshotContentName: test-on-demand-backing ``` ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-on-demand-backing spec: storageClassName: longhorn dataSource: name: test-snapshot-on-demand-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` No changes necessary We add a new type `bi` to the parameter `type` in the VolumeSnapshotClass. It means that the CSI VolumeSnapshot created with this VolumeSnapshotClass is associated with a Longhorn BackingImage. When the users create VolumeSnapshot and the volumeSnapshotClass `type` is `bi` ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: persistentVolumeClaimName: test-vol ``` We do: Get the name of the Volume The name of the BackingImage will be same as the VolumeSnapshot `test-snapshot-backing`. Check if a BackingImage with the same name as the requested VolumeSnapshot already exists. Return success without creating a new BackingImage. Create a BackingImage. 
Get `export-type` from VolumeSnapshotClass parameter `export-type`, default to `raw.` Encode the `snapshotId` as `bi://backing?backingImageDataSourceType=export-from-volume&backingImage=test-snapshot-backing&volume-name=${VolumeName}&export-type=raw` This `snaphotId` will be used in the later CSI CreateVolume and DeleteSnapshot call. If VolumeSource type is `VolumeContentSource_Snapshot`, decode the `snapshotId` to get the parameters. `bi://backing?backingImageDataSourceType=${TYPE}&backingImage=${BACKINGIMAGENAME}&backingImageChecksum=${backingImageChecksum}&${OTHERPARAMETES}` If BackingImage with the given name already exists, create the volume. If BackingImage with the given name does not exists, we prepare it first. There are 2 kinds of types which are `export-from-volume` and `download`. For `download`, it means we have to prepare the BackingImage before creating the Volume. We first decode other parameters from `snapshotId` and create the BackingImage. For `export-from-volume`, it means we have to prepare the BackingImage before creating the Volume. We first decode other parameters from `snapshotId` and create the BackingImage. NOTE: we already have related code for preparing the BackingImage with type `download` or `export-from-volume` before creating a Volume, Decode the `snapshotId` to get the name of the BackingImage. Then we delete the BackingImage directly. Integration test plan. Deploy the csi snapshot CRDs, Controller as instructed at https://longhorn.io/docs/1.4.1/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/ Create a VolumeSnapshotClass with type `bi` ```yaml kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 metadata: name: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete parameters: type: bi ``` Success Create a Volume `test-vol` of 5GB. Create PV/PVC for the"
},
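Going the other way, the CreateVolume and DeleteSnapshot paths described above only need to parse the `snapshotId` back into its parameters. A minimal decoding sketch with `net/url` follows; error handling is reduced to the essentials and the `decodeSnapshotID` function name is invented for this example.

```go
package main

import (
	"fmt"
	"net/url"
)

// decodeSnapshotID splits a `bi://backing?...` snapshotId into the BackingImage
// name and the remaining data-source parameters.
func decodeSnapshotID(snapshotID string) (backingImage string, params map[string]string, err error) {
	u, err := url.Parse(snapshotID)
	if err != nil || u.Scheme != "bi" {
		return "", nil, fmt.Errorf("not a BackingImage snapshotId: %q", snapshotID)
	}
	q := u.Query()
	params = map[string]string{}
	for k := range q {
		params[k] = q.Get(k)
	}
	return q.Get("backingImage"), params, nil
}

func main() {
	id := "bi://backing?backingImageDataSourceType=export-from-volume&backingImage=test-snapshot-backing&volume-name=test-vol&export-type=raw"
	name, params, err := decodeSnapshotID(id)
	if err != nil {
		panic(err)
	}
	fmt.Println("backing image:", name)
	fmt.Println("source type:  ", params["backingImageDataSourceType"])
}
```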
{
"data": "Create a workload using the Volume. Write some data to the Volume. Create a VolumeSnapshot with following yaml: ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: persistentVolumeClaimName: test-vol ``` Verify that BacingImage is created. Verify the properties of BackingImage `sourceType` is `export-from-volume` `volume-name` is `test-vol` `export-type` is `raw` Delete the VolumeSnapshot `test-snapshot-backing` Verify the BacingImage is deleted Create a Volume `test-vol` of 5GB. Create PV/PVC for the Volume. Create a workload using the Volume. Write some data to the Volume. Create a VolumeSnapshot with following yaml: ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: persistentVolumeClaimName: test-vol ``` Verify that BacingImage is created. Create a new PVC with following yaml: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-pvc spec: storageClassName: longhorn dataSource: name: test-snapshot-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Attach the PVC `test-restore-pvc` to a workload and verify the data Delete the PVC Create a BackingImage `test-bi` using longhorn test raw image `https://longhorn-backing-image.s3-us-west-1.amazonaws.com/parrot.qcow2` Create a VolumeSnapshotContent with `snapshotHandle` pointing to BackingImage `test-bi` and provide the parameters. ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-existing-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: bi://backing?backingImageDataSourceType=download&backingImage=test-bi&url=https%3A%2F%2Flonghorn-backing-image.s3-us-west-1.amazonaws.com%2Fparrot.qcow2&backingImageChecksum=bd79ab9e6d45abf4f3f0adf552a868074dd235c4698ce7258d521160e0ad79ffe555b94e7d4007add6e1a25f4526885eb25c53ce38f7d344dd4925b9f2cb5d3b volumeSnapshotRef: name: test-snapshot-existing-backing namespace: default ``` Create a VolumeSnapshot associated with the VolumeSnapshotContent ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-existing-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: volumeSnapshotContentName: test-existing-backing ``` Create a PVC with the following yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-existing-backing spec: storageClassName: longhorn dataSource: name: test-snapshot-existing-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Attach the PVC `test-restore-existing-backing` to a workload and verify the data Type `download` Create a VolumeSnapshotContent with `snapshotHandle` providing the required parameters and BackingImage name `test-bi` ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: 
bi://backing?backingImageDataSourceType=download&backingImage=test-bi&url=https%3A%2F%2Flonghorn-backing-image.s3-us-west-1.amazonaws.com%2Fparrot.qcow2&backingImageChecksum=bd79ab9e6d45abf4f3f0adf552a868074dd235c4698ce7258d521160e0ad79ffe555b94e7d4007add6e1a25f4526885eb25c53ce38f7d344dd4925b9f2cb5d3b volumeSnapshotRef: name: test-snapshot-on-demand-backing namespace: default ``` Create a VolumeSnapshot associated with the VolumeSnapshotContent ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: volumeSnapshotContentName: test-on-demand-backing ``` Create a PVC with the following yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-on-demand-backing spec: storageClassName: longhorn dataSource: name: test-snapshot-on-demand-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Verify BackingImage `test-bi` is created Attach the PVC `test-restore-on-demand-backing` to a workload and verify the data Type `export-from-volume` Success Create a Volme `test-vol` and write some data to it. Create a VolumeSnapshotContent with `snapshotHandle` providing the required parameters and BackingImage name `test-bi` ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: test-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc driver: driver.longhorn.io deletionPolicy: Delete source: snapshotHandle: bi://backing?backingImageDataSourceType=export-from-volume&backingImage=test-bi&volume-name=test-vol&export-type=qcow2 volumeSnapshotRef: name: test-snapshot-on-demand-backing namespace: default ``` Create a VolumeSnapshot associated with the VolumeSnapshotContent ```yaml apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: name: test-snapshot-on-demand-backing spec: volumeSnapshotClassName: longhorn-snapshot-vsc source: volumeSnapshotContentName: test-on-demand-backing ``` Create a PVC with the following yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test-restore-on-demand-backing spec: storageClassName: longhorn dataSource: name: test-snapshot-on-demand-backing kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: ReadWriteOnce resources: requests: storage: 5Gi ``` Verify BackingImage `test-bi` is created Attach the PVC `test-restore-on-demand-backing` to a workload and verify the data No upgrade strategy needed We need to update the docs and examples to reflect the new type of parameter `type` in the VolumeSnapshotClass."
}
] | {
"category": "Runtime",
"file_name": "20230417-extend-csi-snapshot-to-support-backingimage.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "sidebar_position: 3 sidebar_label: \"PVC Autoresizing\" The component \"hwameistor-pvc-autoresizer\" provides the ability to automatically resize Persistent Volume Claims (PVCs). The resizing behavior is controlled by the `ResizePolicy` custom resource definition (CRD). An example of CR is as below: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: ResizePolicy metadata: name: resizepolicy1 spec: warningThreshold: 60 resizeThreshold: 80 nodePoolUsageLimit: 90 ``` The three fields `warningThreshold`, `resizeThreshold`, and `nodePoolUsageLimit` are all of type integer and represent percentages. `warningThreshold` currently does not have any associated alert actions. It serves as a target ratio, indicating that the usage rate of the volume will be below this percentage after resizing is completed. `resizeThreshold` indicates a usage rate at which the resizing action will be triggered when the volume's usage rate reaches this percentage. `nodePoolUsageLimit` represents the upper limit of storage pool usage on a node. If the usage rate of a pool reaches this percentage, volumes assigned to that pool will not automatically resize. This is an examle of CR with label selectors. ```yaml apiVersion: hwameistor.io/v1alpha1 kind: ResizePolicy metadata: name: example-policy spec: warningThreshold: 60 resizeThreshold: 80 nodePoolUsageLimit: 90 storageClassSelector: matchLabels: pvc-resize: auto namespaceSelector: matchLabels: pvc-resize: auto pvcSelector: matchLabels: pvc-resize: auto ``` The `ResizePolicy` has three label selectors: `pvcSelector` indicates that PVCs selected by this selector will automatically resize according to the policy that selected them. `namespaceSelector` indicates that PVCs under namespaces selected by this selector will automatically resize according to this policy. `storageClassSelector` indicates that PVCs created from storage classes selected by this selector will automatically resize according to this policy. These three selectors have an \"AND\" relationship. If you specify multiple selectors in a `ResizePolicy`, the PVCs must match all of the selectors in order to be associated with that policy. If no selectors are specified in the `ResizePolicy`, it becomes a cluster-wide `ResizePolicy`, acting as the default policy for all PVCs in the entire cluster."
}
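The three percentages above define a simple control loop: trigger when usage crosses `resizeThreshold`, grow the volume until usage would sit below `warningThreshold`, and skip the whole thing when the node's pool is already past `nodePoolUsageLimit`. The sketch below captures that arithmetic; it is an illustration of the documented semantics, not the autoresizer's actual code, and the `desiredSize` helper is invented for the example.

```go
package main

import "fmt"

type ResizePolicy struct {
	WarningThreshold   int // target usage %, post-resize
	ResizeThreshold    int // usage % that triggers a resize
	NodePoolUsageLimit int // pool usage % above which resizing is skipped
}

// desiredSize returns the new volume size in bytes (0 means "leave it alone")
// for a volume with usedBytes/capacityBytes on a pool at poolUsagePercent.
func desiredSize(p ResizePolicy, usedBytes, capacityBytes int64, poolUsagePercent int) int64 {
	if poolUsagePercent >= p.NodePoolUsageLimit {
		return 0 // pool too full: do not auto-resize
	}
	usage := int(usedBytes * 100 / capacityBytes)
	if usage < p.ResizeThreshold {
		return 0 // below the trigger threshold
	}
	// Grow so that usedBytes sits below WarningThreshold percent of the new size.
	return usedBytes*100/int64(p.WarningThreshold) + 1
}

func main() {
	p := ResizePolicy{WarningThreshold: 60, ResizeThreshold: 80, NodePoolUsageLimit: 90}
	const gib = int64(1) << 30
	// 8.5 GiB used of a 10 GiB volume, pool at 50% usage.
	fmt.Println("new size (GiB):", desiredSize(p, 85*gib/10, 10*gib, 50)/gib)
}
```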
] | {
"category": "Runtime",
"file_name": "pvc_autoresizing.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "This package is a fork of the go-yaml library and is intended solely for consumption by kubernetes projects. In this fork, we plan to support only critical changes required for kubernetes, such as small bug fixes and regressions. Larger, general-purpose feature requests should be made in the upstream go-yaml library, and we will reject such changes in this fork unless we are pulling them from upstream. This fork is based on v2.4.0: https://github.com/go-yaml/yaml/releases/tag/v2.4.0 Introduction The yaml package enables Go programs to comfortably encode and decode YAML values. It was developed within as part of the project, and is based on a pure Go port of the well-known C library to parse and generate YAML data quickly and reliably. Compatibility The yaml package supports most of YAML 1.1 and 1.2, including support for anchors, tags, map merging, etc. Multi-document unmarshalling is not yet implemented, and base-60 floats from YAML 1.1 are purposefully not supported since they're a poor design and are gone in YAML 1.2. Installation and usage The import path for the package is gopkg.in/yaml.v2. To install it, run: go get gopkg.in/yaml.v2 API documentation -- If opened in a browser, the import path itself leads to the API documentation: API stability The package API for yaml v2 will remain stable as described in . License The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details. Example ```Go package main import ( \"fmt\" \"log\" \"gopkg.in/yaml.v2\" ) var data = ` a: Easy! b: c: 2 d: [3, 4] ` // Note: struct fields must be public in order for unmarshal to // correctly populate the data. type T struct { A string B struct { RenamedC int `yaml:\"c\"` D []int `yaml:\",flow\"` } } func main() { t := T{} err := yaml.Unmarshal([]byte(data), &t) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" t:\\n%v\\n\\n\", t) d, err := yaml.Marshal(&t) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" t dump:\\n%s\\n\\n\", string(d)) m := make(map[interface{}]interface{}) err = yaml.Unmarshal([]byte(data), &m) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" m:\\n%v\\n\\n\", m) d, err = yaml.Marshal(&m) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" m dump:\\n%s\\n\\n\", string(d)) } ``` This example will generate the following output: ``` t: {Easy! {2 [3 4]}} t dump: a: Easy! b: c: 2 d: [3, 4] m: map[a:Easy! b:map[c:2 d:[3 4]]] m dump: a: Easy! b: c: 2 d: 3 4 ```"
}
] | {
"category": "Runtime",
"file_name": "README.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "This document provides the description of the CRI plugin configuration. The CRI plugin config is part of the containerd config (default path: `/etc/containerd/config.toml`). See for more information about containerd config. Note that the `[plugins.\"io.containerd.grpc.v1.cri\"]` section is specific to CRI, and not recognized by other containerd clients such as `ctr`, `nerdctl`, and Docker/Moby. While containerd and Kubernetes use the legacy `cgroupfs` driver for managing cgroups by default, it is recommended to use the `systemd` driver on systemd-based hosts for compliance of of cgroups. To configure containerd to use the `systemd` driver, set the following option in `/etc/containerd/config.toml`: ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] SystemdCgroup = true ``` In addition to containerd, you have to configure the `KubeletConfiguration` to use the \"systemd\" cgroup driver. The `KubeletConfiguration` is typically located at `/var/lib/kubelet/config.yaml`: ```yaml kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 cgroupDriver: \"systemd\" ``` kubeadm users should also see . Note: Kubernetes v1.28 supports automatic detection of the cgroup driver as an alpha feature. With the `KubeletCgroupDriverFromCRI` kubelet feature gate enabled, the kubelet automatically detects the cgroup driver from the CRI runtime and the `KubeletConfiguration` configuration step above is not needed. When determining the cgroup driver, containerd uses the `SystemdCgroup` setting from runc-based runtime classes, starting from the default runtime class. If no runc-based runtime classes have been configured containerd relies on auto-detection based on determining if systemd is running. Note that all runc-based runtime classes should be configured to have the same `SystemdCgroup` setting in order to avoid unexpected behavior. The automatic cgroup driver configuration for kubelet feature is supported in containerd v2.0 and later. The default snapshotter is set to `overlayfs` (akin to Docker's `overlay2` storage driver): ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd] snapshotter = \"overlayfs\" ``` See for other supported snapshotters. The following example registers custom runtimes into containerd: ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd] defaultruntimename = \"crun\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.crun] runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.crun.options] BinaryName = \"/usr/local/bin/crun\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.gvisor] runtime_type = \"io.containerd.runsc.v1\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.kata] runtime_type = \"io.containerd.kata.v2\" ``` In addition, you have to install the following `RuntimeClass` resources into the cluster with the `cluster-admin` role: ```yaml apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: crun handler: crun apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: gvisor handler: gvisor apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: kata handler: kata ``` To apply a runtime class to a pod, set `.spec.runtimeClassName`: ```yaml apiVersion: v1 kind: Pod spec: runtimeClassName: crun ``` See also . 
The explanation and default value of each configuration item are as follows: <details> <p> ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\"] disabletcpservice = true streamserveraddress = \"127.0.0.1\" streamserverport = \"0\" streamidletimeout = \"4h\" enable_selinux = false selinuxcategoryrange = 1024 sandbox_image = \"registry.k8s.io/pause:3.9\" statscollectperiod = 10 enabletlsstreaming = false toleratemissinghugetlb_controller = true ignoreimagedefined_volumes = false netnsmountsunderstatedir = false [plugins.\"io.containerd.grpc.v1.cri\".x509keypair_streaming] tlscertfile = \"\" tlskeyfile = \"\" maxcontainerloglinesize = 16384 disable_cgroup = false disable_apparmor = false restrictoomscore_adj = false maxconcurrentdownloads = 3 disableprocmount = false unsetseccompprofile = \"\" enableunprivilegedports = false enableunprivilegedicmp = false enable_cdi = true cdispecdirs = [\"/etc/cdi\", \"/var/run/cdi\"] drainexecsynciotimeout = \"0s\""
},
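Containerd v2.0's automatic driver detection described above falls back to checking whether systemd is running when no runc-based runtime class states a preference. A common way to make that check in Go is to look for systemd's runtime directory, as sketched below; the `/run/systemd/system` probe is a widely used heuristic and an assumption of this example rather than containerd's exact code.

```go
package main

import (
	"fmt"
	"os"
)

// systemdRunning reports whether the host appears to be running systemd as
// its init system, using the conventional /run/systemd/system probe.
func systemdRunning() bool {
	info, err := os.Stat("/run/systemd/system")
	return err == nil && info.IsDir()
}

// cgroupDriver mirrors the fallback described above: prefer an explicit
// SystemdCgroup setting from the runc options, otherwise auto-detect.
func cgroupDriver(systemdCgroupOpt *bool) string {
	if systemdCgroupOpt != nil {
		if *systemdCgroupOpt {
			return "systemd"
		}
		return "cgroupfs"
	}
	if systemdRunning() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	explicit := true
	fmt.Println("with explicit option :", cgroupDriver(&explicit))
	fmt.Println("with auto-detection  :", cgroupDriver(nil))
}
```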
{
"data": "snapshotter = \"overlayfs\" no_pivot = false disablesnapshotannotations = true discardunpackedlayers = false defaultruntimename = \"runc\" ignoreblockionotenablederrors = false ignorerdtnotenablederrors = false [plugins.\"io.containerd.grpc.v1.cri\".containerd.default_runtime] [plugins.\"io.containerd.grpc.v1.cri\".containerd.untrustedworkloadruntime] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" pod_annotations = [] container_annotations = [] privilegedwithouthost_devices = false privilegedwithouthostdevicesalldevicesallowed = false baseruntimespec = \"\" cniconfdir = \"/etc/cni/net.d\" cnimaxconf_num = 1 snapshotter = \"\" sandboxer = \"\" io_type = \"\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] NoPivotRoot = false NoNewKeyring = false ShimCgroup = \"\" IoUid = 0 IoGid = 0 BinaryName = \"\" Root = \"\" SystemdCgroup = false CriuImagePath = \"\" CriuWorkPath = \"\" [plugins.\"io.containerd.grpc.v1.cri\".cni] bin_dir = \"/opt/cni/bin\" conf_dir = \"/etc/cni/net.d\" maxconfnum = 1 conf_template = \"\" ip_pref = \"ipv4\" [plugins.\"io.containerd.grpc.v1.cri\".image_decryption] key_model = \"node\" [plugins.\"io.containerd.grpc.v1.cri\".registry] config_path = \"\" ``` </p> </details> Here is a simple example for a default registry hosts configuration. Set `config_path = \"/etc/containerd/certs.d\"` in your config.toml for containerd. Make a directory tree at the config path that includes `docker.io` as a directory representing the host namespace to be configured. Then add a `hosts.toml` file in the `docker.io` to configure the host namespace. It should look like this: ``` $ tree /etc/containerd/certs.d /etc/containerd/certs.d docker.io hosts.toml $ cat /etc/containerd/certs.d/docker.io/hosts.toml server = \"https://docker.io\" [host.\"https://registry-1.docker.io\"] capabilities = [\"pull\", \"resolve\"] ``` To specify a custom certificate: ``` $ cat /etc/containerd/certs.d/192.168.12.34:5000/hosts.toml server = \"https://192.168.12.34:5000\" [host.\"https://192.168.12.34:5000\"] ca = \"/path/to/ca.crt\" ``` See for the further information. The recommended way to run untrusted workload is to use api introduced in Kubernetes 1.12 to select RuntimeHandlers configured to run untrusted workload in `plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes`. However, if you are using the legacy `io.kubernetes.cri.untrusted-workload`pod annotation to request a pod be run using a runtime for untrusted workloads, the RuntimeHandler `plugins.\"io.containerd.grpc.v1.cri\"cri.containerd.runtimes.untrusted` must be defined first. When the annotation `io.kubernetes.cri.untrusted-workload` is set to `true` the `untrusted` runtime will be used. For example, see . Ideally the cni config should be placed by system admin or cni daemon like calico, weaveworks etc. However, this is useful for the cases when there is no cni daemonset to place cni config. The cni config template uses the [golang template](https://golang.org/pkg/text/template/) format. Currently supported values are: `.PodCIDR` is a string of the first CIDR assigned to the node. `.PodCIDRRanges` is a string array of all CIDRs assigned to the node. It is usually used for support. `.Routes` is a string array of all routes needed. It is usually used for dualstack support or single stack but IPv4 or IPv6 is decided at runtime. The can be used to render the cni config. 
For example, you can use the following template to add CIDRs and routes for dualstack in the CNI config: ``` \"ipam\": { \"type\": \"host-local\", \"ranges\": [{{range $i, $range := .PodCIDRRanges}}{{if $i}}, {{end}}[{\"subnet\": \"{{$range}}\"}]{{end}}], \"routes\": [{{range $i, $route := .Routes}}{{if $i}}, {{end}}{\"dst\": \"{{$route}}\"}{{end}}] } ``` The config options of the CRI plugin follow the [Kubernetes deprecation policy of \"admin-facing CLI components\"](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli). In summary, when a config option is announced to be deprecated: It is kept functional for 6 months or 1 release (whichever is longer); A warning is emitted when it is used."
}
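Because the `conf_template` above is an ordinary Go `text/template`, it is easy to check what it expands to outside the cluster. The snippet below renders the `ipam` fragment from this section with made-up dual-stack values; the struct field names match the documented template variables, while the sample CIDRs and routes are placeholders.

```go
package main

import (
	"os"
	"text/template"
)

// tmplData carries the values documented for the CNI config template.
type tmplData struct {
	PodCIDR       string
	PodCIDRRanges []string
	Routes        []string
}

const ipamTmpl = `"ipam": {
  "type": "host-local",
  "ranges": [{{range $i, $range := .PodCIDRRanges}}{{if $i}}, {{end}}[{"subnet": "{{$range}}"}]{{end}}],
  "routes": [{{range $i, $route := .Routes}}{{if $i}}, {{end}}{"dst": "{{$route}}"}{{end}}]
}`

func main() {
	data := tmplData{
		PodCIDR:       "10.0.1.0/24",
		PodCIDRRanges: []string{"10.0.1.0/24", "fd00:10:0:1::/64"}, // example dual-stack CIDRs
		Routes:        []string{"0.0.0.0/0", "::/0"},
	}
	t := template.Must(template.New("ipam").Parse(ipamTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```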
] | {
"category": "Runtime",
"file_name": "config.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "This file documents the list of steps to perform to create a new Antrea release. We use `<TAG>` as a placeholder for the release tag (e.g. `v1.4.0`). For a minor release On the code freeze date (typically one week before the actual scheduled release date), create a release branch for the new minor release (e.g `release-1.4`). after that time, only bug fixes should be merged into the release branch, by the fix after it has been merged into main. The maintainer in charge of that specific minor release can either do the cherry-picking directly or ask the person who contributed the fix to do it. Open a PR (labelled with `kind/release`) against the appropriate release branch with the following commits: a commit to update the . For a minor release, all significant changes and all bug fixes (labelled with `action/release-note` since the first version of the previous minor release should be mentioned, even bug fixes which have already been included in some patch release. For a patch release, you will mention all the bug fixes since the previous release with the same minor version. The commit message must be exactly `\"Update CHANGELOG for <TAG> release\"`, as a bot will look for this commit and cherry-pick it to update the main branch (starting with Antrea v1.0). The script may be used to easily generate links to PRs and the Github profiles of PR authors. Use `prepare-changelog.sh -h` to get the usage. a commit to update as needed, using the following commit message: `\"Set VERSION to <TAG>\"`. Before committing, ensure that you run `make -C build/charts/ helm-docs` and include the changes. Run all the tests for the PR, investigating test failures and re-triggering the tests as needed. Github worfklows are run automatically whenever the head branch is updated. Jenkins tests need to be . Cloud tests need to be triggered manually through the . Admin access is required. For each job (AKS, EKS, GKE), click on `Build with Parameters`, and enter the name of your fork as `ANTREA_REPO` and the name of your branch as `ANTREAGITREVISION`. Test starting times need to be staggered: if multiple jobs run at the same time, the Jenkins worker may run out-of-memory. Request a review from the other maintainers, and anyone else who may need to review the release notes. In case of feedback, you may want to consider waiting for all the tests to succeed before updating your"
},
{
"data": "Once all the tests have run successfully once, address review comments, get approval for your PR, and merge. this is the only case for which the \"Rebase and merge\" option should be used instead of the \"Squash and merge\" option. This is important, in order to ensure that changes to the CHANGELOG are preserved as an individual commit. You will need to enable the \"Allow rebase merging\" setting in the repository settings temporarily, and remember to disable it again right after you merge. Make the release on Github with the release branch as the target and copy the relevant section of the CHANGELOG as the release description (make sure all the markdown links work). The script can be used to create the release draft. Use `draft-release.sh -h` to get the usage. You typically should not be checking the `Set as a pre-release` box. This would only be necessary for a release candidate (e.g., `<TAG>` is `1.4.0-rc.1`), which we do not have at the moment. There is no need to upload any assets as this will be done automatically by a Github workflow, after you create the release. the `Set as the latest release` box is checked by default. If you are creating a patch release for an older minor version of Antrea, you should uncheck the box. After a while (time for the relevant Github workflows to complete), check that: the Docker image has been pushed to with the correct tag. This is handled by a Github worfklow defined in a separate Github repository and it can take some time for this workflow to complete. See this for more information. the assets have been uploaded to the release (`antctl` binaries and yaml manifests). This is handled by the `Upload assets to release` workflow. In particular, the following link should work: `https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml`. After the appropriate Github workflow completes, a bot will automatically submit a PR to update the CHANGELOG in the main branch. You should verify the contents of the PR and merge it (no need to run the tests, use admin privileges). For a minor release Finally, open a PR against the main branch with a single commit, to update to the next minor version (with `-dev` suffix). For example, if the release was for `v1.4.0`, the VERSION file should be updated to `v1.5.0-dev`. Before committing, ensure that you run `make -C build/charts/ helm-docs` and include the changes. Note that after a patch release, the VERSION file in the main branch is never updated, so no additional commit is needed."
}
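The VERSION bump at the end of a minor release follows a fixed rule (v1.4.0 released means main moves to v1.5.0-dev). A tiny sketch of that rule is shown below, using only the Go standard library; it is a convenience illustration, not a script that exists in the Antrea repository.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextDevVersion maps a minor release tag such as "v1.4.0" to the VERSION
// value main should carry afterwards, e.g. "v1.5.0-dev".
func nextDevVersion(tag string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(tag, "v"), ".")
	if len(parts) != 3 {
		return "", fmt.Errorf("unexpected tag %q", tag)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("v%s.%d.0-dev", parts[0], minor+1), nil
}

func main() {
	next, err := nextDevVersion("v1.4.0")
	if err != nil {
		panic(err)
	}
	fmt.Println(next) // v1.5.0-dev
}
```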
] | {
"category": "Runtime",
"file_name": "release.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "In this example, we'll present how to use and stored in a slot of a `Table` instance to implement indirect function invocation. The code in the following example is verified on wasmedge-sdk v0.5.0 wasmedge-sys v0.10.0 wasmedge-types v0.3.0 Let's start off by getting all imports right away so you can follow along ```rust // If the version of rust used is less than v1.63, please uncomment the follow attribute. // #![feature(explicitgenericargswithimpl_trait)] use wasmedge_sdk::{ config::{CommonConfigOptions, ConfigBuilder}, error::HostFuncError, host_function, params, types::Val, Caller, Executor, Func, ImportObjectBuilder, RefType, Store, Table, TableType, ValType, WasmVal, WasmValue, }; ``` In this example we defines a native function `real_add` that takes two numbers and returns their sum. This function will be registered as a host function into WasmEdge runtime environment ```rust fn realadd(caller: &Caller, input: Vec<WasmValue>) -> Result<Vec<WasmValue>, HostFuncError> { println!(\"Rust: Entering Rust function real_add\"); if input.len() != 2 { return Err(HostFuncError::User(1)); } let a = if input[0].ty() == ValType::I32 { input[0].to_i32() } else { return Err(HostFuncError::User(2)); }; let b = if input[1].ty() == ValType::I32 { input[1].to_i32() } else { return Err(HostFuncError::User(3)); }; let c = a + b; println!(\"Rust: calculating in real_add c: {:?}\", c); println!(\"Rust: Leaving Rust function real_add\"); Ok(vec![WasmValue::from_i32(c)]) } ``` The first thing we need to do is to create a `Table` instance. After that, we register the table instance along with an import module into the WasmEdge runtime environment. Now let's see the code. ```rust // create an executor let config = ConfigBuilder::new(CommonConfigOptions::default()).build()?; let mut executor = Executor::new(Some(&config), None)?; // create a store let mut store = Store::new()?; // create a table instance let result = Table::new(TableType::new(RefType::FuncRef, 10, Some(20))); assert!(result.is_ok()); let table = result.unwrap(); // create an import object let import = ImportObjectBuilder::new() .with_table(\"my-table\", table)? .build(\"extern\")?; // register the import object into the store store.registerimportmodule(&mut executor, &import)?; ``` In the code snippet above, we create a `Table` instance with the initial size of `10` and the maximum size of 20. The element type of the `Table` instance is `reference to function`. In the previous steps, we defined a native function `realadd` and registered a `Table` instance named `my-table` into the runtime environment. Now we'll save a reference to `readadd` function to a slot of `my-table`. ```rust // get the imported module instance let instance = store .module_instance(\"extern\") .expect(\"Not found module instance named 'extern'\"); // get the exported table instance let mut table = instance .table(\"my-table\") .expect(\"Not found table instance named 'my-table'\"); // create a host function let hostfunc = Func::wrap::<(i32, i32), i32, !>(Box::new(realadd), None)?; // store the reference to host_func at the given index of the table instance table.set(3, Val::FuncRef(Some(hostfunc.asref())))?; ``` We save the reference to `host_func` into the third slot of `my-table`. Next, we can retrieve the function reference from the table instance by index and call the function via its reference. 
```rust // retrieve the function reference at the given index of the table instance let value = table.get(3)?; if let Val::FuncRef(Some(func_ref)) = value { // get the function type via func_ref let func_ty = func_ref.ty()?; // arguments assert_eq!(func_ty.args_len(), 2); let param_tys = func_ty.args().unwrap(); assert_eq!(param_tys, [ValType::I32, ValType::I32]); // returns assert_eq!(func_ty.returns_len(), 1); let return_tys = func_ty.returns().unwrap(); assert_eq!(return_tys, [ValType::I32]); // call the function via func_ref let returns = func_ref.call(&mut executor, params!(1, 2))?; assert_eq!(returns.len(), 1); assert_eq!(returns[0].to_i32(), 3); } ``` The complete code of this example can be found in ."
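If you want to reproduce the example locally, the crate versions mentioned at the top translate into a dependency section roughly like the sketch below (an illustration only; depending on your setup, `wasmedge-sdk` alone may be sufficient since it re-exports the types used here):
```toml
[dependencies]
wasmedge-sdk = \"0.5.0\"
wasmedge-sys = \"0.10.0\"
wasmedge-types = \"0.3.0\"
```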
}
] | {
"category": "Runtime",
"file_name": "table_and_funcref.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
} |
[
{
"data": "2021-10, initialized by Zhang, Qi A Parse Graph is a unidirectional graph. It is consist of a set of nodes and edges. A node represent a network protocol header, and an edge represent the linkage of two protocol headers which is adjacent in the packet. An example of a parse graph have 5 nodes and 6 edges. ](https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZ3JhcGggVERcbiAgICBBKChNQUMpKSAtLT4gQigoSVB2NCkpXG4gICAgQSgoTUFDKSkgLS0-IEMoKElQdjYpKVxuICAgIEIgLS0-IEQoKFRDUCkpXG4gICAgQyAtLT4gRCgoVENQKSlcbiAgICBCIC0tPiBFKChVRFApKVxuICAgIEMgLS0-IEUoKFVEUCkpXG4gICAgIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRhcmtcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0) A Node or an Edge is described by a json object. There is no json representation for a parse graph, software should load all json objects of nodes and edges then build the parse graph logic in memory. A json object of Node will include below properties: type* This should always be \"node\". name* This is the name of the protocol. layout* This is an array of fields in the protocol header which also imply the bit order. For example, json object of mac header as below: ``` { \"type\" : \"node\", \"name\" : \"mac\", \"layout\" : [ { \"name\" : \"src\", \"size\" : \"48\", \"format\" : \"mac\", }, { \"name\" : \"dst\", \"size\" : \"48\", \"format\" : \"mac\", }, { \"name\" : \"ethertype\", \"size\" : \"16\", } ] } ``` For each field, there are properties can be defined: name* The name of the field, typically it should be unique to all fields in the same node, except when it is \"reserved\". size* Size of the field, note, the unit is \"bit\" but not \"byte\". Sometime a field's size can be decided by another field's value, for example, a geneve header's \"options\" field's size is decided by \"optlen\" field's value, so we have below: ``` \"name\" : \"geneve\", \"layout\" : [ ...... { \"name\" : \"reserved\", \"size\" : \"8\" }, { \"name\" : \"options\", \"size\" : \"optlen<<5\" } ], ``` Since when \"optlen\" increases 1 which means 4 bytes (32 bits) increase of \"options\"'s size so the bit value should shift left 5. format* Defined the input string format of the value, all formats are described in the section Input Format which also described the default format if it is not explicitly defined. default* Defined the default value of the field when a protocol header instance is created by the node. If not defined, the default value is always 0. The default value can be overwritten when forging a packet with specific value of the field. For example, we defined the default ipv4 address as below: ``` \"name\" : \"ipv4\", \"layout\" : [ ...... { \"name\" : \"src\", \"size\" : \"32\", \"format\" : \"ipv4\", \"default\" : \"1.1.1.1\" }, { \"name\" : \"dst\", \"size\" : \"32\", \"format\" : \"ipv4\", \"default\" : \"2.2.2.2\" } ] ``` readonly* Define if a field is read only or not, typically it will be used together with \"default\". For example, the version of IPv4 header should be 4 and can't be overwritten. ``` \"name\" : \"ipv4\", \"layout\" : [ { \"name\" : \"version\", \"size\" : \"4\", \"default\" : \"4\", \"readonly\" : \"true\" }, ...... ], ``` A reserved field implies it is \"readonly\" and should always be 0. optional* A field could be optional depends on some flag as another field. For example, the GRE header has couple optional fields. 
``` \"name\" : \"gre\", \"layout\" : [ { \"name\" : \"c\", \"size\" : \"1\", }, { \"name\" : \"reserved\", \"size\" : \"1\", }, { \"name\" : \"k\", \"size\" : \"1\", }, { \"name\" : \"s\", \"size\" : \"1\", },"
},
{
"data": "{ \"name\" : \"checksum\", \"size\" : \"16\", \"optional\" : \"c=1\", }, { \"name\" : \"reserved\", \"size\" : \"16\", \"optional\" : \"c=1\", }, { \"name\" : \"key\", \"size\" : \"32\", \"optional\" : \"k=1\" }, { \"name\" : \"sequencenumber\", \"size\" : \"32\", \"optional\" : \"s=1\" } ] ``` The expresion of an optional field can use \"&\" or \"|\" combine multiple conditions, for example for gtpu header, we have below optional fields. ``` \"name\" : \"gtpu\", \"layout\" : [ ...... { \"name\" : \"e\", \"size\" : \"1\" }, { \"name\" : \"s\", \"size\" : \"1\" }, { \"name\" : \"pn\", \"size\" : \"1\" }, ...... { \"name\" : \"teid\", \"size\" : \"16\" }, { \"name\" : \"sequencenumber\", \"size\" : \"16\", \"optional\" : \"e=1|s=1|pn=1\", }, ...... ] ``` autoincrease* Some field's value cover the length of the payload or size of an optional field in the same header, so it should be auto increased during packet forging. For example the \"totallength\" of ipv4 header is a autoincrease feild. ``` \"name\" : \"ipv4\", \"layout\" : [ ...... { \"name\" : \"totallength\", \"size\" : \"16\", \"default\" : \"20\", \"autoincrease\" : \"true\", }, ...... ] ``` A field which is autoincrease also imply its readonly. increaselength* Typically this should only be enabled for an optional field to trigger another field's autoincrease. For example, the gtpc's \"messagelength\" field cover all the data start from field \"teid\", so its default size is 4 bytes which cover sequencenumber + 8 reserved bit, and should be increased if \"teid\" exist or any payload be appended. ``` \"name\" : \"gtpc\", \"layout\" : [ ...... { \"name\" : \"messagelength\", \"size\" : \"16\", \"default\" : \"4\", \"autoincrease\" : \"true\", }, { \"name\" : \"teid\", \"size\" : \"32\", \"optional\" : \"t=1\", \"increaselength\" : \"true\" }, { \"name\" : \"sequencenumber\", \"size\" : \"24\", }, { \"name\" : \"reserved\", \"size\" : \"8\", } ] ``` attributes* This defines an array of attributes, the attribute does not define the data belongs to current protocol header, but it impact the behaviour during applying actions of an edge when the protocol header is involved. For example, a geneve node has attribute \"udpport\" which define the udp tunnel port, so when it is appended after a udp header, the udp header's dst port is expected to be changed to this value. ``` \"name\" : \"geneve\", \"fields\" : [ ...... ], \"attributes\" : [ { \"name\" : \"udpport\", \"size\" : \"16\", \"default\" : \"6081\" } ] ``` An attribute can only have below properties which take same effect when they are in field. name size (must be fixed value) default format A json object of Edge will include below properties: type* This should always be \"edge\". start* This is the start node of the edge. end* This is the end node of the edge. actions* This is an array of actions the should be applied during packet forging. For example, when append a ipv4 headers after a mac header, the \"ethertype\" field of mac should be set to \"0x0800\": ``` { \"type\" : \"edge\", \"start\" : \"mac\", \"end\" : \"ipv4\", \"actions\" : [ { \"dst\" : \"start.ethertype\", \"src\" : \"0x0800\" } ] } ``` Each action should have two properties: dst* This describe the target field to set, it is formatted as <node>.<field> node must be \"start\" or \"end\". src* This describe the value to set, it could be a const value or same format as dst's. 
For example, when appending a vlan header after mac, we will have the below actions: ``` { \"type\" : \"edge\", \"start\" : \"mac\", \"end\" : \"vlan\", \"actions\" : [ { \"dst\" : \"start.ethertype\", \"src\" : \"end.tpid\" }, { \"dst\" : \"end.ethertype\", \"src\" : \"start.ethertype\" } ] } ``` To avoid duplication, multiple edges can be aggregated into one json object if their actions are the same."
},
{
"data": "So, multiple node name can be added to start or end with seperateor \",\". For example, all ipv6 and ipv6 extention header share the same actions when append a udp header ``` { \"type\" : \"edge\", \"start\" : \"ipv6,ipv6srh,ipv6crh16,ipv6crh32\", \"end\" : \"udp\", \"actions\" : [ { \"dst\" : \"start.nextheader\", \"src\" : \"17\" } ] } ``` Another examples is gre and nvgre share the same actions when be appanded after a ipv4 header: ``` { \"type\" : \"edge\", \"start\" : \"ipv4\", \"end\" : \"gre,nvgre\", \"actions\" : [ { \"dst\" : \"start.protocol\", \"src\" : \"47\" } ] } ``` A path defines a sequence of nodes which is the input parameter for a packet forging, a packet forging should fail if the path can't be recognised as a subgraph of the parser graph. A json object of a path should include below properties: type* This should always be \"path\". stack* This is an array of node configurations which also imply the protocol header sequence of a packet. Below is an example to forge an ipv4 / udp packet with default value. ``` { \"type\" : \"path\", \"stack\" : [ { \"header\" : \"mac\" }, { \"header\" : \"ipv4\" }, { \"header\" : \"udp\" }, ] } ``` A node configuration can have below properties: header* This is a protocol name (a node name). fields* This is an array of 3 member tuples: name* The name of the field or attribute that belongs to the node, note a readonly field should not be selected. value* The value to set the field or attribute. mask* This is optional, if it is not defined, corresponding bit of the mask should be set to 0, and it should be ignored for an attribute. actions* This is optional. When this json file is the input of flow adding commands, it can be used directly as the flow rule's action. An example to forge a ipv4 packet with src ip address 192.168.0.1 and dst ip address 192.168.0.2, also take ip address as mask. ``` { \"type\" : \"path\", \"stack\" : [ { \"header\" : \"mac\", }, { \"header\" : \"ipv4\", \"fields\" : [ { \"name\" : \"src\", \"value\" : \"192.168.0.1\", \"mask\" : \"255.255.255.255\" }, { \"name\" : \"dst\", \"value\" : \"192.168.0.2\", \"mask\" : \"255.255.255.255\" } ] } ], \"actions\" : \"redirect-to-queue 3\" } ``` Every field or attribute is associated with an Input Format, so the software can figure out how to parse default value in the node or a config value in the path. Currently we have 8 predefined format and don't support customised format. u8* accept number from 0 to 255 or hex from 0x0 to 0xff. u16* accept number from 0 to 65535 or hex from 0x0 to 0xffff. u32* accept number from 0 to 4294967295 or hex from 0x0 to 0xffffffff u64* accept number from 0 to 2^64 -1 or hex from 0x0 to 0xffffffffffffffff mac* accept xx:xx:xx:xx:xx:xx , x in hex from 0 to f ipv4* accept n.n.n.n , n from 0 to 255 ipv6* accept xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, x in hex from 0 to f bytearray* accept u8,u8,u8..... If format is not defined for a field or attribute, the default format will be selected base on size as below, and the MSB should be ignored by software if the value exceeds the limitation. | Size | Default Format | | - | -- | | 1 - 8 | u8 | | 9 - 16 | u16 | | 17 - 32 | u32 | | 33 - 64 | u64 | | > 64 | bytearray | | variable size | bytearray"
}
] | {
"category": "Runtime",
"file_name": "spec.md",
"project_name": "FD.io",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "sidebar_position: 5 sidebar_label: \"Volume\" On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. One problem is the loss of files when a container crashes. The kubelet restarts the container but with a clean state. A second problem occurs when sharing files between containers running together in a `Pod`. The Kubernetes volume abstraction solves both of these problems. Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts. At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. To use a volume, specify the volumes to provide for the Pod in `.spec.volumes` and declare where to mount those volumes into containers in `.spec.containers[*].volumeMounts`. See also the official documentation provided by Kubernetes:"
}
] | {
"category": "Runtime",
"file_name": "volume.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Replace volume spec `recurringJobs` with the label-driven model. Abstract volume recurring jobs to a new CRD named \"RecurringJob\". The names or groups of recurring jobs can be referenced in volume labels. Users can set a recurring job to the `Default` group and Longhorn will automatically apply when the volume has no job labels. Only one cron job will be created per recurring job. Users can also update the recurring job, and Longhorn reflects the changes to the associated cron job. When instruct Longhorn to delete a recurring job, this will also remove the associated cron job. StorageClass now should use `recurringJobSelector` instead to refer to the recurring job names. During the version upgrade, existing volume spec `recurringJobs` and storageClass `recurringJobs` will automatically translate to volume labels, and recurring job CRs will get created. https://github.com/longhorn/longhorn/issues/467 Phase 1: Each recurring job can be in single or multiple groups. Each group can have single or multiple recurring jobs. The jobs and groups can be reference with the volume label. The recurring job in `default` group will automatically apply to a volume that has no job labels. Can create multiple recurring jobs with UI or YAML. The recurring job should include settings; `name`, `groups`, `task`, `cron`, `retain`, `concurrency`, `labels`. Phase2: The StorageClass and upgrade migration are still dependent on volume spec, thus complete removal of volume spec should be done in phase 2. Does the snapshot/backup operation one by one. The operation order can be defined as sequential, consistent (for volume group snapshot), or throttled (with a concurrent number as a parameter) in the future. https://github.com/longhorn/longhorn/pull/2737#issuecomment-887985811 As a Longhorn user / System administrator. I want to directly update recurring jobs referenced in multiple volumes. So I do not need to update each volume with cron job definition. As a Longhorn user / System administrator. I want the ability to set one or multiple `backup` and `snapshot` recurring jobs as default. All volumes without any recurring job label should automatically apply with the default recurring jobs. So I can be assured that all volumes without any recurring job label will automatically apply with default. As a Longhorn user / System administrator I want Longhorn to automatically convert existing volume spec `recurringJobs` to volume labels, and create associate recurring job CRs. So I don't have to manually create recurring job CRs and patch the labels. Create a recurring job on UI `Recurring Job` page or via `kubectl`. In UI, Navigate to Volume, `Recurring Jobs Schedule`. User can choose from `Job` or job `Group` from the tab. On `Job` tab, User sees existing recurring jobs that volume had labeled. User able to select `Backup` or `Snapshot` for the `Type` from the drop-down list. User able to edit the `Schedule`, `Retain`, `Concurrency` and `Labels`. On the job `Group` tab. User sees all existing recurring job groups from the `Name` drop-down list. User selects the job from the drop-down list. User sees all recurring `jobs` under the `group`. Click `Save` updates to the volume label. Update the recurring job CRs also reflect on the cron job and UI `Recurring Jobs Schedule`. Before enhancement Recurring jobs can only be added and updated per volume spec. Create cron job definitions for each volume causing duplicated setup effort. The recurring job can only be updated per volume. 
After enhancement Recurring jobs can be added and updated as the volume"
},
{
"data": "Can select a recurring job from the UI drop-down menu and will automatically show the information from the recurring job CRs. Update the recurring job definition will automatically apply to all volumes with the job label. Add `default` to one or multiple recurring jobs `Groups` in UI or `kubectl`. Longhorn automatically applies the `default` group recurring jobs to all volumes without job labels. Before enhancement Default recurring jobs are set via StorageClass at PVC creation only. No default recurring job can be set up for UI-created volumes. Updating StorageClass does not reflect on the existing volumes. After enhancement Have the option to set default recurring jobs via `StorageClass` or `RecurringJob`. Longhorn recurring job controller automatically applies default recurring jobs to all volumes without the job labels. Longhorn adds the default recurring jobs when all job labels are removed from the volume. When the `RecurringJobSelector` is set in the `StorageClass`, it will be used as default instead. Perform upgrade. StorageClass `recurringJobs` will get convert to `recurringJobSelector`. Recurring job CRs will get created from `recurringJobs`. Volume will be labeled with `recurring-job.longhorn.io/<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>: enabled` from volume spec `recurringJobs`. Recurring job CRs will get created from volume spec `recurringJobs`. When the config is identical among multiple volumes, only one will get created and volumes will share this recurring job CR. Volume spec `recurringJobs` will get removed. Add new HTTP endpoints: GET `/v1/recurringjobs` to list of recurring jobs. GET `/v1/recurringjobs/{name}` to get specific recurring job. DELETE `/v1/recurringjobs/{name}` to delete specific recurring job. POST `/v1/recurringjobs` to create a recurring job. PUT `/v1/recurringjobs/{name}` to update specific recurring job. `/v1/ws/recurringjobs` and `/v1/ws/{period}/recurringjobs` for websocket stream. Add new RESTful APIs for the new `RecurringJob` CRD: `Create` `Update` `List` `Get` `Delete` Add new APIs for users to update recurring jobs for individual volume. The : `/v1/volumes/<VOLUME_NAME>?action=recurringJobAdd`, expect request's body in form {name:<name>, isGroup:<bool>}. `/v1/volumes/<VOLUME_NAME>?action=recurringJobList`. `/v1/volumes/<VOLUME_NAME>?action=recurringJobDelete`, expect request's body in form {name:<name>, isGroup:<bool>}. Update the ClusterRole to include `recurringjob`. Printer column should include `Name`, `Groups`, `Task`, `Cron`, `Retain`, `Concurrency`, `Age`, `Labels`. ``` NAME GROUPS TASK CRON RETAIN CONCURRENCY AGE LABELS snapshot1 [\"default\",\"group1\"] snapshot * 1 2 14m {\"label/1\":\"a\",\"label/2\":\"b\"} ``` The `Name`: String used to reference the recurring job in volume with the label `recurring-job.longhorn.io/<Name>: enabled`. The `Groups`: Array of strings that set groupings to the recurring job. This is used to reference the recurring job group in volume with the label `recurring-job-group.longhorn.io/<Name>: enabled`. When including `default`, the recurring job will be added to the volume label if no other job exists in the volume label. The `Task`: String of either one of `backup` or `snapshot`. Also, add validation in the CRD YAML with pattern regex match. The `Cron`: String in cron expression represents recurring job scheduling. The `Retain`: Integer of the number of snapshots/backups to keep for the volume. The `Concurrency`: Integer of the concurrent job to run by each cron job. 
The `Age`: Date of the CR creation timestamp. The `Labels`: Dictionary of the labels. Add new command `recurring-job <job.name> --longhorn-manager <URL>` and remove old command `snapshot`. > Get the `recurringJob.Spec` on execution using Kubernetes API. Get volumes by label selector `recurring-job.longhorn.io/<job.name>: enabled` to filter out volumes. Get volumes by label selector `recurring-job-group.longhorn.io/<job.group>: enabled` to filter out volumes if the job is set with a group. Filter and create a list of the volumes in the state `attached` or setting `allow-recurring-job-while-volume-detached`. Use the concurrent number parameter to throttle goroutine with"
},
{
"data": "Each goroutine creates `NewJob()` and `job.run()` for the volumes. The job `snapshotName` format will be `<job.name>-c-<RandomID>`. The `updateRecurringJobs` method is responsible to add the default label if not other labels exist. > Since the storage class and upgrade migration contains recurringJobs spec. So we will keep the `VolumeSpec.RecurringJobs` in code to create the recurring jobs for volumes from the `storageClass`. > In case names are duplicated between different `storageClasses`, only one recurring job CR will be created. Add new method input `recurringJobSelector`: Convert `Volume.Spec.RecurringJobs` to `recurringJobSelector`. Add recurring job label if `recurringJobSelector` method input is not empty. For `CreateVolume` and `UpdateVolume` add a function similar to `fixupMetadata` that handles recurring jobs: Add recurring job labels if `Volume.Spec.RecurringJobs` is not empty. Then unset `Volume.Spec.RecurringJobs`. Label with `default` job-group if no other recurring job label exists. The CSI controller can use `recurringJobSelector` for volume creation. Put `recurringJobSelector` to `vol.RecurringJobSelector` at HTTP API layer to use for adding volume recurring job label in `VolumeManager.CreateVolume`. The `CreateVolume` method will have a new input `recurringJobSelector`. Get `recurringJobs` from parameters, validate and create recurring job CRs via API if not already exist. The code structure will be the same as other controllers. Add the finalizer to the recurring job CRs if not exist. The controller will be informed by `recurringJobInformer` and `enqueueRecurringJob`. Create and update CronJob per recurring job. Generate a new cron job object. Include labels `recurring-job.longhorn.io`. ``` recurring-job.longhorn.io: <Name> ``` Compose command, ``` longhorn-manager -d\\ recurring-job <job.name>\\ --manager-url <url> ``` Create new cron job with annotation `last-applied-cronjob-spec` or update cron job if the new cron job spec is different from the `last-applied-cronjob-spec`. Use defer to clean up CronJob. When a recurring job gets deleted. Delete the cron job with selected labels: `recurring-job.longhorn.io/<Name>`. Remove the finalizer. A new page for `Recurring Job` to create/update/delete recurring jobs. ``` Recurring Job [Custom Column] ==================================================================================================================== [Go] | Name | Group | Type | Schedule | Labels | Retain | Concurrency =================================================================================================================== [] | Name | Group | Type | Schedule | Labels | Retain | Concurrency | Operation | +-+--+--+--+--+--+-+-+--| [] | dummy | aa, bb | backup | 00:00 every day | k1:v1, k2:v2 | 20 | 10 | [Icon] v | | Update | Delete =================================================================================================================== [<] [1] [>] ``` Scenario: Add Recurring Job Given user sees `Create` on top left of the page. When user click `Create`. Then user sees a pop-up form. ``` Name [ ] Groups + Task [Backup] Schedule [00:00 every day] Retain [20] Concurrency [10] Labels + ``` Field with `*` is mendatory User can click on `+` next to `Group` to add more groups. User can click on the `Schedule` field and a window will pop-up for `Cron` and `Generate Cron`. `Retain` cannot be `0`. `Concurrency` cannot be `0`. User can click on `+` next to `Labels` to add more labels. When user click `OK`. 
Then frontend POST `/v1/recurringjobs` to create a recurring job. ``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"sample\", \"groups\": [\"group-1\", \"group-2\"], \"task\": \"snapshot\", \"cron\": \" *\", \"retain\": 2, \"concurrency\": 1, \"labels\": {\"label/1\": \"a\"}}' \\ http://54.251.150.85:30944/v1/recurringjobs | jq { \"actions\": {}, \"concurrency\": 1, \"cron\": \" *\", \"groups\": [ \"group-1\", \"group-2\" ], \"id\": \"sample\", \"labels\": { \"label/1\": \"a\" }, \"links\": { \"self\": \"http://54.251.150.85:30944/v1/recurringjobs/sample\" }, \"name\": \"sample\", \"retain\": 2, \"task\": \"snapshot\", \"type\": \"recurringJob\" } ``` Scenario: Update Recurring Job Given an `Operation` drop-down list next to the recurring"
},
{
"data": "When user click `Edit`. Then user sees a pop-up form. ``` Name [sample] Groups [group-1] [group-2] Task [Backup] Schedule [00:00 every day] Retain [20] Concurrency [10] Labels ``` `Name` field should be immutable. `Task` field should be immutable. And user edit the fields in the form. When user click `Save`. Then frontend PUT `/v1/recurringjobs/{name}` to update specific recurring job. ``` curl -X PUT -H \"Content-Type: application/json\" \\ -d '{\"name\": \"sample\", \"groups\": [\"group-1\", \"group-2\"], \"task\": \"snapshot\", \"cron\": \" *\", \"retain\": 2, \"concurrency\": 1, \"labels\": {\"label/1\": \"a\", \"label/2\": \"b\"}}' \\ http://54.251.150.85:30944/v1/recurringjobs/sample | jq { \"actions\": {}, \"concurrency\": 1, \"cron\": \" *\", \"groups\": [ \"group-1\", \"group-2\" ], \"id\": \"sample\", \"labels\": { \"label/1\": \"a\", \"label/2\": \"b\" }, \"links\": { \"self\": \"http://54.251.150.85:30944/v1/recurringjobs/sample\" }, \"name\": \"sample\", \"retain\": 2, \"task\": \"snapshot\", \"type\": \"recurringJob\" } ``` Scenario: Delete Recurring Job Given an `Operation` drop-down list next to the recurring job. When user click `Delete`. Then user should see a pop-up window for confirmation. When user click `OK`. Then frontend DELETE `/v1/recurringjobs/{name}` to delete specific recurring job. ``` curl -X DELETE http://54.251.150.85:30944/v1/recurringjobs/sample | jq ``` Also need a button for batch deletion on top left of the table. Scenario: Select From Recurring Job or Job Group When user should be able to choose if want to add recurring job as `Job` or `Group` from the tab. Scenario: Add Recurring Job Group On Volume Page Given user go to job `Group` tab. When user click `+ New`. And Frontend can GET `/v1/recurringjobs` to list of recurring jobs. And Frontend need to gather all `groups` from data. ``` curl -X GET http://54.251.150.85:30783/v1/recurringjobs | jq { \"data\": [ { \"actions\": {}, \"concurrency\": 2, \"cron\": \" *\", \"groups\": [ \"group2\", \"group3\" ], \"id\": \"backup1\", \"labels\": null, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/recurringjobs/backup1\" }, \"name\": \"backup1\", \"retain\": 1, \"task\": \"backup\", \"type\": \"recurringJob\" }, { \"actions\": {}, \"concurrency\": 2, \"cron\": \" *\", \"groups\": [ \"default\", \"group1\" ], \"id\": \"snapshot1\", \"labels\": { \"label/1\": \"a\", \"label/2\": \"b\" }, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/recurringjobs/snapshot1\" }, \"name\": \"snapshot1\", \"retain\": 1, \"task\": \"snapshot\", \"type\": \"recurringJob\" } ], \"links\": { \"self\": \"http://54.251.150.85:30783/v1/recurringjobs\" }, \"resourceType\": \"recurringJob\", \"type\": \"collection\" } ``` Then the user selects the group from the drop-down list. When user click on `Save`. Then frontend POST `/v1/volumes/<VOLUME_NAME>?action=recurringJobAdd` with request body `{name: <group-name>, isGroup: true}`. 
``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"test3\", \"isGroup\": true}' \\ http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\\?action\\=recurringJobAdd | jq { \"data\": [ { \"actions\": {}, \"id\": \"default\", \"isGroup\": true, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/default\" }, \"name\": \"default\", \"type\": \"volumeRecurringJob\" }, { \"actions\": {}, \"id\": \"test3\", \"isGroup\": true, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/test3\" }, \"name\": \"test3\", \"type\": \"volumeRecurringJob\" } ], \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\" }, \"resourceType\": \"volumeRecurringJob\", \"type\": \"collection\" } ``` And user sees all `jobs` with the `group`. Scenario: Remove Recurring Job Group On Volume Page Given user go to job `Group` tab. When user click the `bin` icon of the recurring job group. Then frontend `/v1/volumes/<VOLUME_NAME>?action=recurringJobDelete` with request body `{name: <group-name>, isGroup: true}`. ``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"test3\", \"isGroup\": true}' \\ http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\\?action\\=recurringJobDelete | jq { \"data\": [ { \"actions\": {}, \"id\": \"default\", \"isGroup\": true, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/default\" }, \"name\": \"default\", \"type\": \"volumeRecurringJob\" } ], \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\" }, \"resourceType\": \"volumeRecurringJob\", \"type\": \"collection\" } ``` Scenario: Add Recurring Job On Volume Page Given user go to `Job` tab. When user click `+ New`. And user sees the name is"
},
{
"data": "And user can select `Backup` or `Snapshot` from the drop-down list. And user can edit `Schedule`, `Labels`, `Retain` and `Concurrency`. When user click on `Save`. Then frontend POST /v1/recurringjobs to create a recurring job. ``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"backup1\", \"groups\": [], \"task\": \"backup\", \"cron\": \" *\", \"retain\": 2, \"concurrency\": 1, \"labels\": {\"label/1\": \"a\"}}' \\ http://54.251.150.85:30944/v1/recurringjobs | jq { \"actions\": {}, \"concurrency\": 1, \"cron\": \" *\", \"groups\": [], \"id\": \"backup1\", \"labels\": { \"label/1\": \"a\" }, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/recurringjobs/backup1\" }, \"name\": \"backup1\", \"retain\": 2, \"task\": \"backup\", \"type\": \"recurringJob\" } ``` And frontend POST `/v1/volumes/<VOLUME_NAME>?action=recurringJobAdd` with request body `{name: <job-name>, isGroup: false}`. ``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"backup1\", \"isGroup\": false}' \\ http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\\?action\\=recurringJobAdd | jq { \"data\": [ { \"actions\": {}, \"id\": \"default\", \"isGroup\": true, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/default\" }, \"name\": \"default\", \"type\": \"volumeRecurringJob\" }, { \"actions\": {}, \"id\": \"backup1\", \"isGroup\": false, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/backup1\" }, \"name\": \"backup1\", \"type\": \"volumeRecurringJob\" } ], \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\" }, \"resourceType\": \"volumeRecurringJob\", \"type\": \"collection\" } ``` Scenario: Delete Recurring Job On Volume Page Same as Scenario: Remove Recurring Job Group in Volume Page with request body `{name: <group-name>, isGroup: false}`. ``` curl -X POST -H \"Content-Type: application/json\" \\ -d '{\"name\": \"backup1\", \"isGroup\": false}' \\ http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\\?action\\=recurringJobDelete | jq { \"data\": [ { \"actions\": {}, \"id\": \"default\", \"isGroup\": true, \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumerecurringjobs/default\" }, \"name\": \"default\", \"type\": \"volumeRecurringJob\" } ], \"links\": { \"self\": \"http://54.251.150.85:30783/v1/volumes/pvc-4011f9a6-bae3-43e3-a2a1-893997d0aa63\" }, \"resourceType\": \"volumeRecurringJob\", \"type\": \"collection\" } ``` Scenario: Keep Recurring Job Details Updated On Volume Page Frontend can monitor new websocket `/v1/ws/recurringjobs` and `/v1/ws/{period}/recurringjobs`. When a volume is labeled with a none-existing recurring job or job-group. UI should show warning icon. The existing recurring job test cases need to be fixed or replaced. Scenario: test recurring job groups (S3/NFS) Given create `snapshot1` recurring job with `group-1, group-2` in groups. set cron job to run every 2 minutes. set retain to 1. create `backup1` recurring job with `group-1` in groups. set cron job to run every 3 minutes. set retain to 1 And volume `test-job-1` created, attached, and healthy. volume `test-job-2` created, attached, and healthy. When set `group1` recurring job in volume `test-job-1` label. set `group2` recurring job in volume `test-job-2` label. And write some data to volume `test-job-1`. write some data to volume `test-job-2`. And wait for 2 minutes. And write some data to volume `test-job-1`. 
write some data to volume `test-job-2`. And wait for 1 minute. Then volume `test-job-1` should have 3 snapshots after scheduled time. volume `test-job-2` should have 2 snapshots after scheduled time. And volume `test-job-1` should have 1 backup after scheduled time. volume `test-job-2` should have 0 backup after scheduled time. Scenario: test recurring job set with default in groups Given 1 volume created, attached, and healthy. When create `snapshot1` recurring job with `default, group-1` in groups. create `snapshot2` recurring job with `default` in groups.. create `snapshot3` recurring job with `` in groups. create `backup1` recurring job with `default, group-1` in groups. create `backup2` recurring job with `default` in groups. create `backup3` recurring job with `` in groups. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. `snapshot3` cron job should not exist. default `backup1` cron job should exist. default `backup2` cron job should exist. `backup3` cron job should not exist. When set `snapshot3` recurring job in volume label. Then should contain `default` job-group in volume"
},
{
"data": "should contain `snapshot3` job in volume labels. And default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. `snapshot3` cron job should exist. default `backup1` cron job should exist. default `backup2` cron job should exist. `backup3` cron job should not exist. When delete recurring job-group `default` in volume label. And default `snapshot1` cron job should not exist. default `snapshot2` cron job should not exist. `snapshot3` cron job should exist. default `backup1` cron job should not exist. default `backup2` cron job should not exist. `backup3` cron job should not exist. When delete all recurring jobs in volume label. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. `snapshot3` cron job should not exist. default `backup1` cron job should exist. default `backup2` cron job should exist. `backup3` cron job should not exist. When add `snapshot3` recurring job with `default` in groups. add `backup3` recurring job with `default` in groups. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. default `snapshot3` cron job should exist. default `backup1` cron job should exist. default `backup2` cron job should exist. default `backup3` cron job should exist. When remove `default` from `snapshot3` recurring job groups. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. `snapshot3` cron job should not exist. default `backup1` cron job should exist. default `backup2` cron job should exist. default `backup3` cron job should exist. When remove `default` from all recurring jobs groups. Then `snapshot1` cron job should not exist. `snapshot2` cron job should not exist. `snapshot3` cron job should not exist. `backup1` cron job should not exist. `backup2` cron job should not exist. `backup3` cron job should not exist. Scenario: test delete recurring job Given 1 volume created, attached, and healthy. When create `snapshot1` recurring job with `default, group-1` in groups. create `snapshot2` recurring job with `default` in groups.. create `snapshot3` recurring job with `` in groups. create `backup1` recurring job with `default, group-1` in groups. create `backup2` recurring job with `default` in groups. create `backup3` recurring job with `` in groups. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should exist. `snapshot3` cron job should not exist. default `backup1` cron job should exist. default `backup2` cron job should exist. `backup3` cron job should not exist. When delete `snapshot-2` recurring job. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should not exist. `snapshot3` cron job should not exist. default `backup1` cron job should exist. default `backup2` cron job should exist. `backup3` cron job should not exist. When delete `backup-1` recurring job. delete `backup-2` recurring job. delete `backup-3` recurring job. Then default `snapshot1` cron job should exist. default `snapshot2` cron job should not exist. `snapshot3` cron job should not exist. default `backup1` cron job should not exist. default `backup2` cron job should not exist. `backup3` cron job should not exist. When add `snapshot1` recurring job to volume label. add `snapshot3` recurring job to volume label. And default `snapshot1` cron job should exist. default `snapshot2` cron job should not exist. `snapshot3` cron job should exist. And delete `snapshot1` recurring job. delete `snapshot3` recurring job. 
Then default `snapshot1` cron job should not exist. default `snapshot2` cron job should not exist. `snapshot3` cron job should not exist. Scenario: test volume with a none-existing recurring job label and later on added back. Given create `snapshot1` recurring"
},
{
"data": "create `backup1` recurring job. And 1 volume created, attached, and healthy. add `snapshot1` recurring job to volume label. add `backup1` recurring job to volume label. And `snapshot1` cron job exist. `backup1` cron job exist. When delete `snapshot1` recurring job. delete `backup1` recurring job. Then `snapshot1` cron job should not exist. `backup1` cron job should not exist. And `snapshot1` recurring job should exist in volume label. `backup1` recurring job should exist in volume label. When create `snapshot1` recurring job. create `backup1` recurring job. Then `snapshot1` cron job should exist. `backup1` cron job should exist. Scenario: test recurring job with multiple volumes Given volume `test-job-1` created, attached and healthy. And create `snapshot1` recurring job with `default` in groups. create `snapshot2` recurring job with `` in groups. create `backup1` recurring job with `default` in groups. create `backup2` recurring job with `` in groups. And volume `test-job-1` should have recurring job-group `default` label. And default `snapshot1` cron job exist. default `backup1` cron job exist. When create and attach volume `test-job-2`. wait for volume `test-job-2` to be healthy. Then volume `test-job-2` should have recurring job-group `default` label. When add `snapshot2` in `test-job-2` volume label. add `backup2` in `test-job-2` volume label. Then default `snapshot1` cron job should exist. `snapshot2` cron job should exist. default `backup1` cron job should exist. `backup2` cron job should exist. And volume `test-job-1` should have recurring job-group `default` label. volume `test-job-2` should have recurring job `snapshot2` label. volume `test-job-2` should have recurring job `backup2` label. Scenario: test recurring job snapshot Given volume `test-job-1` created, attached, and healthy. volume `test-job-2` created, attached, and healthy. When create `snapshot1` recurring job with `default` in groups. Then should have 1 cron job. And volume `test-job-1` should have volume-head 1 snapshot. volume `test-job-2` should have volume-head 1 snapshot. When write some data to volume `test-job-1`. write some data to volume `test-job-2`. Then volume `test-job-1` should have 2 snapshots after scheduled time. volume `test-job-2` should have 2 snapshots after scheduled time. When write some data to volume `test-job-1`. write some data to volume `test-job-2`. And wait for `snapshot1` cron job scheduled time. Then volume `test-job-1` should have 3 snapshots after scheduled time. volume `test-job-2` should have 3 snapshots after scheduled time. Scenario: test recurring job backup (S3/NFS) Given volume `test-job-1` created, attached, and healthy. volume `test-job-2` created, attached, and healthy. When create `backup1` recurring job with `default` in groups. Then should have 1 cron job. And volume `test-job-1` should have 0 backup. volume `test-job-2` should have 0 backup. When write some data to volume `test-job-1`. write some data to volume `test-job-2`. And wait for `backup1` cron job scheduled time. Then volume `test-job-1` should have 1 backups. volume `test-job-2` should have 1 backups. When write some data to volume `test-job-1`. write some data to volume `test-job-2`. And wait for `backup1` cron job scheduled time. Then volume `test-job-1` should have 2 backups. volume `test-job-2` should have 2 backups. Scenario: test recurring job while volume is detached Given volume `test-job-1` created, and detached. volume `test-job-2` created, and detached. 
And attach volume `test-job-1` and write some data. attach volume `test-job-2` and write some data. And detach volume `test-job-1`. detach volume `test-job-2`. When create `snapshot1` recurring job running at 1 minute interval, and with `default` in groups, and with `retain` set to `2`. And 1 cron job should be created. And wait for 2 minutes. Then attach volume `test-job-1` and wait until"
},
{
"data": "And volume `test-job-1` should have only 1 snapshot. When wait for 1 minute. Then volume `test-job-1` should have only 2 snapshots. When set setting `allow-recurring-job-while-volume-detached` to `true`. And wait for 2 minutes. Then attach volume `test-job-2` and wait until healthy. And volume `test-job-2` should have only 2 snapshots. Scenario: test recurring job while volume is detached Given volume `test-job-1` created, and detached. volume `test-job-2` created, and detached. When create `snapshot1` recurring job running at 1 minute interval, And wait until job pod created and complete Then monitor the job pod logs. And should see `Cannot create job for test-job-1 volume in state detached`. should see `Cannot create job for test-job-2 volume in state detached`. Scenario: test recurring job upgrade migration Given cluster with Longhorn version prior to v1.2.0. And storageclass with recurring job `snapshot1`. And volume `test-job-1` created, and attached. When upgrade Longhorn to v1.2.0. Then should have recurring job CR created with format `<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>`. And volume should be labeled with `recurring-job.longhorn.io/<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>: enabled`. And recurringJob should be removed in volume spec. And storageClass in `longhorn-storageclass` configMap should not have `recurringJobs`. storageClass in `longhorn-storageclass` configMap should have `recurringJobSelector`. ``` recurringJobSelector: '[{\"name\":\"snapshot-1-97893a05-77074ba4\",\"isGroup\":false},{\"name\":\"backup-1-954b3c8c-59467025\",\"isGroup\":false}]' ``` When create new PVC. And volume should be labeled with items in `recurringJobSelector`. And recurringJob should not exist in volume spec. Scenario: test recurring job concurrency Given create `snapshot1` recurring job with `concurrency` set to `2`. include `snapshot1` recurring job `default` in groups. When create volume `test-job-1`. create volume `test-job-2`. create volume `test-job-3`. create volume `test-job-4`. create volume `test-job-5`. Then monitor the cron job pod log. And should see 2 jobs created concurrently. When update `snapshot1` recurring job with `concurrency` set to `3`. Then monitor the cron job pod log. And should see 3 jobs created concurrently. Create `v110to120/upgrade.go` Translate `storageClass` `recurringJobs` to `recurringJobSelector`. Convert the `recurringJobs` to `recurringJobSelector` object. ``` { Name: <jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)> IsGroup: false, } ``` Add `recurringJobSelector` to `longhorn-storageclass` configMap. Remove `recurringJobs` in configMap. Update configMap. ``` parameters: fromBackup: \"\" numberOfReplicas: \"3\" recurringJobSelector: '[{\"name\":\"snapshot-1-97893a05-77074ba4\",\"isGroup\":false},{\"name\":\"backup-1-954b3c8c-59467025\",\"isGroup\":false}]' staleReplicaTimeout: \"2880\" provisioner: driver.longhorn.io ``` Translate volume spec `recurringJobs` to volume labels. List all volumes and its spec `recurringJobs` and create labels in format `recurring-job.longhorn.io/<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>: enabled`. Update volume labels and remove volume spec `recurringJobs`. ``` labels: longhornvolume: pvc-d37caaed-5cda-43b1-ae49-9d0490ffb3db recurring-job.longhorn.io/backup-1-954b3c8c-59467025: enabled recurring-job.longhorn.io/snapshot-1-97893a05-77074ba4: enabled ``` translate volume spec `recurringJobs` to recurringJob CRs. 
Gather the recurring jobs from `recurringJobSelector` and volume labels. Create recurringJob CRs. ``` NAME GROUPS TASK CRON RETAIN CONCURRENCY AGE LABELS snapshot-1-97893a05-77074ba4 snapshot /1 * 1 10 13m backup-1-954b3c8c-59467025 backup /2 * 1 10 13m {\"interval\":\"2m\"} ``` Cleanup applied volume cron jobs. Get all applied cron jobs for volumes. Delete cron jobs. The migration translates existing volume recurring job with format `recurring-job.longhorn.io/<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>: enabled`. The name maps to the recurring job CR `<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>`. The migration translates existing volume recurring job with format `recurring-job.longhorn.io/<jobTask>-<jobRetain>-<hash(jobCron)>-<hash(jobLabelJSON)>: enabled`. The numbers could look random and also differs from the recurring job name of the CR name created by the StorageClass - `recurring-job.longhorn.io/<name>: enabled`. This is because there is no info to determine if the volume spec `recurringJob` is coming from a `storageClass` or which `storageClass`. Should note this behavior in the document to lessen the confusion unless there is a better solution. After the migration, the `<hash(jobCron)>-<hash(jobLabelJSON)>` in volume label and recurring job name could look random and confusing. Users might want to rename it to something more meaningful. Currently, the only way is to create a new recurring job CR and replace the volume label. `None`"
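For readers who want a concrete picture of the objects described above, a RecurringJob CR and the volume labels referencing it could look roughly like the sketch below (the `apiVersion` shown is an assumption and should be taken from the CRD actually installed by Longhorn; the field values mirror the examples used in this proposal):
```yaml
apiVersion: longhorn.io/v1beta1   # assumed API group/version
kind: RecurringJob
metadata:
  name: snapshot1
  namespace: longhorn-system
spec:
  name: snapshot1
  groups:
  - default
  - group1
  task: snapshot        # snapshot or backup
  cron: \"* * * * *\"
  retain: 1
  concurrency: 2
  labels:
    label/1: a
---
# A volume opts in via labels, e.g.:
#   recurring-job.longhorn.io/snapshot1: enabled
#   recurring-job-group.longhorn.io/default: enabled
```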
}
] | {
"category": "Runtime",
"file_name": "20210624-label-driven-recurring-job.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Inspect the hive ``` cilium-operator hive [flags] ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. 
Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host"
},
{
"data": "--gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for hive --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Run cilium-operator - Output the dependencies graph in graphviz dot format"
}
] | {
"category": "Runtime",
"file_name": "cilium-operator_hive.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "The current maintainers of the Curve project are: Pan WANG, , <aspirer2004@gmail.com>, core maintainer XiaoCui Li, , <ilixiaocui@163.com>, core maintainer opencurveadmin, , <hzchenwei7@corp.netease.com>, project management Curve: https://github.com/opencurve/community/blob/master/teams/Curve/team.json CurveAdm: https://github.com/opencurve/community/blob/master/teams/CurveAdm/team.json"
}
] | {
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show contents of table \"node-addresses\" ``` cilium-dbg statedb node-addresses [flags] ``` ``` -h, --help help for node-addresses -w, --watch duration Watch for new changes with the given interval (e.g. --watch=100ms) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_statedb_node-addresses.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: Enabling Multi-Cloud, Multi-Hop Networking and Routing menu_order: 70 search_type: Documentation Before multi-cloud networking can be enabled, you must configure the network to allow connections through Weave Net's control and data ports on the Docker hosts. By default, the control port defaults to TCP 6783, and the data ports to UDP 6783/6784. To override Weave Nets default ports, specify a port using the `WEAVEPORT` setting. For example, if WEAVEPORT is set to `9000`, then Weave uses TCP 9000 for its control port and UDP 9000/9001 for its data port. Important! It is recommended that all peers be given the same setting. A network of containers across more than two hosts can be established even when there is only partial connectivity between the hosts. Weave Net routes traffic between containers as long as there is at least one path of connected hosts between them. For example, if a Docker host in a local data center can connect to hosts in GCE and EC2, but the latter two cannot connect to each other, containers in the latter two can still communicate and Weave Net in this instance will route the traffic via the local data center. See Also"
}
] | {
"category": "Runtime",
"file_name": "multi-cloud-multi-hop.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "| Case ID | Title | Priority | Smoke | Status | Other | |||-|-|--| -- | | M00001 | testing creating spiderMultusConfig with cniType: macvlan and checking the net-attach-conf config if works | p1 | smoke | done | | | M00002 | testing creating spiderMultusConfig with cniType: ipvlan and checking the net-attach-conf config if works | p1 | smoke | done | | | M00003 | testing creating spiderMultusConfig with cniType: sriov and checking the net-attach-conf config if works | p1 | smoke | done | | | M00004 | testing creating spiderMultusConfig with cniType: custom and checking the net-attach-conf config if works | p1 | smoke | done | | | M00005 | testing creating spiderMultusConfig with cniType: custom and invalid json config, expect error happened | p2 | | done | | | M00006 | testing creating spiderMultusConfig with cniType: macvlan with vlanId with two master with bond config and checking the net-attach-conf config if works | p1 | smoke | done | | | M00007 | Manually delete the net-attach-conf of multus, it will be created automatically | p1 | smoke | done | | M00008 | After deleting spiderMultusConfig, the corresponding net-attach-conf will also be deleted | p2 | | done | | | M00009 | Update spidermultusConfig: add new bond config | p1 | smoke | done | | | M00010 | Customize net-attach-conf name via annotation multus.spidernet.io/cr-name | p2 | | done | | | M00011 | webhook validation for multus.spidernet.io/cr-name | p3 | | done | | | M00012 | Change net-attach-conf version via annotation multus.spidernet.io/cni-version | p2 | | done | | | M00013 | webhook validation for multus.spidernet.io/cni-version | p3 | | done | | | M00014 | Already have multus cr, spidermultus should take care of it | p3 | | done | | | M00015 | The value of webhook verification cniType is inconsistent with cniConf | p3 | | done | | | M00016 | vlan is not in the range of 0-4094 and will not be created | p3 | | done | | | M00017 | set disableIPAM to true and see if multus's nad has ipam config | p3 | | done | | | M00018 | set sriov.enableRdma to true and see if multus's nad has rdma config | p3 | | done | | | M00019 | set spidermultusconfig.spec to empty and see if works | p3 | | done | | | M00020 | annotating custom names that are too long or empty should fail | p3 | | done | |"
}
] | {
"category": "Runtime",
"file_name": "spidermultus.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This week we have been working on getting Docker integrated with the final RCs for the OCI runtime-spec and runc. We currently have a that is ready for review. This is important for the containerd project because we need a stable runtime like runc and have a spec that is not constantly changing so that people can begin integrating with containerd and know that the APIs we expose and APIs/specs that we depend on are stable for the long term. We finished moving a few of our external dependencies into the containerd organization this week. There were a few projects that we built outside but wanted to bring these under the project to ensure that our dependencies are held to the same standards as the rest of our codebase. This project contains the runc bindings that we consume in containerd to interface with runc and other OCI runtimes. It lets us interact with the binary and handles many of the common options like working with the console socket and other IO operations. This package contains helpers for handling fifos. Fifos are a little more complex than regular pipes and sometimes requires special handling and blocking semantics depending on the flags used with opening the fifo on either end. This package helps to handle many of the common use cases that we use in containerd. The console package is a refresh of the `term` package from Docker. It provides a cleaner API for working with the current console for a program or creating new terminals and keeping the flags in sync for proxying reads and writes between the two sides of the console. The cgroups package is currently used in containerd for collecting stats from cgroups that are created for a container. It exposes a package for exporting cgroup level metrics to prometheus for containers. The btrfs package handles interfacing with btrfs for our snapshotter. It binds to the btrfs C library to create subvolumes and handle any other interaction with the filesystem. continuity provides a transport agnostic filesystem metadata manifest. This allows us to work with filesystems at the file level instead of interacting with a \"layer\". We also intend to concentrate a rich set of file system utility packages for use in containerd."
}
] | {
"category": "Runtime",
"file_name": "2017-05-05.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "are a common set of labels that allows tools to work interoperably, describing objects in a common manner that all tools can understand. `app.kubernetes.io/name`: Is the name of the binary running in a container(combination of \"ceph-\"+daemonType). `app.kubernetes.io/instance`: A unique name identifying the instance of an application. Due to the nature of how resources are named in Rook, this is guaranteed to be unique per CephCluster namespace but not unique within the entire Kubernetes cluster. `app.kubernetes.io/component`: This is populated with the Kind of the resource controlling this application. For example, `cephclusters.ceph.rook.io` or `cephfilesystems.ceph.rook.io`. `app.kubernetes.io/part-of`: This is populated with the Name of the resource controlling this application. `app.kubernetes.io/managed-by`: `rook-ceph-operator` is the tool being used to manage the operation of an application `app.kubernetes.io/created-by`: `rook-ceph-operator` is the controller/user who created this resource `rook.io/operator-namespace`: The namespace in which rook-ceph operator is running. An Example of Recommended Labels on Ceph mon with ID=a will look like: ``` app.kubernetes.io/name : \"ceph-mon\" app.kubernetes.io/instance : \"a\" app.kubernetes.io/component : \"cephclusters.ceph.rook.io\" app.kubernetes.io/part-of : \"rook-ceph\" app.kubernetes.io/managed-by : \"rook-ceph-operator\" app.kubernetes.io/created-by : \"rook-ceph-operator\" rook.io/operator-namespace : \"rook-ceph\" ``` Another example on CephFilesystem with ID=a: ``` app.kubernetes.io/name : \"ceph-mds\" app.kubernetes.io/instance : \"myfs-a\" app.kubernetes.io/component : \"cephfilesystems.ceph.rook.io\" app.kubernetes.io/part-of : \"myfs\" app.kubernetes.io/managed-by : \"rook-ceph-operator\" app.kubernetes.io/created-by : \"rook-ceph-operator\" rook.io/operator-namespace : \"rook-ceph\" ``` !!! Note A totally unique string for an application can be built up from (a) app.kubernetes.io/component, (b) app.kubernetes.io/part-of, (c) the resource's namespace, (d) app.kubernetes.io/name, and (e) app.kubernetes.io/instance fields. For the example above, we could join those fields with underscore connectors like this: cephclusters.ceph.rook.iorook-cephrook-cephceph-mona. Note that this full spec can easily exceed the 64-character limit imposed on Kubernetes labels."
}
] | {
"category": "Runtime",
"file_name": "interacting-with-rook-resources.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "(daemon-behavior)= This specification covers some of the Incus daemon's behavior. On every start, Incus checks that its directory structure exists. If it doesn't, it creates the required directories, generates a key pair and initializes the database. Once the daemon is ready for work, Incus scans the instances table for any instance for which the stored power state differs from the current one. If an instance's power state was recorded as running and the instance isn't running, Incus starts it. For those signals, Incus assumes that it's being temporarily stopped and will be restarted at a later time to continue handling the instances. The instances will keep running and Incus will close all connections and exit cleanly. Indicates to Incus that the host is going down. Incus will attempt a clean shutdown of all the instances. After 30 seconds, it kills any remaining instance. The instance `power_state` in the instances table is kept as it was so that Incus can restore the instances as they were after the host is done rebooting. Write a memory profile dump to the file specified with `--memprofile`."
}
] | {
"category": "Runtime",
"file_name": "daemon-behavior.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: \"ark server\" layout: docs Run the ark server Run the ark server ``` ark server [flags] ``` ``` -h, --help help for server --log-level the level at which to log. Valid values are debug, info, warning, error, fatal, panic. (default info) --metrics-address string the address to expose prometheus metrics (default \":8085\") --plugin-dir string directory containing Ark plugins (default \"/plugins\") ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources."
}
] | {
"category": "Runtime",
"file_name": "ark_server.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "(network-integrations)= ```{note} Network integrations are currently only available for the {ref}`network-ovn`. ``` Network integrations can be used to connect networks on the local Incus deployment to remote networks hosted on Incus or other platforms. At this time the only type of network integrations supported is OVN which makes use of OVN interconnection gateways to peer OVN networks together across multiple deployments. For this to work one needs a working OVN interconnection setup with: OVN interconnection `NorthBound` and `SouthBound` databases Two or more OVN clusters with their availability-zone names set properly (`name` property) All OVN clusters need to have the `ovn-ic` daemon running OVN clusters configured to advertise and learn routes from interconnection At least one server marked as an OVN interconnection gateway More details can be found in the . A network integration can be created with `incus network integration create`. Integrations are global to the Incus deployment, they are not tied to a network or project. An example for an OVN integration would be: ``` incus network integration create ovn-region ovn incus network integration set ovn-region ovn.northbound_connection tcp:[192.0.2.12]:6645,tcp:[192.0.3.13]:6645,tcp:[192.0.3.14]:6645 incus network integration set ovn-region ovn.southbound_connection tcp:[192.0.2.12]:6646,tcp:[192.0.3.13]:6646,tcp:[192.0.3.14]:6646 ``` To make use of a network integration, one needs to peer with it. This is done through `incus network peer create`, for example: ``` incus network peer create default region ovn-region --type=remote ``` The following configuration options are available for all network integrations: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group network_integration-common start --> :end-before: <!-- config group network_integration-common end --> ``` Those options are specific to the OVN network integrations: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group network_integration-ovn start --> :end-before: <!-- config group network_integration-ovn end --> ```"
}
] | {
"category": "Runtime",
"file_name": "network_integrations.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "Target version: Rook 1.1 Currently all of the storage for Ceph monitors (data, logs, etc..) is provided using HostPath volume mounts. Supporting PV storage for Ceph monitors in environments with dynamically provisioned volumes (AWS, GCE, etc...) will allow monitors to migrate without requiring the monitor state to be rebuilt, and avoids the operational complexity of dealing with HostPath storage. The general approach taken in this design document is to augment the CRD with a persistent volume claim template describing the storage requirements of monitors. This template is used by Rook to dynamically create a volume claim for each monitor. The monitor specification in the CRD is updated to include a persistent volume claim template that is used to generate PVCs for monitor database storage. ```go type MonSpec struct { Count int `json:\"count\"` AllowMultiplePerNode bool `json:\"allowMultiplePerNode\"` VolumeClaimTemplate *v1.PersistentVolumeClaim } ``` The `VolumeClaimTemplate` is used by Rook to create PVCs for monitor storage. The current set of template fields used by Rook when creating PVCs are `StorageClassName` and `Resources`. Rook follows the standard convention of using the default storage class when one is not specified in a volume claim template. If the storage resource requirements are not specified in the claim template, then Rook will use a default value. This is possible because unlike the storage requirements of OSDs (xf: StorageClassDeviceSets), reasonable defaults (e.g. 5-10 GB) exist for monitor daemon storage needs. Logs and crash data. The current implementation continues the use of a HostPath volume based on `dataDirHostPath` for storing daemon log and crash data. This is a temporary exception that will be resolved as we converge on an approach that works for all Ceph daemon types. Finally, the entire volume claim template may be left unspecified in the CRD in which case the existing HostPath mechanism is used for all monitor storage. When a new monitor is created it uses the current storage specification found in the CRD. Once a monitor has been created, it's backing storage is not changed. This makes upgrades particularly simple because existing monitors continue to use the same storage. Once a volume claim template is defined in the CRD new monitors will be created with PVC storage. In order to remove old monitors based on HostPath storage first define a volume claim template in the CRD and then fail over each monitor. Like `StatefulSets` removal of an monitor deployment does not automatically remove the underlying PVC. This is a safety mechanism so that the data is not automatically destroyed. The PVCs can be removed manually once the cluster is healthy. Rook currently makes explicit scheduling decisions for monitors by using node selectors to force monitor node affinity. This means that the volume binding should not occur until the pod is scheduled onto a specific node in the"
},
{
"data": "This should be done by using the `WaitForFirstConsumer` binding policy on the storage class used to provide PVs to monitors: ``` kind: StorageClass volumeBindingMode: WaitForFirstConsumer ``` When using existing HostPath storage or non-local PVs that can migrate (e.g. network volumes like RBD or EBS) existing monitor scheduling will continue to work as expected. However, because monitors are scheduled without considering the set of available PVs, when using statically provisioned local volumes Rook expects volumes to be available. Therefore, when using locally provisioned volumes take care to ensure that each node has storage provisioned. Note that these limitations are currently imposed because of the explicit scheduling implementation in Rook. These restrictions will be removed or significantly relaxed once monitor scheduling is moved under control of Kubernetes itself (ongoing work). In previous versions of Rook the operator made explicit scheduling (placement) decisions when creating monitor deployments. These decisions were made by implementing a custom scheduling algorithm, and using the pod node selector to enforce the placement decision. Unfortunately, schedulers are difficult to write correctly, and manage. Furthermore, by maintaining a separate scheduler from Kubernetes global policies are difficult to achieve. Despite the benefits of using the Kubernetes scheduler, there are important use cases for using a node selector on a monitor deployment: pinning a monitor to a node when HostPath-based storage is used. In this case Rook must prevent k8s from moving a pod away from the node that contains its storage. The node selector is used to enforce this affinity. Unfortunately, node selector use is mutually exclusive with kubernetes schedulinga pod cannot be scheduled by Kubernetes and then atomically have its affinity set to that placement decision. The workaround in Rook is to use a temporary canary pod that is scheduled by Kubernetes, but whose placement is enforced by Rook. The canary deployment is a deployment configured identically to a monitor deployment, except the container entrypoints have no side affects. The canary deployments are used to solve a fundamental bootstrapping issue: we want to avoid making explicit scheduling decisions in Rook, but in some configurations a node selector needs to be used to pin a monitor to a node. Previous versions of Rook performed periodic health checks that included checks on monitor health as well as looking for scheduling violations. The health checks related to scheduling violations have been removed. Fundamentally a placement violation requires understanding or accessing the scheduling algorithm. The rescheduling or eviction aspects of Rook's scheduling caused more problems than it helped, so going with K8s scheduling is the right thing. If/when K8s has eviction policies in the future we could then make use of it (e.g. with `*RequiredDuringExecution` variants of anti-affinity rules are available). The `PreferredCount` feature has been removed. The CRD monitor count specifies a target minimum number of monitors to maintain. Additionally, a preferred count is available which will be the desired number of sufficient number of nodes are available. Unfortunately, this calculation relies on determining if monitor pods may be placed on a node, requiring knowledge of the scheduling policy and"
},
{
"data": "The scenario to watch out for is an endless loop in which the health check is determining a bad placement but the k8s schedule thinks otherwise. It is generally desirable in production to have multiple zones availability. A whole storage class could go unavailable then and enough mons could stay up to keep the cluster running. In such a case the distribution of mons can be made more homogeneous for each zone, by this we can leverage the spread of mons for providing high availability. This could be a useful scenario for a datacenter with own storage class in contrast to ones provided by cloud providers. To support this scenario, we can do something similar to the mon deployment for . The operator will have zonal awareness, associating mon deployments based on zones configuration provided. The type of failure domain used for this scenario is commonly \"zone\", but can be set to a different failure domain. The topology of the K8s cluster is to be determined by the admin, outside the scope of Rook. Rook will simply detect the topology labels that have been added to the nodes. If the desired failure domain is a \"zone\", the `topology.kubernetes.io/zone` label should be added to the nodes. Any of the supported by OSDs can be used also for this scenario. For example: ```yaml topology.kubernetes.io/zone=a topology.kubernetes.io/zone=b topology.kubernetes.io/zone=c ``` The core changes to rook are to associate the mons with the required zones. Each zone will have at most one mon assigned using labels for nodeAffinity. If there are more zones than mons, some zones may not have a mon. The new configuration will support the `zones` configuration where the failure domains available must be listed. If the mon.zones are specified, the mon canary pod should not specify the volume for the mon store. The whole purpose of the mon canaries is to allow the k8s scheduler to decide where the mon should be scheduled. So if we don't add a volume to the canary pod, we can delay the creation of the mon PVC until the canary pod is done. Then after the mon canary is done scheduling, the operator can look at the node where it was scheduled and determine which zone (or other topology) the node belonged to. For the mon daemon pod, then the operator would create the PVC from the volumeClaimTemplate matching the zone where the mon canary was created. This way mon canary pod will be able to take care of all scheduling tasks. All zones must be listed in the cluster CR. If mons are expected to run on a subset of zones, the needed node affinity must be added to the placement.mon on the cluster CR so the mon canaries and daemons will be scheduled in the correct zones. In this example, each of the mons will be backed by a volumeClaimTemplate specific to a different zone: ```yaml mon: count: 3 allowMultiplePerNode: false zones: name: a volumeClaimTemplate: spec: storageClassName: zone-a-storage resources: requests: storage: 10Gi name: b volumeClaimTemplate: spec: storageClassName: zone-b-storage resources: requests: storage: 10Gi name: c volumeClaimTemplate: spec: storageClassName: zone-c-storage resources: requests: storage: 10Gi ```"
}
] | {
"category": "Runtime",
"file_name": "ceph-mon-pv.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Status: Accepted Some features may take a while to get fully implemented, and we don't necessarily want to have long-lived feature branches A simple feature flag implementation allows code to be merged into main, but not used unless a flag is set. Allow unfinished features to be present in Velero releases, but only enabled when the associated flag is set. A robust feature flag library. When considering the , the timelines involved presented a problem in balancing a release and longer-running feature work. A simple implementation of feature flags can help protect unfinished code while allowing the rest of the changes to ship. A new command line flag, `--features` will be added to the root `velero` command. `--features` will accept a comma-separated list of features, such as `--features EnableCSI,Replication`. Each feature listed will correspond to a key in a map in `pkg/features/features.go` defining whether a feature should be enabled. Any code implementing the feature would then import the map and look up the key's value. For the Velero client, a `features` key can be added to the `config.json` file for more convenient client invocations. A new `features` package will be introduced with these basic structs: ```go type FeatureFlagSet struct { flags map[string]bool } type Flags interface { // Enabled reports whether or not the specified flag is found. Enabled(name string) bool // Enable adds the specified flags to the list of enabled flags. Enable(names ...string) // All returns all enabled features All() []string } // NewFeatureFlagSet initializes and populates a new FeatureFlagSet func NewFeatureFlagSet(flags ...string) FeatureFlagSet ``` When parsing the `--features` flag, the entire `[]string` will be passed to `NewFeatureFlagSet`. Additional features can be added with the `Enable` function. Parsed features will be printed as an `Info` level message on server start up. No verification of features will be done in order to keep the implementation minimal. On the client side, `--features` and the `features` key in `config.json` file will be additive, resulting in the union of both. To disable a feature, the server must be stopped and restarted with a modified `--features` list. Similarly, the client process must be stopped and restarted without features. Omitted Omitted"
}
] | {
"category": "Runtime",
"file_name": "feature-flags.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List VTEP CIDR and their corresponding VTEP MAC/IP List VTEP CIDR and their corresponding VTEP MAC/IP. ``` cilium-dbg bpf vtep list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the VTEP mappings for IP/CIDR <-> VTEP MAC/IP"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_bpf_vtep_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "(storage-dir)= The directory storage driver is a basic backend that stores its data in a standard file and directory structure. This driver is quick to set up and allows inspecting the files directly on the disk, which can be convenient for testing. However, Incus operations are {ref}`not optimized <storage-drivers-features>` for this driver. The `dir` driver in Incus is fully functional and provides the same set of features as other drivers. However, it is much slower than all the other drivers because it must unpack images and do instant copies of instances, snapshots and images. Unless specified differently during creation (with the `source` configuration option), the data is stored in the `/var/lib/incus/storage-pools/` directory. (storage-dir-quotas)= <!-- Include start dir quotas --> The `dir` driver supports storage quotas when running on either ext4 or XFS with project quotas enabled at the file system level. <!-- Include end dir quotas --> The following configuration options are available for storage pools that use the `dir` driver and for storage volumes in these pools. Key | Type | Default | Description :-- | : | : | :- `rsync.bwlimit` | string | `0` (no limit) | The upper limit to be placed on the socket I/O when `rsync` must be used to transfer storage entities `rsync.compression` | bool | `true` | Whether to use compression while migrating storage pools `source` | string | - | Path to an existing directory {{volume_configuration}} Key | Type | Condition | Default | Description :-- | : | :-- | : | :- `security.shared` | bool | custom block volume | same as `volume.security.shared` or `false` | Enable sharing the volume across multiple instances `security.shifted` | bool | custom volume | same as `volume.security.shifted` or `false` | {{enableIDshifting}} `security.unmapped` | bool | custom volume | same as `volume.security.unmapped` or `false` | Disable ID mapping for the volume `size` | string | appropriate driver | same as `volume.size` | Size/quota of the storage volume `snapshots.expiry` | string | custom volume | same as `volume.snapshots.expiry` | {{snapshotexpiryformat}} `snapshots.pattern` | string | custom volume | same as `volume.snapshots.pattern` or `snap%d` | {{snapshotpatternformat}} [^*] `snapshots.schedule` | string | custom volume | same as `volume.snapshots.schedule` | {{snapshotscheduleformat}} To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the {config:option}`server-core:core.storagebucketsaddress` server setting. Storage buckets do not have any configuration for `dir` pools. Unlike the other storage pool drivers, the `dir` driver does not support bucket quotas via the `size` setting."
}
] | {
"category": "Runtime",
"file_name": "storage_dir.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "Events are messages about actions that have occurred over Incus. Using the API endpoint `/1.0/events` directly or via will connect to a WebSocket through which logs and life-cycle messages will be streamed. Incus Currently supports three event types. `logging`: Shows all logging messages regardless of the server logging level. `operation`: Shows all ongoing operations from creation to completion (including updates to their state and progress metadata). `lifecycle`: Shows an audit trail for specific actions occurring over Incus. ```yaml location: cluster_name metadata: action: network-updated requestor: protocol: unix username: root source: /1.0/networks/incusbr0 timestamp: \"2021-03-14T00:00:00Z\" type: lifecycle ``` `location`: The cluster member name (if clustered). `timestamp`: Time that the event occurred in RFC3339 format. `type`: The type of event this is (one of `logging`, `operation`, or `lifecycle`). `metadata`: Information about the specific event type. `message`: The log message. `level`: The log-level of the log. `context`: Additional information included in the event. `id`: The UUID of the operation. `class`: The type of operation (`task`, `token`, or `websocket`). `description`: A description of the operation. `created_at`: The operation's creation date. `updated_at`: The operation's date of last change. `status`: The current state of the operation. `status_code`: The operation status code. `resources`: Resources affected by this operation. `metadata`: Operation specific metadata. `may_cancel`: Whether the operation may be canceled. `err`: Error message of the operation. `location`: The cluster member name (if clustered). `action`: The life-cycle action that occurred. `requestor`: Information about who is making the request (if applicable). `source`: Path to what is being acted upon. `context`: Additional information included in the event. | Name | Description | Additional Information | | :- | :-- | : | | `certificate-created` | A new certificate has been added to the server trust store. | | | `certificate-deleted` | The certificate has been deleted from the trust store. | | | `certificate-updated` | The certificate's configuration has been updated. | | | `cluster-certificate-updated` | The certificate for the whole cluster has changed. | | | `cluster-disabled` | Clustering has been disabled for this machine. | | | `cluster-enabled` | Clustering has been enabled for this machine. | | | `cluster-group-created` | A new cluster group has been created. | | | `cluster-group-deleted` | A cluster group has been deleted. | | | `cluster-group-renamed` | A cluster group has been renamed. | | | `cluster-group-updated` | A cluster group has been updated. | | | `cluster-member-added` | A new machine has joined the cluster. | | | `cluster-member-removed` | The cluster member has been removed from the cluster. | | | `cluster-member-renamed` | The cluster member has been renamed. | `old_name`: the previous name. | | `cluster-member-updated` | The cluster member's configuration been edited. | | | `cluster-token-created` | A join token for adding a cluster member has been created. | | | `config-updated` | The server configuration has changed. | | | `image-alias-created` | An alias has been created for an existing image. | `target`: the original instance. | | `image-alias-deleted` | An alias has been deleted for an existing image. | `target`: the original instance. | | `image-alias-renamed` | The alias for an existing image has been renamed. | `old_name`: the previous name. 
| | `image-alias-updated` | The configuration for an image alias has changed. | `target`: the original instance. | | `image-created` | A new image has been added to the image store. | `type`: `container` or `vm`. | | `image-deleted` | The image has been deleted from the image"
},
{
"data": "| | | `image-refreshed` | The local image copy has updated to the current source image version. | | | `image-retrieved` | The raw image file has been downloaded from the server. | `target`: destination server. | | `image-secret-created` | A one-time key to fetch this image has been created. | | | `image-updated` | The image's configuration has changed. | | | `instance-backup-created` | A backup of the instance has been created. | | | `instance-backup-deleted` | The instance backup has been deleted. | | | `instance-backup-renamed` | The instance backup has been renamed. | `old_name`: the previous name. | | `instance-backup-retrieved` | The raw instance backup file has been downloaded. | | | `instance-console` | Connected to the console of the instance. | `type`: `console` or `vga`. | | `instance-console-reset` | The console buffer has been reset. | | | `instance-console-retrieved` | The console log has been downloaded. | | | `instance-created` | A new instance has been created. | | | `instance-deleted` | The instance has been deleted. | | | `instance-exec` | A command has been executed on the instance. | `command`: the command to be executed. | | `instance-file-deleted` | A file on the instance has been deleted. | `file`: path to the file. | | `instance-file-pushed` | The file has been pushed to the instance. | `file-source`: local file path. `file-destination`: destination file path. `info`: file information. | | `instance-file-retrieved` | The file has been downloaded from the instance. | `file-source`: instance file path. `file-destination`: destination file path. | | `instance-log-deleted` | The instance's specified log file has been deleted. | | | `instance-log-retrieved` | The instance's specified log file has been downloaded. | | | `instance-metadata-retrieved` | The instance's image metadata has been downloaded. | | | `instance-metadata-template-created` | A new image template file for the instance has been created. | `path`: relative file path. | | `instance-metadata-template-deleted` | The image template file for the instance has been deleted. | `path`: relative file path. | | `instance-metadata-template-retrieved` | The image template file for the instance has been downloaded. | `path`: relative file path. | | `instance-metadata-updated` | The instance's image metadata has changed. | | | `instance-paused` | The instance has been put in a paused state. | | | `instance-ready` | The instance is ready. | | | `instance-renamed` | The instance has been renamed. | `old_name`: the previous name. | | `instance-restarted` | The instance has restarted. | | | `instance-restored` | The instance has been restored from a snapshot. | `snapshot`: name of the snapshot being restored. | | `instance-resumed` | The instance has resumed after being paused. | | | `instance-shutdown` | The instance has shut down. | | | `instance-snapshot-created` | A snapshot of the instance has been created. | | | `instance-snapshot-deleted` | The instance snapshot has been deleted. | | | `instance-snapshot-renamed` | The instance snapshot has been renamed. | `old_name`: the previous name. | | `instance-snapshot-updated` | The instance snapshot's configuration has changed. | | | `instance-started` | The instance has started. | | | `instance-stopped` | The instance has stopped. | | | `instance-updated` | The instance's configuration has changed. | | | `network-acl-created` | A new network ACL has been created. | | | `network-acl-deleted` | The network ACL has been deleted. 
| | | `network-acl-renamed` | The network ACL has been renamed. | `old_name`: the previous"
},
{
"data": "| | `network-acl-updated` | The network ACL configuration has changed. | | | `network-created` | A network device has been created. | | | `network-deleted` | The network device has been deleted. | | | `network-forward-created` | A new network forward has been created. | | | `network-forward-deleted` | The network forward has been deleted. | | | `network-forward-updated` | The network forward has been updated. | | | `network-peer-created` | A new network peer has been created. | | | `network-peer-deleted` | The network peer has been deleted. | | | `network-peer-updated` | The network peer has been updated. | | | `network-renamed` | The network device has been renamed. | `old_name`: the previous name. | | `network-updated` | The network device's configuration has changed. | | | `network-zone-created` | A new network zone has been created. | | | `network-zone-deleted` | The network zone has been deleted. | | | `network-zone-record-created` | A new network zone record has been created. | | | `network-zone-record-deleted` | The network zone record has been deleted. | | | `network-zone-record-updated` | The network zone record has been updated. | | | `network-zone-updated` | The network zone has been updated. | | | `operation-cancelled` | The operation has been canceled. | | | `profile-created` | A new profile has been created. | | | `profile-deleted` | The profile has been deleted. | | | `profile-renamed` | The profile has been renamed . | `old_name`: the previous name. | | `profile-updated` | The profile's configuration has changed. | | | `project-created` | A new project has been created. | | | `project-deleted` | The project has been deleted. | | | `project-renamed` | The project has been renamed. | `old_name`: the previous name. | | `project-updated` | The project's configuration has changed. | | | `storage-pool-created` | A new storage pool has been created. | `target`: cluster member name. | | `storage-pool-deleted` | The storage pool has been deleted. | | | `storage-pool-updated` | The storage pool's configuration has changed. | `target`: cluster member name. | | `storage-volume-backup-created` | A new backup for the storage volume has been created. | `type`: `container`, `virtual-machine`, `image`, or `custom`. | | `storage-volume-backup-deleted` | The storage volume's backup has been deleted. | | | `storage-volume-backup-renamed` | The storage volume's backup has been renamed. | `old_name`: the previous name. | | `storage-volume-backup-retrieved` | The storage volume's backup has been downloaded. | | | `storage-volume-created` | A new storage volume has been created. | `type`: `container`, `virtual-machine`, `image`, or `custom`. | | `storage-volume-deleted` | The storage volume has been deleted. | | | `storage-volume-renamed` | The storage volume has been renamed. | `old_name`: the previous name. | | `storage-volume-restored` | The storage volume has been restored from a snapshot. | `snapshot`: name of the snapshot being restored. | | `storage-volume-snapshot-created` | A new storage volume snapshot has been created. | `type`: `container`, `virtual-machine`, `image`, or `custom`. | | `storage-volume-snapshot-deleted` | The storage volume's snapshot has been deleted. | | | `storage-volume-snapshot-renamed` | The storage volume's snapshot has been renamed. | `old_name`: the previous name. | | `storage-volume-snapshot-updated` | The configuration for the storage volume's snapshot has changed. | | | `storage-volume-updated` | The storage volume's configuration has changed. 
| | | `warning-acknowledged` | The warning's status has been set to \"acknowledged\". | | | `warning-deleted` | The warning has been deleted. | | | `warning-reset` | The warning's status has been set to \"new\". | |"
}
] | {
"category": "Runtime",
"file_name": "events.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "rkt is and accepts contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the file for details. The project has a mailing list and two discussion channels in IRC: Email: IRC: # on freenode.org, for general discussion IRC: # on freenode.org, for development discussion Please avoid emailing maintainers found in the MAINTAINERS file directly. They are very busy and read the mailing lists. Fork the repository on GitHub Read for build and for test instructions Play with the project, submit bugs, submit patches! This is a rough outline of what a contributor's workflow looks like: Create a topic branch from where you want to base your work (usually master). Make commits of logical units. Make sure your commit messages are in the proper format (see below). Push your changes to a topic branch in your fork of the repository. Make sure the pass, and add any new tests as appropriate. Submit a pull request to the original repository. Submit a comment with the sole content \"@reviewer PTAL\" (please take a look) in GitHub and replace \"@reviewer\" with the correct recipient. When addressing pull request review comments add new commits to the existing pull request or, if the added commits are about the same size as the previous commits, squash them into the existing commits. Once your PR is labelled as \"reviewed/lgtm\" squash the addressed commits in one commit. If your PR addresses multiple subsystems reorganize your PR and create multiple commits per subsystem. Your contribution is ready to be"
},
{
"data": "Thanks for your contributions! Go style in the rkt project essentially just means following the upstream conventions: - It's recommended to set a save hook in your editor of choice that runs `goimports` against your code. Project docs should follow the [Documentation style and formatting guide](https://github.com/coreos/docs/tree/master/STYLE.md). Thank you for documenting! We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ``` scripts: add the test-cluster command this uses tmux to setup a test cluster that you can easily kill and start for debugging. Fixes #38 ``` The format can be described more formally as follows: ``` <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. The pull request title and the first paragraph of the pull request description is being used to generate the changelog of the next release. The convention follows the same rules as for commit messages. The PR title reflects the what and the first paragraph of the PR description reflects the why. In most cases one can reuse the commit title as the PR title and the commit messages as the PR description for the PR. If your PR includes more commits spanning multiple subsystems one should change the PR title and the first paragraph of the PR description to reflect a summary of all changes involved. A large PR must be split into multiple commits, each with clear commit messages. Intermediate commits should compile and pass tests. Exceptions to non-compilable must have a valid reason, i.e. dependency bumps. Do not add entries in the changelog yourself. They will be overwritten when creating a new release."
}
] | {
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} |
[
{
"data": "Just as Kilo can connect a Kubernetes cluster to external services over WireGuard, it can connect multiple independent Kubernetes clusters. This enables clusters to provide services to other clusters over a secure connection. For example, a cluster on AWS with access to GPUs could run a machine learning service that could be consumed by workloads running in a another location, e.g. an on-prem cluster without GPUs. Unlike services exposed via Ingresses or NodePort Services, multi-cluster services can remain private and internal to the clusters. Note: in order for connected clusters to be fully routable, the allowed IPs that they declare must be non-overlapping, i.e. the Kilo, pod, and service CIDRs. Consider two clusters, `cluster1` with: kubeconfig: `KUBECONFIG1`; and service CIDR: `$SERVICECIDR1` and `cluster2` with: kubeconfig: `KUBECONFIG2` service CIDR: `$SERVICECIDR2`; and In order to give `cluster2` access to a service running on `cluster1`, start by peering the nodes: ```shell for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f - done for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f - done ``` Now, Pods on `cluster1` can ping, cURL, or otherwise make requests against Pods and Services in `cluster2` and vice-versa. At this point, Kilo has created a fully routable network between the two clusters. However, as it stands the external Services can only be accessed by using their clusterIPs directly. For example, a Pod in `cluster2` would need to use the URL `http://$CLUSTERIPFROMCLUSTER1` to make an HTTP request against a Service running in `cluster1`. In other words, the Services are not yet Kubernetes-native. We can easily change that by creating a Kubernetes Service in `cluster2` to mirror the Service in `cluster1`: ```shell cat <<EOF | kubectl --kubeconfig $KUBECONFIG2 apply -f - apiVersion: v1 kind: Service metadata: name: important-service spec: ports: port: 80 apiVersion: v1 kind: Endpoints metadata: name: important-service subsets: addresses: ip: $(kubectl --kubeconfig $KUBECONFIG1 get service important-service -o jsonpath='{.spec.clusterIP}') # The cluster IP of the important service on cluster1. ports: port: 80 EOF ``` Now, `important-service` can be used and discovered on `cluster2` just like any other Kubernetes Service. That means that a Pod in `cluster2` could directly use the Kubernetes DNS name for the Service when making HTTP requests, for example: `http://important-service.default.svc.cluster.local`. Notice that this mirroring is ad-hoc, requiring manual administration of each Service. This process can be fully automated using to discover and mirror Kubernetes Services between connected clusters."
}
] | {
"category": "Runtime",
"file_name": "multi-cluster-services.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This latest stable version of Longhorn 1.5 introduces several improvements and bug fixes that are intended to improve system quality, resilience, and stability. The Longhorn team appreciates your contributions and anticipates receiving feedback regarding this release. Note: For more information about release-related terminology, see . Ensure that your cluster is running Kubernetes v1.21 or later before installing Longhorn v1.5.4. You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see in the Longhorn documentation. Ensure that your cluster is running Kubernetes v1.21 or later before upgrading from Longhorn v1.4.x to v1.5.4. Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see in the Longhorn documentation. For information about important changes, including feature incompatibility, deprecation, and removal, see in the Longhorn documentation. For information about issues identified after this release, see . - @yangchiu @ejweber - @votdev @roger-ryao - @ChanYiLin @chriscchien - @nitendra-suse - @ChanYiLin @mantissahz - @ejweber @chriscchien - @mantissahz @james-munson - @mantissahz @chriscchien - @derekbit @roger-ryao - @ChanYiLin @chriscchien - @ChanYiLin @roger-ryao - @ChanYiLin @chriscchien - @PhanLe1010 @roger-ryao - @yangchiu @ChanYiLin - @mantissahz @chriscchien - @c3y1huang @chriscchien - @Vicente-Cheng @chriscchien - @yangchiu @ejweber - @c3y1huang @roger-ryao - @yangchiu @derekbit - @c3y1huang - @ChanYiLin - @c3y1huang - @ejweber @chriscchien - @mantissahz @roger-ryao - @yangchiu @c3y1huang - @shuo-wu @roger-ryao - @yangchiu @shuo-wu - @yangchiu @shuo-wu - @yangchiu @mantissahz @PhanLe1010 @c3y1huang - @ejweber @chriscchien - @ejweber @roger-ryao - @ejweber @chriscchien - @ejweber @chriscchien - @ejweber @chriscchien - @yangchiu - @ChanYiLin @chriscchien - @yangchiu @ejweber - @chriscchien @scures - @ChanYiLin - @mantissahz @chriscchien - @yangchiu @c3y1huang - @derekbit @roger-ryao - @c3y1huang @chriscchien @roger-ryao - @ChanYiLin @mantissahz - @c3y1huang @chriscchien - @c3y1huang @chriscchien - @mantissahz @c3y1huang - @mantissahz @roger-ryao - @PhanLe1010 @chriscchien - @m-ildefons @roger-ryao - @ejweber @roger-ryao - @james-munson @roger-ryao - @james-munson @chriscchien - @yangchiu @PhanLe1010 - @yangchiu @ChanYiLin - @ejweber @roger-ryao - @ChanYiLin @shuo-wu @roger-ryao - @yangchiu @c3y1huang - @ChanYiLin @roger-ryao - @james-munson @roger-ryao - @PhanLe1010 @chriscchien - @PhanLe1010 @roger-ryao - @derekbit @chriscchien - @mantissahz @PhanLe1010 - @ejweber @chriscchien - @scures @roger-ryao - @yangchiu @mantissahz - @roger-ryao - @m-ildefons @roger-ryao - @james-munson @roger-ryao - @ChanYiLin @roger-ryao - @m-ildefons - @mantissahz @chriscchien - @ejweber @roger-ryao - @c3y1huang @roger-ryao - @c3y1huang - @james-munson @ChanYiLin @PhanLe1010 @Vicente-Cheng @c3y1huang @chriscchien @derekbit @ejweber @innobead @james-munson @m-ildefons @mantissahz @nitendra-suse @roger-ryao @scures @shuo-wu @votdev @yangchiu"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG-1.5.4.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Note: If Kata Containers and / or containerd are packaged by your distribution, we recommend you install these versions to ensure they are updated when new releases are available. Warning: These instructions install the newest versions of Kata Containers and containerd from binary release packages. These versions may not have been tested with your distribution version. Since your package manager is not being used, it is your responsibility to ensure these packages are kept up-to-date when new versions are released. If you decide to proceed and install a Kata Containers release, you can still check for the latest version of Kata Containers by running `kata-runtime check --only-list-releases`. Note: If your distribution packages Kata Containers, we recommend you install that version. If it does not, or you wish to perform a manual installation, continue with the steps below. Download a release from: https://github.com/kata-containers/kata-containers/releases Note that Kata Containers uses so you should install a version that does not include a dash (\"-\"), since this indicates a pre-release version. Unpack the downloaded archive. Kata Containers packages use a `/opt/kata/` prefix so either add that to your `PATH`, or create symbolic links for the following commands. The advantage of using symbolic links is that the `systemd(1)` configuration file for containerd will not need to be modified to allow the daemon to find this binary (see the below). | Command | Description | |-|-| | `/opt/kata/bin/containerd-shim-kata-v2` | The main Kata 2.x binary | | `/opt/kata/bin/kata-collect-data.sh` | Data collection script used for | | `/opt/kata/bin/kata-runtime` | Utility command | Check installation by showing version details: ```bash $ kata-runtime --version ``` Note: If your distribution packages containerd, we recommend you install that version. If it does not, or you wish to perform a manual installation, continue with the steps below. Download a release from: https://github.com/containerd/containerd/releases Unpack the downloaded archive. Configure containerd Download the standard `systemd(1)` service file and install to `/etc/systemd/system/`: https://raw.githubusercontent.com/containerd/containerd/main/containerd.service > Notes: > > - You will need to reload the systemd configuration after installing this > file. > > - If you have not created a symbolic link for > `/opt/kata/bin/containerd-shim-kata-v2`, you will need to modify this > file to ensure the containerd daemon's `PATH` contains `/opt/kata/`. > See the `Environment=` command in `systemd.exec(5)` for further > details. Add the Kata Containers configuration to the containerd configuration file: ```toml [plugins] [plugins.\"io.containerd.grpc.v1.cri\"] [plugins.\"io.containerd.grpc.v1.cri\".containerd] defaultruntimename = \"kata\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.kata] runtime_type = \"io.containerd.kata.v2\" ``` > Note: > > The containerd daemon needs to be able to find the > `containerd-shim-kata-v2` binary to allow Kata Containers to be created. Start the containerd service. You are now ready to run Kata Containers. 
You can perform a simple test by running the following commands: ```bash $ image=\"docker.io/library/busybox:latest\" $ sudo ctr image pull \"$image\" $ sudo ctr run --runtime \"io.containerd.kata.v2\" --rm -t \"$image\" test-kata uname -r ``` The last command above shows details of the kernel version running inside the container, which will likely be different to the host kernel version."
}
] | {
"category": "Runtime",
"file_name": "containerd-install.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "name: Bug Report about: Report a bug encountered while operating JuiceFS labels: kind/bug <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! --> What happened: What you expected to happen: How to reproduce it (as minimally and precisely as possible): Anything else we need to know? Environment: JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: Cloud provider or hardware configuration running JuiceFS: OS (e.g `cat /etc/os-release`): Kernel (e.g. `uname -a`): Object storage (cloud provider and region, or self maintained): Metadata engine info (version, cloud provider managed or self maintained): Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage): Others:"
}
] | {
"category": "Runtime",
"file_name": "bug-report.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "As we know, we can interact with cgroups in two ways, `cgroupfs` and `systemd`. The former is achieved by reading and writing cgroup `tmpfs` files under `/sys/fs/cgroup` while the latter is done by configuring a transient unit by requesting systemd. Kata agent uses `cgroupfs` by default, unless you pass the parameter `--systemd-cgroup`. For systemd, kata agent configures cgroups according to the following `linux.cgroupsPath` format standard provided by `runc` (`[slice]:[prefix]:[name]`). If you don't provide a valid `linux.cgroupsPath`, kata agent will treat it as `\"system.slice:kata_agent:<container-id>\"`. Here slice is a systemd slice under which the container is placed. If empty, it defaults to system.slice, except when cgroup v2 is used and rootless container is created, in which case it defaults to user.slice. Note that slice can contain dashes to denote a sub-slice (e.g. user-1000.slice is a correct notation, meaning a `subslice` of user.slice), but it must not contain slashes (e.g. user.slice/user-1000.slice is invalid). A slice of `-` represents a root slice. Next, prefix and name are used to compose the unit name, which is `<prefix>-<name>.scope`, unless name has `.slice` suffix, in which case prefix is ignored and the name is used as is. The kata agent will translate the parameters in the `linux.resources` of `config.json` into systemd unit properties, and send it to systemd for configuration. Since systemd supports limited properties, only the following parameters in `linux.resources` will be applied. We will simply treat hybrid mode as legacy mode by the way. CPU v1 | runtime spec resource | systemd property name | | | | | `cpu.shares` | `CPUShares` | v2 | runtime spec resource | systemd property name | | -- | -- | | `cpu.shares` | `CPUShares` | | `cpu.period` | `CPUQuotaPeriodUSec`(v242) | | `cpu.period` & `cpu.quota` | `CPUQuotaPerSecUSec` | MEMORY v1 | runtime spec resource | systemd property name | | | | | `memory.limit` | `MemoryLimit` | v2 | runtime spec resource | systemd property name | | | | | `memory.low` | `MemoryLow` | | `memory.max` | `MemoryMax` | | `memory.swap` & `memory.limit` | `MemorySwapMax` | PIDS | runtime spec resource | systemd property name | | | | | `pids.limit ` | `TasksMax` | CPUSET | runtime spec resource | systemd property name | | | -- | | `cpuset.cpus` | `AllowedCPUs`(v244) | | `cpuset.mems` | `AllowedMemoryNodes`(v244) | `session.rs` and `system.rs` in `src/agent/rustjail/src/cgroups/systemd/interface` are automatically generated by `zbus-xmlgen`, which is is an accompanying tool provided by `zbus` to generate Rust code from `D-Bus XML interface descriptions`. The specific commands to generate these two files are as follows: ```shell // system.rs zbus-xmlgen --system org.freedesktop.systemd1 /org/freedesktop/systemd1 // session.rs zbus-xmlgen --session org.freedesktop.systemd1 /org/freedesktop/systemd1 ``` The current implementation of `cgroups/systemd` uses `system.rs` while `session.rs` could be used to build rootless containers in the future."
}
] | {
"category": "Runtime",
"file_name": "agent-systemd-cgroup.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "(howto-cluster-groups)= Cluster members can be assigned to {ref}`cluster-groups`. By default, all cluster members belong to the `default` group. To create a cluster group, use the command. For example: incus cluster group create gpu To assign a cluster member to one or more groups, use the command. This command removes the specified cluster member from all the cluster groups it currently is a member of and then adds it to the specified group or groups. For example, to assign `server1` to only the `gpu` group, use the following command: incus cluster group assign server1 gpu To assign `server1` to the `gpu` group and also keep it in the `default` group, use the following command: incus cluster group assign server1 default,gpu To add a cluster member to a specific group without removing it from other groups, use the command. For example, to add `server1` to the `gpu` group and also keep it in the `default` group, use the following command: incus cluster group add server1 gpu With cluster groups, you can target an instance to run on one of the members of the cluster group, instead of targeting it to run on a specific member. ```{note} {config:option}`cluster-cluster:scheduler.instance` must be set to either `all` (the default) or `group` to allow instances to be targeted to a cluster group. See {ref}`clustering-instance-placement` for more information. ``` To launch an instance on a member of a cluster group, follow the instructions in {ref}`cluster-target-instance`, but use the group name prefixed with `@` for the `--target` flag. For example: incus launch images:ubuntu/22.04 c1 --target=@gpu"
}
] | {
"category": "Runtime",
"file_name": "cluster_groups.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: Managing Services - Exporting, Importing, Binding and Routing menu_order: 30 search_type: Documentation This section contains the following topics: * * Services running in containers on a Weave network can be made accessible to the outside world (and, more generally, to other networks) from any Weave Net host, irrespective of where the service containers are located. Returning to the , you can expose the netcat service running on `HOST1` and make it accessible to the outside world via `$HOST2`. First, expose the application network to `$HOST2`, as explained in : host2$ weave expose 10.2.1.132 Then add a NAT rule that routes the traffic from the outside world to the destination container service. host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 2211 \\ -j DNAT --to-destination $(weave dns-lookup a1):4422 In this example, it is assumed that the \"outside world\" is connecting to `$HOST2` via 'eth0'. The TCP traffic to port 2211 on the external IPs will be routed to the 'nc' service running on port 4422 in the container a1. With the above in place, you can connect to the 'nc' service from anywhere using: echo 'Hello, world.' | nc $HOST2 2211 Note: Due to the way routing is handled in the Linux kernel, this won't work when run on `$HOST2`. Similar NAT rules to the above can be used to expose services not just to the outside world but also to other, internal, networks. Applications running in containers on a Weave network can be given access to services, which are only reachable from certain Weave hosts, regardless of where the actual application containers are located. Expanding on the , you now decide to add a third, non-containerized, netcat service. This additional netcat service runs on `$HOST3`, and listens on port 2211, but it is not on the Weave network. An additional caveat is that `$HOST3` can only be reached from `$HOST1`, which is not accessible via `$HOST2`. Nonetheless, you still need to make the `$HOST3` service available to an application that is running in a container on `$HOST2`. To satisfy this scenario, first by running the following on `$HOST1`: host1$ weave expose -h host1.weave.local 10.2.1.3 Then add a NAT rule, which routes from the above IP to the destination"
},
{
"data": "host1$ iptables -t nat -A PREROUTING -p tcp -d 10.2.1.3 --dport 3322 \\ -j DNAT --to-destination $HOST3:2211 This allows any application container to reach the service by connecting to 10.2.1.3:3322. So if `$HOST3` is running a netcat service on port 2211: host3$ nc -lk -p 2211 You can now connect to it from the application container running on `$HOST2` using: root@a2:/# echo 'Hello, world.' | nc host1 3322 Note that you should be able to run this command from any application container. Importing a service provides a degree of indirection that allows late and dynamic binding, similar to what can be achieved with a proxy. Referring back to the , the application containers are completely unaware that the service they are accessing at `10.2.1.3:3322` actually resides on `$HOST3:2211`. You can point application containers to another service location by changing the above NAT rule, without altering the applications. You can combine the service export and service import features to establish connectivity between applications and services residing on disjointed networks, even if those networks are separated by firewalls and have overlapping IP ranges. Each network imports its services into Weave Net, while at the same time, exports from Weave Net any services that are required by its applications. In this scenario, there are no application containers (although, there could be). Weave Net is acting as an address translation and routing facility, and uses the Weave container network as an intermediary. Expanding on the , you can also import an additional netcat service running on `$HOST3` into Weave Net via `$HOST1`. Begin importing the service onto `$HOST2` by first exposing the application network: host2$ weave expose 10.2.1.3 Then add a NAT rule which routes traffic from the `$HOST2` network (for example, anything that can connect to `$HOST2`) to the service endpoint on the Weave network: host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 4433 \\ -j DNAT --to-destination 10.2.1.3:3322 Now any host on the same network as `$HOST2` is able to access the service: echo 'Hello, world.' | nc $HOST2 4433 Furthermore, as explained in Binding Services, service locations can be dynamically altered without having to change any of the applications that access them. For example, you can move the netcat service to `$HOST4:2211` and it will retain its 10.2.1.3:3322 endpoint on the Weave network. See Also *"
}
] | {
"category": "Runtime",
"file_name": "service-management.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Longhorn can reclaim disk space by allowing the filesystem to trim/unmap the unused blocks occupied by removed files. https://github.com/longhorn/longhorn/issues/836 Longhorn volumes support the operation `unmap`, which is actually the filesystem trim. Since some unused blocks are in the snapshots, these blocks can be freed if the snapshots is no longer required. Longhorn tgt should support module `UNMAP`. When the filesystem of a Longhorn volume receives cmd `fstrim`, the iSCSI initiator actually sends this `UNMAP` requests to the target. To understand the iscsi protocol message of `UNMAP` then start the implementation, we can refer to Section \"3.54 UNMAP command\" of . By design, snapshots of a Longhorn volume are immutable and lots of the blocks of removed files may be in the snapshots. This implicitly means we have to skip these blocks and free blocks in the current volume head only if we do nothing else. It will greatly degrade the effectiveness of this feature. To release as much space as possible, we can do unmap for all continuous unavailable (removed or system) snapshots behinds the volume head, which is similar to the snapshot prune operation. Longhorn volumes won't mark snapshots as removed hence most of the time there is no continuous unavailable snapshots during the trim. To make it more practicable, we introduce a new global setting for all volumes. It automatically marks the latest snapshot and its ancestors as removed and stops at the snapshot containing multiple children. Besides, there is a per-volume option that can overwrite the global setting and directly indicate if this automatic removal is enabled. By default, it will be ignored and the volumes follow the global setting. Before the enhancement, there is no way to reclaim the space. To shrink the volume, users have to launch a new volume with a new filesystem, and copy the existing files from the old volume filesystem to the new one, then switch to use the new volume. After the enhancement, users can directly reclaim the space by trimming the filesystem via cmd `fstrim` or Longhorn UI. Besides, users can enable the new option so that Longhorn can automatically mark the snapshot chain as removed then trim the blocks recorded in the snapshots. Users can enable the option for a specific volume by modifying the volume option `volume.Spec.UnmapMarkSnapChainRevmoed`, or directly set the global setting `Remove Snapshots During Filesystem Trim`. For an existing Longhorn volume that contains a filesystem and there are files removed from the filesystem, users can directly run cmd `fstrim <filesystem mount point>` or Click Longhorn UI button `Trim"
},
{
"data": "Users will observe that the snapshot chain are marked as removed. And both these snapshots and the volume head will be shrunk. Volume APIs: Add `updateUnmapMarkSnapChainRemoved`: Control if Longhorn will remove snapshots during the filesystem trim, or just follows the global setting. Add `trimFilesystem`: Trim the filesystem of the volume. Best Effort. Engine APIs: Add `unmap-mark-snap-chain-removed`: `--enable` or `disable`. Control if the engine and all its replicas will mark the snapshot chain as removed once receiving a `UNMAP` request. Add a setting `Remove Snapshots During Filesystem Trim`. Add fields for CRDs: `volume.Spec.UnmapMarkSnapChainRemoved`, `engine.Spec.UnmapMarkSnapChainRemoved`, `replica.Spec.UnmapMarkDiskChainRemoved`. Add 2 HTTP APIs mentioned above: `updateUnmapMarkSnapChainRemoved` and `trimFilesystem`. Update controllers . `Volume Controller`: Update the engine and replica field based on `volume.Spec.UnmapMarkSnapChainRemoved` and the global setting. Enqueue the change for the field and the global setting. `Engine Controller`: The monitor thread should compare the CR field `engine.Spec.UnmapMarkSnapChainRemoved` with the current option value inside the engine process, then call the engine API `unmap-mark-snap-chain-removed` if there is a mismatching. The process creation function should specify the option `unmap-mark-snap-chain-removed`. `Replica Controller`: The process creation function should specify the option `unmap-mark-disk-chain-removed`. Update dependency `rancher/tgt`, `longhorn/longhornlib`, and `longhorn/sparse-tools` for the operation `UNMAP` support. Add new option `unmap-mark-snap-chain-removed` for the engine process creation call. Add new option `unmap-mark-disk-chain-removed` for the replica process creation call. Add a new API `unmap-mark-snap-chain-removed` to update the field for the engine and all its replicas. The engine process should be able to recognize the request of `UNMAP` from the tgt, then forwards the requests to all its replicas via the dataconn service. This is similar to data R/W. When each replica receive a trim/unmap request, it should decide if the snapshot chain can be marked as removed, then collect all trimmable snapshots, punch holes to these snapshots and the volume head, then calculate the trimmed space. Update the dependencies. Add the corresponding proxy API for the new engine API. Add 2 new operations for Longhorn volume. API `updateUnmapMarkSnapChainRemoved`: The backend accepts 3 values of the input `UnmapMarkSnapChainRemoved`: `\"enabled\"`, `\"disabled\"`, `\"ignored\"`. The UI can rename this option to `Remove Current Snapshot Chain during Filesystem Trim`, and value `\"ignored\"` to `follow the global setting`. API `trimFilesystem`: No input is required. The volume creation call accepts a new option `UnmapMarkSnapChainRemoved`. This is almost the same as the above update API. Test if the unused blocks in the volume head and the snapshots can be trimmed correctly without corrupting other files, and if the snapshot removal mechanism works when the option is enabled or disabled. Test if the filesystem trim works correctly when there are continuous writes into the volume. N/A"
}
] | {
"category": "Runtime",
"file_name": "20221103-filesystem-trim.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Managing recurring jobs for Longhorn Volumes is challenging for users utilizing gitops. Primary because gitops operates at the Kubernetes resource level while recurring job labeling is specific to individual Longhorn Volumes. This document proposes the implementation of a solution that allows configuring recurring jobs directly on PVCs. By adopting this approach, users will have the capability to manage Volume recurring jobs through the PVCs. https://github.com/longhorn/longhorn/issues/5791 Introduce support for enabling/disabling PVCs as a recurring job label source for the corresponding Volume. The recurring job labels on PVCs are reflected on the associated Volume when the PVC is set as the recurring job label source. Sync Volume recurring job labels to PVC. The existing behavior of recurring jobs will remain unchanged, with the Volume's recurring job labeling as the source of truth. When the PVC is enabled as the recurring job label source, its recurring job labels will override all recurring job labels of the associated Volume. As a user, I want to be able to set the RecurringJob label on the PVC. I expect that any updates made to the RecurringJob labels on the PVC will automatically reflect on the associated Volume. To enable or disable the PVC as the recurring job label source, users can manage it by adding or removing the `recurring-job.longhorn.io/source: enable` label to the PVC. Once the PVC is set as the recurring job label source, any recurring job labels added or removed from the PVC will be automatically synchronized by Longhorn to the associated Volume. `None` If the PVC is labeled with `recurring-job.longhorn.io/source: enable`, the volume controller checks and updates the Volume to ensure the recurring job labels stay synchronized with the PVC by detecting recurring job label differences. As of now, Longhorn includes a feature that automatically removes the Volume recurring job label associated with a deleting RecurringJob. This is also applicable to the PVC. Update PVC recurring job label should reflect on the Volume. Delete RecurringJob custom resource should delete the recurring job labels on both Volume and PVC. `None` `None`"
}
] | {
"category": "Runtime",
"file_name": "20230517-set-recurring-job-to-pvc.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Run cilium-operator-generic ``` cilium-operator-generic [flags] ``` ``` --auto-create-cilium-pod-ip-pools map Automatically create CiliumPodIPPool resources on startup. Specify pools in the form of <pool>=ipv4-cidrs:<cidr>,[<cidr>...];ipv4-mask-size:<size> (multiple pools can also be passed by repeating the CLI flag) --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cilium-endpoint-gc-interval duration GC interval for cilium endpoints (default 5m0s) --cilium-pod-labels string Cilium Pod's labels. Used to detect if a Cilium pod is running to remove the node taints where its running and set NetworkUnavailable to false (default \"k8s-app=cilium\") --cilium-pod-namespace string Name of the Kubernetes namespace in which Cilium is deployed in. Defaults to the same namespace defined in k8s-namespace --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --cluster-pool-ipv4-cidr strings IPv4 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' --cluster-pool-ipv4-mask-size int Mask size for each IPv4 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' (default 24) --cluster-pool-ipv6-cidr strings IPv6 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' --cluster-pool-ipv6-mask-size int Mask size for each IPv6 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' (default 112) --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. 
Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --cnp-status-cleanup-burst int Maximum burst of requests to clean up status nodes updates in CNPs (default 20) --cnp-status-cleanup-qps float Rate used for limiting the clean up of the status nodes updates in CNP, expressed as qps (default 10) --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. -D, --debug Enable debugging mode --enable-cilium-endpoint-slice If set to true, the CiliumEndpointSlice feature is"
},
{
"data": "If any CiliumEndpoints resources are created, updated, or deleted in the cluster, all those changes are broadcast as CiliumEndpointSlice updates to all of the Cilium agents. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-metrics Enable Prometheus metrics --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for cilium-operator-generic --identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. 
--ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for"
},
{
"data": "(default \"cilium-ingress\") --instance-tags-filter map EC2 Instance tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --ipam string Backend to use for IPAM (default \"cluster-pool\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium Operator is deployed in --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kvstore string Key-value store type --kvstore-opt map Key-value store options e.g. etcd.address=127.0.0.1:4001 --leader-election-lease-duration duration Duration that non-leader operator candidates will wait before forcing to acquire leadership (default 15s) --leader-election-renew-deadline duration Duration that current acting master will retry refreshing leadership in before giving up the lock (default 10s) --leader-election-retry-period duration Duration that LeaderElector clients should wait between retries of the actions (default 2s) --limit-ipam-api-burst int Upper burst limit when accessing external APIs (default 20) --limit-ipam-api-qps float Queries per second limit when accessing external IPAM APIs (default 4) --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-operator, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local4\"} --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. 
(default 10s) --nodes-gc-interval duration GC interval for CiliumNodes (default 5m0s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --parallel-alloc-workers int Maximum number of parallel IPAM workers (default 50) --pod-restart-selector string cilium-operator will delete/restart any pods with these labels if the pod is not managed by Cilium. If this option is empty, then all pods may be restarted (default \"k8s-app=kube-dns\") --remove-cilium-node-taints Remove node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes once Cilium is up and running (default true) --set-cilium-is-up-condition Set CiliumIsUp Node condition to mark a Kubernetes Node that a Cilium pod is up and running in that node (default true) --set-cilium-node-taints Set node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes if Cilium is scheduled but not up and running --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created --subnet-ids-filter strings Subnets IDs (separated by commas) --subnet-tags-filter map Subnets tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --synchronize-k8s-nodes Synchronize Kubernetes nodes to kvstore and perform CNP GC (default true) --synchronize-k8s-services Synchronize Kubernetes services to kvstore (default true) --unmanaged-pod-watcher-interval int Interval to check for unmanaged kube-dns pods (0 to disable) (default 15) --version Print version information ``` - Generate the autocompletion script for the specified shell - Inspect"
}
] | {
"category": "Runtime",
"file_name": "cilium-operator-generic.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: \"Documentation Style Guide\" layout: docs This style guide is adapted from the . This page outlines writing style guidelines for the Velero documentation and you should use this page as a reference you write or edit content. Note that these are guidelines, not rules. Use your best judgment as you write documentation, and feel free to propose changes to these guidelines. Changes to the style guide are made by the Velero maintainers as a group. To propose a change or addition create an issue/PR, or add a suggestion to the and attend the meeting to participate in the discussion. The Velero documentation uses the Markdown renderer. {{< table caption=\"Do and Don't - Use present tense\" >}} |Do|Don't| | | | |This `command` starts a proxy.|This command will start a proxy.| {{< /table >}} Exception: Use future or past tense if it is required to convey the correct meaning. {{< table caption=\"Do and Don't - Use active voice\" >}} |Do|Don't| | | | |You can explore the API using a browser.|The API can be explored using a browser.| |The YAML file specifies the replica count.|The replica count is specified in the YAML file.| {{< /table >}} Exception: Use passive voice if active voice leads to an awkward sentence construction. Use simple and direct language. Avoid using unnecessary phrases, such as saying \"please.\" {{< table caption=\"Do and Don't - Use simple and direct language\" >}} |Do|Don't| | | | |To create a ReplicaSet, ...|In order to create a ReplicaSet, ...| |See the configuration file.|Please see the configuration file.| |View the Pods.|With this next command, we'll view the Pods.| {{< /table >}} {{< table caption=\"Do and Don't - Addressing the reader\" >}} |Do|Don't| | | | |You can create a Deployment by ...|We'll create a Deployment by ...| |In the preceding output, you can see...|In the preceding output, we can see ...| {{< /table >}} Prefer English terms over Latin abbreviations. {{< table caption=\"Do and Don't - Avoid Latin phrases\" >}} |Do|Don't| | | | |For example, ...|e.g., ...| |That is, ...|i.e., ...| {{< /table >}} Exception: Use \"etc.\" for et cetera. Using \"we\" in a sentence can be confusing, because the reader might not know whether they're part of the \"we\" you're describing. {{< table caption=\"Do and Don't - Avoid using we\" >}} |Do|Don't| | | | |Version 1.4 includes ...|In version 1.4, we have added ...| |Kubernetes provides a new feature for ...|We provide a new feature ...| |This page teaches you how to use Pods.|In this page, we are going to learn about Pods.| {{< /table >}} Many readers speak English as a second language. Avoid jargon and idioms to help them understand better. {{< table caption=\"Do and Don't - Avoid jargon and idioms\" >}} |Do|Don't| | | | |Internally, ...|Under the hood, ...| |Create a new cluster.|Turn up a new cluster.| {{< /table >}} Avoid making promises or giving hints about the future. If you need to talk about a beta feature, put the text under a heading that identifies it as beta"
},
{
"data": "Also avoid words like recently, \"currently\" and \"new.\" A feature that is new today might not be considered new in a few months. {{< table caption=\"Do and Don't - Avoid statements that will soon be out of date\" >}} |Do|Don't| | | | |In version 1.4, ...|In the current version, ...| |The Federation feature provides ...|The new Federation feature provides ...| {{< /table >}} This documentation uses U.S. English spelling and grammar. When you refer to an API object, use the same uppercase and lowercase letters that are used in the actual object name. Typically, the names of API objects use . Don't split the API object name into separate words. For example, use PodTemplateList, not Pod Template List. Refer to API objects without saying \"object,\" unless omitting \"object\" leads to an awkward sentence construction. {{< table caption=\"Do and Don't - Do and Don't - API objects\" >}} |Do|Don't| | | | |The Pod has two containers.|The pod has two containers.| |The Deployment is responsible for ...|The Deployment object is responsible for ...| |A PodList is a list of Pods.|A Pod List is a list of pods.| |The two ContainerPorts ...|The two ContainerPort objects ...| |The two ContainerStateTerminated objects ...|The two ContainerStateTerminateds ...| {{< /table >}} Use angle brackets for placeholders. Tell the reader what a placeholder represents. Display information about a Pod: kubectl describe pod <pod-name> -n <namespace> If the pod is in the default namespace, you can omit the '-n' parameter. {{< table caption=\"Do and Don't - Bold interface elements\" >}} |Do|Don't| | | | |Click Fork.|Click \"Fork\".| |Select Other.|Select \"Other\".| {{< /table >}} {{< table caption=\"Do and Don't - Use italics for new terms\" >}} |Do|Don't| | | | |A cluster is a set of nodes ...|A \"cluster\" is a set of nodes ...| |These components form the control plane.|These components form the control plane.| {{< /table >}} {{< table caption=\"Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces\" >}} |Do|Don't| | | | |Open the `envars.yaml` file.|Open the envars.yaml file.| |Go to the `/docs/tutorials` directory.|Go to the /docs/tutorials directory.| |Open the `/data/concepts.yaml` file.|Open the /\\data/concepts.yaml file.| {{< /table >}} {{< table caption=\"Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces\" >}} |Do|Don't| | | | |events are recorded with an associated \"stage.\"|events are recorded with an associated \"stage\".| |The copy is called a \"fork.\"|The copy is called a \"fork\".| {{< /table >}} Exception: When the quoted word is a user input. Example: My user ID is IM47g. Did you try the password mycatisawesome? For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``). {{< table caption=\"Do and Don't - Use code style for filenames, directories, paths, object field names and namespaces\" >}} |Do|Don't| | | | |The `kubectl run` command creates a Deployment.|The \"kubectl run\" command creates a Deployment.| |For declarative management, use `kubectl apply`.|For declarative management, use \"kubectl apply\".| |Use single backticks to enclose inline"
},
{
"data": "For example, `var example = true`.|Use two asterisks (``) or an underscore (`_`) to enclose inline code. For example, var example = true.| |Use triple backticks (\\`\\`\\`) before and after a multi-line block of code for fenced code blocks.|Use multi-line blocks of code to create diagrams, flowcharts, or other illustrations.| |Use meaningful variable names that have a context.|Use variable names such as 'foo','bar', and 'baz' that are not meaningful and lack context.| |Remove trailing spaces in the code.|Add trailing spaces in the code, where these are important, because a screen reader will read out the spaces as well.| {{< /table >}} {{< table caption=\"Do and Don't - Starting a sentence with a component tool or component name\" >}} |Do|Don't| | | | |The `kubeadm` tool bootstraps and provisions machines in a cluster.|`kubeadm` tool bootstraps and provisions machines in a cluster.| |The kube-scheduler is the default scheduler for Kubernetes.|kube-scheduler is the default scheduler for Kubernetes.| {{< /table >}} For field values of type string or integer, use normal style without quotation marks. {{< table caption=\"Do and Don't - Use normal style for string and integer field values\" >}} |Do|Don't| | | | |Set the value of `imagePullPolicy` to `Always`.|Set the value of `imagePullPolicy` to \"Always\".| |Set the value of `image` to `nginx:1.16`.|Set the value of `image` to nginx:1.16.| |Set the value of the `replicas` field to `2`.|Set the value of the `replicas` field to 2.| {{< /table >}} {{< table caption=\"Do and Don't - Don't include the command prompt\" >}} |Do|Don't| | | | |kubectl get pods|$ kubectl get pods| {{< /table >}} Verify that the Pod is running on your chosen node: ``` kubectl get pods --output=wide ``` The output is similar to this: ``` NAME READY STATUS RESTARTS AGE IP NODE nginx 1/1 Running 0 13s 10.200.0.4 worker0 ``` A list of Velero-specific terms and words to be used consistently across the site. {{< table caption=\"Velero.io word list\" >}} |Trem|Usage| | | | |Kubernetes|Kubernetes should always be capitalized.| |Docker|Docker should always be capitalized.| |Velero|Velero should always be capitalized.| |VMware|VMware should always be correctly capitalized.| |On-premises|On-premises or on-prem rather than on-premise or other variations.| |Backup|Backup rather than back up, back-up or other variations.| |Plugin|Plugin rather than plug-in or other variations.| |Allowlist|Use allowlist instead of whitelist.| |Denylist|Use denylist instead of blacklist.| {{< /table >}} People accessing this documentation may use a screen reader or other assistive technology (AT). are linear output devices, they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest. {{< table caption=\"Do and Don't - Headings\" >}} |Do|Don't| | | | |Include a title on each page or blog post.|Include more than one title headings (#) in a page.| |Use ordered headings to provide a meaningful high-level outline of your content.|Use headings level 4 through 6, unless it is absolutely"
},
{
"data": "If your content is that detailed, it may need to be broken into separate articles.| |Use sentence case for headings. For example, Extend kubectl with plugins|Use title case for headings. For example, Extend Kubectl With Plugins| {{< /table >}} {{< table caption=\"Do and Don't - Paragraphs\" >}} |Do|Don't| | | | |Try to keep paragraphs under 6 sentences.|Write long-winded paragraphs.| |Use three hyphens (``) to create a horizontal rule for breaks in paragraph content.|Use horizontal rules for decoration.| {{< /table >}} {{< table caption=\"Do and Don't - Links\" >}} |Do|Don't| | | | |Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See for more details.|Use ambiguous terms such as click here. For example: Certain ports are open on your machines. See for more details.| |Write Markdown-style links: ``. For example: `` and the output is .|Write HTML-style links: `Visit our tutorial!`| {{< /table >}} Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a listwhether it is an ordered or unordered listit will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list. Website navigation links can also be marked up as list items; after all they are nothing but a group of related links. End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences. Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence. Use the number one (`1.`) for ordered lists. Use (`+`), (`*`), or (`-`) for unordered lists - be consistent within the same document. Leave a blank line after each list. Indent nested lists with four spaces (for example, ). List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab. The semantic purpose of a data table is to present tabular data. Sighted users can quickly scan the table but a screen reader goes through line by line. A table is used to create a descriptive title for a data table. Assistive technologies (AT) use the HTML table caption element to identify the table contents to the user within the page structure. If you need to create a table, create the table in markdown and use the table to include a caption. ``` {{</* table caption=\"Configuration parameters\" >}} Parameter | Description | Default :|:|:- `timeout` | The timeout for requests | `30s` `logLevel` | The log level for log output | `INFO` {{< /table */>}} ``` Note: This shortcode does not support markdown reference-style links. Use inline-style links in tables. See more information about ."
}
] | {
"category": "Runtime",
"file_name": "style-guide.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "title: IP Addresses, Routes and Networks menu_order: 40 search_type: Documentation Weave Net runs containers on a private network, which means that IP addresses are isolated from the rest of the Internet, and that you don't have to worry about addresses clashing. You can of course also manually change the IP of any given container or subnet on a Weave network. See, IP is the Internet Protocol, the fundamental basis of network communication between billions of connected devices. The IP address is (for most purposes) the four numbers separated by dots, like `192.168.48.12`. Each number is one byte in size, so can be between 0 and 255. Each IP address lives on a Network, which is some set of those addresses that all know how talk to each other. The network address is some prefix of the IP address, like `192.168.48`. To show which part of the address is the network, we append a slash and then the number of bits in the network prefix, like `/24`. A route is an instruction for how to deal with traffic destined for somewhere else - it specifies a Network, and a way to talk to that network. Every device using IP has a table of routes, so for any destination address it looks up that table, finds the right route, and sends it in the direction indicated. In the IP address `10.4.2.6/8`, the network prefix is the first 8 bits \\- `10`. Written out in full, that network is `10.0.0.0/8`. The most common prefix lengths are 8, 16 and 24, but there is nothing stopping you using a /9 network or a /26. For example, `6.250.3.1/9` is on the `6.128.0.0/9` network. Several websites offer calculators to decode this kind of address, see, for example: . The following is an example route table for a container that is attached to a Weave network: default via 172.17.42.1 dev eth0 10.2.2.0/24 dev ethwe proto kernel scope link src 10.2.2.1 172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.170 It has two interfaces: one that Docker gave it called `eth0`, and one that weave gave it called `ethwe`. They are on networks `172.17.0.0/16` and `10.2.2.0/24` respectively, and if you want to talk to any other address on those networks then the routing table tells it to send directly down that interface. If you want to talk to anything else not matching those rules, the default rule says to send it to `172.17.42.1` down the eth0 interface. So, suppose this container wants to talk to another container at address `10.2.2.9`; it will send down the ethwe interface and weave Net will take care of routing the traffic. To talk an external server at address `74.125.133.128`, it looks in its routing table, doesn't find a match, so uses the default rule. See Also"
}
] | {
"category": "Runtime",
"file_name": "ip-addresses.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "As a sandbox project hosted by CNCF, the WasmEdge Runtime follows the . Monitor email aliases. Monitor Slack (delayed response is perfectly acceptable). Triage GitHub issues and perform pull request reviews for other maintainers and the community. The areas of specialization listed in OWNERS.md can be used to help with routing an issue/question to the right person. Triage build issues - file issues for known flaky builds or bugs, and either fix or find someone to fix any main build breakages. During GitHub issue triage, apply all applicable to each new issue. Labels are extremely useful for future issue to follow-up. Which labels to apply is somewhat subjective so just use your best judgment. A few of the most important labels that are not self-explanatory are: good first issue: Mark any issue that can reasonably be accomplished by a new contributor with this label. help wanted: Unless it is immediately obvious that someone is going to work on an issue (and if so assign it), mark it help wanted. question: If it's unclear if an issue is immediately actionable, mark it with the question label. Questions are easy to search for and close out at a later time. Questions can be promoted to other issue types once it's clear they are actionable (at which point the question label should be removed). Make sure that ongoing PRs are moving forward at the right pace or closing them. Participate when called upon in the security release process. Note that although this should be a rare occurrence, if a serious vulnerability is found, the process may take up to several full days of work to implement. This reality should be taken into account when discussing time commitment obligations with employers. A reviewer is a core maintainer within the project. They share in reviewing issues and pull requests and their LGTM counts towards the required LGTM count to merge a code change into the project. Reviewers are part of the organization but do not have write access. Becoming a reviewer is a core aspect in the journey to becoming a committer. A committer is a core maintainer who is responsible for the overall quality and stewardship of the project. They share the same reviewing responsibilities as reviewers, but are also responsible for upholding the project bylaws as well as participating in project level votes. Committers are part of the organization with write access to all repositories. Committers are expected to remain actively involved in the project and participate in voting and discussing proposed project-level changes. Maintainers are first and foremost contributors that have shown they are committed to the long term success of a"
},
{
"data": "Contributors wanting to become maintainers are expected to be deeply involved in contributing code, pull request review, and triage of issues in the project for more than three months. Just contributing does not make you a maintainer, it is about building trust with the current maintainers of the project and being a person that they can depend on and trust to make decisions in the best interest of the project. Periodically, the existing maintainers curate a list of contributors that have shown regular activity on the project over the prior months. From this list, maintainer candidates are selected and proposed in the maintainers forum. After a candidate has been informally proposed in the maintainers forum, the existing maintainers are given seven days to discuss the candidate, raise objections and show their support. Formal voting takes place on a pull request that adds the contributor to the MAINTAINERS file. Candidates must be approved by 2/3 of the current committers by adding their approval or LGTM to the pull request. The reviewer role has the same process but only requires 1/3 of current committers. If a candidate is approved, they will be invited to add their own LGTM or approval to the pull request to acknowledge their agreement. A committer will verify the numbers of votes that have been received and the allotted seven days have passed, then merge the pull request and invite the contributor to the organization. If a maintainer is no longer interested or cannot perform the maintainer duties listed above, they should volunteer to be moved to emeritus status. In extreme cases this can also occur by a vote of the maintainers per the voting process below. In general, we prefer that technical issues and maintainer membership are amicably worked out between the persons involved. If a dispute cannot be decided independently, the maintainers can be called in to decide an issue. If the maintainers themselves cannot decide an issue, the issue will be resolved by voting. The voting process is a simple majority in which each maintainer receives one vote. New projects will be added to the WasmEdge organization via GitHub issue discussion in one of the existing projects in the organization. Once sufficient discussion has taken place (~3-5 business days but depending on the volume of conversation), the maintainers of the project where the issue was opened (since different projects in the organization may have different maintainers) will decide whether the new project should be added. See the section above on voting if the maintainers cannot easily decide. The following licenses and contributor agreements will be used for WasmRuntime projects: for code for new contributions Sections of this document have been borrowed from and projects."
}
] | {
"category": "Runtime",
"file_name": "GOVERNANCE.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
} |
[
{
"data": "The base policy for a VM with its firewall enabled is: block all inbound traffic allow all outbound traffic All firewall rules applied to a VM are applied on top of those defaults. Firewall rules can affect one VM (using the vm target) or many (using the tag or all vms target types). Adding and updating rules takes effect immediately. Adding or removing tags on a VM causes rules that apply to those tags to be added or removed immediately. In the case of two rules that affect the same VM and port, the rule that goes counter to the default policy takes precedence. This means: If you have an incoming BLOCK and an incoming ALLOW rule for the same VM and port, the ALLOW will override. If you have an outgoing BLOCK and an outgoing ALLOW rule for the same VM and port, the BLOCK will override. Rules are created and updated using a JSON payload as in this example: { \"rule\": \"FROM any TO all vms ALLOW tcp port 22\", \"enabled\": true, \"owner_uuid\": \"5c3ea269-75b8-42fa-badc-956684fb4d4e\" } The properties of this payload are: rule* (required): the firewall rule. See the Rule Syntax section below for the syntax. enabled* (boolean, optional): If set to true, the rule will be applied to VMs. If set to false, the rule will be added but not applied. global* (boolean, optional): If set, the rule will be applied to all VMs in the datacenter, regardless of owner. owner_uuid* (UUID, optional): If set, restricts the set of VMs that the rule can be applied to VMs owned by this UUID. Note that only one of owner_uuid or global can be set at a time for a"
},
{
"data": "Firewall rules are in the following format: FROM <from targets> TO <to targets> <action> <protocol> \\ <ports or types> The parameters are the following: from targets and to targets can be any of the following types (see the Target Types section below): vm <uuid> ip <IPv4 or IPv6 address> subnet <subnet CIDR> tag <tag name> tag <tag name>=<tag value> a target list of up to 32 of the above all vms any action can be one of (see the Actions section below): ALLOW BLOCK protocol can be one of (see the Protocols section below): tcp udp icmp icmp6 ports or types can be one of (see the Ports section below): port <port number> (if protocol is tcp or udp) ports <port numbers and ranges> (if protocol is tcp or udp) type <ICMP type> (if protocol is icmp) type <ICMP type> code <ICMP code> (if protocol is icmp) The limits for the parameters are: 24 from targets 24 to targets 8 ports or types vm <uuid> Targets the VM with that UUID. Example: FROM any to vm 04128191-d2cb-43fc-a970-e4deefe970d8 ALLOW tcp port 80 Allows HTTP traffic from any host to VM 04128... ip <IP address> Targets the specified IPv4 or IPv6 address. Example: FROM all vms to (ip 10.2.0.1 OR ip fd22::1234) BLOCK tcp port 25 Blocks SMTP traffic to that IP. subnet <subnet CIDR> Targets the specified IPv4 or IPv6 subnet range. Example: FROM subnet 10.8.0.0/16 TO vm 0f570678-c007-4610-a2c0-bbfcaab9f4e6 ALLOW \\ tcp port 443 Allows HTTPS traffic from a private IPv4 /16 to the specified VM. Example: FROM subnet fd22::/64 TO vm 0f570678-c007-4610-a2c0-bbfcaab9f4e6 ALLOW \\ tcp port 443 Allows HTTPS traffic from a private IPv6 /64 to the specified VM. tag <name> tag <name> = <value> tag \"<name with spaces>\" = \"<value with spaces>\" Targets all VMs with the specified tag, or all VMs with the specified tag and value. Both tag name and value can be quoted if they contain spaces. Examples: FROM all vms TO tag syslog ALLOW udp port 514 Allows syslog traffic from all VMs to syslog servers. FROM tag role = db TO tag role = www ALLOW tcp port 5432 Allows database traffic from databases to webservers. All other VMs with role tags (role = staging, for example) will not be affected by this rule. FROM all vms TO tag \"VM type\" = \"LDAP server\" ALLOW tcp PORT 389 Allow LDAP access from all VMs to LDAP servers. all vms Targets all VMs. Example: FROM all vms TO all vms ALLOW tcp port 22 Allows ssh traffic between all VMs. any Targets any host (any IPv4 address). Example: FROM any TO all vms ALLOW tcp port 80 Allows HTTP traffic from any IP to all VMs. ( <target> OR <target> OR ... ) The vm, ip, subnet and tag target types can be combined into a list surrounded by parentheses and joined by OR. Example: FROM (vm 163dcedb-828d-43c9-b076-625423250ee2 OR tag db) TO (subnet \\ 10.2.2.0/24 OR ip 10.3.0.1) BLOCK tcp port 443 Blocks HTTPS traffic to an internal subnet and IP. ALLOW BLOCK Actions can be one of ALLOW or BLOCK. Note that certain combinations of actions and directions will essentially have no effect on the behaviour of a VM's firewall. For example: FROM any TO all vms BLOCK tcp port 143 Since the default rule set blocks all incoming ports, this rule doesn't really have an effect on the"
},
{
"data": "Another example: FROM all vms TO any ALLOW tcp port 25 Since the default policy allows all outbound traffic, this rule doesn't have an effect. tcp udp icmp icmp6 The protocol can be one of tcp, udp or icmp(6). The protocol dictates whether ports or types can be used (see the Ports section below). port <port number> ( port <port number> AND port <port number> ... ) ports <port number or range> ports <port number or range>, <port number or range>, ... type <icmp type> type <icmp type> code <icmp code> ( type <icmp type> AND type <icmp type> code <icmp code> AND ... ) For TCP and UDP, this specifies the port numbers that the rule applies to. Port numbers must be between 1 and 65535, inclusive. Ranges are written as two port numbers separated by a - (hyphen), with the lower number coming first, with optional spaces around the hyphen. Port ranges are inclusive, so writing the range \"20 - 22\" would cause the rule to apply to the ports 20, 21 and 22. For ICMP, this specifies the ICMP type and optional code that the rule applies to. Types and codes must be between 0 and 255, inclusive. Examples: FROM tag www TO any ALLOW tcp (port 80 AND port 443) Allows HTTP and HTTPS traffic from any IP to all webservers. FROM tag www TO any ALLOW tcp ports 80, 443, 8000-8100 Allows traffic on HTTP, HTTPS and common alternative HTTP ports from any IP to all webservers. FROM any TO all vms ALLOW icmp TYPE 8 CODE 0 Allows pinging all VMs. The IPv6 equivalent would be: FROM any TO all vms ALLOW icmp6 TYPE 128 CODE 0 And to block outgoing replies: FROM all vms TO any BLOCK icmp TYPE 0 FROM all vms TO any BLOCK icmp6 TYPE 129 FROM all vms TO tag syslog ALLOW udp port 514 Allows syslog traffic from all VMs to syslog servers. FROM tag role = db TO tag role = www ALLOW tcp port 5432 Allows database traffic from databases to webservers. FROM all vms TO all vms ALLOW tcp port 22 Allows ssh traffic between all VMs. FROM any TO all vms ALLOW tcp port 80 Allow HTTP traffic from any host to all VMs. This section explains error messages. The rule you're trying to create doesn't contain any targets that will actually cause rules to be applied to VMs. Targets that will cause rules to be applied are: tag vm all vms Some examples of rules that would cause this message include: FROM any TO any ALLOW tcp port 22 FROM ip 192.168.1.3 TO subnet 192.168.1.0/24 ALLOW tcp port 22 ipfilter(7), vmadm(8), ipf(8), fwadm(8)"
}
] | {
"category": "Runtime",
"file_name": "fwrule.7.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
} |
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Maglev lookup table ``` -h, --help help for maglev ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Load-balancing configuration - Get Maglev lookup table for given service by ID - List Maglev lookup tables"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_bpf_lb_maglev.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "| Case ID | Title | Priority | Smoke | Status | Other | | - | | -- | -- | | -- | | K00001 | The metric should work fine. | p1 | true | done | |"
}
] | {
"category": "Runtime",
"file_name": "metric.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "English Spiderpool introduces the SpiderMultusConfig CR to automate the management of Multus NetworkAttachmentDefinition CR and extend the capabilities of Multus CNI configurations. Multus is a CNI plugin project that enables Pods to access multiple network interfaces by leveraging third-party CNI plugins. While Multus allows management of CNI configurations through CRDs, manually writing JSON-formatted CNI configuration strings can lead to error: Human errors in writing JSON format may cause troubleshooting difficulties and Pod startup failures. There are numerous CNIs with various configuration options, making it difficult to remember them all, and users often need to refer to documentation, resulting in a poor user experience. To address these issues, SpiderMultusConfig automatically generates the Multus CR based on its `spec`. It offers several features: In case of accidental deletion of a Multus CR, SpiderMultusConfig will automatically recreate it, improving operational fault tolerance. Support for various CNIs, such as Macvlan, IPvlan, Ovs, and SR-IOV. The annotation `multus.spidernet.io/cr-name` allows users to define a custom name for Multus CR. The annotation `multus.spidernet.io/cni-version` enables specifying a specific CNI version. A robust webhook mechanism is involved to proactively detect and prevent human errors, reducing troubleshooting efforts. Spiderpool's CNI plugins, including and are integrated, enhancing the overall configuration experience. It is important to note that when creating a SpiderMultusConfig CR with the same name as an existing Multus CR, the Multus CR instance will be managed by SpiderMultusConfig, and its configuration will be overwritten. To avoid overwriting existing Multus CR instances, it is recommended to either refrain from creating SpiderMultusConfig CR instances with the same name or specify a different name for the generated Multus CR using the `multus.spidernet.io/cr-name` annotation in the SpiderMultusConfig CR. A ready Kubernetes cluster. has been installed. Refer to to install Spiderpool. SpiderMultusConfig CR supports various types of CNIs. The following sections explain how to create these configurations. Multus's NetworkAttachmentDefinition CR specifies the NIC on the node through the field `master`. When an application uses this CR but multiple Pod copies of the application are scheduled to different nodes, and the NIC name specified by `master` does not exist on some nodes, This will cause some Pod replicas to not function properly. In this regard, you can refer to this chapter to make the NIC names on the nodes consistent. In this chapter, udev will be used to change the NIC name of the node. udev is a subsystem used for device management in Linux systems. It can define device attributes and behaviors through rule files. The following are the steps to change the node NIC name through udev. You need to do the following on each node where you want to change the NIC name: Confirm that the NIC name needs to be changed. You can use the `ip link show` to view it and set the NIC status to `down`, for example, `ens256` in this article. ```bash ~# ip link set ens256"
},
{
"data": "~# ip link show ens256 4: ens256: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000 link/ether 00:50:56:b4:99:16 brd ff:ff:ff:ff:ff:ff ``` Create a udev rule file: Create a new rule file in the /etc/udev/rules.d/ directory, for example: `10-network.rules`, and write the udev rule as follows. ```shell ~# vim 10-network.rules SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{address}==\"<MAC address>\", NAME=\"<New NIC name>\" ~# cat 10-network.rules SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{address}==\"00:50:56:b4:99:16\", NAME=\"eth1\" ``` Cause the udev daemon to reload the configuration file. ```bash ~# udevadm control --reload-rules ``` Trigger the add event of all devices to make the configuration take effect. ```bash ~# udevadm trigger -c add ``` Check that the NIC name has been changed successfully. ```bash ~# ip link set eth1 up ~# ip link show eth1 4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:50:56:b4:99:16 brd ff:ff:ff:ff:ff:ff ``` Note: Before changing the NIC name, make sure to understand the configuration of the system and network, understand the impact that the change may have on other related components or configurations, and it is recommended to back up related configuration files and data. The exact steps may vary depending on the Linux distribution (Centos 7 is used in this article). Here is an example of creating Macvlan SpiderMultusConfig configurations: master: `ens192` is used as the master interface parameter. ```bash MACVLANMASTERINTERFACE=\"ens192\" MACVLANMULTUSNAME=\"macvlan-$MACVLANMASTERINTERFACE\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${MACVLANMULTUSNAME} namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: ${MACVLANMASTERINTERFACE} EOF ``` Create the Macvlan SpiderMultusConfig using the provided configuration. This will automatically generate the corresponding Multus NetworkAttachmentDefinition CR and manage its lifecycle. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE macvlan-ens192 26m ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system macvlan-ens192 -oyaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"spiderpool.spidernet.io/v2beta1\",\"kind\":\"SpiderMultusConfig\",\"metadata\":{\"annotations\":{},\"name\":\"macvlan-ens192\",\"namespace\":\"kube-system\"},\"spec\":{\"cniType\":\"macvlan\",\"enableCoordinator\":true,\"macvlan\":{\"master\":[\"ens192\"]}}} creationTimestamp: \"2023-09-11T09:02:43Z\" generation: 1 name: macvlan-ens192 namespace: kube-system ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: macvlan-ens192 uid: 94bbd704-ff9d-4318-8356-f4ae59856228 resourceVersion: \"5288986\" uid: d8fa48c8-0877-440d-9b66-88edd7af5808 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"macvlan-ens192\",\"plugins\":[{\"type\":\"macvlan\",\"master\":\"ens192\",\"mode\":\"bridge\",\"ipam\":{\"type\":\"spiderpool\"}},{\"type\":\"coordinator\"}]}' ``` Here is an example of creating IPvlan SpiderMultusConfig configurations: master: `ens192` is used as the master interface parameter. When using IPVlan as the cluster's CNI, the kernel version must be higher than 4.2. 
A single main interface cannot be used by both Macvlan and IPvlan simultaneously. ```bash IPVLANMASTERINTERFACE=\"ens192\" IPVLANMULTUSNAME=\"ipvlan-$IPVLANMASTERINTERFACE\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${IPVLANMULTUSNAME} namespace: kube-system spec: cniType: ipvlan enableCoordinator: true ipvlan: master: ${IPVLANMASTERINTERFACE} EOF ``` Create the IPvlan SpiderMultusConfig using the provided configuration. This will automatically generate the corresponding Multus NetworkAttachmentDefinition CR and manage its lifecycle. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE ipvlan-ens192 12s ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system ipvlan-ens192 -oyaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"spiderpool.spidernet.io/v2beta1\",\"kind\":\"SpiderMultusConfig\",\"metadata\":{\"annotations\":{},\"name\":\"ipvlan-ens192\",\"namespace\":\"kube-system\"},\"spec\":{\"cniType\":\"ipvlan\",\"enableCoordinator\":true,\"ipvlan\":{\"master\":[\"ens192\"]}}} creationTimestamp: \"2023-09-14T10:21:26Z\" generation: 1 name: ipvlan-ens192 namespace: kube-system ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: ipvlan-ens192 uid: accac945-9296-440e-abe8-6f6938fdb895 resourceVersion: \"5950921\" uid: e24afb76-e552-4f73-bab0-8fd345605c2a spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ipvlan-ens192\",\"plugins\":[{\"type\":\"ipvlan\",\"master\":\"ens192\",\"ipam\":{\"type\":\"spiderpool\"}},{\"type\":\"coordinator\"}]}' ``` Here is an example of creating Sriov SpiderMultusConfig configuration: Basic example ```bash cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: sriov-demo namespace: kube-system spec: cniType: sriov enableCoordinator: true sriov: resourceName:"
},
{
"data": "vlanID: 100 EOF ``` After creation, check the corresponding Multus NetworkAttachmentDefinition CR: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system sriov-demo -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: sriov-demo namespace: kube-system annotations: k8s.v1.cni.cncf.io/resourceName: spidernet.io/sriov_netdeivce ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: sriov-demo uid: b08ce054-1ae8-414a-b37c-7fd6988b1b8e resourceVersion: \"153002297\" uid: 4413e1fa-ce15-4acf-bce8-48b5028c0568 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"sriov-demo\",\"plugins\":[{\"vlan\":100,\"type\":\"sriov\",\"ipam\":{\"type\":\"spiderpool\"}},{\"type\":\"coordinator\"}]}' ``` For more information, refer to Enable RDMA feature ```shell cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: sriov-rdma namespace: kube-system spec: cniType: sriov enableCoordinator: true sriov: enableRdma: true resourceName: spidernet.io/sriov_netdeivce vlanID: 100 EOF ``` After creation, check the corresponding Multus NetworkAttachmentDefinition CR: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system sriov-rdma -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: sriov-rdma namespace: kube-system annotations: k8s.v1.cni.cncf.io/resourceName: spidernet.io/sriov_netdeivce ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: sriov-rdma uid: b08ce054-1ae8-414a-b37c-7fd6988b1b8e resourceVersion: \"153002297\" uid: 4413e1fa-ce15-4acf-bce8-48b5028c0568 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"sriov-rdma\",\"plugins\":[{\"vlan\":100,\"type\":\"sriov\",\"ipam\":{\"type\":\"spiderpool\"}},{\"type\":\"rdma\"},{\"type\":\"coordinator\"}]}' ``` Configure Sriov-CNI Network Bandwidth We can configure the network bandwidth of Sriov through SpiderMultusConfig: ```shell cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: sriov-bandwidth namespace: kube-system spec: cniType: sriov enableCoordinator: true sriov: resourceName: spidernet.io/sriov_netdeivce vlanID: 100 minTxRateMbps: 100 MaxTxRateMbps: 1000 EOF ``` minTxRateMbps and MaxTxRateMbps configure the transmission bandwidth range for pods created with this configuration: [100,1000]. After creation, check the corresponding Multus NetworkAttachmentDefinition CR: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system sriov-rdma -o yaml apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: sriov-bandwidth namespace: kube-system annotations: k8s.v1.cni.cncf.io/resourceName: spidernet.io/sriov_netdeivce ownerReferences: apiVersion: spiderpool.spidernet.io/v2beta1 blockOwnerDeletion: true controller: true kind: SpiderMultusConfig name: sriov-bandwidth uid: b08ce054-1ae8-414a-b37c-7fd6988b1b8e spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"sriov-bandwidth\",\"plugins\":[{\"vlan\":100,\"type\":\"sriov\",\"minTxRate\": 100, \"maxTxRate\": 1000,\"ipam\":{\"type\":\"spiderpool\"}},{\"type\":\"rdma\"},{\"type\":\"coordinator\"}]}' ``` The Ifacer plug-in can help us automatically create a Bond NIC or VLAN NIC when creating a pod to undertake the pod's underlying network. 
For more information, refer to . If we need a VLAN sub-interface to take over the underlying network of the pod, and the interface has not yet been created on the node. We can inject the configuration of the vlanID in Spidermultusconfig so that when the corresponding Multus NetworkAttachmentDefinition CR is generated, it will be injected The `ifacer` plug-in will dynamically create a VLAN interface on the host when the pod is created, which is used to undertake the pod's underlay network. The following is an example of CNI as IPVlan, IPVLANMASTERINTERFACE as ens192, and vlanID as 100. ```shell ~# IPVLANMASTERINTERFACE=\"ens192\" ~# IPVLANMULTUSNAME=\"ipvlan-$IPVLANMASTERINTERFACE\" ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ipvlan-ens192-vlan100 namespace: kube-system spec: cniType: ipvlan enableCoordinator: true ipvlan: master: ${IPVLANMASTERINTERFACE} vlanID: 100 EOF ``` When the Spidermultuconfig object is created, view the corresponding Multus network-attachment-definition object: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system macvlan-conf -o=jsonpath='{.spec.config}' | jq { \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-ens192-vlan100\", \"plugins\": [ { \"type\": \"ifacer\", \"interfaces\": [ \"ens192\" ], \"vlanID\": 100 }, { \"type\": \"ipvlan\", \"ipam\": { \"type\": \"spiderpool\" }, \"master\": \"ens192.100\", \"mode\": \"bridge\" }, { \"type\": \"coordinator\", } ] } ``` `ifacer` is called first in the CNI chaining sequence. Depending on the configuration, `ifacer` will create a sub-interface with a VLAN tag of 100 named ens192.100 based on"
},
{
"data": "main CNI: The value of the master field of IPVlan is: `ens192.100`, which is the VLAN sub-interface created by 'ifacer': `ens192.100`. Note: The NIC created by `ifacer` is not persistent, and will be lost if the node is restarted or manually deleted. Restarting the pod is automatically added back. Sometimes the network administrator has already created the VLAN sub-interface, and we don't need to use `ifacer` to create the VLAN sub-interface. We can directly configure the master field as: `ens192.100` and not configure the VLAN ID, as follows: ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-conf namespace: kube-system spec: cniType: macvlan macvlan: master: ens192.100 ippools: ipv4: vlan100 ``` If we need a bond interface to take over the underlying network of the pod, and the bond interface has not yet been created on the node. We can configure multiple master interfaces in Spidermultusconfig so that the corresponding Multus NetworkAttachmentDefinition CR is generated and injected The `ifacer'` plug-in will dynamically create a bond interface on the host when the pod is created, which is used to undertake the underlying network of the pod. The following is an example of CNI as IPVlan, host interface ens192, and ens224 as slave to create a bond interface: ```shell ~# cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ipvlan-conf namespace: kube-system spec: cniType: ipvlan macvlan: master: ens192 ens224 bond: name: bond0 mode: 1 options: \"\" EOF ``` When the Spidermultuconfig object is created, view the corresponding Multus network-attachment-definition object: ```shell ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system ipvlan-conf -o jsonpath='{.spec.config}' | jq { \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-conf\", \"plugins\": [ { \"type\": \"ifacer\", \"interfaces\": [ \"ens192\" \"ens224\" ], \"bond\": { \"name\": \"bond0\", \"mode\": 1 } }, { \"type\": \"ipvlan\", \"ipam\": { \"type\": \"spiderpool\" }, \"master\": \"bond0\", \"mode\": \"bridge\" }, { \"type\": \"coordinator\", } ] } ``` Configuration description: `ifacer` is called first in the CNI chaining sequence. Depending on the configuration, `ifacer` will create a bond interface named 'bond0' based on [\"ens192\",\"ens224\"] with mode 1 (active-backup). main CNI: The value of the master field of IPvlan is: `bond0`, bond0 takes over the network traffic of the pod. Create a Bond If you need a more advanced configuration, you can do so by configuring SpiderMultusConfig: macvlan-conf.spec.macvlan.bond.options. The input format is: \"primary=ens160; arp_interval=1\", use \";\" for multiple parameters. If we need to create a VLAN sub-interface based on the created BOND NIC: bond0, so that the VLAN sub-interface undertakes the underlying network of the pod, we can refer to the following configuration: ```shell ~# cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ipvlan-conf namespace: kube-system spec: cniType: ipvlan macvlan: master: ens192 ens224 vlanID: 100 bond: name: bond0 mode: 1 options: \"\" EOF ``` When creating a pod with the above configuration, `ifacer` will create a bond NIC bond0 and a VLAN NIC bond0.100 on the host. To create other types of CNI configurations, such OVS, refer to . 
SpiderMultusConfig CR automates the management of Multus NetworkAttachmentDefinition CRs, improving the experience of creating configurations and reducing operational costs."
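As a final, hedged sketch of how the generated configuration might be consumed: a Pod can reference the Multus NetworkAttachmentDefinition created earlier (kube-system/macvlan-ens192) through the standard Multus `k8s.v1.cni.cncf.io/networks` annotation. The Pod name and image below are placeholders.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: macvlan-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: kube-system/macvlan-ens192
spec:
  containers:
  - name: demo
    image: nginx
EOF

# The Pod should come up with an extra macvlan interface whose IP is
# allocated by Spiderpool.
kubectl get pod macvlan-demo -o wide
```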
}
] | {
"category": "Runtime",
"file_name": "spider-multus-config.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "rkt's execution of pods is divided roughly into three separate stages: Stage 0: discovering, fetching, verifying, storing, and compositing of both application (stage2) and stage1 images for execution. Stage 1: execution of the stage1 image from within the composite image prepared by stage0. Stage 2: execution of individual application images within the containment afforded by stage1. This separation of concerns is reflected in the file-system and layout of the composite image prepared by stage0: Stage 0: `rkt` executable, and the pod manifest created at `/var/lib/rkt/pods/prepare/$uuid/pod`. Stage 1: `stage1.aci`, made available at `/var/lib/rkt/pods/run/$uuid/stage1` by `rkt run`. Stage 2: `$app.aci`, made available at `/var/lib/rkt/pods/run/$uuid/stage1/rootfs/opt/stage2/$appname` by `rkt run`, where `$appname` is the name of the app in the pod manifest. The stage1 implementation is what creates the execution environment for the contained applications. This occurs via entrypoints from stage0 on behalf of `rkt run` and `rkt enter`. These entrypoints are executable programs located via annotations from within the stage1 ACI manifest, and executed from within the stage1 of a given pod at `/var/lib/rkt/pods/$state/$uuid/stage1/rootfs`. Stage2 is the deployed application image. Stage1 is the vehicle for getting there from stage0. For any given pod instance, stage1 may be replaced by a completely different implementation. This allows users to employ different containment strategies on the same host running the same interchangeable ACIs. `coreos.com/rkt/stage1/run` rkt prepares the pod's stage1 and stage2 images and pod manifest under `/var/lib/rkt/pods/prepare/$uuid`, acquiring an exclusive advisory lock on the directory. Upon a successful preparation, the directory will be renamed to `/var/lib/rkt/pods/run/$uuid`. chdirs to `/var/lib/rkt/pods/run/$uuid`. resolves the `coreos.com/rkt/stage1/run` entrypoint via annotations found within `/var/lib/rkt/pods/run/$uuid/stage1/manifest`. executes the resolved entrypoint relative to `/var/lib/rkt/pods/run/$uuid/stage1/rootfs`. It is the responsibility of this entrypoint to consume the pod manifest and execute the constituent apps in the appropriate environments as specified by the pod manifest. The environment variable `RKTLOCKFD` contains the file descriptor number of the open directory handle for `/var/lib/rkt/pods/run/$uuid`. It is necessary that stage1 leave this file descriptor open and in its locked state for the duration of the `rkt run`. In the bundled rkt stage1 which includes systemd-nspawn and systemd, the entrypoint is a static Go program found at `/init` within the stage1 ACI rootfs. The majority of its execution entails generating a systemd-nspawn argument list and writing systemd unit files for the constituent apps before executing systemd-nspawn. Systemd-nspawn then boots the stage1 systemd with the just-written unit files for launching the contained apps. The `/init` program's primary job is translating a pod manifest to systemd-nspawn systemd.services. An alternative stage1 could forego systemd-nspawn and systemd altogether, or retain these and introduce something like novm or qemu-kvm for greater isolation by first starting a VM. All that is required is an executable at the place indicated by the `coreos.com/rkt/stage1/run` entrypoint that knows how to apply the pod manifest and prepared ACI file-systems. The resolved entrypoint must inform rkt of its PID for the benefit of `rkt"
},
{
"data": "Stage1 implementors have two options for doing so; only one must be implemented: `/var/lib/rkt/pods/run/$uuid/pid`: the PID of the process that will be given to the \"enter\" entrypoint. `/var/lib/rkt/pods/run/$uuid/ppid`: the PID of the parent of the process that will be given to the \"enter\" entrypoint. That parent process must have exactly one child process. The entrypoint of a stage1 may also optionally inform rkt of the \"pod cgroup\", the `name=systemd` cgroup the pod's applications are expected to reside under, via the `subcgroup` file. If this file is written, it must be written before the `pid` or `ppid` files are written. This information is useful for any external monitoring system that wishes to reliably link a given cgroup to its associated rkt pod. The file should be written in the pod directory at `/var/lib/rkt/pods/run/$uuid/subcgroup`. The file's contents should be a text string, for example of the form `machine-rkt\\xuuid.scope`, which will match the control in the cgroup hierarchy of the `ppid` or `pid` of the pod. Any stage1 that supports and expects machined registration to occur will likely want to write such a file. `--debug` to activate debugging `--net[=$NET1,$NET2,...]` to configure the creation of a contained network. See the for details. `--mds-token=$TOKEN` passes the auth token to the apps via `ACMETADATAURL` env var `--interactive` to run a pod interactively, that is, pass standard input to the application (only for pods with one application) `--local-config=$PATH` to override the local configuration directory `--private-users=$SHIFT` to define a UID/GID shift when using user namespaces. SHIFT is a two-value colon-separated parameter, the first value is the host UID to assign to the container and the second one is the number of host UIDs to assign. `--mutable` activates a mutable environment in stage1. If the stage1 image manifest has no `app` entrypoint annotations declared, this flag will be unset to retain backwards compatibility. `--hostname=$HOSTNAME` configures the host name of the pod. If empty, it will be \"rkt-$PODUUID\". `--disable-capabilities-restriction` gives all capabilities to apps (overrides `retain-set` and `remove-set`) `--disable-paths` disables inaccessible and read-only paths (such as `/proc/sysrq-trigger`) `--disable-seccomp` disables seccomp (overrides `retain-set` and `remove-set`) `--dns-conf-mode=resolv=(host|stage0|none|default),hosts=(host|stage0|default)`: Configures how the stage1 should set up the DNS configuration files `/etc/resolv.conf` and `/etc/hosts`. For all, `host` means to bind-mount the host's version of that file. `none` means the stage1 should not create it. `stage0` means the stage0 has created an entry in the stage1's rootfs, which should be exposed in the apps. `default` means the standard behavior, which for `resolv.conf` is to create /etc/rkt-resolv.conf iff a CNI plugin specifies it, and for `hosts` is to create a fallback if the app does not provide it. This interface version is not yet finalized, thus marked as experimental. `--mutable` to run a mutable pod `--ipc=[auto|private|parent]` (default to `auto` if missing): Allows to stay in the host IPC"
},
{
"data": "The default to `auto` does what makes more sense for the flavor (`parent` for `stage1-fly` and `private` for `stage1-coreos` and `stage1-kvm`). `coreos.com/rkt/stage1/enter` rkt verifies the pod and image to enter are valid and running chdirs to `/var/lib/rkt/pods/run/$uuid` resolves the `coreos.com/rkt/stage1/enter` entrypoint via annotations found within `/var/lib/rkt/pods/run/$uuid/stage1/manifest` executes the resolved entrypoint relative to `/var/lib/rkt/pods/run/$uuid/stage1/rootfs` In the bundled rkt stage1, the entrypoint is a statically-linked C program found at `/enter` within the stage1 ACI rootfs. This program enters the namespaces of the systemd-nspawn container's PID 1 before executing the `/enterexec` program. `enterexec` then `chroot`s into the ACI's rootfs, loading the application and its environment. An alternative stage1 would need to do whatever is appropriate for entering the application environment created by its own `coreos.com/rkt/stage1/run` entrypoint. `--pid=$PID` passes the PID of the process that is PID 1 in the container. rkt finds that PID by one of the two supported methods described in the `rkt run` section. `--appname=$NAME` passes the app name of the specific application to enter. the separator `--` cmd to execute. optionally, any cmd arguments. `coreos.com/rkt/stage1/gc` The gc entrypoint deals with garbage collecting resources allocated by stage1. For example, it removes the network namespace of a pod. `--debug` to activate debugging UUID of the pod `--local-config`: The rkt configuration directory - defaults to `/etc/rkt` if not supplied. `coreos.com/rkt/stage1/stop` The optional stop entrypoint initiates an orderly shutdown of stage1. In the bundled rkt stage 1, the entrypoint is sending SIGTERM signal to systemd-nspawn. For kvm flavor, it is calling `systemctl halt` on the container (through SSH). `--force` to force the stopping of the pod. E.g. in the bundled rkt stage 1, stop sends SIGKILL UUID of the pod Some entrypoints need to perform actions in the context of stage1 or stage2. As such they need to cross stage boundaries (thus the name) and depend on the `enter` entrypoint existence. All crossing entrypoints receive additional options for entering via the following environmental flags: `RKTSTAGE1ENTERCMD` specify the command to be called to enter a stage1 or a stage2 environment `RKTSTAGE1ENTERPID` specify the PID of the stage1 to enter `RKTSTAGE1ENTERAPP` optionally specify the application name of the stage2 to enter (Experimental, to be stabilized in version 5) `coreos.com/rkt/stage1/app/add` This is a crossing entrypoint. `--app` application name `--debug` to activate debugging `--uuid` UUID of the pod `--disable-capabilities-restriction` gives all capabilities to apps (overrides `retain-set` and `remove-set`) `--disable-paths` disables inaccessible and read-only paths (such as `/proc/sysrq-trigger`) `--disable-seccomp` disables seccomp (overrides `retain-set` and `remove-set`) `--private-users=$SHIFT` to define a UID/GID shift when using user namespaces. SHIFT is a two-value colon-separated parameter, the first value is the host UID to assign to the container and the second one is the number of host UIDs to assign. (Experimental, to be stabilized in version 5) `coreos.com/rkt/stage1/app/start` This is a crossing entrypoint. `--app` application name `--debug` to activate debugging (Experimental, to be stabilized in version 5) `coreos.com/rkt/stage1/app/stop` This is a crossing entrypoint. 
`--app` application name `--debug` to activate debugging (Experimental, to be stabilized in version 5) `coreos.com/rkt/stage1/app/rm` This is a crossing"
},
{
"data": "`--app` application name `--debug` to activate debugging (Experimental, to be stabilized in version 5) `coreos.com/rkt/stage1/attach` This is a crossing entrypoint. `--action` action to perform (`auto-attach`, `custom-attach` or `list`) `--app` application name `--debug` to activate debugging `--tty-in` whether to attach TTY input (`true` or `false`) `--tty-out` whether to attach TTY output (`true` or `false`) `--stdin` whether to attach stdin (`true` or `false`) `--stdout` whether to attach stdout (`true` or `false`) `--stderr` whether to attach stderr (`true` or `false`) The stage1 command line interface is versioned using an annotation with the name `coreos.com/rkt/stage1/interface-version`. If the annotation is not present, rkt assumes the version is 1. ```json { \"acKind\": \"ImageManifest\", \"acVersion\": \"0.8.11\", \"name\": \"foo.com/rkt/stage1\", \"labels\": [ { \"name\": \"version\", \"value\": \"0.0.1\" }, { \"name\": \"arch\", \"value\": \"amd64\" }, { \"name\": \"os\", \"value\": \"linux\" } ], \"annotations\": [ { \"name\": \"coreos.com/rkt/stage1/run\", \"value\": \"/ex/run\" }, { \"name\": \"coreos.com/rkt/stage1/enter\", \"value\": \"/ex/enter\" }, { \"name\": \"coreos.com/rkt/stage1/gc\", \"value\": \"/ex/gc\" }, { \"name\": \"coreos.com/rkt/stage1/stop\", \"value\": \"/ex/stop\" }, { \"name\": \"coreos.com/rkt/stage1/interface-version\", \"value\": \"2\" } ] } ``` Pods and applications can be annotated at runtime to signal support for specific features. Stage1 images can support mutable pod environments, where, once a pod has been started, applications can be added/started/stopped/removed while the actual pod is running. This information is persisted at runtime in the pod manifest using the `coreos.com/rkt/stage1/mutable` annotation. If the annotation is not present, `false` is assumed. Stage1 images can support attachable applications, where I/O and TTY from each applications can be dynamically redirected and attached to. In that case, this information is persisted at runtime in each application manifest using the following annotations: `coreos.com/rkt/stage2/stdin` `coreos.com/rkt/stage2/stdout` `coreos.com/rkt/stage2/stderr` The following paths are reserved for the stage1 image, and they will be populated at runtime. When creating a stage1 image, developers SHOULD NOT use these paths to store content in the image's filesystem. `opt/stage2` This directory path is used for extracting the ACI of every app in the pod. Each app's rootfs will appear under this directory, e.g. `/var/lib/rkt/pods/run/$uuid/stage1/rootfs/opt/stage2/$appname/rootfs`. `rkt/status` This directory path is used for storing the apps' exit statuses. For example, if an app named `foo` exits with status = `42`, stage1 should write `42` in `/var/lib/rkt/pods/run/$uuid/stage1/rootfs/rkt/status/foo`. Later the exit status can be retrieved and shown by `rkt status $uuid`. `rkt/env` This directory path is used for passing environment variables to each app. For example, environment variables for an app named `foo` will be stored in `rkt/env/foo`. `rkt/iottymux` This directory path is used for TTY and streaming attach helper. When attach mode is enabled each application will have a `rkt/iottymux/$appname/` directory, used by the I/O and TTY mux sidecar. `rkt/supervisor-status` This path is used by the pod supervisor to signal its readiness. Once the supervisor in the pod has reached its ready state, it MUST write a `rkt/supervisor-status -> ready` symlink. 
A symlink missing or pointing to a different target means that the pod supervisor is not ready."
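To make the entrypoint-resolution step concrete, here is a minimal sketch (not part of rkt itself) of how a stage0-like tool could resolve the `coreos.com/rkt/stage1/run` entrypoint from the stage1 manifest shown above; it assumes `jq` is available on the host.
```bash
#!/bin/sh
# Sketch: resolve the "run" entrypoint for a prepared pod.
uuid="$1"
pod_dir="/var/lib/rkt/pods/run/$uuid"

# Pick the annotation value out of the stage1 image manifest.
entrypoint=$(jq -r \
  '.annotations[] | select(.name == "coreos.com/rkt/stage1/run") | .value' \
  "$pod_dir/stage1/manifest")

# The entrypoint is executed relative to the stage1 rootfs.
echo "would exec: $pod_dir/stage1/rootfs$entrypoint"
```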
}
] | {
"category": "Runtime",
"file_name": "stage1-implementors-guide.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} |
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] | {
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
} |
[
{
"data": "Currently, Longhorn uses a blocking way for communication with the remote backup target, so there will be some potential voluntary or involuntary factors (ex: network latency) impacting the functions relying on remote backup target like listing backups or even causing further after the backup target operation. This enhancement is to propose an asynchronous way to pull backup volumes and backups from the remote backup target (S3/NFS) then persistently saved via cluster custom resources. This can resolve the problems above mentioned by asynchronously querying the list of backup volumes and backups from the remote backup target for final consistent available results. It's also scalable for the costly resources created by the original blocking query operations. https://github.com/longhorn/longhorn/issues/1761 https://github.com/longhorn/longhorn/issues/1955 https://github.com/longhorn/longhorn/issues/2536 https://github.com/longhorn/longhorn/issues/2543 Decrease the query latency when listing backup volumes or backups in the circumstances like lots of backup volumes, lots of backups, or the network latency between the Longhorn cluster and the remote backup target. Automatically adjust the backup target poll interval. Supports multiple backup targets. Supports API pagination for listing backup volumes and backups. Change the list command behavior, and add inspect-volume and head command. The `backup list` command includes listing all backup volumes and the backups and read these config. We'll change the `backup list` behavior to perform list only, but not read the config. Add a new `backup inspect-volume` command to support read backup volume config. Add a new `backup head` command to support get config metadata. Create a BackupTarget CRD backuptargets.longhorn.io to save the backup target URL, credential secret, and poll interval. Create a BackupVolume CRD backupvolumes.longhorn.io to save to backup volume config. Create a Backup CRD backups.longhorn.io to save the backup config. At the existed `setting_controller`, which is responsible creating/updating the default BackupTarget CR with settings `backup-target`, `backup-target-credential-secret`, and `backupstore-poll-interval`. Create a new controller `backuptargetcontroller`, which is responsible for creating/deleting the BackupVolume CR. Create a new controller `backupvolumecontroller`, which is responsible: deleting Backup CR and deleting backup volume from remote backup target if delete BackupVolume CR event comes in. updating BackupVolume CR status. creating/deleting Backup CR. Create a new controller `backup_controller`, which is responsible: deleting backup from remote backup target if delete Backup CR event comes in. calling longhorn engine/replica to perform snapshot backup to the remote backup target. updating the Backup CR status, and deleting backup from the remote backup target. The HTTP endpoints CRUD methods related to backup volume and backup will interact with BackupVolume CR and Backup CR instead interact with the remote backup target. Before this enhancement, when the user's environment under the circumstances that the remote backup target has lots of backup volumes or backups, and the latency between the longhorn manager to the remote backup target is high. Then when the user clicks the `Backup` on the GUI, the user might hit list backup volumes or list backups timeout issue (the default timeout is 1 minute). 
We chose not to create a new setting for the user to increase the list timeout value because the browser has its own timeout as well, which we cannot control. Let's say listing the backups needs 5 minutes to"
},
{
"data": "Even we allow the user to increase the longhorn manager list timeout value, we can't change the browser default timeout value. Furthermore, some browser doesn't allow the user to change default timeout value like Google Chrome. After this enhancement, when the user's environment under the circumstances that the remote backup target has lots of backup volumes or backups, and the latency between the longhorn manager to the remote backup target is high. Then when the user clicks the `Backup` on the GUI, the user can eventually list backup volumes or list backups without a timeout issue. The user environment is under the circumstances that the remote backup target has lots of backup volumes and the latency between the longhorn manager to the remote backup target is high. Then, the user can list all backup volumes on the GUI. The user environment is under the circumstances that the remote backup target has lots of backups and the latency between the longhorn manager to the remote backup target is high. Then, the user can list all backups on the GUI. The user creates a backup on the Longhorn GUI. Now the backup will create a Backup CR, then the backup_controller reconciles it to call Longhorn engine/replica to perform a backup to the remote backup target. None. For The current list and inspect command behavior are: `backup ls --volume-only`: List all backup volumes and read it's config (`volume.cfg`). For example: ```json $ backup ls s3://backupbucket@minio/ --volume-only { \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": { \"Name\": \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"Size\": \"2147483648\", \"Labels\": {}, \"Created\": \"2021-05-12T00:52:01Z\", \"LastBackupName\": \"backup-c5f548b7e86b4b56\", \"LastBackupAt\": \"2021-05-17T05:31:01Z\", \"DataStored\": \"121634816\", \"Messages\": {} }, \"pvc-7a8ded68-862d-4abb-a08c-8cf9664dab10\": { \"Name\": \"pvc-7a8ded68-862d-4abb-a08c-8cf9664dab10\", \"Size\": \"10737418240\", \"Labels\": {}, \"Created\": \"2021-05-10T02:43:02Z\", \"LastBackupName\": \"backup-432f4d6afa31481f\", \"LastBackupAt\": \"2021-05-10T06:04:02Z\", \"DataStored\": \"140509184\", \"Messages\": {} } } ``` `backup ls --volume <volume-name>`: List all backups and read it's config (`backupbackup<backup-hash>.cfg`). For example: ```json $ backup ls s3://backupbucket@minio/ --volume pvc-004d8edb-3a8c-4596-a659-3d00122d3f07 { \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": { \"Name\": \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"Size\": \"2147483648\", \"Labels\": {}, \"Created\": \"2021-05-12T00:52:01Z\", \"LastBackupName\": \"backup-c5f548b7e86b4b56\", \"LastBackupAt\": \"2021-05-17T05:31:01Z\", \"DataStored\": \"121634816\", \"Messages\": {}, \"Backups\": { \"s3://backupbucket@minio/?backup=backup-02224cb26b794e73\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": { \"Name\": \"backup-02224cb26b794e73\", \"URL\": \"s3://backupbucket@minio/?backup=backup-02224cb26b794e73\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"SnapshotName\": \"backup-23c4fd9a\", \"SnapshotCreated\": \"2021-05-17T05:23:01Z\", \"Created\": \"2021-05-17T05:23:04Z\", \"Size\": \"115343360\", \"Labels\": {}, \"IsIncremental\": true, \"Messages\": null }, ... 
\"s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": { \"Name\": \"backup-fa78d89827664840\", \"URL\": \"s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"SnapshotName\": \"backup-ac364071\", \"SnapshotCreated\": \"2021-05-17T04:42:01Z\", \"Created\": \"2021-05-17T04:42:03Z\", \"Size\": \"115343360\", \"Labels\": {}, \"IsIncremental\": true, \"Messages\": null } } } } ``` `backup inspect <backup>`: Read a single backup config (`backupbackup<backup-hash>.cfg`). For example: ```json $ backup inspect s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07 { \"Name\": \"backup-fa78d89827664840\", \"URL\": \"s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"SnapshotName\": \"backup-ac364071\", \"SnapshotCreated\": \"2021-05-17T04:42:01Z\", \"Created\": \"2021-05-17T04:42:03Z\", \"Size\": \"115343360\", \"Labels\": {}, \"IsIncremental\": true, \"VolumeName\": \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"VolumeSize\": \"2147483648\", \"VolumeCreated\": \"2021-05-12T00:52:01Z\", \"Messages\": null } ``` After this enhancement, the list and inspect command behavior are: `backup ls --volume-only`: List all backup volume names. For example: ```json $ backup ls s3://backupbucket@minio/ --volume-only { \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": {}, \"pvc-7a8ded68-862d-4abb-a08c-8cf9664dab10\": {} } ``` `backup ls --volume <volume-name>`: List all backup names. For example: ```json $ backup ls s3://backupbucket@minio/ --volume pvc-004d8edb-3a8c-4596-a659-3d00122d3f07 { \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\": { \"Backups\": { \"backup-02224cb26b794e73\": {}, ... \"backup-fa78d89827664840\": {} } } } ``` `backup inspect-volume <volume>`: Read a single backup volume config (`volume.cfg`). For example: ```json $ backup inspect-volume s3://backupbucket@minio/?volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07 { \"Name\": \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"Size\": \"2147483648\", \"Labels\": {}, \"Created\": \"2021-05-12T00:52:01Z\", \"LastBackupName\": \"backup-c5f548b7e86b4b56\", \"LastBackupAt\": \"2021-05-17T05:31:01Z\", \"DataStored\": \"121634816\", \"Messages\": {} } ``` `backup inspect <backup>`: Read a single backup config"
},
{
"data": "For example: ```json $ backup inspect s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07 { \"Name\": \"backup-fa78d89827664840\", \"URL\": \"s3://backupbucket@minio/?backup=backup-fa78d89827664840\\u0026volume=pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"SnapshotName\": \"backup-ac364071\", \"SnapshotCreated\": \"2021-05-17T04:42:01Z\", \"Created\": \"2021-05-17T04:42:03Z\", \"Size\": \"115343360\", \"Labels\": {}, \"IsIncremental\": true, \"VolumeName\": \"pvc-004d8edb-3a8c-4596-a659-3d00122d3f07\", \"VolumeSize\": \"2147483648\", \"VolumeCreated\": \"2021-05-12T00:52:01Z\", \"Messages\": null } ``` `backup head <config>`: Get the config metadata. For example: ```json { \"ModificationTime\": \"2021-05-17T04:42:03Z\", } ``` Generally speaking, we want to separate the list, read, and head commands. The Longhorn manager HTTP endpoints. | HTTP Endpoint | Before | After | | | -- | - | | GET `/v1/backupvolumes` | read all backup volumes from the remote backup target | read all the BackupVolume CRs | | GET `/v1/backupvolumes/{volName}` | read a backup volume from the remote backup target | read a BackupVolume CR with the given volume name | | DELETE `/v1/backupvolumes/{volName}` | delete a backup volume from the remote backup target | delete the BackupVolume CR with the given volume name, `backupvolumecontroller` reconciles to delete a backup volume from the remote backup target | | POST `/v1/volumes/{volName}?action=snapshotBackup` | create a backup to the remote backup target | create a new Backup, `backup_controller` reconciles to create a backup to the remote backup target | | GET `/v1/backupvolumes/{volName}?action=backupList` | read a list of backups from the remote backup target | read a list of Backup CRs with the label filter `volume=<backup-volume-name>` | | GET `/v1/backupvolumes/{volName}?action=backupGet` | read a backup from the remote backup target | read a Backup CR with the given backup name | | DELETE `/v1/backupvolumes/{volName}?action=backupDelete` | delete a backup from the remote backup target | delete the Backup CR, `backup_controller` reconciles to delete a backup from reomte backup target | Create a new BackupTarget CRD `backuptargets.longhorn.io`. ```yaml metadata: name: the backup target name (`default`), since currently we only support one backup target. spec: backupTargetURL: the backup target URL. (string) credentialSecret: the backup target credential secret. (string) pollInterval: the backup target poll interval. (metav1.Duration) syncRequestAt: the time to request run sync the remote backup target. (*metav1.Time) status: ownerID: the node ID which is responsible for running operations of the backup target controller. (string) available: records if the remote backup target is available or not. (bool) lastSyncedAt: records the last time the backup target was running the reconcile process. (*metav1.Time) ``` Create a new BackupVolume CRD `backupvolumes.longhorn.io`. ```yaml metadata: name: the backup volume name. spec: syncRequestAt: the time to request run sync the remote backup volume. (*metav1.Time) fileCleanupRequired: indicate to delete the remote backup volume config or not. (bool) status: ownerID: the node ID which is responsible for running operations of the backup volume controller. (string) lastModificationTime: the backup volume config last modification time. (Time) size: the backup volume size. (string) labels: the backup volume labels. 
(map[string]string) createAt: the backup volume creation time. (string) lastBackupName: the latest volume backup name. (string) lastBackupAt: the latest volume backup time. (string) dataStored: the backup volume block count. (string) messages: the error messages when calling longhorn engine on listing or inspecting backup volumes. (map[string]string) lastSyncedAt: records the last time the backup volume was synced into the cluster. (*metav1.Time) ``` Create a new Backup CRD `backups.longhorn.io`. ```yaml metadata: name: the backup name. labels: `longhornvolume=<backup-volume-name>`: this label indicates which backup volume the backup belongs to. spec: fileCleanupRequired: indicate to delete the remote backup config or not (and the related block files if needed). (bool) snapshotName: the snapshot"
},
{
"data": "(string) labels: the labels of snapshot backup. (map[string]string) backingImage: the backing image. (string) backingImageURL: the backing image URL. (string) status: ownerID: the node ID which is responsible for running operations of the backup controller. (string) backupCreationIsStart: to indicate the snapshot backup creation is start or not. (bool) url: the snapshot backup URL. (string) snapshotName: the snapshot name. (string) snapshotCreateAt: the snapshot creation time. (string) backupCreateAt: the snapshot backup creation time. (string) size: the snapshot size. (string) labels: the labels of snapshot backup. (map[string]string) messages: the error messages when calling longhorn engine on listing or inspecting backups. (map[string]string) lastSyncedAt: records the last time the backup was synced into the cluster. (*metav1.Time) ``` At the existed `setting_controller`. Watches the changes of Setting CR `settings.longorn.io` field `backup-target`, `backup-target-credential-secret`, and `backupstore-poll-interval`. The setting controller is responsible for creating/updating the default BackupTarget CR. The setting controller creates a timer goroutine according to `backupstore-poll-interval`. Once the timer up, updates the BackupTarget CR `spec.syncRequestAt = time.Now()`. If the `backupstore-poll-interval = 0`, do not updates the BackupTarget CR `spec.syncRequestAt`. Create a new `backuptargetcontroller`. Watches the change of BackupTarget CR. The backup target controller is responsible for creating/updating/deleting BackupVolume CR metadata+spec. The reconcile loop steps are: Check if the current node ID == BackupTarget CR spec.responsibleNodeID. If no, skip the reconcile process. Check if the `status.lastSyncedAt < spec.syncRequestAt`. If no, skip the reconcile process. Call the longhorn engine to list all the backup volumes `backup ls --volume-only` from the remote backup target `backupStoreBackupVolumes`. If the remote backup target is unavailable: Updates the BackupTarget CR `status.available=false` and `status.lastSyncedAt=time.Now()`. Skip the current reconcile process. List in cluster BackupVolume CRs `clusterBackupVolumes`. Find the difference backup volumes `backupVolumesToPull = backupStoreBackupVolumes - clusterBackupVolumes` and create BackupVolume CR `metadata.name`. Find the difference backup volumes `backupVolumesToDelete = clusterBackupVolumes - backupStoreBackupVolumes` and delete BackupVolume CR. List in cluster BackupVolume CRs `clusterBackupVolumes` again and updates the BackupVolume CR `spec.syncRequestAt = time.Now()`. Updates the BackupTarget CR status: `status.available=true`. `status.lastSyncedAt = time.Now()`. For the Longhorn manager HTTP endpoints: DELETE `/v1/backupvolumes/{volName}`: Update the BackupVolume CR `spec.fileCleanupRequired=true` with the given volume name. Delete a BackupVolume CR with the given volume name. Create a new controller `backupvolumecontroller`. Watches the change of BackupVolume CR. The backup volume controller is responsible for deleting Backup CR and deleting backup volume from remote backup target if delete BackupVolume CR event comes in, and updating BackupVolume CR status field, and creating/deleting Backup CR. The reconcile loop steps are: Check if the current node ID == BackupTarget CR spec.responsibleNodeID. If no, skip the reconcile process. If the delete BackupVolume CR event comes in: updates Backup CRs `spec.fileCleanupRequired=true` if BackupVolume CR `spec.fileCleanupRequired=true`. 
deletes Backup CR with the given volume name. deletes the backup volume from the remote backup target `backup rm --volume <volume-name> <url>` if `spec.fileCleanupRequired=true`. remove the finalizer. Check if the `status.lastSyncedAt < spec.syncRequestAt`. If no, skip the reconcile process. Call the longhorn engine to list all the backups `backup ls --volume <volume-name>` from the remote backup target `backupStoreBackups`. List in cluster Backup CRs `clusterBackups`. Find the difference backups `backupsToPull = backupStoreBackups - clusterBackups` and create Backup CR `metadata.name` + `metadata.labels[\"longhornvolume\"]=<backup-volume-name>`. Find the difference backups `backupsToDelete = clusterBackups - backupStoreBackups` and delete Backup"
},
{
"data": "Call the longhorn engine to get the backup volume config's last modification time `backup head <volume-config>` and compares to `status.lastModificationTime`. If the config last modification time not changed, updates the `status.lastSyncedAt` and return. Call the longhorn engine to read the backup volumes' config `backup inspect-volume <volume-name>`. Updates the BackupVolume CR status: according to the backup volumes' config. `status.lastModificationTime` and `status.lastSyncedAt`. Updates the Volume CR `status.lastBackup` and `status.lastBackupAt`. For the Longhorn manager HTTP endpoints: POST `/v1/volumes/{volName}?action=snapshotBackup`: Generate the backup name <backup-name>. Create a new Backup CR with ```yaml metadata: name: <backup-name> labels: longhornvolume: <backup-volume-name> spec: snapshotName: <snapshot-name> labels: <snapshot-backup-labels> backingImage: <backing-image> backingImageURL: <backing-image-URL> ``` DELETE `/v1/backupvolumes/{volName}?action=backupDelete`: Update the Backup CR `spec.fileCleanupRequired=true` with the given volume name. Delete a Backup CR with the given backup name. Create a new controller `backup_controller`. Watches the change of Backup CR. The backup controller is responsible for updating the Backup CR status field and creating/deleting backup to/from the remote backup target. The reconcile loop steps are: Check if the current node ID == BackupTarget CR spec.responsibleNodeID. If no, skip the reconcile process. If the delete Backup CR event comes in: delete the backup from the remote backup target `backup rm <url>` if Backup CR `spec.fileCleanupRequired=true`. update the BackupVolume CR `spec.syncRequestAt=time.Now()`. remove the finalizer. Check if the Backup CR `spec.snapshotName != \"\"` and `status.backupCreationIsStart == false`. If yes: call longhorn engine/replica for backup creation. updates Backup CR `status.backupCreationIsStart = true`. fork a go routine to monitor the backup creation progress. After backup creation finished (progress = 100): update the BackupVolume CR `spec.syncRequestAt = time.Now()` if BackupVolume CR exist. create the BackupVolume CR `metadata.name` if BackupVolume CR not exist. If Backup CR `status.lastSyncedAt != nil`, the backup config had be synced, skip the reconcile process. Call the longhorn engine to read the backup config `backup inspect <backup-url>`. Updates the Backup CR status field according to the backup config. Updates the Backup CR `status.lastSyncedAt`. For the Longhorn manager HTTP endpoints: GET `/v1/backupvolumes`: read all the BackupVolume CRs. GET `/v1/backupvolumes/{volName}`: read a BackupVolume CR with the given volume name. GET `/v1/backupvolumes/{volName}?action=backupList`: read a list of Backup CRs with the label filter `volume=<backup-volume-name>`. GET `/v1/backupvolumes/{volName}?action=backupGet`: read a Backup CR with the given backup name. With over 1k backup volumes and over 1k backups under pretty high network latency (700-800ms per operation) from longhorn manager to the remote backup target: Test basic backup and restore operations. The user configures the remote backup target URL/credential and poll interval to 5 mins. The user creates two backups on vol-A and vol-B. The user can see the backup volume for vol-A and vol-B in Backup GUI. The user can see the two backups under vol-A and vol-B in Backup GUI. When the user deletes one of the backups of vol-A on the Longhorn GUI, the deleted one will be deleted after the remote backup target backup be deleted. 
When the user deletes backup volume vol-A on the Longhorn GUI, the backup volume will be deleted after the backup volume on the remote backup target is deleted, and the backups of vol-A will be deleted as well. The user can see the backup volume for vol-B in the Backup"
},
{
"data": "The user can see two backups under vol-B in Backup GUI. The user changes the remote backup target to another backup target URL/credential, the user can't see the backup volume and backup of vol-B in Backup GUI. The user configures the `backstore-poll-interval` to 1 minute. The user changes the remote backup target to the original backup target URL/credential, after 1 minute later, the user can see the backup volume and backup of vol-B. Create volume from the vol-B backup. Test DR volume operations. Create two clusters (cluster-A and cluster-B) both points to the same remote backup target. At cluster A, create a volume and run a recurring backup to the remote backup target. At cluster B, after `backupstore-poll-interval` seconds, the user can list backup volumes or list volume backups on the Longhorn GUI. At cluster B, create a DR volume from the backup volume. At cluster B, check the DR volume `status.LastBackup` and `status.LastBackupAt` is updated periodically. At cluster A, delete the backup volume on the GUI. At cluster B, after `backupstore-poll-interval` seconds, the deleted backup volume does not exist on the Longhorn GUI. At cluster B, the DR volume `status.LastBackup` and `status.LastBackupAt` won't be updated anymore. Test Backup Target URL clean up. The user configures the remote backup target URL/credential and poll interval to 5 mins. The user creates one backup on vol-A. Change the backup target setting setting to empty. Within 5 mins the poll interval triggered: The default BackupTarget CR `status.available=false`. The default BackupTarget CR `status.lastSyncedAt` be updated. All the BackupVolume CRs be deleted. All the Backup CRs be deleted. The vol-A CR `status.lastBackup` and `status.lastBackupAt` be cleaned up. The GUI displays the backup target not available. Test switch Backup Target URL. The user configures the remote backup target URL/credential to S3 and poll interval to 5 mins. The user creates one backup on vol-A to S3. The user changes the remote backup URL/credential to NFS and poll interval to 5 mins. The user creates one backup on vol-A to NFS. The user changes the remote backup target URL/credential to S3. Within 5 mins the poll interval triggered: The default BackupTarget CR `status.available=true`. The default BackupTarget CR `status.lastSyncedAt` be updated. The BackupVolume CRs be synced as the data in S3. The Backup CRs be synced as the data in S3. The vol-A CR `status.lastBackup` and `status.lastBackupAt` be synced as the data in S3. Test Backup Target credential secret changed. The user configures the remote backup target URL/credential and poll interval to 5 mins. The user creates one backup on vol-A. Change the backup target credential secret setting to empty. Within 5 mins the poll interval triggered: The default BackupTarget CR `status.available=false`. The default BackupTarget CR `status.lastSyncedAt` be updated. The GUI displays the backup target not available. None. With this enhancement, the user might want to trigger run synchronization immediately. We could either: have a button on the `Backup` to update the BackupTarget CR `spec.syncRequestAt = time.Now()` or have a button on the `Backup` -> `Backup Volume` page to have a button to update the BackupVolume CR `spec.syncRequestAt = time.Now()`. updates the `spec.syncRequestAt = time.Now()` when the user clicks the `Backup` or updates the `spec.syncRequestAt = time.Now()` when the user clicks `Backup` -> `Backup Volume`."
}
] | {
"category": "Runtime",
"file_name": "20210525-async-pull-backups.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "kube-router currently has basic health checking in form of heartbeats sent from each controller to the healthcontroller each time the main loop completes successfully. The health port is by default 20244 but can be changed with the startup option. The health path is `/healthz` ```sh --health-port=<port number> ``` If port is set to 0 (zero) no HTTP endpoint will be made available but the health controller will still run and print out any missed heartbeats to STDERR of kube-router If a controller does not send a heartbeat within controllersynctime + 5 seconds the component will be flagged as unhealthy. If any of the running components is failing the whole kube-router state will be marked as failed in the /healthz endpoint For example, if kube-router is started with ```sh --run-router=true --run-firewall=true --run-service-proxy=true --run-loadbalancer=true ``` If the route controller, policy controller or service controller exits it's main loop and does not publish a heartbeat the `/healthz` endpoint will return a error 500 signaling that kube-router is not healthy."
}
] | {
"category": "Runtime",
"file_name": "health.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This section describes the typical release cycle of rkt: A GitHub sets the target date for a future rkt release. Releases occur approximately every two to three weeks. Issues grouped into the next release milestone are worked on in order of priority. Changes are submitted for review in the form of a GitHub Pull Request (PR). Each PR undergoes review and must pass continuous integration (CI) tests before being accepted and merged into the main line of rkt source code. The day before each release is a short code freeze during which no new code or dependencies may be merged. Instead, this period focuses on polishing the release, with tasks concerning: Documentation Usability tests Issues triaging Roadmap planning and scheduling the next release milestone Organizational and backlog review Build, distribution, and install testing by release manager This section shows how to perform a release of rkt. Only parts of the procedure are automated; this is somewhat intentional (manual steps for sanity checking) but it can probably be further scripted, please help. The following example assumes we're going from version 1.1.0 (`v1.1.0`) to 1.2.0 (`v1.2.0`). Let's get started: Start at the relevant milestone on GitHub (e.g. https://github.com/rkt/rkt/milestones/v1.2.0): ensure all referenced issues are closed (or moved elsewhere, if they're not done). Close the milestone. Update the to remove the release you're performing, if necessary Ensure that `stage1/aci/aci-manifest.in` is the same version of appc/spec vendored with rkt. Otherwise, update it. Branch from the latest master, make sure your git status is clean Ensure the build is clean! `git clean -ffdx && ./autogen.sh && ./configure --enable-tpm=no --enable-functional-tests && make && make check` should work Integration tests on CI should be green Update the . Try to capture most of the salient changes since the last release, but don't go into unnecessary detail (better to link/reference the documentation wherever possible). `scripts/changelog.sh` will help generating an initial list of changes. Correct/fix entries if necessary, and group them by category. The rkt version is , so the first thing to do is bump it: Run `scripts/bump-release v1.2.0`. This should generate two commits: a bump to the actual release (e.g. v1.2.0, including CHANGELOG updates), and then a bump to the release+git (e.g. v1.2.0+git). The actual release version should only exist in a single commit! Sanity check what the script did with `git diff HEAD^^` or similar. As well as changing the actual version, it also attempts to fix a bunch of references in the documentation etc. If the script didn't work, yell at the author and/or fix it. It can almost certainly be improved. File a PR and get a review from another . This is useful to a) sanity check the diff, and b) be very explicit/public that a release is happening Ensure the CI on the release PR is green! Merge the PR Check out the release commit and build it! `git checkout HEAD^` should"
},
{
"data": "You want to be at the commit where the version is without \"+git\". Sanity check configure.ac (2nd line). Build rkt inside rkt (so make sure you have rkt in your $PATH): `export BUILDDIR=$PWD/release-build && mkdir -p $BUILDDIR && sudo BUILDDIR=$BUILDDIR ./scripts/build-rir.sh` Sanity check the binary: Check `release-build/target/bin/rkt version` Check `ldd release-build/target/bin/rkt`: it can contain linux-vdso.so, libpthread.so, libc.so, libdl.so and ld-linux-x86-64.so but nothing else. Check `ldd release-build/target/tools/init`: same as above. Build convenience packages: `sudo BUILDDIR=$BUILDDIR ./scripts/build-rir.sh --exec=./scripts/pkg/build-pkgs.sh -- 1.2.0` (add correct version) Sign a tagged release and push it to GitHub: Grab the release key (see details below) and add a signed tag: `GITCOMMITTERNAME=\"CoreOS Application Signing Key\" GITCOMMITTEREMAIL=\"security@coreos.com\" git tag -u $RKTSUBKEYID'!' -s v1.2.0 -m \"rkt v1.2.0\"` Push the tag to GitHub: `git push --tags` Now we switch to the GitHub web UI to conduct the release: Start a on Github Tag \"v1.2.0\", release title \"v1.2.0\" Copy-paste the release notes you added earlier in You can also add a little more detail and polish to the release notes here if you wish, as it is more targeted towards users (vs the changelog being more for developers); use your best judgement and see previous releases on GH for examples. Attach the release. This is a simple tarball: ``` export RKTVER=\"1.2.0\" export NAME=\"rkt-v$RKTVER\" mkdir $NAME cp release-build/target/bin/rkt release-build/target/bin/stage1-{coreos,kvm,fly}.aci $NAME/ cp -r dist/* $NAME/ sudo chown -R root:root $NAME/ tar czvf $NAME.tar.gz --numeric-owner $NAME/ ``` Attach packages, as well as each stage1 file individually so they can be fetched by the ACI discovery mechanism: ``` cp release-build/target/bin/*.deb . cp release-build/target/bin/*.rpm . cp release-build/target/bin/stage1-coreos.aci stage1-coreos-$RKTVER-linux-amd64.aci cp release-build/target/bin/stage1-kvm.aci stage1-kvm-$RKTVER-linux-amd64.aci cp release-build/target/bin/stage1-fly.aci stage1-fly-$RKTVER-linux-amd64.aci ``` Sign all release artifacts. rkt project key must be used to sign the generated binaries and images.`$RKTSUBKEYID` is the key ID of rkt project Yubikey. Connect the key and run `gpg2 --card-status` to get the ID. The public key for GPG signing can be found at and is assumed as trusted. The following commands are used for public release signing: ``` for i in $NAME.tar.gz stage1-.aci .deb *.rpm; do gpg2 -u $RKTSUBKEYID'!' --armor --output ${i}.asc --detach-sign ${i}; done for i in $NAME.tar.gz stage1-.aci .deb *.rpm; do gpg2 --verify ${i}.asc ${i}; done ``` Once signed and uploaded, double-check that all artifacts and signatures are on github. There should be 8 files in attachments (1x tar.gz, 3x ACI, 4x armored signatures). Publish the release! Clean your git tree: `sudo git clean -ffdx`. Now it's announcement time: send an email to rkt-dev@googlegroups.com describing the release. Generally this is higher level overview outlining some of the major features, not a copy-paste of the release notes. Use your discretion and see for examples. Make sure to include a list of authors that contributed since the previous release - something like the following might be handy: ``` git log v1.1.0..v1.2.0 --pretty=format:\"%an\" | sort | uniq | tr '\\n' ',' | sed -e 's#,#, #g' -e 's#, $#\\n#' ```"
}
] | {
"category": "Runtime",
"file_name": "release.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} |
[
{
"data": "This document assumes the reader is familiar with running Firecracker and issuing API commands over its API socket. For a more details on how to run Firecracker, check out the . Familiarity with socket programming, in particular Unix sockets, is also assumed. Host kernel config has: `CONFIGVHOSTVSOCK=m` Guest kernel config has: `CONFIGVIRTIOVSOCKETS=y` To confirm that vsock can be used, run below command inside guest: ```bash ls /dev/vsock ``` and confirm that the `/dev/vsock` device is available. Reference the guest kernel configuration that Firecracker is using in its CI can be found . The Firecracker vsock device aims to provide full virtio-vsock support to software running inside the guest VM, while bypassing vhost kernel code on the host. To that end, Firecracker implements the virtio-vsock device model, and mediates communication between AFUNIX sockets (on the host end) and AFVSOCK sockets (on the guest end). In order to provide channel multiplexing the guest `AF_VSOCK` ports are mapped 1:1 to `AF_UNIX` sockets on the host. The virtio-vsock device must be configured with a path to an `AF_UNIX` socket on the host (e.g. `/path/to/v.sock`). There are two scenarios to be considered, depending on where the connection is initiated. When a microvm having a vsock device attached is started, Firecracker will begin listening on an AF_UNIX socket (e.g. `/path/to/v.sock`). When the host needs to initiate a connection, it should connect to that Unix socket, then send a connect command, in text form, specifying the destination AF_VSOCK port: \"CONNECT PORT\\\\n\". Where PORT is the decimal port number, and \"\\\\n\" is EOL (ASCII 0x0A). Following that, the same connection will be forwarded by Firecracker to the guest software listening on that port, thus establishing the requested channel. If the connection has been established, Firecracker will send an acknowledgement message to the connecting end (host-side), in the form \"OK PORT\\\\n\", where `PORT` is the vsock port number assigned to the host end. If no one is listening, Firecracker will terminate the host connection. Client A initiates connection to Server A in : Host: At VM configuration time, add a virtio-vsock device, with some path specified in `uds_path`; Guest: create an AFVSOCK socket and `listen()` on `<portnum>`; Host: `connect()` to AFUNIX at `udspath`. Host: `send()` \"CONNECT `<port_num>`\\\\n\". Guest: `accept()` the new connection. Host: `read()` \"OK `<assignedhostsideport>`\\\\n\". The channel is established between the sockets obtained at steps 3 (host) and 5"
},
{
"data": "When the virtio-vsock device model in Firecracker detects a connection request coming from the guest (a VIRTIOVSOCKOP_REQUEST packet), it tries to forward the connection to an AF_UNIX socket listening on the host, at `/path/to/v.sockPORT` (or whatever path was configured via the `udspath` property of the vsock device), where `PORT` is the destination port (in decimal), as specified in the connection request packet. If no such socket exists, or no one is listening on it, a connection cannot be established, and a VIRTIOVSOCKOP_RST packet will be sent back to the guest. Client B initiates connection to Server B in : Host: At VM configuration time, add a virtio-vsock device, with some `uds_path` (e.g. `/path/to/v.sock`). Host: create and listen on an AFUNIX socket at `/path/to/v.sockPORT`. Guest: create an AFVSOCK socket and connect to `HOSTCID` (i.e. integer value 2) and `PORT`; Host: `accept()` the new connection. The channel is established between the sockets obtained at steps 4 (host) and 3 (guest). The virtio-vsock device will require a CID, and the path to a backing AF_UNIX socket: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/vsock' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"guest_cid\": 3, \"uds_path\": \"./v.sock\" }' ``` Once the microvm is started, Firecracker will create and start listening on the AFUNIX socket at `udspath`. Incoming connections will get forwarded to the guest microvm, and translated to AF_VSOCK. The destination port is expected to be specified by sending the text command \"CONNECT `<port_num>`\\\\n\", immediately after the AF_UNIX connection is established. Connections initiated from within the guest will be forwarded to AF_UNIX sockets expected to be listening at `./v.sock<portnum>`. I.e. a guest connection to port 52 will get forwarded to `./v.sock_52`. The examples below assume a running microvm, with a vsock device configured as shown and version 1.7.4.0 or later. First, make sure the vsock port is bound and listened to on the guest side. Say, port 52: ```bash socat VSOCK-LISTEN:52,fork - ``` On the host side, connect to `./v.sock` and issue a connection request to that port: ```bash $ socat - UNIX-CONNECT:./v.sock CONNECT 52 ``` `socat` will display the connection acknowledgement message: ```console OK 1073741824 ``` The connection should now be established (in the above example, between `socat` on the guest and the host side). First make sure the AF_UNIX corresponding to your desired port is listened to on the host side: ```bash socat - UNIX-LISTEN:./v.sock_52 ``` On the guest side, create an AF_VSOCK socket and connect it to the previously chosen port on the host (CID=2): ```bash socat - VSOCK-CONNECT:2:52 ``` Vsock snapshot support is currently limited. Please see ."
}
] | {
"category": "Runtime",
"file_name": "vsock.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
[
{
"data": "```yaml apiVersion: v1 kind: ConfigMap metadata: name: carina-csi-config namespace: kube-system labels: class: carina data: config.json: |- { \"diskSelector\": [ { \"name\": \"carina-vg-ssd\", \"re\": [\"loop2+\"], \"policy\": \"LVM\", \"nodeLabel\": \"kubernetes.io/hostname\" }, { \"name\": \"carina-raw-hdd\", \"re\": [\"vdb+\", \"sd+\"], \"policy\": \"RAW\", \"nodeLabel\": \"kubernetes.io/hostname\" } ], \"diskScanInterval\": \"300\", # disk scan interval in seconds \"diskGroupPolicy\": \"type\", # the policy to group local disks \"schedulerStrategy\": \"spreadout\" # binpack or spreadout } ``` Each carina-node will scan local disks and group them into different groups. ```shell $ kubectl exec -it csi-carina-node-cmgmm -c csi-carina-node -n kube-system bash $ pvs PV VG Fmt Attr PSize PFree /dev/vdc carina-vg-hdd lvm2 a-- <80.00g <79.95g /dev/vdd carina-vg-hdd lvm2 a-- <80.00g 41.98g $ vgs VG #PV #LV #SN Attr VSize VFree carina-vg-hdd 2 10 0 wz--n- 159.99g <121.93g ``` With `\"diskSelector\": [\"loop+\", \"vd+\"]`and `diskGroupPolicy: LVM`, carina will create below VG: ```shell $ kubectl exec -it csi-carina-node-cmgmm -c csi-carina-node -n kube-system bash $ pvs PV VG Fmt Attr PSize PFree /dev/loop0 carina-vg-hdd lvm2 a-- <80.00g <79.95g /dev/loop1 carina-vg-hdd lvm2 a-- <80.00g 79.98g $ vgs VG #PV #LV #SN Attr VSize VFree carina-vg-hdd 2 10 0 wz--n- 159.99g <121.93g ``` If changing diskSelector to `\"diskSelector\": [\"loop0\", \"vd+\"]`, then carina will automatically remove related disks. ```shell $ kubectl exec -it csi-carina-node-cmgmm -c csi-carina-node -n kube-system bash $ pvs PV VG Fmt Attr PSize PFree /dev/loop0 carina-vg-hdd lvm2 a-- <80.00g <79.95g $ vgs VG #PV #LV #SN Attr VSize VFree carina-vg-hdd 1 10 0 wz--n- 79.99g <79.93g ```"
}
] | {
"category": "Runtime",
"file_name": "disk-manager.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. Examples of unacceptable behavior by participants include: The use of sexualized language or imagery Personal attacks Trolling or insulting/derogatory comments Public or private harassment Publishing others' private information, such as physical or electronic addresses, without explicit permission Other unethical or unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team. This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a project maintainer, Brandon Philips <brandon.philips@coreos.com>, and/or Rithu John <rithu.john@coreos.com>. This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/ CoreOS events are working conferences intended for professional networking and collaboration in the CoreOS community. Attendees are expected to behave according to professional standards and in accordance with their employers policies on appropriate workplace behavior. While at CoreOS events or related social networking opportunities, attendees should not engage in discriminatory or offensive speech or actions including but not limited to gender, sexuality, race, age, disability, or religion. Speakers should be especially aware of these concerns. CoreOS does not condone any statements by speakers contrary to these standards. CoreOS reserves the right to deny entrance and/or eject from an event (without refund) any individual found to be engaging in discriminatory or offensive speech or actions. Please bring any concerns to the immediate attention of designated on-site staff, Brandon Philips <brandon.philips@coreos.com>, and/or Rithu John <rithu.john@coreos.com>."
}
] | {
"category": "Runtime",
"file_name": "code-of-conduct.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Aside from setting options for Multus, one of the goals of configuration is to set the configuration for your default network. The default network is also sometimes referred as the \"primary CNI plugin\", the \"primary network\", or a \"default CNI plugin\" and is the CNI plugin that is used to implement in your cluster. Common examples include Flannel, Weave, Calico, Cillium, and OVN-Kubernetes, among others. Here we will refer to this as your default CNI plugin or default network. Following is the example of multus config file, in `/etc/cni/net.d/`. Example configuration using `clusterNetwork` (see also ) ``` { \"cniVersion\": \"0.3.1\", \"name\": \"node-cni-network\", \"type\": \"multus\", \"kubeconfig\": \"/etc/kubernetes/node-kubeconfig.yaml\", \"confDir\": \"/etc/cni/multus/net.d\", \"cniDir\": \"/var/lib/cni/multus\", \"binDir\": \"/opt/cni/bin\", \"logFile\": \"/var/log/multus.log\", \"logLevel\": \"debug\", \"logOptions\": { \"maxAge\": 5, \"maxSize\": 100, \"maxBackups\": 5, \"compress\": true }, \"capabilities\": { \"portMappings\": true }, \"namespaceIsolation\": false, \"clusterNetwork\": \"/etc/cni/net.d/99-flannel.conf\", \"defaultNetworks\": [\"sidecarCRD\", \"exampleNetwork\"], \"systemNamespaces\": [\"kube-system\", \"admin\"], \"multusNamespace\": \"kube-system\", allowTryDeleteOnErr: false } ``` This is a general index of options, however note that you must set either the `clusterNetwork` or `delegates` options, see the following sections after the index for details. `name` (string, required): The name of the network `type` (string, required): Must be set to the value of "multus" `confDir` (string, optional): directory for CNI config file that multus reads. default `/etc/cni/multus/net.d` `cniDir` (string, optional): Multus CNI data directory, default `/var/lib/cni/multus` `binDir` (string, optional): additional directory for CNI plugins which multus calls, in addition to the default (the default is typically set to `/opt/cni/bin`) `kubeconfig` (string, optional): kubeconfig file for the out of cluster communication with kube-apiserver. See the example . If you would like to use CRD (i.e. network attachment definition), this is required (bool, optional): Enable or disable logging to `STDERR`. Defaults to true. (string, optional): file path for log file. multus puts log in given file (string, optional): logging level (values in decreasing order of verbosity: \"debug\", \"error\", \"verbose\", or \"panic\") (object, optional): logging option, More detailed log configuration (boolean, optional): Enables a security feature where pods are only allowed to access `NetworkAttachmentDefinitions` in the namespace where the pod resides. Defaults to false. : (string, optional): Used only when `namespaceIsolation` is true, allows specification of comma-delimited list of namespaces which may be referred to outside of namespace isolation. `capabilities` ({}list, optional): supported by at least one of the delegates. (NOTE: Multus only supports portMappings/Bandwidth capability for cluster networks). : The path to a file whose existence denotes that the default network is ready message to next when some missing error. Defaults to false. 
`systemNamespaces` ([]string, optional): list of namespaces for the Kubernetes system (namespaces listed here will not have `defaultNetworks` added) `multusNamespace` (string, optional): namespace for `clusterNetwork`/`defaultNetworks` (the default value is `kube-system`) `retryDeleteOnError` (bool, optional): Enable or disable retrying the delegate DEL message when an error is encountered. Defaults to false. The `clusterNetwork` option and the `delegates` option are mutually exclusive. If `clusterNetwork` is set, the `delegates` field is ignored. You must set one or the other. Therefore: Set `clusterNetwork` and, if this is set, optionally set the `defaultNetworks`. OR you must set `delegates`. Options: `clusterNetwork` (string, required if not using `delegates`): the default CNI plugin to be executed. `defaultNetworks` ([]string, optional): Additional / secondary network attachments that are always attached to each pod.
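To make the `clusterNetwork` form concrete, below is a stripped-down counterpart to the fuller example above. This is an illustrative sketch only: the file name, kubeconfig path, and delegated Flannel config path are assumptions and should be adjusted to your cluster.

```bash
# Write a minimal Multus configuration that delegates the default network
# to an existing Flannel config and adds no extra default attachments.
cat <<'EOF' | sudo tee /etc/cni/net.d/00-multus.conf
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "clusterNetwork": "/etc/cni/net.d/99-flannel.conf",
  "defaultNetworks": []
}
EOF
```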
},
{
"data": "The following values are valid for both `clusterNetwork` and `defaultNetworks` and are processed in the following order: The name of a `NetworkAttachmentDefinition` custom resource in the namespace specified by the `multusNamespace` configuration option The `\"name\"` value in the contents of a CNI JSON configuration file in the CNI configuration directory, The given name for `clusterNetwork` should match the value for `name` key in the contents of the CNI JSON file (e.g. `\"name\": \"test\"` in `my.conf` when `\"clusterNetwork\": \"test\"`) A path to a directory containing CNI json configuration files. The alphabetically first file will be used. Absolute file path for CNI config file If none of the above are found using the value, Multus will raise an error. If for example you have `defaultNetworks` set as: ``` \"defaultNetworks\": [\"sidecarNetwork\", \"exampleNetwork\"], ``` In this example, the values in the expression refer to `NetworkAttachmentDefinition` custom resource names. Therefore, there must be `NetworkAttachmentDefinitions` already created with the names `sidecarNetwork` and `exampleNetwork`. This means that in addition to the cluster network, each pod would be assigned two additional networks by default, and the pod would present three interfaces, e.g. `eth0`, `net1`, and `net2`, with `net1` and `net2` being set by the above described `NetworkAttachmentDefinitions`. Additional attachments as made by setting `k8s.v1.cni.cncf.io/networks` on pods will be made in addition to those set in the `defaultNetworks` configuration option. If `clusterNetwork` is not set, you must use `delegates`. `delegates` ([]map, required if not using `clusterNetwork`). List of CNI configurations to be used as your default CNI plugin(s). Example configuration using `delegates`: ``` { \"cniVersion\": \"0.3.1\", \"name\": \"node-cni-network\", \"type\": \"multus\", \"kubeconfig\": \"/etc/kubernetes/node-kubeconfig.yaml\", \"confDir\": \"/etc/cni/multus/net.d\", \"cniDir\": \"/var/lib/cni/multus\", \"binDir\": \"/opt/cni/bin\", \"delegates\": [{ \"type\": \"weave-net\", \"hairpinMode\": true }, { \"type\": \"macvlan\", ... (snip) }] } ``` You may desire that your default network becomes ready before attaching networks with Multus. This is disabled by default and not used unless you set the `readinessindicatorfile` option to a non-blank value. For example, if you use Flannel as a default network, the recommended method for Flannel to be installed is via a daemonset that also drops a configuration file in `/etc/cni/net.d/`. This may apply to other plugins that place that configuration file upon their readiness, therefore, Multus uses their configuration filename as a semaphore and optionally waits to attach networks to pods until that file exists. In this manner, you may prevent pods from crash looping, and instead wait for that default network to be ready. Only one option is necessary to configure this functionality: `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready. NOTE: If `readinessindicatorfile` is unset, or is an empty string, this functionality will be disabled, and is disabled by default. You may wish to enable some enhanced logging for Multus, especially during the process where you're configuring Multus and need to understand what is or isn't working with your particular configuration. 
By default, Multus will log via `STDERR`, which is the standard method by which CNI plugins communicate errors, and these errors are logged by the Kubelet. Optionally, you may disable this method by setting the `logToStderr` option in your CNI configuration: ``` \"logToStderr\": false, ``` Optionally, you may have Multus log to a file on the filesystem. This file will be written locally on each node where Multus is executed. You may configure this via the `LogFile` option in the CNI configuration. By default this additional logging to a flat file is disabled. For example, in your CNI configuration, you may set: ``` \"logFile\": \"/var/log/multus.log\", ``` The default logging level is set as `panic` -- this will log only the most critical errors, and is the least verbose logging level.
},
{
"data": "The available logging level values, in decreasing order of verbosity are: `debug` `verbose` `error` `panic` You may configure the logging level by using the `LogLevel` option in your CNI configuration. For example: ``` \"logLevel\": \"debug\", ``` If you want a more detailed configuration of the logging, This includes the following parameters: `maxAge` the maximum number of days to retain old log files in their filename `maxSize` the maximum size in megabytes of the log file before it gets rotated `maxBackups` the maximum number of days to retain old log files in their filename `compress` compress determines if the rotated log files should be compressed using gzip For example in your CNI configuration, you may set: ``` \"logOptions\": { \"maxAge\": 5, \"maxSize\": 100, \"maxBackups\": 5, \"compress\": true } ``` The functionality provided by the `namespaceIsolation` configuration option enables a mode where Multus only allows pods to access custom resources (the `NetworkAttachmentDefinitions`) within the namespace where that pod resides. In other words, the `NetworkAttachmentDefinitions` are isolated to usage within the namespace in which they're created. NOTE: The default namespace is special in this scenario. Even with namespace isolation enabled, any pod, in any namespace is allowed to refer to `NetworkAttachmentDefinitions` in the default namespace. This allows you to create commonly used unprivileged `NetworkAttachmentDefinitions` without having to put them in all namespaces. For example, if you had a `NetworkAttachmentDefinition` named `foo` the default namespace, you may reference it in an annotation with: `default/foo`. NOTE: You can also add additional namespaces which can be referred to globally using the `global-namespaces` option (see next section). For example, if a pod is created in the namespace called `development`, Multus will not allow networks to be attached when defined by custom resources created in a different namespace, say in the `default` network. Consider the situation where you have a system that has users of different privilege levels -- as an example, a platform which has two administrators: a Senior Administrator and a Junior Administrator. The Senior Administrator may have access to all namespaces, and some network configurations as used by Multus are considered to be privileged in that they allow access to some protected resources available on the network. However, the Junior Administrator has access to only a subset of namespaces, and therefore it should be assumed that the Junior Administrator cannot create pods in their limited subset of namespaces. The `namespaceIsolation` feature provides for this isolation, allowing pods created in given namespaces to only access custom resources in the same namespace as the pod. Namespace Isolation is disabled by default. ``` \"namespaceIsolation\": true, ``` Let's setup an example where we: Create a custom resource in a namespace called `privileged` Create a pod in a namespace called `development`, and have annotations that reference a custom resource in the `privileged` namespace. The creation of this pod should be disallowed by Multus (as we'll have the use of the custom resources limited only to those custom resources created within the same namespace as the pod). Given the above scenario with a Junior & Senior Administrator. You may assume that the Senior Administrator has access to all namespaces, whereas the Junior Administrator has access only to the `development` namespace. 
Firstly, we show that we have a number of namespaces available: ``` [user@kube-master ~]$ kubectl get namespaces NAME STATUS AGE default Active 7h27m development Active 3h kube-public Active 7h27m kube-system Active 7h27m privileged Active 4s ``` We'll create a `NetworkAttachmentDefinition` in the `privileged` namespace:
},
{
"data": "``` [user@kube-master ~]$ cat cr.yml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf spec: config: '{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"192.168.1.0/24\", \"rangeStart\": \"192.168.1.200\", \"rangeEnd\": \"192.168.1.216\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"gateway\": \"192.168.1.1\" } }' [user@kube-master ~]$ kubectl create -f cr.yml -n privileged networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created [user@kube-master ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io -n privileged NAME AGE macvlan-conf 11s ``` Next, we'll create a pod with an annotation that references the privileged namespace. Pay particular attention to the annotation that reads `k8s.v1.cni.cncf.io/networks: privileged/macvlan-conf` -- where it contains a reference to a `namespace/configuration-name` formatted network attachment name. In this case referring to the `macvlan-conf` in the namespace called `privileged`. ``` [user@kube-master ~]$ cat example.pod.yml apiVersion: v1 kind: Pod metadata: name: samplepod annotations: k8s.v1.cni.cncf.io/networks: privileged/macvlan-conf spec: containers: name: samplepod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: dougbtv/centos-network [user@kube-master ~]$ kubectl create -f example.pod.yml -n development pod/samplepod created ``` You'll note that pod fails to spawn successfully. If you check the Multus logs, you'll see an entry such as: ``` 2018-12-18T21:41:32Z [error] GetNetworkDelegates: namespace isolation enabled, annotation violates permission, pod is in namespace development but refers to target namespace privileged ``` This error expresses that the pod resides in the namespace named `development` but refers to a `NetworkAttachmentDefinition` outside of that namespace, in this case, the namespace named `privileged`. In a positive example, you'd instead create the `NetworkAttachmentDefinition` in the `development` namespace, and you'd have an annotation that either A. does not reference a namespace, or B. refers to the same annotation. A positive example may be: ``` [user@kube-master ~]$ kubectl create -f cr.yml -n development networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created [user@kube-master ~]$ cat positive.example.pod apiVersion: v1 kind: Pod metadata: name: samplepod annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf spec: containers: name: samplepod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: dougbtv/centos-network [user@kube-master ~]$ kubectl create -f positive.example.pod -n development pod/samplepod created [user@kube-master ~]$ kubectl get pods -n development NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 31s ``` The `globalNamespaces` configuration option is only used when `namespaceIsolation` is set to true. `globalNamespaces` specifies a comma-delimited list of namespaces which can be referred to from outside of any given namespace in which a pod resides. ``` \"globalNamespaces\": \"default,namespace-a,namespace-b\", ``` Note that when using `globalNamespaces` the `default` namespace must be specified in the list if you wish to use that namespace, when `globalNamespaces` is not set, the `default` namespace is implied to be used across namespaces. 
Users may also specify the default network for any given pod (via annotation), for cases where there are multiple cluster networks available within a Kubernetes cluster. Example use cases may include: During a migration from one default network to another (e.g. from Flannel to Calico), it may be practical if both network solutions are able to operate in parallel. Users can then control which network a pod should attach to during the transition period. Some users may deploy multiple cluster networks for the sake of their security considerations, and may desire to specify the default network for individual pods. Follow these steps to specify the default network on a pod-by-pod basis: First, you need to define all your cluster networks as network-attachment-definition objects. Next, you can specify the network you want in pods with the `v1.multus-cni.io/default-network` annotation. Pods which do not specify this annotation will keep using the CNI as defined in the Multus config file. ```yaml apiVersion: v1 kind: Pod metadata: name: pod-example annotations: v1.multus-cni.io/default-network: calico-conf ... ```"
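Once such a pod is running, a quick way to confirm which networks were actually attached is to read the status annotation that Multus writes back onto the pod. This is a hedged example: depending on the Multus version, the annotation may be named `k8s.v1.cni.cncf.io/network-status` or `k8s.v1.cni.cncf.io/networks-status`.

```bash
# Print the JSON list of network attachments Multus recorded for the pod.
kubectl get pod pod-example \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
```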
}
] | {
"category": "Runtime",
"file_name": "configuration.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: \"Velero 1.0 Has Arrived: Delivering Enhanced Stability, Usability and Extensibility Features\" slug: velero-1.0-has-arrived excerpt: Just three months after the release of Velero 0.11, the communitys momentum continues with the delivery of the landmark version 1.0 release. author_name: Tom Spoonemore categories: ['velero','release'] tags: ['Velero Team', 'Tom Spoonemore'] Just three months after the release of Velero 0.11, the communitys momentum continues with the delivery of the landmark version 1.0 release. This significant release improves the installation experience, Helm support, the plugin system, and overall stability. We want to thank the community and the team, and acknowledge all of their hard work and amazing contributions to this major milestone. Data protection is always a chief concern for application owners who want to make sure that they can restore a cluster to a known good state, recover from a crashed cluster, or migrate to a new environment. With Velero 1.0, Kubernetes cluster administrators can feel confident that they have a production-grade backup solution for their cluster resources and applications. Following are the highlights of Velero 1.0. Velero now has a new `velero install` command to help get up and running quickly. The new installation lets you specify cloud provider information in one step. If you want to see what changes will be made to your cluster or customize the YAML that the Velero installation will make, you can use the new `--dry-run` option to output the full configuration. Helm is one of the best ways to manage packages for Kubernetes deployments, and while the Velero team has always contributed to the community developing the Velero Helm chart, we can now announce that it is fully supported and a great way to make your Velero server installation quicker, simpler, and more easily customizable. This release overhauls the plugin interface to enhance the extensibility of Velero. It is now easier for developers to contribute and maintain plugins. Weve reworked the import surface, reducing the number of modules that need to be called directly by plugin developers. Weve also improved plugin name checking to prevent collisions of plugins that have the same name. When problems happen, you need to have as much data as possible for troubleshooting. Now Velero traps plugin panics and logs errors, which are annotated with the file and line where the error occurred. Plugin authors now have the flexibility to add custom logic to govern whether a particular item should be restored. This can be helpful in a number of upgrade and migration use cases. In conjunction with the Velero 1.0 release, we are happy to announce an update to the Portworx plugin. Heres what Vick Kelkar, Director Product Management at Portworx, is saying about cloud native data protection and Velero: As organizations move critical applications to Kubernetes, they must be able to meet strict requirements around business continuity and disaster recovery. This means backing up the Kubernetes control plane as well as application"
},
{
"data": "Velero backs up control plane information and now, with the Portworx Enterprise Velero plugin, organizations can back up, protect and migrate their mission data across Kubernetes clusters and environments with zero downtime. Thanks to Vick and the Portworx team for their support and contributions to Velero. We have moved our support for Restic to beta with the 1.0 release. For admins running Kubernetes on-premises or using storage systems that arent yet supported by Velero plugins, Restic offers file system level backups and restores that are fully supported in Velero. While we put in the final touches to meet our bar for stability and performance, we are upgrading the status of Restic to beta in recognition of all the admins that are finding value in file system backup support. Weve now added safety checks to ensure that Velero doesnt overwrite an existing backup in Object Storage. Because that would be bad. Now, if a backup needs to be replaced with the same name, it will need to be deleted first, and then recreated. In the real world, not every backup or restore succeeds fully every time. In the past, Velero marked these incomplete actions with a `failed` status. With Velero 1.0, we have introduced an additional phrase to indicate `partial failure`. This phrase lets a cluster admin know that there are issues with the backup or restore, but indicates that the action was able to finish. The restoration of resources that you selectively backed up is now improved with better support for related items, such as dynamic volumes. Previously, if you partially restored data by using a label selector, dynamically provisioned persistent volumes were not restored because they didn't get a label when they are created. Now Velero has better logic to handle related items and will restore volumes attached to pods that meet the label selector condition. No major release is complete without a few breaking changes. We have a couple in this release. We are saying goodbye to the Heptio Ark API data types. We left these around after the name of Heptio Ark was changed to Velero in version 0.11, but now it is time for them to go. If you have software using the older API data types, youll want to make updates before upgrading to Velero 1.0. Though technically not a breaking change, we have changed the way we are handing Azure secrets. You will create a credential file and pass it to `velero install`. It is now more consistent with how we handle secrets for the other providers and is the method used by both the Helm chart and `velero install`. We still support the old method for now. Velero is better because of our contributors and maintainers. It is because of them that we can bring great software to the community. Please join us during our and catch up with past meetings on YouTube at the and the . You can always find the latest project information at . Look for issues on GitHub marked or if you want to roll up your sleeves and write some code with us. You can find us on . Follow us on Twitter at . Tom Spoonemore Velero SME Product Line Manager, VMware"
}
] | {
"category": "Runtime",
"file_name": "2019-05-20-velero-1.0-has-arrived.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Each brick is regresented with a graph of translators in a brick process. Each translator in the graph has its own set of threads and mem pools and other system resources allocations. Most of the times all these resources are not put to full use. Reducing the resource consumption of each brick is a problem in itself that needs to be addressed. The other aspect to it is, sharing of resources across brick graph, this becomes critical in brick multiplexing scenario. In this document we will be discussing only about the threads. If a brick mux process hosts 50 bricks there are atleast 600+ threads created in that process. Some of these are global threads that are shared by all the brick graphs, and others are per translator threads. The global threads like synctask threads, timer threads, sigwaiter, poller etc. are configurable and do not needs to be reduced. The per translator threads keeps growing as the number of bricks in the process increases. Each brick spawns atleast 10+ threads: io-threads posix threads: Janitor Fsyncer Helper aio-thread changelog and bitrot threads(even when the features are not enabled) io-threads should be made global to the process, having 16+ threads for each brick does not make sense. But io-thread translator is loaded in the graph, and the position of io-thread translator decides from when the fops will be parallelised across threads. We cannot entirely move the io-threads to libglusterfs and say the multiplexing happens from the primary translator or so. Hence, the io-thread orchestrator code is moved to libglusterfs, which ensures there is only one set of io-threads that is shared among the io-threads translator in each brick. This poses performance issues due to lock-contention in the io-threds layer. This also shall be addressed by having multiple locks instead of one global lock for io-threads. Most of the posix threads execute tasks in a timely manner, hence it can be replaced with a timer whose handler register a task to synctask framework, once the task is complete, the timer is registered again. With this we can eliminate the need of one thread for each task. The problem with using synctasks is the performance impact it will have due to make/swapcontext. For task that does not involve network wait, we need not do makecontext, instead the task function with arg can be stored and executed when a synctask thread is free. We need to implement an api in synctask to execute atomic tasks(no network wait) without the overhead of make/swapcontext. This will solve the performance impact associated with using synctask framework. And the other challenge, is to cancel all the tasks pending from a translator. This is important to cleanly detach brick. For this, we need to implement an api in synctask that can cancel all the tasks from a given translator. For future, this will be replced to use global thread-pool(once implemented). In the initial implementation, the threads are not created if the feature is not enabled. We need to share threads across changelog instances if we plan to enable these features in brick mux scenario."
}
] | {
"category": "Runtime",
"file_name": "brickmux-thread-reduction.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Output the dependencies graph in graphviz dot format ``` cilium-operator-aws hive dot-graph [flags] ``` ``` -h, --help help for dot-graph ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. 
--enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in"
},
{
"data": "(default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Inspect the hive"
}
] | {
"category": "Runtime",
"file_name": "cilium-operator-aws_hive_dot-graph.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "This tutorial assumes you've already installed and setup CRI-O. If you have not, start . It also assumes you've set up your system to use kubeadm. If you haven't done so, kubeadm expects a POD Network CIDR (`--pod-network-cidr`) to be defined when you install the cluster. The value of `--pod-network-cidr` depends on which CNI plugin you choose. <!-- markdownlint-disable MD013 --> | CNI Plugin | CIDR | Notes | | -- | - | -- | | Bridge plugin (default) | 10.85.0.0/16 | The default bridge plugin is defined . This is only suitable when your cluster has a single node. | | Flannel | 10.244.0.0/16 | This is a good choice for clusters with multiple nodes. | <!-- markdownlint-enable MD013 --> For example, to use the script below with the bridge plugin, run `export CIDR=10.85.0.0/16`. A list of CNI plugins can be found in the Kubernetes documentation. Each plugin will define its own default CIDR. Given you've set CIDR, and assuming you've set the `cgroup_driver` in your CRI-O configuration as `systemd` (which is the default value), all you need to do is start crio (as defined ), and run: `kubeadm init --pod-network-cidr=$CIDR --cri-socket=unix:///var/run/crio/crio.sock` We will assume that the user has installed CRI-O and all necessary packages. We will also assume that all necessary components are configured and everything is working as expected. The user should have a private repo where the container images are pushed. An example of container images for Kubernetes version 1.18.2: ```bash $ kubeadm config images list --image-repository user.private.repo --kubernetes-version=v1.18.2 user.private.repo/kube-apiserver:v1.18.2 user.private.repo/kube-controller-manager:v1.18.2 user.private.repo/kube-scheduler:v1.18.2 user.private.repo/kube-proxy:v1.18.2 user.private.repo/pause:3.2 user.private.repo/etcd:3.4.3-0 user.private.repo/coredns:1.6.7 ``` The user needs to configure the file. Sample of configurations: ```bash $ cat /etc/containers/registries.conf unqualified-search-registries = [\"user.private.repo\"] [[registry]] prefix = \"registry.k8s.io\" insecure = false blocked = false location = \"registry.k8s.io\" [[registry.mirror]] location = \"user.private.repo\" ``` Next the user should reload and restart the CRI-O service to load the configurations."
}
] | {
"category": "Runtime",
"file_name": "kubeadm.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
} |
[
{
"data": "layout: global title: Introduction Check out these 2 short videos to learn about the existing data problems that Alluxio is designed to solve and where Alluxio sits in the big data ecosystem: {:target=\"_blank\"} (3:06) {:target=\"_blank\"} (2:50) Alluxio is world's first open source for analytics and AI for the cloud. It bridges the gap between data driven applications and storage systems, bringing data from the storage tier closer to the data driven applications and makes it easily accessible enabling applications to connect to numerous storage systems through a common interface. Alluxios memory-first tiered architecture enables data access at speeds orders of magnitude faster than existing solutions. In the data ecosystem, Alluxio lies between data driven applications, such as Apache Spark, Presto, Tensorflow, Apache HBase, Apache Hive, or Apache Flink, and various persistent storage systems, such as Amazon S3, Google Cloud Storage, HDFS, IBM Cleversafe, EMC ECS, Ceph, NFS, Minio, and Alibaba OSS. Alluxio unifies the data stored in these different storage systems, presenting unified client APIs and a global namespace to its upper layer data driven applications. The Alluxio project originated from (see ) as the data access layer of the Berkeley Data Analytics Stack (). It is open source under . Alluxio is one of the fastest growing open source projects that has attracted more than from over 300 institutions including Alibaba, Alluxio, Baidu, CMU, Google, IBM, Intel, NJU, Red Hat, Tencent, UC Berkeley, and Yahoo. Today, Alluxio is deployed in production by with the largest deployment exceeding 1,500 nodes. <p align=\"center\"> <img src=\"https://d39kqat1wpn1o5.cloudfront.net/app/uploads/2021/07/alluxio-overview-r071521.png\" width=\"800\" alt=\"Ecosystem\"/> </p> Dora, short for Decentralized Object Repository Architecture, is the foundation of the Alluxio system. As an open-source distributed caching storage system, Dora offers low latency, high throughput, and cost savings, while aiming to provide a unified data layer that can support various data workloads, including AI and data analytics. Dora leverages decentralized storage and metadata management to provide higher performance and availability, as well as pluggable data security and governance, enabling better scalability and more efficient management of large-scale data access. Doras architecture goal: Scalability: Scalability is a top priority for Dora, which needs to support billions of files to meet the demands of data-intensive applications, such as AI training. High Availability: Dora's architecture is designed with high availability in mind, with 99.99% uptime and protection against single points of failure. Performance: Performance is a key goal for Dora, which prioritizes Presto/Trino powered SQL analytics workloads and GPU utilization for AI workloads. The diagram below shows the architecture design of Dora, which consists of four major components: the service registry, scheduler, client, and worker. The worker is the most important component, as it stores both metadata and data that are sharded by key, usually the path of the file. The client runs inside the applications and utilizes the same consistent hash algorithm to determine the appropriate worker for the corresponding file. The service registry is responsible for service discovery and maintains a list of workers. The scheduler handles all asynchronous jobs, such as preloading data to workers. 
Join our vibrant and fast-growing community to connect with users & developers of Alluxio. If you need help running Alluxio or encounter any blockers, send your technical questions to our `#troubleshooting` channel. If you are a developer looking to contribute to Alluxio, check out the `#dev` channel."
}
] | {
"category": "Runtime",
"file_name": "Introduction.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
} |
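The Dora description above notes that clients and workers use the same consistent-hash of the file path to decide which worker owns a file. The following is a minimal, self-contained Go sketch of that lookup, using a hash ring with virtual nodes; the type and function names are hypothetical and are not Alluxio's actual API.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring is a toy consistent-hash ring: each worker is placed at several
// virtual points so keys spread evenly when workers join or leave.
type ring struct {
	points  []uint32          // sorted hash points on the ring
	workers map[uint32]string // hash point -> worker address
}

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(workers []string, vnodes int) *ring {
	r := &ring{workers: map[uint32]string{}}
	for _, w := range workers {
		for i := 0; i < vnodes; i++ {
			p := hashKey(fmt.Sprintf("%s#%d", w, i))
			r.points = append(r.points, p)
			r.workers[p] = w
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// workerFor returns the worker responsible for a file path: the first
// ring point clockwise from the path's hash.
func (r *ring) workerFor(path string) string {
	h := hashKey(path)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.workers[r.points[i]]
}

func main() {
	r := newRing([]string{"worker-0:29999", "worker-1:29999", "worker-2:29999"}, 64)
	fmt.Println(r.workerFor("s3://bucket/training/part-0001.parquet"))
}
```

The appeal of this scheme is that only the keys adjacent to a departed worker's points move when membership changes, so adding or removing a worker reshuffles a small fraction of files rather than the whole namespace.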
[
{
"data": "(instance-properties)= Instance properties are set when the instance is created. They cannot be part of a {ref}`profile <profiles>`. The following instance properties are available: ```{list-table} :header-rows: 1 :widths: 2 1 4 - Property Read-only Description - `name` yes Instance name (see {ref}`instance-name-requirements`) - `architecture` no Instance architecture ``` (instance-name-requirements)= The instance name can be changed only by renaming the instance with the command. Valid instance names must fulfill the following requirements: The name must be between 1 and 63 characters long. The name must contain only letters, numbers and dashes from the ASCII table. The name must not start with a digit or a dash. The name must not end with a dash. The purpose of these requirements is to ensure that the instance name can be used in DNS records, on the file system, in various security profiles and as the host name of the instance itself."
}
] | {
"category": "Runtime",
"file_name": "instance_properties.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
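As a hedged illustration of the instance-name rules listed in the record above (1-63 characters, ASCII letters, digits and dashes only, no leading digit or dash, no trailing dash), the Go snippet below encodes them in a single regular expression; the helper name is made up for this sketch and is not part of the LXD API.

```go
package main

import (
	"fmt"
	"regexp"
)

// validName encodes the documented rules: 1-63 characters, ASCII letters,
// digits and dashes only, must start with a letter (i.e. not a digit or a
// dash) and must not end with a dash.
var validName = regexp.MustCompile(`^[A-Za-z]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$`)

func isValidInstanceName(name string) bool {
	return validName.MatchString(name)
}

func main() {
	for _, n := range []string{"web-01", "1container", "trailing-", "ok"} {
		fmt.Printf("%-12q valid=%v\n", n, isValidInstanceName(n))
	}
}
```

Anchoring the pattern and requiring the optional tail to end in `[A-Za-z0-9]` enforces the "no trailing dash" rule without a separate length or suffix check.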
[
{
"data": "HwameiStor have been used by a number of organizations and users both at PROD or UAT environments. The following is a list that have used HwameiStor. The actual names cannot be made public due to privacy policies: | Organization | Stateful Workloads | Environment | | :-- | :-- | :- | | A bank in SouthWest China | ElasticSearch, MySQL, Prometheus, Registry | PROD, UAT | | A multinational retailer | ElasticSearch, MySQL, Prometheus, PostgresDB, Registry | PROD, UAT | | A retailer in South China | Prometheus, PostgresDB, Registry | PROD | | A IIoT in East China | ElasticSearch, MySQL, Prometheus, Registry | PROD | | A bank in Central China | ElasticSearch, MySQL, Prometheus, PostgresDB, Registry | PROD | | A bank in East China | ElasticSearch, MySQL, Prometheus, PostgresDB, Registry | UAT | | A media in North China | ElasticSearch, MySQL, Prometheus, Registry | PROD | | An IT company in East China | ElasticSearch, MySQL, Prometheus, PostgresDB, Registry | PROD | PROD: production environment UAT: user acceptance testing environment"
}
] | {
"category": "Runtime",
"file_name": "adopters.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Incus is a modern, secure and powerful system container and virtual machine manager. % Include content from ```{include} ../README.md :start-after: <!-- Include start Incus intro --> :end-before: <!-- Include end Incus intro --> ``` % Include content from ```{include} ../README.md :start-after: <!-- Include start security --> :end-before: <!-- Include end security --> ``` See for detailed information. ````{important} % Include content from ```{include} ../README.md :start-after: <!-- Include start security note --> :end-before: <!-- Include end security note --> ``` ```` Incus is free software and developed under the . Its an open source project that warmly welcomes community projects, contributions, suggestions, fixes and constructive feedback. (see if needed) ```{toctree} :hidden: :titlesonly: self Getting Started </tutorial/first_steps> General </general> Client </client> Server </server> Instances </instances> Storage </storage> Networks </networks> Images </images> Projects </projects> Clustering </clustering> API </api> Security </security> Internals </internals> External resources </external_resources> ```"
}
] | {
"category": "Runtime",
"file_name": "index.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |