[ { "data": "This directory contains a set of functional tests for rkt. The tests use to spawn various `rkt run` commands and look for expected output. The tests run on the through the user, which is part of the org on Semaphore. This user is authorized against the corresponding GitHub account. The credentials for `rktbot` are currently managed by CoreOS. The tests are executed on Semaphore at each Pull Request (PR). Each GitHub PR page should have a link to the . Developers can disable the tests by adding `[skip ci]` in the last commit message of the PR. Select the \"Other\" language. We don't use \"Go\" language setting, because rkt is not a typical go project (building it with a go get won't get you too far). Also, the \"Go\" setting is creating a proper GOPATH directory structure with some symlinks on top, which rkt does not need at all and some go tools we use do not like the symlinks in GOPATH at all. The tests will run on two VMs. The \"Setup\" and \"Post thread\" sections will be executed on both VMs. The \"Thread 1\" and \"Thread 2\" will be executed in parallel in separate VMs. ``` sudo groupadd rkt sudo gpasswd -a runner rkt ./tests/install-deps.sh ``` ``` ./tests/build-and-run-tests.sh -f none -c ./tests/build-and-run-tests.sh -f kvm -c ``` ``` ./tests/build-and-run-tests.sh -f coreos -c ./tests/build-and-run-tests.sh -f host -c ``` ``` git clean -ffdx ``` The LKVM stage1 or other versions of systemd are not currently tested. It would be possible to add more tests with the following commands: ``` ./tests/build-and-run-tests.sh -f src -s v227 -c ./tests/build-and-run-tests.sh -f src -s master -c ./tests/build-and-run-tests.sh -f src -s v229 -c ``` The build script has the following parameters: `-c` - Run cleanup. Cleanup has two phases: after build and after tests. In the after build phase, this script removes artifacts from external dependencies (like kernel sources in the `kvm`" }, { "data": "In the after tests phase, it removes `rkt` build artifacts and (if the build is running on CI or if the `-x` flag is used) it unmounts the remaining `rkt` mountpoints, removes unused `rkt` NICs and flushes the current state of IPAM IP reservation. `-d` - Run build based on current state of local rkt repository instead of commited changes. `-f` - Select flavor for rkt. You can choose only one from the following list: \"`coreos`, `host`, `kvm`, `none`, `src`\". `-j` - Build without running unit and functional tests. Artifacts are available after build. `-s` - Systemd version. You can choose `master` or a tag from the . `-u` - Show usage message and exit. `-x` - Force after-test cleanup on a non-CI system. WARNING: This flag can affect your system. Use with caution. Select `Ubuntu 14.04 LTS v1503 (beta with Docker support)`. The platform with Docker support means the tests will run in a VM. The tests can be run manually. There is a rule to run unit, functional and all tests. The unit tests can be run with `make unit-check` after you the project. The functional tests require to pass `--enable-functional-tests` to the configure script, then, after building the project, you can run the tests. ``` ./autogen.sh ./configure --enable-functional-tests make -j4 make functional-check ``` For more details about the `--enable-functional-tests` parameter, see . To run all tests, see to configure and build it with functional tests enabled. Instead of `make functional-check` you have to call `make check` to run all tests. You can use a `GOTESTFUNC_ARGS` variable to pass additional parameters to `go test`. 
This is mostly useful for running only the selected functional tests. The variable is ignored in unit tests. ``` make check GOTESTFUNC_ARGS='-run NameOfTheTest' make functional-check GOTESTFUNC_ARGS='-run NameOfTheTest' ``` Run `go help testflag` to get more information about possible flags accepted by `go test`. Running the benchmark is similar to running the other tests, we just need to pass additional parameters to `go test`: ``` make check GOTESTFUNC_ARGS='-bench=. -run=Benchmark' make functional-check GOTESTFUNC_ARGS='-bench=. -run=Benchmark' ```" } ]
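For instance, the `build-and-run-tests.sh` flags documented above can be combined. The sketch below is an illustrative invocation, not a command taken from the CI configuration: it builds the `coreos` flavor from the current local state of the repository (`-d`), skips the unit and functional tests so only artifacts are produced (`-j`), and runs cleanup (`-c`).

```
./tests/build-and-run-tests.sh -f coreos -d -j -c
```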
{ "category": "Runtime", "file_name": "README.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: Building Weave Net menu_order: 220 search_type: Documentation You only need to build Weave Net if you want to work on the Weave Net codebase (or you just enjoy building software). Apart from the `weave` shell script, Weave Net is delivered as a set of container images. There is no distribution-specific packaging, so in principle it shouldn't matter which Linux distribution you build under. But naturally, Docker is a prerequisite (version 1.6.0 or later). The only way to build is by using the the build container; the `Makefile` is setup to make this transparent. This method is documented below, and can be run directly on your machine or using a Vagrant VM. The weave git repository should be cloned into `$GOPATH/src/github.com/weaveworks/weave`, in accordance with [the Go workspace conventions](https://golang.org/doc/code.html#Workspaces): ``` $ WEAVE=github.com/weaveworks/weave $ git clone https://$WEAVE $GOPATH/src/$WEAVE $ cd $GOPATH/src/$WEAVE $ git submodule update --init ``` Next install Docker if you haven't already, by following the instructions . Then to actually build, simply do: ``` $ make ``` On a fresh repository, the Makefile will do the following: assemble the build container download specific versions of all the dependencies build the weave components in the build container package them into two Docker images (`weaveworks/weave`, `weaveworks/weaveexec`) export these images as `weave.tar.gz`. The first two steps may take a while - don't worry, they are are cached and should not need to be redone very often. If you aren't running Linux, or otherwise don't want to run the Docker daemon outside a VM, you can use to run a development environment. You'll probably need to install too, for Vagrant to run VMs in. First, check out the code: ``` $ git clone https://github.com/weaveworks/weave $ cd weave $ git submodule update --init ``` The `Vagrantfile` in the top directory constructs a VM that has: Docker installed Go tools installed Weave's dependencies installed `$GOPATH` set to `~` the local working directory mapped as a synced folder into the right place in `$GOPATH`. Once you are in the working directory you can issue: ``` $ vagrant up ``` and wait for a while (don't worry, the long download and package installation is done just once). The working directory is sync'ed with `~/src/github.com/weaveworks/weave` on the VM, so you can edit files and use git and so on in the regular filesystem. To build and run the code, you need to use the VM. To log in and build the weave image, do: ``` $ vagrant ssh vm$ cd src/github.com/weaveworks/weave vm$ make ``` The Docker daemon is also running in this VM, so you can then do: ``` vm$ sudo ./weave launch vm$ sudo docker ps ``` and so on. If you are looking to just do a build and not run anything on this VM, you can do so with: ``` $ vagrant ssh -c 'make -C src/github.com/weaveworks/weave' ``` you should then find a `weave.tar.gz` container snapshot tarball in the top-level directory. You can use that snapshot with `docker load` against a different host, e.g.: ``` $ export DOCKER_HOST=tcp://<HOST:PORT> $ docker load < weave.tar.gz ``` You can provide extra Vagrant configuration by putting a file `Vagrant.local` in the same place as `Vagrantfile`; for instance, to forward additional ports." } ]
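As an optional sanity check after loading the snapshot on a target host, you can confirm that both images mentioned above were imported; this is purely illustrative and assumes nothing beyond standard Docker commands:

```
$ docker images "weaveworks/*"
```

You should see `weaveworks/weave` and `weaveworks/weaveexec` listed.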
{ "category": "Runtime", "file_name": "building.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "On some of rkt's subcommands (, ), the `--net` flag allows you to configure the pod's network. The various options can be grouped by two categories: This document gives a brief overview of the supported plugins. More examples and advanced topics are linked in the section. When `--net=host` is passed the pod's apps will inherit the network namespace of the process that is invoking rkt. If rkt is directly called from the host the apps within the pod will share the network stack and the interfaces with the host machine. This means that every network service that runs in the pod has the same connectivity as if it was started on the host directly. Applications that run in a pod which shares the host network namespace are able to access everything associated with the host's network interfaces: IP addresses, routes, iptables rules and sockets, including abstract Linux sockets. Depending on the host's setup these abstract Linux sockets, used by applications like X11 and D-Bus, might expose critical endpoints to the pod's applications. This risk can be avoided by configuring a separate namespace for pod. If anything other than `host` is passed to `--net=`, the pod will live in a separate network namespace with the help of and its plugin system. The network setup for the pod's network namespace depends on the available CNI configuration files that are shipped with rkt and also configured by the user. Every network must have a unique name and can only be joined once by every pod. Passing a list of comma separated network as in `--net=net1,net2,net3,...` tells rkt which networks should be joined. This is useful for grouping certain pod networks together while separating others. There is also the possibility to load all configured networks by using `--net=all`. rkt ships with two built-in networks, named default and default-restricted. The default network is loaded automatically in three cases: `--net` is not present on the command line `--net` is passed with no options `--net=default`is passed It consists of a loopback device and a veth device. The veth pair creates a point-to-point link between the pod and the host. rkt will allocate an IPv4 address out of 172.16.28.0/24 for the pod's veth interface. It will additionally set the default route in the pod namespace. Finally, it will enable IP masquerading on the host to NAT the egress traffic. Note: The default network must be explicitly listed in order to be loaded when `--net=n1,n2,...` is specified with a list of network names. Example: If you want default networking and two more networks you need to pass `--net=default,net1,net2`. The default-restricted network does not set up the default route and IP masquerading. It only allows communication with the host via the veth interface and thus enables the pod to communicate with the metadata service which runs on the" }, { "data": "If default is not among the specified networks, the default-restricted network will be added to the list of networks automatically. It can also be loaded directly by explicitly passing `--net=default-restricted`. The passing of `--net=none` will put the pod in a network namespace with only the loopback networking. This can be used to completely isolate the pod's network. ```sh $ sudo rkt run --interactive --net=none kinvolk.io/aci/busybox:1.24 (...) 
/ # ip address 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever / # ip route / # ping localhost PING localhost (127.0.0.1): 56 data bytes 64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.022 ms ^C ``` The situation here is very straightforward: no routes, the interface lo with the local address. The resolution of localhost is enabled in rkt by default, as it will generate a minimal `/etc/hosts` inside the pod if the image does not provide one. In addition to the default network (veth) described in the previous sections, rkt pods can be configured to join additional networks. Each additional network will result in an new interface being set up in the pod. The type of network interface, IP, routes, etc is controlled via a configuration file residing in `/etc/rkt/net.d` directory. The network configuration files are executed in lexicographically sorted order. Each file consists of a JSON dictionary as shown below: ```json $ cat /etc/rkt/net.d/10-containers.conf { \"name\": \"containers\", \"type\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\" } } ``` This configuration file defines a linux-bridge based network on 10.1.0.0/16 subnet. The following fields apply to all configuration files. Additional fields are specified for various types. name (string): an arbitrary label for the network. By convention the conf file is named with a leading ordinal, dash, network name, and .conf extension. type (string): the type of network/interface to create. The type actually names a network plugin. rkt is bundled with some built-in plugins. ipam (dict): IP Address Management -- controls the settings related to IP address assignment, gateway, and routes. ptp is probably the simplest type of networking and is used to set up default network. It creates a virtual ethernet pair (akin to a pipe) and places one end into pod and the other on the host. `ptp` specific configuration fields are: mtu (integer): the size of the MTU in bytes. ipMasq (boolean): whether to set up IP masquerading on the host. Like the ptp type, `bridge` will create a veth pair and attach one end to the pod. However the host end of the veth will be plugged into a linux-bridge. The configuration file specifies the bridge name and if the bridge does not exist, it will be created. The bridge can optionally be configured to act as the gateway for the network. `bridge` specific configuration fields are: bridge (string): the name of the bridge to create and/or plug into. Defaults to" }, { "data": "isGateway (boolean): whether the bridge should be assigned an IP and act as a gateway. mtu (integer): the size of the MTU in bytes for bridge and veths. ipMasq (boolean): whether to set up IP masquerading on the host. macvlan behaves similar to a bridge but does not provide communication between the host and the pod. macvlan creates a virtual copy of a master interface and assigns the copy a randomly generated MAC address. The pod can communicate with the network that is attached to the master interface. The distinct MAC address allows the pod to be identified by external network services like DHCP servers, firewalls, routers, etc. macvlan interfaces cannot communicate with the host via the macvlan interface. 
This is because traffic that is sent by the pod onto the macvlan interface is bypassing the master interface and is sent directly to the interfaces underlying network. Before traffic gets sent to the underlying network it can be evaluated within the macvlan driver, allowing it to communicate with all other pods that created their macvlan interface from the same master interface. `macvlan` specific configuration fields are: master (string): the name of the host interface to copy. This field is required. mode (string): one of \"bridge\", \"private\", \"vepa\", or \"passthru\". This controls how traffic is handled between different macvlan interfaces on the same host. See for discussion of modes. Defaults to \"bridge\". mtu (integer): the size of the MTU in bytes for the macvlan interface. Defaults to MTU of the master device. ipMasq (boolean): whether to set up IP masquerading on the host. Defaults to false. ipvlan behaves very similar to macvlan but does not provide distinct MAC addresses for pods. macvlan and ipvlan can't be used on the same master device together. ipvlan creates virtual copies of interfaces like macvlan but does not assign a new MAC address to the copied interface. This does not allow the pods to be distinguished on a MAC level and so cannot be used with DHCP servers. In other scenarios this can be an advantage, e.g. when an external network port does not allow multiple MAC addresses. ipvlan also solves the problem of MAC address exhaustion that can occur with a large number of pods copying the same master interface. ipvlan interfaces are able to have different IP addresses than the master interface and will therefore have the needed distinction for most use-cases. `ipvlan` specific configuration fields are: master (string): the name of the host interface to copy. This field is required. mode (string): one of \"l2\", \"l3\". See . Defaults to \"l2\". mtu (integer): the size of the MTU in bytes for the ipvlan interface. Defaults to MTU of the master device. ipMasq (boolean): whether to set up IP masquerading on the host. Defaults to false. Notes ipvlan can cause problems with duplicated IPv6 link-local addresses since they are partially constructed using the MAC" }, { "data": "This issue is being currently addressed by the ipvlan kernel module developers. The policy for IP address allocation, associated gateway and routes is separately configurable via the `ipam` section of the configuration file. rkt currently ships with two IPAM types: host-local and DHCP. Like the network types, IPAM types can be implemented by third-parties via plugins. host-local type allocates IPs out of specified network range, much like a DHCP server would. The difference is that while DHCP uses a central server, this type uses a static configuration. Consider the following conf: ```json $ cat /etc/rkt/net.d/10-containers.conf { \"name\": \"containers\", \"type\": \"bridge\", \"bridge\": \"rkt1\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\" } } ``` This configuration instructs rkt to create `rkt1` Linux bridge and plugs pods into it via veths. Since the subnet is defined as `10.1.0.0/16`, rkt will assign individual IPs out of that range. The first pod will be assigned 10.1.0.2/16, next one 10.1.0.3/16, etc (it reserves 10.1.0.1/16 for gateway). Additional configuration fields: subnet (string): subnet in CIDR notation for the network. rangeStart (string): first IP address from which to start allocating IPs. Defaults to second IP in `subnet` range. 
rangeEnd (string): last IP address in the allocatable range. Defaults to last IP in `subnet` range. gateway (string): the IP address of the gateway in this subnet. routes (list of strings): list of IP routes in CIDR notation. The routes get added to pod namespace with next-hop set to the gateway of the network. The following shows a more complex IPv6 example in combination with the ipvlan plugin. The gateway is configured for the default route, allowing the pod to access external networks via the ipvlan interface. ```json { \"name\": \"ipv6-public\", \"type\": \"ipvlan\", \"master\": \"em1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"2001:0db8:161:8374::/64\", \"rangeStart\": \"2001:0db8:161:8374::1:2\", \"rangeEnd\": \"2001:0db8:161:8374::1:fffe\", \"gateway\": \"fe80::1\", \"routes\": [ { \"dst\": \"::0/0\" } ] } } ``` The DHCP type requires a special client daemon, part of the , to be running on the host. This acts as a proxy between a DHCP client running inside the container and a DHCP service already running on the network, as well as renewing leases appropriately. The DHCP plugin binary can be executed in the daemon mode by launching it with `daemon` argument. However, in rkt the DHCP plugin is bundled in stage1.aci so this requires extracting the binary from it: ``` $ sudo ./rkt fetch --insecure-options=image ./stage1.aci $ sudo ./rkt image extract coreos.com/rkt/stage1 /tmp/stage1 $ sudo cp /tmp/stage1/rootfs/usr/lib/rkt/plugins/net/dhcp . ``` Now start the daemon: ``` $ sudo ./dhcp daemon ``` It is now possible to use the DHCP type by specifying it in the ipam section of the network configuration file: ```json { \"name\": \"lan\", \"type\": \"macvlan\", \"master\": \"eth0\", \"ipam\": { \"type\": \"dhcp\" } } ``` For more information about the DHCP plugin, see the . This plugin is designed to work in conjunction with flannel, a network fabric for" }, { "data": "The basic network configuration is as follows: ```json { \"name\": \"containers\", \"type\": \"flannel\" } ``` This will set up a linux-bridge, connect the container to the bridge and assign container IPs out of the subnet that flannel assigned to the host. For more information included advanced configuration options, see . Apart from the aforementioned plugins bundled with rkt, it is possible to run custom plugins that implement the . CNI plugins are just binaries that receive a JSON configuration file and rkt looks for plugin binaries and configuration files in certain well-defined locations. As we saw before, the default location where rkt looks for CNI configurations is `$LOCALCONFIGDIRECTORY/net.d/`, where `$LOCALCONFIGDIRECTORY` is `/etc/rkt` by default (it can be changed with rkt's `--local-config` flag). rkt looks for plugin binaries in two directories: `/usr/lib/rkt/plugins/net` and `$LOCALCONFIGDIRECTORY/net.d/`. We'll use a the loopback plugin. This is a very simple plugin that just brings up a loopback interface. 
To build the plugin, you can get the containernetworking/plugins repo, build it, and copy it to one of the directories where rkt looks for plugins: ``` $ go get -d github.com/containernetworking/plugins $ cd $GOPATH/containernetworking/plugins/plugins/main/loopback $ go build $ sudo cp loopback /usr/lib/rkt/plugins/net ``` Then you need a JSON configuration in the appropriate directory: ```json $ cat /etc/rkt/net.d/10-loopback.conf { \"name\": \"loopback-test\", \"type\": \"loopback\" } ``` Finally, just run rkt with `--net` set to the name of the network, in this case `loopback-test`. We'll run it with `--debug` to check that the plugin is actually loaded: ```sh $ sudo rkt --debug run --net=loopback-test --interactive kinvolk.io/aci/busybox --exec=ip -- a (...) networking: loading network loopback-test with type loopback (...) 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever Container rkt-7d7ec0ef-a6be-4b6f-8abf-0505a402af37 exited successfully. ``` Apps declare their public ports in the image manifest file. A user can expose some or all of these ports to the host when running a pod. Doing so allows services inside the pods to be reachable through the host's IP address. The example below demonstrates an image manifest snippet declaring a single port: ```json \"ports\": [ { \"name\": \"http\", \"port\": 80, \"protocol\": \"tcp\" } ] ``` The pod's TCP port 80 can be mapped to an arbitrary port on the host during rkt invocation: ``` ``` Now, any traffic arriving on host's TCP port 8888 will be forwarded to the pod on port 80. The network that will be chosen for the port forwarding depends on the ipMasq setting of the configured networks. If at least one of them has ipMasq enabled, the forwarded traffic will be passed through the first loaded network that has IP masquerading enabled. If no network is masqueraded, the last loaded network will be used. As a reminder, the sort order of the loaded networks is detailed in the chapter about . rkt also supports socket activation. This is documented in ." } ]
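To tie the pieces above together, the sketch below defines one additional bridge network with host-local IPAM and then starts a pod that joins it alongside the default network, while forwarding the app's declared `http` port (pod port 80) to host port 8888. The network name, subnet, bridge name, and image are illustrative assumptions, and the `--port=NAME:HOSTPORT` form is assumed from the port-forwarding description above rather than copied from this document.

```sh
# Define an extra bridge network in the default CNI configuration directory.
sudo tee /etc/rkt/net.d/20-example.conf >/dev/null <<'EOF'
{
    "name": "example-net",
    "type": "bridge",
    "bridge": "rkt-example",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.2.0.0/24"
    }
}
EOF

# Join the default network plus the new one, and map the declared "http"
# port (pod port 80) to host port 8888.
sudo rkt run --net=default,example-net --port=http:8888 example.com/myapp:v1
```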
{ "category": "Runtime", "file_name": "overview.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Below is a non-comprehensive document on some tips and tricks for troubleshooting/debugging/inspecting the behavior of CRI-O. Often with a long-running process, it can be useful to know what that process is up to. CRI-O has built-in functionality to print the go routine stacks to provide such information. All one has to do is send SIGUSR1 to CRI-O, either with `kill` or `systemctl` (if running CRI-O as a systemd unit): ```shell kill -USR1 $crio-pid systemctl kill -s USR1 crio.service ``` CRI-O will catch the signal, and write the routine stacks to `/tmp/crio-goroutine-stacks-$timestamp.log` You may have a need to manually run Go garbage collection for CRI-O. To force garbage collection, send CRI-O SIGUSR2 using `kill` or `systemctl` (if running CRI-O as a systemd unit). ```shell kill -s SIGUSR2 $crio-pid systemctl kill -s USR2 crio.service ```" } ]
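A minimal end-to-end sketch of the goroutine-dump workflow, assuming `pidof` is available and CRI-O is running under systemd as `crio.service`:

```shell
# Trigger the goroutine dump, then open the most recent stack file.
crio_pid=$(pidof crio)
kill -USR1 "$crio_pid"              # or: systemctl kill -s USR1 crio.service
ls -t /tmp/crio-goroutine-stacks-*.log | head -n 1 | xargs less
```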
{ "category": "Runtime", "file_name": "debugging.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "OCI group announced the andin July 2023. A notable breaking change is that the OCI Artifact Manifest no longer exists in the OCI Image-spec v1.1.0-rc4. Two experimental flags `--image-spec` and `--distribution-spec` were introduced to commands `oras push` and `oras attach` in ORAS CLI v1.0.0 as explained in . To align with the OCI Image-spec v1.1.0-rc4, the flag `--image-spec` and its options are changed in ORAS v1.1.0 accordingly. This document elaborates on the major changes of ORAS CLI v1.1.0 proposed in . Provide different options to allow users to customize the manifest build and distribution behavior Provide an easy-to-use and secure user experience when push content to OCI registries Enable ORAS to work with more OCI registries Using the following flags in `oras push` and `oras attach` respectively with different variables to configure the manifest build and distribution behaviors. Using a flag `--image-spec` with `oras push` Using a flag `--distribution-spec` with `oras attach`, `oras cp`, and `oras manifest push` to configure compatibility with registry when pushing or copying an OCI image manifest. This flag is also applicable to `oras discover` for viewing and filtering the referrers. Use the flag `--image-spec <spec version>` in `oras push` to specify which version of the OCI Image-spec when building and pushing an OCI image manifest. It supports specifying the option `v1.0` or `v1.1` as the spec version. The option `v1.1` is the default behavior in `oras push` since ORAS CLI v1.1.0 so users don't need to manually specify this option. If users want to build an OCI image manifest to a registry that compliant with OCI Spec v1.0, they can specify `--image-spec v1.0`. An OCI image manifest that conforms the OCI Image-spec v1.0.2 will be packed and uploaded. For example ```bash oras push localhost:5000/hello-world:v1 \\ --image-spec v1.0 \\ --artifact-type application/vnd.me.config \\ sbom.json ``` Based on the Referrers API status in the registry, users can use flag `--distribution-spec <spec version>-<api>-<option>` to configure compatibility with registry. | registry support | v1.1-referrers-api | v1.1-referrers-tag | | :-- | | | | OCI spec 1.0 | no | yes | | OCI spec 1.1 without referrers API | no | yes | | OCI spec 1.1 with referrers API support | yes | yes | Using a flag `--distribution-spec v1.1-referrers-api` to disable backward compatibility. It only allows uploading OCI image manifest to OCI v1.1 compliant registry with Referrers API enabled. This is the most strict option for setting compatibility with the registry. Users might choose it for security requirements. For example, using this flag, ORAS will attach OCI image manifest only to an OCI v1.1 compliant registry with Referrers API enabled and no further actions for maintaining references in OCI registries. ```bash oras attach localhost:5000/hello-world:v1 \\ --artifact-type sbom/example \\ --distribution-spec v1.1-referrers-api \\ sbom.json ``` Using `--distribution-spec v1.1-referrers-tag` to enable maximum backward compatibility with the registry. It will first attempt to upload the OCI image manifest with the regardless of whether the registry complies with the OCI Spec v1.0 or v1.1 or supports Referrers API. 
For example: ```bash oras attach localhost:5000/hello-world:v1 \\ --artifact-type sbom/example \\ --distribution-spec v1.1-referrers-tag \\ sbom.json ``` Similarly, users can use `oras cp`, and `oras manifest push` with the flag `--distribution-spec` to configure compatibility with registry when pushing or copying an OCI image manifest, or use `oras discover` with the flag `--distribution-spec` for filtering the referrers in the view." } ]
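Building on the description above, the sketch below shows the same flag on `oras cp` and `oras discover`; the registry addresses are placeholders, and both registries are assumed to be OCI v1.1 compliant with the Referrers API enabled.

```bash
# Copy an image between registries, requiring the Referrers API on both sides.
oras cp --distribution-spec v1.1-referrers-api \
  localhost:5000/hello-world:v1 registry.example.com/hello-world:v1

# List the referrers of the copied image, again requiring the Referrers API.
oras discover --distribution-spec v1.1-referrers-api registry.example.com/hello-world:v1
```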
{ "category": "Runtime", "file_name": "compatibility-mode.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "Adding the ability to configure the Ceph config options via the CephCluster CRD. The goal is not to replace the `rook-config-override` ConfigMap since it is still needed in some scenarios, such as when mons need settings applied at first launch. Users need to update the `rook-config-override` and add their custom options, e.g., for setting a custom OSD scrubbing schedule (). Adding a new structure to the CephCluster CRD under `.spec` named `cephConfig:`. The `cephConfig` structure maps `Target -> Options` (target being the service to set the options for, e.g., the whole cluster `global`, or a specific OSD `osd.3`). ```yaml spec: cephConfig: global: osd_max_scrubs: \"5\" osd_pool_default_size: \"1\" mon_warn_on_pool_no_redundancy: \"false\" bdev_flock_retry: \"20\" bluefs_buffered_io: \"false\" mon_data_avail_warn: \"10\" \"osd.3\": bluestore_cache_autotune: \"false\" ``` This structure would be equivalent to a `ceph.conf` like this: ```console [global] osd_max_scrubs = 5 osd_pool_default_size = 1 mon_warn_on_pool_no_redundancy = false bdev_flock_retry = 20 bluefs_buffered_io = false mon_data_avail_warn = 10 [osd.3] bluestore_cache_autotune = false ``` The operator will use the Ceph config store (that is accessed via `ceph config assimilate-conf`) to apply the config options to the Ceph cluster (just after the MONs have been created and have formed quorum, before anything else is created). The operator won't unset any previously set config options or restore them to their default values. The operator will ignore a few vital config options, such as `fsid`, `keyring` and `mon_host`." } ]
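Once the operator has assimilated these settings, they should be visible in the Ceph config store. A quick way to verify is from the Rook toolbox; the namespace and toolbox deployment name below (`rook-ceph`, `rook-ceph-tools`) are assumptions based on a typical Rook install, not values defined in this document.

```console
# Check a per-daemon option set under "osd.3".
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph config get osd.3 bluestore_cache_autotune

# Dump everything currently held in the cluster's config store.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph config dump
```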
{ "category": "Runtime", "file_name": "ceph-config-via-cluster-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show contents of table \"l2-announce\" ``` cilium-dbg statedb l2-announce [flags] ``` ``` -h, --help help for l2-announce -w, --watch duration Watch for new changes with the given interval (e.g. --watch=100ms) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB" } ]
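For example, to re-render the table every 100ms instead of printing it once (based only on the flags listed above):

```
cilium-dbg statedb l2-announce --watch=100ms
```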
{ "category": "Runtime", "file_name": "cilium-dbg_statedb_l2-announce.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This directory contains tests that verify the functionality of K8s pods deployed with Sysbox. The tests use crictl + CRI-O + Sysbox to create and manage the pods. They do not use K8s itself to create them (as we've not yet implemented support for running K8s inside the test container; in the future we could use K8s.io KinD, a Sysbox-based variant of it (which is just KinD with Sysbox containers), or Minikube, but we are not sure whether this will work). The test container comes with crictl + CRI-O pre-installed and configured to use Sysbox, so the tests need only issue crictl commands to generate the pods." } ]
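As a sketch of what these tests do under the hood, the commands below create a pod sandbox and a container in it via crictl, asking CRI-O for the Sysbox runtime handler. The config file names and the handler name `sysbox-runc` are assumptions for illustration; the actual tests and the CRI-O configuration inside the test container may use different names.

```shell
# Create a pod sandbox with the Sysbox runtime handler, then a container inside it.
pod_id=$(sudo crictl runp --runtime=sysbox-runc pod-config.json)
ctr_id=$(sudo crictl create "$pod_id" container-config.json pod-config.json)
sudo crictl start "$ctr_id"
sudo crictl ps
```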
{ "category": "Runtime", "file_name": "README.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "`import \"github.com/montanaflynn/stats\"` Package stats is a well tested and comprehensive statistics library package with no dependencies. Example Usage: // start with some source data to use data := []float64{1.0, 2.1, 3.2, 4.823, 4.1, 5.8} // you could also use different types like this // data := stats.LoadRawData([]int{1, 2, 3, 4, 5}) // data := stats.LoadRawData([]interface{}{1.1, \"2\", 3}) // etc... median, _ := stats.Median(data) fmt.Println(median) // 3.65 roundedMedian, _ := stats.Round(median, 0) fmt.Println(roundedMedian) // 4 MIT License Copyright (c) 2014-2020 Montana Flynn (<a href=\"https://montanaflynn.com\">https://montanaflynn.com</a>) * * * * * * * * * * * * * * * * * * * * * * * * * * ``` go var ( // ErrEmptyInput Input must not be empty ErrEmptyInput = statsError{\"Input must not be empty.\"} // ErrNaN Not a number ErrNaN = statsError{\"Not a number.\"} // ErrNegative Must not contain negative values ErrNegative = statsError{\"Must not contain negative values.\"} // ErrZero Must not contain zero values ErrZero = statsError{\"Must not contain zero values.\"} // ErrBounds Input is outside of range ErrBounds = statsError{\"Input is outside of range.\"} // ErrSize Must be the same length ErrSize = statsError{\"Must be the same length.\"} // ErrInfValue Value is infinite ErrInfValue = statsError{\"Value is infinite.\"} // ErrYCoord Y Value must be greater than zero ErrYCoord = statsError{\"Y Value must be greater than zero.\"} ) ``` These are the package-wide error values. All error identification should use these values. <a href=\"https://github.com/golang/go/wiki/Errors#naming\">https://github.com/golang/go/wiki/Errors#naming</a> ``` go var ( EmptyInputErr = ErrEmptyInput NaNErr = ErrNaN NegativeErr = ErrNegative ZeroErr = ErrZero BoundsErr = ErrBounds SizeErr = ErrSize InfValue = ErrInfValue YCoordErr = ErrYCoord EmptyInput = ErrEmptyInput ) ``` Legacy error names that didn't start with Err ``` go func AutoCorrelation(data Float64Data, lags int) (float64, error) ``` AutoCorrelation is the correlation of a signal with a delayed copy of itself as a function of delay ``` go func ChebyshevDistance(dataPointX, dataPointY Float64Data) (distance float64, err error) ``` ChebyshevDistance computes the Chebyshev distance between two data sets ``` go func Correlation(data1, data2 Float64Data) (float64, error) ``` Correlation describes the degree of relationship between two sets of data ``` go func Covariance(data1, data2 Float64Data) (float64, error) ``` Covariance is a measure of how much two sets of data change ``` go func CovariancePopulation(data1, data2 Float64Data) (float64, error) ``` CovariancePopulation computes covariance for entire population between two" }, { "data": "``` go func CumulativeSum(input Float64Data) ([]float64, error) ``` CumulativeSum calculates the cumulative sum of the input slice ``` go func Entropy(input Float64Data) (float64, error) ``` Entropy provides calculation of the entropy ``` go func EuclideanDistance(dataPointX, dataPointY Float64Data) (distance float64, err error) ``` EuclideanDistance computes the Euclidean distance between two data sets ``` go func ExpGeom(p float64) (exp float64, err error) ``` ProbGeom generates the expectation or average number of trials for a geometric random variable with parameter p ``` go func GeometricMean(input Float64Data) (float64, error) ``` GeometricMean gets the geometric mean for a slice of numbers ``` go func HarmonicMean(input Float64Data) (float64, error) ``` HarmonicMean gets the harmonic 
mean for a slice of numbers ``` go func InterQuartileRange(input Float64Data) (float64, error) ``` InterQuartileRange finds the range between Q1 and Q3 ``` go func ManhattanDistance(dataPointX, dataPointY Float64Data) (distance float64, err error) ``` ManhattanDistance computes the Manhattan distance between two data sets ``` go func Max(input Float64Data) (max float64, err error) ``` Max finds the highest number in a slice ``` go func Mean(input Float64Data) (float64, error) ``` Mean gets the average of a slice of numbers ``` go func Median(input Float64Data) (median float64, err error) ``` Median gets the median number in a slice of numbers ``` go func MedianAbsoluteDeviation(input Float64Data) (mad float64, err error) ``` MedianAbsoluteDeviation finds the median of the absolute deviations from the dataset median ``` go func MedianAbsoluteDeviationPopulation(input Float64Data) (mad float64, err error) ``` MedianAbsoluteDeviationPopulation finds the median of the absolute deviations from the population median ``` go func Midhinge(input Float64Data) (float64, error) ``` Midhinge finds the average of the first and third quartiles ``` go func Min(input Float64Data) (min float64, err error) ``` Min finds the lowest number in a set of data ``` go func MinkowskiDistance(dataPointX, dataPointY Float64Data, lambda float64) (distance float64, err error) ``` MinkowskiDistance computes the Minkowski distance between two data sets Arguments: dataPointX: First set of data points dataPointY: Second set of data points. Length of both data sets must be equal. lambda: aka p or city blocks; With lambda = 1 returned distance is manhattan distance and lambda = 2; it is euclidean distance. Lambda reaching to infinite - distance would be chebysev distance. Return: Distance or error ``` go func Mode(input Float64Data) (mode []float64, err error) ``` Mode gets the mode [most frequent value(s)] of a slice of float64s ``` go func Ncr(n, r int) int ``` Ncr is an N choose R algorithm. Aaron Cannon's algorithm. ``` go func NormBoxMullerRvs(loc float64, scale float64, size int) []float64 ``` NormBoxMullerRvs generates random variates using the BoxMuller transform. For more information please visit: <a href=\"http://mathworld.wolfram.com/Box-MullerTransformation.html\">http://mathworld.wolfram.com/Box-MullerTransformation.html</a> ``` go func NormCdf(x float64, loc float64, scale float64) float64 ``` NormCdf is the cumulative distribution function. ``` go func NormEntropy(loc float64, scale float64) float64 ``` NormEntropy is the differential entropy of the RV. ``` go func NormFit(data []float64) [2]float64 ``` NormFit returns the maximum likelihood estimators for the Normal Distribution. Takes array of float64 values. Returns array of Mean followed by Standard Deviation. ``` go func NormInterval(alpha float64, loc float64, scale float64) [2]float64 ``` NormInterval finds endpoints of the range that contains alpha percent of the distribution. ``` go func NormIsf(p float64, loc float64, scale float64) (x float64) ``` NormIsf is the inverse survival function (inverse of sf). ``` go func NormLogCdf(x float64, loc float64, scale float64) float64 ``` NormLogCdf is the log of the cumulative distribution function. ``` go func NormLogPdf(x float64, loc float64, scale float64) float64 ``` NormLogPdf is the log of the probability density function. ``` go func NormLogSf(x float64, loc float64, scale float64) float64 ``` NormLogSf is the log of the survival function. 
``` go func NormMean(loc float64, scale float64) float64 ``` NormMean is the mean/expected value of the distribution. ``` go func NormMedian(loc float64, scale float64) float64 ``` NormMedian is the median of the distribution. ``` go func NormMoment(n int, loc float64, scale float64) float64 ``` NormMoment approximates the non-central (raw) moment of order n. For more information please visit: <a href=\"https://math.stackexchange.com/questions/1945448/methods-for-finding-raw-moments-of-the-normal-distribution\">https://math.stackexchange.com/questions/1945448/methods-for-finding-raw-moments-of-the-normal-distribution</a> ``` go func NormPdf(x float64, loc float64, scale float64) float64 ``` NormPdf is the probability density" }, { "data": "``` go func NormPpf(p float64, loc float64, scale float64) (x float64) ``` NormPpf is the point percentile function. This is based on Peter John Acklam's inverse normal CDF. algorithm: <a href=\"http://home.online.no/~pjacklam/notes/invnorm/\">http://home.online.no/~pjacklam/notes/invnorm/</a> (no longer visible). For more information please visit: <a href=\"https://stackedboxes.org/2017/05/01/acklams-normal-quantile-function/\">https://stackedboxes.org/2017/05/01/acklams-normal-quantile-function/</a> ``` go func NormPpfRvs(loc float64, scale float64, size int) []float64 ``` NormPpfRvs generates random variates using the Point Percentile Function. For more information please visit: <a href=\"https://demonstrations.wolfram.com/TheMethodOfInverseTransforms/\">https://demonstrations.wolfram.com/TheMethodOfInverseTransforms/</a> ``` go func NormSf(x float64, loc float64, scale float64) float64 ``` NormSf is the survival function (also defined as 1 - cdf, but sf is sometimes more accurate). ``` go func NormStats(loc float64, scale float64, moments string) []float64 ``` NormStats returns the mean, variance, skew, and/or kurtosis. Mean(m), variance(v), skew(s), and/or kurtosis(k). Takes string containing any of 'mvsk'. Returns array of m v s k in that order. ``` go func NormStd(loc float64, scale float64) float64 ``` NormStd is the standard deviation of the distribution. ``` go func NormVar(loc float64, scale float64) float64 ``` NormVar is the variance of the distribution. 
``` go func Pearson(data1, data2 Float64Data) (float64, error) ``` Pearson calculates the Pearson product-moment correlation coefficient between two variables ``` go func Percentile(input Float64Data, percent float64) (percentile float64, err error) ``` Percentile finds the relative standing in a slice of floats ``` go func PercentileNearestRank(input Float64Data, percent float64) (percentile float64, err error) ``` PercentileNearestRank finds the relative standing in a slice of floats using the Nearest Rank method ``` go func PopulationVariance(input Float64Data) (pvar float64, err error) ``` PopulationVariance finds the amount of variance within a population ``` go func ProbGeom(a int, b int, p float64) (prob float64, err error) ``` ProbGeom generates the probability for a geometric random variable with parameter p to achieve success in the interval of [a, b] trials See <a href=\"https://en.wikipedia.org/wiki/Geometricdistribution\">https://en.wikipedia.org/wiki/Geometricdistribution</a> for more information ``` go func Round(input float64, places int) (rounded float64, err error) ``` Round a float to a specific decimal place or precision ``` go func Sample(input Float64Data, takenum int, replacement bool) ([]float64, error) ``` Sample returns sample from input with replacement or without ``` go func SampleVariance(input Float64Data) (svar float64, err error) ``` SampleVariance finds the amount of variance within a sample ``` go func Sigmoid(input Float64Data) ([]float64, error) ``` Sigmoid returns the input values in the range of -1 to 1 along the sigmoid or s-shaped curve, commonly used in machine learning while training neural networks as an activation function. ``` go func SoftMax(input Float64Data) ([]float64, error) ``` SoftMax returns the input values in the range of 0 to 1 with sum of all the probabilities being equal to one. It is commonly used in machine learning neural networks. 
``` go func StableSample(input Float64Data, takenum int) ([]float64, error) ``` StableSample like stable sort, it returns samples from input while keeps the order of original" }, { "data": "" }, { "data": "``` go func StandardDeviation(input Float64Data) (sdev float64, err error) ``` StandardDeviation the amount of variation in the dataset ``` go func StandardDeviationPopulation(input Float64Data) (sdev float64, err error) ``` StandardDeviationPopulation finds the amount of variation from the population ``` go func StandardDeviationSample(input Float64Data) (sdev float64, err error) ``` StandardDeviationSample finds the amount of variation from a sample ``` go func StdDevP(input Float64Data) (sdev float64, err error) ``` StdDevP is a shortcut to StandardDeviationPopulation ``` go func StdDevS(input Float64Data) (sdev float64, err error) ``` StdDevS is a shortcut to StandardDeviationSample ``` go func Sum(input Float64Data) (sum float64, err error) ``` Sum adds all the numbers of a slice together ``` go func Trimean(input Float64Data) (float64, error) ``` Trimean finds the average of the median and the midhinge ``` go func VarGeom(p float64) (exp float64, err error) ``` ProbGeom generates the variance for number for a geometric random variable with parameter p ``` go func VarP(input Float64Data) (sdev float64, err error) ``` VarP is a shortcut to PopulationVariance ``` go func VarS(input Float64Data) (sdev float64, err error) ``` VarS is a shortcut to SampleVariance ``` go func Variance(input Float64Data) (sdev float64, err error) ``` Variance the amount of variation in the dataset ``` go type Coordinate struct { X, Y float64 } ``` Coordinate holds the data in a series ``` go func ExpReg(s []Coordinate) (regressions []Coordinate, err error) ``` ExpReg is a shortcut to ExponentialRegression ``` go func LinReg(s []Coordinate) (regressions []Coordinate, err error) ``` LinReg is a shortcut to LinearRegression ``` go func LogReg(s []Coordinate) (regressions []Coordinate, err error) ``` LogReg is a shortcut to LogarithmicRegression ``` go type Float64Data []float64 ``` Float64Data is a named type for []float64 with helper methods ``` go func LoadRawData(raw interface{}) (f Float64Data) ``` LoadRawData parses and converts a slice of mixed data types to floats ``` go func (f Float64Data) AutoCorrelation(lags int) (float64, error) ``` AutoCorrelation is the correlation of a signal with a delayed copy of itself as a function of delay ``` go func (f Float64Data) Correlation(d Float64Data) (float64, error) ``` Correlation describes the degree of relationship between two sets of data ``` go func (f Float64Data) Covariance(d Float64Data) (float64, error) ``` Covariance is a measure of how much two sets of data change ``` go func (f Float64Data) CovariancePopulation(d Float64Data) (float64, error) ``` CovariancePopulation computes covariance for entire population between two variables ``` go func (f Float64Data) CumulativeSum() ([]float64, error) ``` CumulativeSum returns the cumulative sum of the data ``` go func (f Float64Data) Entropy() (float64, error) ``` Entropy provides calculation of the entropy ``` go func (f Float64Data) GeometricMean() (float64, error) ``` GeometricMean returns the median of the data ``` go func (f Float64Data) Get(i int) float64 ``` Get item in slice ``` go func (f Float64Data) HarmonicMean() (float64, error) ``` HarmonicMean returns the mode of the data ``` go func (f Float64Data) InterQuartileRange() (float64, error) ``` InterQuartileRange finds the range between Q1 and Q3 
``` go func (f Float64Data) Len() int ``` Len returns length of slice ``` go func (f Float64Data) Less(i, j int) bool ``` Less returns if one number is less than another ``` go func (f Float64Data) Max() (float64, error) ``` Max returns the maximum number in the data ``` go func (f Float64Data) Mean() (float64, error) ``` Mean returns the mean of the data ``` go func (f Float64Data) Median() (float64, error) ``` Median returns the median of the data ``` go func (f Float64Data) MedianAbsoluteDeviation() (float64, error) ``` MedianAbsoluteDeviation the median of the absolute deviations from the dataset median ``` go func (f Float64Data) MedianAbsoluteDeviationPopulation() (float64, error) ``` MedianAbsoluteDeviationPopulation finds the median of" } ]
{ "category": "Runtime", "file_name": "DOCUMENTATION.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "](https://travis-ci.com/Nokia/danm) ](https://coveralls.io/github/nokia/danm?branch=master) <img src=\"https://github.com/nokia/danm/raw/master/logowname.png\" width=\"100\"> Want to hang-out with us? Join our Slack under https://danmws.slack.com/! Feel yourself officially invited by clicking on link! DANM Utils is the home to independet Operators built on top of the DANM network management platform, providing value added services to your cluster! Interested in adding outage resiliency to your IPAM, or universal network policy support? Look no further and hop over to https://github.com/nokia/danm-utils today! * DANM is Nokia's solution to bring TelCo grade network management into a Kubernetes cluster! DANM has more than 4 years of history inside the company, is currently deployed into production, and it is finally available for everyone, here on GitHub. The name stands for \"Damn, Another Network Manager!\", because yes, we know: the last thing the K8s world needed is another TelCo company \"revolutionizing\" networking in Kubernetes. But still we hope that potential users checking out our project will involuntarily proclaim \"DANM, that's some good networking stuff!\" :) Please consider for a moment that there is a whole other world out there, with special requirements, and DANM is the result of those needs! We are certainly not saying DANM is THE network solution, but we think it is a damn good one! Want to learn more about this brave new world? Don't hesitate to contact us, we are always quite happy to share the special requirements we need to satisfy each and every day. In any case, DANM is more than just a plugin, it is an End-To-End solution to a whole problem domain. It is: a CNI plugin capable of provisioning IPVLAN interfaces with advanced features an in-built IPAM module with the capability of managing multiple, cluster-wide*, discontinuous L3 networks with managing up to 8M allocations per network! plus providing dynamic, static, or no IP allocation scheme on-demand for both IPv4, and IPv6 a CNI metaplugin capable of attaching multiple network interfaces to a container, either through its own CNI, or through delegating the job to any of the popular CNI solution e.g. SR-IOV, Calico, Flannel" }, { "data": "in parallel* a Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod a standard Kubernetes Validating and Mutating Webhook responsible for making you adhere to the schemas, and also automating network resource management for tenant users in a production-grade environment Just kidding as DANM is always free, but if you want to install a production grade, open-source Kubernetes-based bare metal CaaS infrastructure by default equipped with DANM and with a single click of a button nonetheless; just head over to Linux Foundation Akraino Radio Edge Cloud (REC) wiki for the and the Not just for TelCo! The above functionalities are implemented by the following components: danm is the CNI plugin which can be directly integrated with kubelet. Internally it consists of the CNI metaplugin, the CNI plugin responsible for managing IPVLAN interfaces, and the in-built IPAM plugin. Danm binary is integrated to kubelet as any other . fakeipam is a little program used in natively integrating 3rd party CNI plugins into the DANM ecosystem. 
It is basically used to echo the result of DANM's in-built IPAM to CNIs DANM delegates operations to. Fakeipam binary should be placed into kubelet's configured CNI plugin directory, next to danm. Fakeipam is a temporary solution, the long-term aim is to separate DANM's IPAM component into a full-fledged, standalone IPAM solution. netwatcher is a Kubernetes Controller watching the Kubernetes API for changes in the DANM related CRD network management APIs. This component is responsible for validating the semantics of network objects, and also for maintaining VxLAN and VLAN host interfaces of all Kubernetes nodes. Netwatcher binary is deployed in Kubernetes as a DaemonSet, running on all nodes. svcwatcher is another Kubernetes Controller monitoring Pod, Service, Endpoint, and DanmEp API paths. This Controller is responsible for extending Kubernetes native Service Discovery to work even for the non-primary networks of the Pod. Svcwatcher binary is deployed in Kubernetes as a DaemonSet, running only on the Kubernetes master nodes in a clustered setup. webhook is a standard Kubernetes Validating and Mutating Webhook. It has multiple, crucial responsibilities: it validates all DANM introduced CRD APIs both syntactically, and semantically both during creation, and modification it automatically mutates parameters only relevant to the internal implementation of DANM into the API objects it automatically assigns physical network resources to the logical networks of tenant users in a production-grade infrastructure It is undeniable that TelCo products- even in containerized format- *must* own physically separated network interfaces, but we have always felt other projects put too much emphasis on this lone fact, and entirely ignored -or were afraid to tackle- the larger issue with Kubernetes. That is: capability to provision multiple network interfaces to Pods is a very limited enhancement if the cloud native feature of Kubernetes cannot be used with those extra" }, { "data": "This is the very big misconception our solution aims to rectify - we strongly believe that all network interfaces shall be natively supported by K8s, and there are no such things as \"primary\", or \"secondary\" network interfaces. Why couldn't NetworkPolicies, Services, LoadBalancers, all of these existing and proven Kubernetes constructs work with all network interfaces? Why couldn't network administrators freely decide which physical networks are reachable by a Pod? In our opinion the answer is quite simple: because networks are not first-class citizens in Kubernetes. This is the historical reason why DANM's CRD based, abstract network management APIs were born, and why is the whole ecosystem built around the concept of promoting networks to first-class Kubernetes API objects. This approach opens-up a plethora of possibilities, even with today's Kubernetes core code! The following chapters will guide you through the description of these features, and will show you how you can leverage them in your Kubernetes cluster. You will see at the end of this README that we really went above and beyond what \"networks\" are in vanilla Kubernetes. But, DANM core project never did, and will break one core concept: DANM is first and foremost a run-time agnostic standard CNI system for Kubernetes, 100% adhering to the Kubernetes life-cycle management principles. 
It is important to state this, because the features DANM provides open up a couple of very enticing, but also very dangerous avenues: what if we would monitor the run-time and provide added high-availability feature based on events happening on that level? what if we could change the networks of existing Pods? We strongly feel that all such scenarios incompatible with the life-cycle of a standard CNI plugin firmly fall outside the responsibility of the core DANM project. That being said, tell us about your Kubernetes breaking ideas! We are open to accept such plugins into the wider umbrella of the existing eco-system: outside of the core project, but still loosely linked to suite as optional, external components. Just because something doesn't fit into core DANM, it does not mean it can't fit into your cloud! Please visit repository for more info. See . See . Please read for details on our code of conduct, and the process for submitting pull requests to us. Robert Springer* (@rospring) - Initial work (V1 Python), IPAM, Netwatcher, Svcwatcher Levente Kale* (@Levovar) - Initial work (V2 Golang), Documentation, Integration, SCM, UTs, Metaplugin, V4 work Special thanks to the original author who started the whole project in 2015 by putting a proprietary network management plugin between Kubelet and Docker; and also for coining the DANM acronym: Peter Braun (@peter-braun) This project is licensed under the 3-Clause BSD License - see the" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "DANM", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Use Tencent Cloud Object Storage as Velero's storage destination.\" layout: docs You can deploy Velero on Tencent , or an other Kubernetes cluster, and use Tencent Cloud Object Store as a destination for Veleros backups. Registered . service, referred to as COS, has been launched A Kubernetes cluster has been created, cluster version v1.16 or later, and the cluster can use DNS and Internet services normally. If you need to create a TKE cluster, refer to the Tencent documentation. Create an object bucket for Velero to store backups in the Tencent Cloud COS console. For how to create, please refer to Tencent Cloud COS usage instructions. Set access to the bucket through the object storage console, the bucket needs to be read and written, so the account is granted data reading, data writing permissions. For how to configure, see the Tencent user instructions. Velero uses an AWS S3-compatible API to access Tencent Cloud COS storage, which requires authentication using a pair of access key IDs and key-created signatures. In the S3 API parameter, the \"accesskeyid\" field is the access key ID and the \"secretaccesskey\" field is the key. In the , Create and acquire Tencent Cloud Keys \"SecretId\" and \"SecretKey\" for COS authorized account. Where the \"SecretId\" value corresponds to the value of S3 API parameter \"access_key_id\" field, the \"SecretKey\" value corresponds to the value of S3 API parameter \"secret_access_key\" field. Create the credential profile \"credentials-velero\" required by Velero in the local directory based on the above correspondence: ```bash [default] awsaccesskey_id=<SecretId> awssecretaccess_key=<SecretKey> ``` You need to install the Velero CLI first, see for how to install. Follow the Velero installation command below to create velero and restic workloads and other necessary resource objects. ```bash velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.1.0 --bucket <BucketName> \\ --secret-file ./credentials-velero \\ --use-restic \\ --default-volumes-to-restic \\ --backup-location-config \\ region=ap-guangzhou,s3ForcePathStyle=\"true\",s3Url=https://cos.ap-guangzhou.myqcloud.com ``` Description of the parameters: `--provider`: Declares the type of plug-in provided by \"aws\". `--plugins`: Use the AWS S3 compatible API plug-in \"velero-plugin-for-aws\". `--bucket`: The bucket name created at Tencent Cloud COS. `--secret-file`: Access tencent cloud COS access credential file for the \"credentials-velero\" credential file created above. `--use-restic`: Back up and restore persistent volume data using the open source free backup tool . However, 'hostPath' volumes are not supported, see the for details), an integration that complements Velero's backup capabilities and is recommended to be turned on. `--default-volumes-to-restic`: Enable the use of Restic to back up all Pod volumes, provided that the `--use-restic`parameter needs to be turned on. `--backup-location-config`: Back up the bucket access-related configuration: `region`: Tencent cloud COS bucket area, for example, if the created region is Guangzhou, the Region parameter value is \"ap-guangzhou\". `s3ForcePathStyle`: Use the S3 file path format. `s3Url`: Tencent Cloud COS-compatible S3 API access address,Note that instead of creating a COS bucket for public network access domain name, you must use a format of \"https://cos.`region`.myqcloud.com\" URL, for example, if the region is Guangzhou, the parameter value is \"https://cos.ap-guangzhou.myqcloud.com.\". 
There are other installation parameters that can be viewed using `velero install --help`, such as setting `--use-volume-snapshots-false` to close the storage volume data snapshot backup if you do not want to back up the storage volume data. After executing the installation commands above, the installation process looks like this: {{< figure" }, { "data": "width=\"100%\">}} After the installation command is complete, wait for the velero and restic workloads to be ready to see if the configured storage location is available. Executing the 'velero backup-location get' command to view the storage location status and display \"Available\" indicates that access to Tencent Cloud COS is OK, as shown in the following image: {{< figure src=\"/docs/main/contributions/img-for-tencent/69194157ccd5e377d1e7d914fd8c0336.png\" width=\"100%\">}} At this point, The installation using Tencent Cloud COS as Velero storage location is complete, If you need more installation information about Velero, You can see the official website . In the cluster, use the helm tool to create a minio test service with a persistent volume, and the minio installation method can be found in the , in which case can bound a load balancer for the minio service to access the management page using a public address in the browser. {{< figure src=\"/docs/main/contributions/img-for-tencent/f0fff5228527edc72d6e71a50d5dc966.png\" width=\"100%\">}} Sign in to the minio web management page and upload some image data for the test, as shown below: {{< figure src=\"/docs/main/contributions/img-for-tencent/e932223585c0b19891cc085ad7f438e1.png\" width=\"100%\">}} With Velero Backup, you can back up all objects in the cluster directly, or filter objects by type, namespace, and/or label. This example uses the following command to back up all resources under the 'default' namespace. 
``` velero backup create default-backup --include-namespaces <Namespace> ``` Use the `velero backup get` command to see if the backup task is complete, and when the backup task status is \"Completed,\" the backup task is completed without any errors, as shown in the following below: {{< figure src=\"/docs/main/contributions/img-for-tencent/eb2bbabae48b188748f5278bedf177f1.png\" width=\"100%\">}} At this point delete all of MinIO's resources, including its PVC persistence volume, as shown below:: {{< figure src=\"/docs/main/contributions/img-for-tencent/15ccaacf00640a04ae29ceed4c86195b.png\" width=\"100%\">}} After deleting the MinIO resource, use your backup to restore the deleted MinIO resource, and temporarily update the backup storage location to read-only mode (this prevents the backup object from being created or deleted in the backup storage location during the restore process):: ```bash kubectl patch backupstoragelocation default --namespace velero \\ --type merge \\ --patch '{\"spec\":{\"accessMode\":\"ReadOnly\"}}' ``` Modifying access to Velero's storage location is \"ReadOnly,\" as shown in the following image: {{< figure src=\"/docs/main/contributions/img-for-tencent/e8c2ab4e5e31d1370c62fad25059a8a8.png\" width=\"100%\">}} Now use the backup \"default-backup\" that Velero just created to create the restore task: ```bash velero restore create --from-backup <BackupObject> ``` You can also use `velero restore get` to see the status of the restore task, and if the restore status is \"Completed,\" the restore task is complete, as shown in the following image: {{< figure src=\"/docs/main/contributions/img-for-tencent/effe8a0a7ce3aa8e422db00bfdddc375.png\" width=\"100%\">}} When the restore is complete, you can see that the previously deleted minio-related resources have been restored successfully, as shown in the following image: {{< figure src=\"/docs/main/contributions/img-for-tencent/1d53b0115644d43657c2a5ece805c9b4.png\" width=\"100%\">}} Log in to minio's management page on your browser and you can see that the previously uploaded picture data is still there, indicating that the persistent volume's data was successfully restored, as shown below: {{< figure src=\"/docs/main/contributions/img-for-tencent/ceaca9ce6bc92bdce987c63d2fe71561.png\" width=\"100%\">}} When the restore is complete, don't forget to restore the backup storage location to read and write mode so that the next backup task can be used successfully: ```bash kubectl patch backupstoragelocation default --namespace velero \\ --type merge \\ --patch '{\"spec\":{\"accessMode\":\"ReadWrite\"}}' ``` To uninstall velero resources in a cluster, you can do so using the following command: ```bash kubectl delete namespace/velero clusterrolebinding/velero kubectl delete crds -l component=velero ```" } ]
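As a supplement to the one-off backup shown earlier in this walkthrough, the same namespace filter can be attached to a schedule so backups recur automatically. The backup and schedule names below are illustrative, and the example assumes the test workload lives in the `default` namespace:

```bash
# One-off backup of the default namespace (names are illustrative)
velero backup create default-backup --include-namespaces default

# Optional: run the same backup every day at 03:00 (cron syntax)
velero schedule create default-daily \
  --schedule="0 3 * * *" \
  --include-namespaces default
```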
{ "category": "Runtime", "file_name": "tencent-config.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "While statedumps provide stats of the number of allocations, size etc for a particular mem type, there is no easy way to examine all the allocated objects of that type in memory.Being able to view this information could help with determining how an object is used, and if there are any memory leaks. The memacctrec structures have been updated to include lists to which the allocated object is added. These can be examined in gdb using simple scripts. `gdb> plist xl->memacct.rec[$type]->objlist` will print out the pointers of all allocations of $type. These changes are primarily targeted at developers and need to enabled at compile-time using `configure --enable-debug`." } ]
{ "category": "Runtime", "file_name": "mem-alloc-list.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "platforms;android-31 build-tools;31.0.0 cmake;3.22.1 ndk;23.1.7779620 You could deploy these components via or command line tool. Setup environment variable `ANDROID_HOME=path/to/your/android/sdk` Run Command `./gradlew assembleRelease` Sign your APK file with `apksigner` > apk file location `./app/build/outputs/apk/release` > `apksigner` location `$ANDROID_HOME/build-tools/$VERSION/apksigner` Open this folder with 2020.3.1 or later For Release APK, click `Menu -> Build -> Generate Signed Bundle/APK`, select APK, setup keystore configuration and wait for build finished." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "(devices-pci)= ```{note} The `pci` device type is supported for VMs. It does not support hotplugging. ``` PCI devices are used to pass raw PCI devices from the host into a virtual machine. They are mainly intended to be used for specialized single-function PCI cards like sound cards or video capture cards. In theory, you can also use them for more advanced PCI devices like GPUs or network cards, but it's usually more convenient to use the specific device types that Incus provides for these devices ( or ). `pci` devices have the following device options: Key | Type | Default | Required | Description :-- | :-- | :-- | :-- | :-- `address` | string | - | yes | PCI address of the device" } ]
{ "category": "Runtime", "file_name": "devices_pci.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Mutation testing (or Mutation analysis or Program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant and tests detect and reject mutants by causing the behavior of the original version to differ from the mutant. This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants. there are several kind of mutants: killed mutants: the mutants which is killed by the unit test. which is identified by \"tests passed -> FAIL\" in the log failed or escaped mutants: the mutants which pass the unit test. which is identified by \"tests failed -> PASS\" in the log skipped mutants: the mutants may skipped because of 1. out of coverage code. 2. in the black list, 3. skipped by comment. other exception cases. open the github action workflow page. click \"run mutate test\" step. search \"tests passed \" keyword, all the \"tests passed -> FAIL\" mutants are failed. you can try here: https://github.com/juicedata/juicefs/actions/runs/3565436367/jobs/5990603552 open the github action workflow page. click \"run mutate test\" step. search \"tests passed \" keyword, all the \"tests passed -> FAIL\" mutants are failed. find which line is changed by mutation. copy the changed line to .go source file run all the tests in corresponding go test file, all the tests should passed. you should add test case to make the test failed, which kill this mutant. find the checksum from the github action log, like FAIL \"/tmp/go-mutesting-1324412688/pkg/chunk/prefetch.go.0\" with checksum bb9e9497f17e191adf89b5a2ef6764eb add a line //checksum: bb9e9497f17e191adf89b5a2ef6764eb in the go test file. For example: //checksum 9cb13bb28aa7918edaf4f0f4ca92eea5 //checksum 05debda2840d31bac0ab5c20c5510591 func TestMin(t *testing.T) { assertEqual(t, Min(1, 2), 1) assertEqual(t, Min(-1, -2), -2) assertEqual(t, Min(0, 0), 0) } Add \"//skip mutate\" to the end of the line you don't want to mutate in the source file. For example: if err != nil { //skip mutate return \"\", fmt.Errorf(\"failed to execute command `lsb_release`: %s\", err) } if you don't want to run a specific test case, you can add \"//skip mutate\" after the test case function. For example: func TestRandomWrite(t *testing.T) {//skip mutate ... } if the mutants of the target source file is more than 200, we will use 4 github jobs to run it. otherwise we will use 1 job to run. you can customize it in your test file with adding \"//mutatetestjobnumber: number\", eg: //mutatetestjobnumber: 8 add //mutate:disable in the *_test.go file to disable the mutate test." } ]
{ "category": "Runtime", "file_name": "how_to_use_mutate_test.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "title: Storage Architecture Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. Rook enables Ceph storage to run on Kubernetes using Kubernetes primitives. With Ceph running in the Kubernetes cluster, Kubernetes applications can mount block devices and filesystems managed by Rook, or can use the S3/Swift API for object storage. The Rook operator automates configuration of storage components and monitors the cluster to ensure the storage remains available and healthy. The Rook operator is a simple container that has all that is needed to bootstrap and monitor the storage cluster. The operator will start and monitor , the Ceph OSD daemons to provide RADOS storage, as well as start and manage other Ceph daemons. The operator manages CRDs for pools, object stores (S3/Swift), and filesystems by initializing the pods and other resources necessary to run the services. The operator will monitor the storage daemons to ensure the cluster is healthy. Ceph mons will be started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks. The operator will also watch for desired state changes specified in the Ceph custom resources (CRs) and apply the changes. Rook automatically configures the Ceph-CSI driver to mount the storage to your pods. The `rook/ceph` image includes all necessary tools to manage the cluster. Rook is not in the Ceph data path. Many of the Ceph concepts like placement groups and crush maps are hidden so you don't have to worry about them. Instead, Rook creates a simplified user experience for admins that is in terms of physical resources, pools, volumes, filesystems, and buckets. Advanced configuration can be applied when needed with the Ceph tools. Rook is implemented in golang. Ceph is implemented in C++ where the data path is highly optimized. We believe this combination offers the best of both worlds. Example applications are shown above for the three supported storage types: Block Storage is represented with a blue app, which has a `ReadWriteOnce (RWO)` volume mounted. The application can read and write to the RWO volume, while Ceph manages the IO. Shared Filesystem is represented by two purple apps that are sharing a ReadWriteMany (RWX)" }, { "data": "Both applications can actively read or write simultaneously to the volume. Ceph will ensure the data is safely protected for multiple writers with the MDS daemon. Object storage is represented by an orange app that can read and write to a bucket with a standard S3 client. Below the dotted line in the above diagram, the components fall into three categories: Rook operator (blue layer): The operator automates configuration of Ceph CSI plugins and provisioners (orange layer): The Ceph-CSI driver provides the provisioning and mounting of volumes Ceph daemons (red layer): The Ceph daemons run the core storage architecture. See the to learn more about each daemon. Production clusters must have three or more nodes for a resilient storage platform. In the diagram above, the flow to create an application with an RWO volume is: The (blue) app creates a PVC to request storage The PVC defines the Ceph RBD storage class (sc) for provisioning the storage K8s calls the Ceph-CSI RBD provisioner to create the Ceph RBD image. The kubelet calls the CSI RBD volume plugin to mount the volume in the app The volume is now available for reads and writes. 
A ReadWriteOnce volume can be mounted on one node at a time. In the diagram above, the flow to create a applications with a RWX volume is: The (purple) app creates a PVC to request storage The PVC defines the CephFS storage class (sc) for provisioning the storage K8s calls the Ceph-CSI CephFS provisioner to create the CephFS subvolume The kubelet calls the CSI CephFS volume plugin to mount the volume in the app The volume is now available for reads and writes. A ReadWriteMany volume can be mounted on multiple nodes for your application to use. In the diagram above, the flow to create an application with access to an S3 bucket is: The (orange) app creates an ObjectBucketClaim (OBC) to request a bucket The Rook operator creates a Ceph RGW bucket (via the lib-bucket-provisioner) The Rook operator creates a secret with the credentials for accessing the bucket and a configmap with bucket information The app retrieves the credentials from the secret The app can now read and write to the bucket with an S3 client A S3 compatible client can use the S3 bucket right away using the credentials (`Secret`) and bucket info (`ConfigMap`)." } ]
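To make the RWO flow above concrete, a minimal claim and consuming Pod might look like the sketch below; the `rook-ceph-block` StorageClass name is the one commonly created in Rook's examples and is an assumption here:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-ceph-block   # assumed StorageClass name from Rook's examples
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```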
{ "category": "Runtime", "file_name": "storage-architecture.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Using Weave with Systemd menu_order: 40 search_type: Documentation Having installed Weave Net as per , you might find it convenient to configure the init daemon to start Weave on boot. Most recent Linux distribution releases ship with . The information below provides you with some initial guidance on getting a Weave Net service configured on a systemd-based OS. A regular service unit definition for Weave Net is shown below. This file is normally placed in `/etc/systemd/system/weave.service`. [Unit] Description=Weave Network Documentation=http://docs.weave.works/weave/latest_release/ Requires=docker.service After=docker.service [Service] EnvironmentFile=-/etc/sysconfig/weave ExecStartPre=/usr/local/bin/weave launch --no-restart $PEERS ExecStart=/usr/bin/docker attach weave ExecStop=/usr/local/bin/weave stop [Install] WantedBy=multi-user.target To specify the addresses or names of other Weave hosts to join the network, create the `/etc/sysconfig/weave` environment file using the following format: PEERS=\"HOST1 HOST2 .. HOSTn\" You can also use the command to add participating hosts dynamically. Additionally, if you want to enable specify a password using `WEAVE_PASSWORD=\"wfvAwt7sj\"` in the `/etc/sysconfig/weave` environment file, and it will get picked up by Weave Net on launch. Recommendations for choosing a suitably strong password can be found . You can now launch Weave Net using sudo systemctl start weave To ensure Weave Net launches after reboot, run: sudo systemctl enable weave For more information on systemd, please refer to the documentation supplied with your distribution of Linux. If your OS has SELinux enabled and you want to run Weave Net as a systemd unit, then follow the instructions below. These instructions apply to CentOS and RHEL as of 7.0. On Fedora 21, there is no need to do this. Once `weave` is installed in `/usr/local/bin`, set its execution context with the commands shown below. You will need to have the `policycoreutils-python` package installed. sudo semanage fcontext -a -t unconfinedexect -f f /usr/local/bin/weave sudo restorecon /usr/local/bin/weave See Also * *" } ]
{ "category": "Runtime", "file_name": "systemd.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "github.com/gobuffalo/flect does not try to reinvent the wheel! Instead, it uses the already great wheels developed by the Go community and puts them all together in the best way possible. Without these giants, this project would not be possible. Please make sure to check them out and thank them for all of their hard work. Thank you to the following GIANTS:" } ]
{ "category": "Runtime", "file_name": "SHOULDERS.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "The introduction of the kernel commit 46a87b3851f0d6eb05e6d83d5c5a30df0eca8f76 in 5.7 has affected a deterministic scheduling behavior by distributing tasks across CPU cores within a cgroups cpuset. It means that `runc exec` might be impacted under some circumstances, by example when a container has been created within a cgroup cpuset entirely composed of isolated CPU cores usually sets either with `nohz_full` and/or `isolcpus` kernel boot parameters. Some containerized real-time applications are relying on this deterministic behavior and uses the first CPU core to run a slow thread while other CPU cores are fully used by the real-time threads with SCHED_FIFO policy. Such applications can prevent runc process from joining a container when the runc process is randomly scheduled on a CPU core owned by a real-time thread. Runc introduces a way to restore this behavior by adding the following annotation to the container runtime spec (`config.json`): `org.opencontainers.runc.exec.isolated-cpu-affinity-transition` This annotation can take one of those values: `temporary` to temporarily set the runc process CPU affinity to the first isolated CPU core of the container cgroup cpuset. `definitive`: to definitively set the runc process CPU affinity to the first isolated CPU core of the container cgroup cpuset. For example: ```json \"annotations\": { \"org.opencontainers.runc.exec.isolated-cpu-affinity-transition\": \"temporary\" } ``` WARNING: `definitive` requires a kernel >= 6.2, also works with RHEL 9 and above. When enabled and during `runc exec`, runc is looking for the `nohz_full` kernel boot parameter value and considers the CPUs in the list as isolated, it doesn't look for `isolcpus` boot parameter, it just assumes that `isolcpus` value is identical to `nohzfull` when specified. If `nohzfull` parameter is not found, runc also attempts to read the list from" }, { "data": "Once it gets the isolated CPU list, it returns an eligible CPU core within the container cgroup cpuset based on those heuristics: when there is not cpuset cores: no eligible CPU when there is not isolated cores: no eligible CPU when cpuset cores are not in isolated core list: no eligible CPU when cpuset cores are all isolated cores: return the first CPU of the cpuset when cpuset cores are mixed between housekeeping/isolated cores: return the first housekeeping CPU not in isolated CPUs. The returned CPU core is then used to set the `runc init` CPU affinity before the container cgroup cpuset transition. `nohz_full` has the isolated cores `4-7`. A container has been created with the cgroup cpuset `4-7` to only run on the isolated CPU cores 4 to 7. `runc exec` is called by a process with CPU affinity set to `0-3` with `temporary` transition: runc exec (affinity 0-3) -> runc init (affinity 4) -> container process (affinity 4-7) with `definitive` transition: runc exec (affinity 0-3) -> runc init (affinity 4) -> container process (affinity 4) The difference between `temporary` and `definitive` is the container process affinity, `definitive` will constraint the container process to run on the first isolated CPU core of the cgroup cpuset, while `temporary` restore the CPU affinity to match the container cgroup cpuset. `definitive` transition might be helpful when `nohz_full` is used without `isolcpus` to avoid runc and container process to be a noisy neighbour for real-time applications. 
Kubernetes doesn't manage container directly, instead it uses the Container Runtime Interface (CRI) to communicate with a software implementing this interface and responsible to manage the lifecycle of containers. There are popular CRI implementations like Containerd and CRI-O. Those implementations allows to pass pod annotations to the container runtime via the container runtime spec. Currently runc is the runtime used by default for both. Containerd CRI uses runc by default but requires an extra step to pass the annotation to runc. You have to whitelist `org.opencontainers.runc.exec.isolated-cpu-affinity-transition` as a pod annotation allowed to be passed to the container runtime in `/etc/containerd/config.toml`: ```toml [plugins.\"io.containerd.grpc.v1.cri\".containerd] defaultruntimename = \"runc\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" baseruntimespec = \"/etc/containerd/cri-base.json\" pod_annotations = [\"org.opencontainers.runc.exec.isolated-cpu-affinity-transition\"] ``` CRI-O doesn't require any extra step, however some annotations could be excluded by configuration. ```yaml apiVersion: v1 kind: Pod metadata: name: demo-pod annotations: org.opencontainers.runc.exec.isolated-cpu-affinity-transition: \"temporary\" spec: containers: name: demo image: registry.com/demo:latest ```" } ]
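One way to sanity-check the resulting placement from inside a running container is to print the CPU affinity of its init process, assuming `taskset` (from util-linux) is available in the image:

```bash
# Inside the container: show the CPU affinity of PID 1
taskset -cp 1
# With "temporary", the affinity is restored to the cgroup cpuset, e.g. "4-7";
# with "definitive", it stays pinned to the first isolated core, e.g. "4".
```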
{ "category": "Runtime", "file_name": "isolated-cpu-affinity-transition.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "title: \"Node Selection for Data Movement Backup\" layout: docs Velero node-agent is a daemonset hosting the data movement modules to complete the concrete work of backups/restores. Varying from the data size, data complexity, resource availability, the data movement may take a long time and remarkable resources (CPU, memory, network bandwidth, etc.) during the backup and restore. Velero data movement backup supports to constrain the nodes where it runs. This is helpful in below scenarios: Prevent the data movement backup from running in specific nodes because users have more critical workloads in the nodes Constrain the data movement backup to run in specific nodes because these nodes have more resources than others Constrain the data movement backup to run in specific nodes because the storage allows volume/snapshot provisions in these nodes only Velero introduces a new section in ```node-agent-config``` configMap, called ```loadAffinity```, through which you can specify the nodes to/not to run data movement backups, in the affinity and anti-affinity flavors. If it is not there, ```node-agent-config``` should be created manually. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only. Node-agent server checks these configurations at startup time. Therefore, you could edit this configMap any time, but in order to make the changes effective, node-agent server needs to be restarted. Here is a sample of the ```node-agent-config``` configMap with ```loadAffinity```: ```json { \"loadAffinity\": [ { \"nodeSelector\": { \"matchLabels\": { \"beta.kubernetes.io/instance-type\": \"Standard_B4ms\" }, \"matchExpressions\": [ { \"key\": \"kubernetes.io/hostname\", \"values\": [ \"node-1\", \"node-2\", \"node-3\" ], \"operator\": \"In\" }, { \"key\": \"xxx/critial-workload\", \"operator\": \"DoesNotExist\" } ] } } ] } ``` To create the configMap, save something like the above sample to a json file and then run below command: ``` kubectl create cm node-agent-config -n velero --from-file=<json file name> ``` Affinity configuration means allowing the data movement backup to run in the nodes specified. There are two ways to define it: It could be defined by `MatchLabels`. The labels defined in `MatchLabels` means a `LabelSelectorOpIn` operation by default, so in the current context, they will be treated as affinity rules. In the above sample, it defines to run data movement backups in nodes with label `beta.kubernetes.io/instance-type` of value `StandardB4ms` (Run data movement backups in `StandardB4ms` nodes only). It could be defined by `MatchExpressions`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpIn` or `LabelSelectorOpExists`. In the above sample, it defines to run data movement backups in nodes with label `kubernetes.io/hostname` of values `node-1`, `node-2` and `node-3` (Run data movement backups in `node-1`, `node-2` and `node-3` only). Anti-affinity configuration means preventing the data movement backup from running in the nodes specified. Below is the way to define it: It could be defined by `MatchExpressions`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpNotIn` or `LabelSelectorOpDoesNotExist`. 
In the above sample, it disallows data movement backups to run in nodes with label `xxx/critial-workload`." } ]
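After editing the `node-agent-config` configMap as described above, node-agent has to be restarted for the change to take effect. Assuming the daemonset is named `node-agent` and Velero is installed in the `velero` namespace (both are assumptions about your deployment), that could be done with:

```bash
# Restart node-agent so it re-reads node-agent-config (names are assumptions)
kubectl rollout restart daemonset/node-agent -n velero
kubectl rollout status daemonset/node-agent -n velero
```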
{ "category": "Runtime", "file_name": "data-movement-backup-node-selection.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This document provides information on who the Kanister maintainers are, and their responsibilities. If you're interested in contributing, see the guide. These are the current Kanister maintainers: Tom Manville @tdmanv Pavan Navarathna @pavannd1 Prasad Ghangal @PrasadG193 Vivek Singh @viveksinghggits Daniil Fedotov @hairyhum Eugen Sumin @e-sumin Maintainers are active and visible members of the community. They have a lot of experience with the project and are expected to have the knowledge and insight to lead its growth and improvement. This section describes their responsibilities. Uphold the values and behavior set forward by the to ensure a safe and welcoming community. Prioritize reported security vulnerabilities and ensure that they are addressed before features or bugs. See the project's . Review pull requests regularly with comments, suggestions, decision to reject, merge or close them. Accept only high quality pull requests. Provide code reviews and guidance on incoming pull requests to adhere to established standards, best practices and guidelines. Review issues regularly to determine their priority and relevance, with proper labeling to identify milestones, blockers, complexity, entry-level issues etc. Respond to Slack messages, enhancement requests, and forum posts. Allocate time to reviewing and commenting on issues and conversations as they come in. Maintain healthy test coverage and code quality report score. Avoid dependency bloat. Mitigate breaking changes. Make frequent project releases to the community. Keep the release branch at production quality at all times. Backport features as needed. Cut release branches and tags to enable future patches. Regularly promote and attend the recurring Kanister community meetings." } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Longhorn v1.5.1 is the latest version of Longhorn 1.5. This release introduces bug fixes as described below about 1.5.0 upgrade issues, stability, troubleshooting and so on. Please try it and feedback. Thanks for all the contributions! For the definition of stable or latest release, please check . Please ensure your Kubernetes cluster is at least v1.21 before installing v1.5.1. Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . Please read the first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.1 from v1.4.x/v1.5.0, which are only supported source versions. Follow the upgrade instructions . N/A Please follow up on about any outstanding issues found after this release. ) - @PhanLe1010 ) - @derekbit @c3y1huang @roger-ryao ) - @derekbit @roger-ryao ) - @yangchiu @PhanLe1010 @ejweber ) - @yangchiu @ejweber ) - @derekbit @roger-ryao ) - @c3y1huang @chriscchien ) - @ChanYiLin @roger-ryao ) - @ChanYiLin @chriscchien ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @ChanYiLin @chriscchien ) - @ChanYiLin @chriscchien @ChanYiLin @PhanLe1010 @c3y1huang @chriscchien @derekbit @ejweber @innobead @roger-ryao @yangchiu" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.5.1.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "A service mesh is a way to monitor and control the traffic between micro-services running in your Kubernetes cluster. It is a powerful tool that you might want to use in combination with the security brought by Kata Containers. You are expected to be familiar with concepts such as pods, containers, control plane, data plane, and sidecar. Istio and Linkerd both rely on the same model, where they run controller applications in the control plane, and inject a proxy as a sidecar inside the pod running the service. The proxy registers in the control plane as a first step, and it constantly sends different sorts of information about the service running inside the pod. That information comes from the filtering performed when receiving all the traffic initially intended for the service. That is how the interaction between the control plane and the proxy allows the user to apply load balancing and authentication rules to the incoming and outgoing traffic, inside the cluster, and between multiple micro-services. This cannot not happen without a good amount of `iptables` rules ensuring the packets reach the proxy instead of the expected service. Rules are setup through an init container because they have to be there as soon as the proxy starts. Follow the to get Kata Containers properly installed and configured with Kubernetes. You can choose between CRI-O and containerd, both are supported through this document. For both cases, select the workloads as trusted by default. This way, your cluster and your service mesh run with `runc`, and only the containers you choose to annotate run with Kata Containers. As documented , a kernel version between 4.14.22 and 4.14.40 causes a deadlock when `getsockopt()` gets called with the `SOORIGINALDST` option. Unfortunately, both service meshes use this system call with this same option from the proxy container running inside the VM. This means that you cannot run this kernel version range as the guest kernel for Kata if you want your service mesh to work. As mentioned when explaining the basic functioning of those service meshes, `iptables` are heavily used, and they need to be properly enabled through the guest kernel config. If they are not properly enabled, the init container is not able to perform a proper setup of the rules. The following is a summary of what you need to install Istio on your system: ``` $ curl -L https://git.io/getLatestIstio | sh - $ cd istio-* $ export PATH=$PWD/bin:$PATH ``` See the for further details. Now deploy Istio in the control plane of your cluster with the following: ``` $ kubectl apply -f install/kubernetes/istio-demo.yaml ``` To verify that the control plane is properly deployed, you can use both of the following commands: ``` $ kubectl get svc -n istio-system $ kubectl get pods -n istio-system -o wide ``` As a reference, follow the Linkerd" }, { "data": "The following is a summary of what you need to install Linkerd on your system: ``` $ curl https://run.linkerd.io/install | sh $ export PATH=$PATH:$HOME/.linkerd/bin ``` Now deploy Linkerd in the control plane of your cluster with the following: ``` $ linkerd install | kubectl apply -f - ``` To verify that the control plane is properly deployed, you can use both of the following commands: ``` $ kubectl get svc -n linkerd $ kubectl get pods -n linkerd -o wide ``` Once the control plane is running, you need a deployment to define a few services that rely on each other. 
Then, you inject the YAML file with the sidecar proxy using the tools provided by each service mesh. If you do not have such a deployment ready, refer to the samples provided by each project. Istio provides a sample, which you can rely on to inject their `envoy` proxy as a sidecar. You need to use their tool called `istioctl kube-inject` to inject your YAML file. We use their `bookinfo` sample as an example: ``` $ istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml -o bookinfo-injected.yaml ``` Linkerd provides an sample, which you can rely on to inject their `linkerd` proxy as a sidecar. You need to use their tool called `linkerd inject` to inject your YAML file. We use their `emojivoto` sample as example: ``` $ wget https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml $ linkerd inject emojivoto.yml > emojivoto-injected.yaml ``` Now that your service deployment is injected with the appropriate sidecar containers, manually edit your deployment to make it work with Kata. In Kubernetes, the init container is often `privileged` as it needs to setup the environment, which often needs some root privileges. In the case of those services meshes, all they need is the `NET_ADMIN` capability to modify the underlying network rules. Linkerd, by default, does not use `privileged` container, but Istio does. Because of the previous reason, if you use Istio you need to switch all containers with `privileged: true` to `privileged: false`. There is no difference between Istio and Linkerd in this section. It is about which CRI implementation you use. For both CRI-O and containerd, you have to add an annotation indicating the workload for this deployment is not trusted, which will trigger `kata-runtime` to be called instead of `runc`. CRI-O: Add the following annotation for CRI-O ```yaml io.kubernetes.cri-o.TrustedSandbox: \"false\" ``` The following is an example of what your YAML can look like: ```yaml ... apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null name: details-v1 spec: replicas: 1 strategy: {} template: metadata: annotations: io.kubernetes.cri-o.TrustedSandbox: \"false\" sidecar.istio.io/status: '{\"version\":\"55c9e544b52e1d4e45d18a58d0b34ba4b72531e45fb6d1572c77191422556ffc\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}' creationTimestamp: null labels: app: details version: v1 ... ``` containerd: Add the following annotation for containerd ```yaml io.kubernetes.cri.untrusted-workload: \"true\" ``` The following is an example of what your YAML can look like: ```yaml ... apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null name: details-v1 spec: replicas: 1 strategy: {} template: metadata: annotations: io.kubernetes.cri.untrusted-workload: \"true\" sidecar.istio.io/status: '{\"version\":\"55c9e544b52e1d4e45d18a58d0b34ba4b72531e45fb6d1572c77191422556ffc\",\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}' creationTimestamp: null labels: app: details version: v1 ... ``` Deploy your application by using the following: ``` $ kubectl apply -f myapp-injected.yaml ```" } ]
{ "category": "Runtime", "file_name": "service-mesh.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": ", 2023-05-23, Chi He , May 9, 2023, Storage Team@0PP0 , April 25, 2023Storage Team @OPPO , April 18, 2023Storage Team @OPPO , March 31, 2023, Storage Team @OPPO , March 7, 2023, Storage Team @OPPO , February 7, 2023, Hu Weicai @Beike , July 26, 2022, Storage Team @OPPO , June 28, 2022, Storage Team @OPPO 2023-7-28Yuan Zeng 2023-07-20Rui Zhang 2023-07-13Chao Yang , 2023-06-06, Ke Zhang 2023-05-16Liangliang Tang , May 5, 2023, Bingxing Liu@BIGO , March 30, 2023, Storage Team @OPPO , December 15, 2022, Storage Team @OPPO , November 29, 2022, Storage Team @OPPO , September 1, 2022, Storage Team @OPPO , August 9, 2022, Storage Team @OPPO 2023-09-01Chenyang Zhao 2023-08-22Chenyang Zhao 2023-07-04Tianyue Peng 2023-06-15Huocheng Wu , 2023-05-31, Tianjiong,Zhang 2023-05-25Huocheng Wu ,April 13, 2023Storage Team @OPPO , April 6, 2023Storage Team @OPPO , March 23, 2023, Storage Team @OPPO , January 4, 2023, Storage Team @OPPO , December 22, 2022, Storage Team @OPPO , November 4, 2022, Storage Team @OPPO , October 13, 2022, Storage Team @OPPO , September 16, 2022, Storage Team @OPPO , July 21, 2022, Storage Team @OPPO" } ]
{ "category": "Runtime", "file_name": "article.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Support Request about: Support request or question relating to Submariner labels: support <!-- GitHub may not be the right place for support requests. You can also post your question on the [Submariner Slack](https://kubernetes.slack.com/archives/C010RJV694M) or the Submariner or mailing lists. If the matter is security related, please disclose it privately to the Submariner Owners: https://github.com/orgs/submariner-io/teams/submariner-core -->" } ]
{ "category": "Runtime", "file_name": "support.md", "project_name": "Submariner", "subcategory": "Cloud Native Network" }
[ { "data": "myst: substitutions: type: \"volume\" (howto-storage-backup-volume)= There are different ways of backing up your custom storage volumes: {ref}`storage-backup-snapshots` {ref}`storage-backup-export` {ref}`storage-copy-volume` <!-- Include start backup types --> Which method to choose depends both on your use case and on the storage driver you use. In general, snapshots are quick and space efficient (depending on the storage driver), but they are stored in the same storage pool as the {{type}} and therefore not too reliable. Export files can be stored on different disks and are therefore more reliable. They can also be used to restore the {{type}} into a different storage pool. If you have a separate, network-connected Incus server available, regularly copying {{type}}s to this other server gives high reliability as well, and this method can also be used to back up snapshots of the {{type}}. <!-- Include end backup types --> ```{note} Custom storage volumes might be attached to an instance, but they are not part of the instance. Therefore, the content of a custom storage volume is not stored when you {ref}`back up your instance <instances-backup>`. You must back up the data of your storage volume separately. ``` (storage-backup-snapshots)= A snapshot saves the state of the storage volume at a specific time, which makes it easy to restore the volume to a previous state. It is stored in the same storage pool as the volume itself. <!-- Include start optimized snapshots --> Most storage drivers support optimized snapshot creation (see {ref}`storage-drivers-features`). For these drivers, creating snapshots is both quick and space-efficient. For the `dir` driver, snapshot functionality is available but not very efficient. For the `lvm` driver, snapshot creation is quick, but restoring snapshots is efficient only when using thin-pool mode. <!-- Include end optimized snapshots --> Use the following command to create a snapshot for a custom storage volume: incus storage volume snapshot <poolname> <volumename> [<snapshot_name>] <!-- Include start create snapshot options --> Add the `--reuse` flag in combination with a snapshot name to replace an existing snapshot. By default, snapshots are kept forever, unless the `snapshots.expiry` configuration option is set. To retain a specific snapshot even if a general expiry time is set, use the `--no-expiry` flag. <!-- Include end create snapshot options --> (storage-edit-snapshots)= Use the following command to display the snapshots for a storage volume: incus storage volume info <poolname> <volumename> You can view or modify snapshots in a similar way to custom storage volumes, by referring to the snapshot with `<volumename>/<snapshotname>`. To show information about a snapshot, use the following command: incus storage volume show <poolname> <volumename>/<snapshot_name> To edit a snapshot (for example, to add a description or change the expiry date), use the following command: incus storage volume edit <poolname> <volumename>/<snapshot_name> To delete a snapshot, use the following command: incus storage volume delete <poolname> <volumename>/<snapshot_name> You can configure a custom storage volume to automatically create snapshots at specific times. 
To do so, set the `snapshots.schedule` configuration option for the storage volume (see" }, { "data": "For example, to configure daily snapshots, use the following command: incus storage volume set <poolname> <volumename> snapshots.schedule @daily To configure taking a snapshot every day at 6 am, use the following command: incus storage volume set <poolname> <volumename> snapshots.schedule \"0 6 *\" When scheduling regular snapshots, consider setting an automatic expiry (`snapshots.expiry`) and a naming pattern for snapshots (`snapshots.pattern`). See the {ref}`storage-drivers` documentation for more information about those configuration options. You can restore a custom storage volume to the state of any of its snapshots. To do so, you must first stop all instances that use the storage volume. Then use the following command: incus storage volume restore <poolname> <volumename> <snapshot_name> You can also restore a snapshot into a new custom storage volume, either in the same storage pool or in a different one (even a remote storage pool). To do so, use the following command: incus storage volume copy <sourcepoolname>/<sourcevolumename>/<sourcesnapshotname> <targetpoolname>/<targetvolumename> (storage-backup-export)= You can export the full content of your custom storage volume to a standalone file that can be stored at any location. For highest reliability, store the backup file on a different file system to ensure that it does not get lost or corrupted. Use the following command to export a custom storage volume to a compressed file (for example, `/path/to/my-backup.tgz`): incus storage volume export <poolname> <volumename> [<file_path>] If you do not specify a file path, the export file is saved as `backup.tar.gz` in the working directory. ```{warning} If the output file already exists, the command overwrites the existing file without warning. ``` <!-- Include start export info --> You can add any of the following flags to the command: `--compression` : By default, the output file uses `gzip` compression. You can specify a different compression algorithm (for example, `bzip2`) or turn off compression with `--compression=none`. `--optimized-storage` : If your storage pool uses the `btrfs` or the `zfs` driver, add the `--optimized-storage` flag to store the data as a driver-specific binary blob instead of an archive of individual files. In this case, the export file can only be used with pools that use the same storage driver. Exporting a volume in optimized mode is usually quicker than exporting the individual files. Snapshots are exported as differences from the main volume, which decreases their size and makes them easily accessible. <!-- Include end export info --> `--volume-only` : By default, the export file contains all snapshots of the storage volume. Add this flag to export the volume without its snapshots. You can import an export file (for example, `/path/to/my-backup.tgz`) as a new custom storage volume. To do so, use the following command: incus storage volume import <poolname> <filepath> [<volume_name>] If you do not specify a volume name, the original name of the exported storage volume is used for the new volume. If a volume with that name already (or still) exists in the specified storage pool, the command returns an error. In that case, either delete the existing volume before importing the backup or specify a different volume name for the import." } ]
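Putting the pieces above together, a simple backup-and-restore round trip for a custom volume might look like this; the pool, volume, snapshot, and file names are illustrative:

```bash
# Snapshot, export, and re-import a custom volume named "data" in pool "default"
incus storage volume snapshot default data nightly
incus storage volume export default data /backups/data-backup.tar.gz
incus storage volume import default /backups/data-backup.tar.gz data-restored
```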
{ "category": "Runtime", "file_name": "storage_backup_volume.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "name: Feature request about: Suggest an idea/feature for this project labels: feature <!-- Are you in the right place? For issues or feature requests, please create an issue in this repository. For general technical and non-technical questions, we are happy to help you on our . Did you already search the existing open issues for anything similar? --> Is this a bug report or feature request? Feature Request What should the feature do: What is use case behind this feature: Environment: <!-- Specific environment information that helps with the feature request -->" } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Please refer to for details. For backward compatibility, Cobra still supports its legacy dynamic completion solution (described below). Unlike the `ValidArgsFunction` solution, the legacy solution will only work for Bash shell-completion and not for other shells. This legacy solution can be used along-side `ValidArgsFunction` and `RegisterFlagCompletionFunc()`, as long as both solutions are not used for the same command. This provides a path to gradually migrate from the legacy solution to the new solution. Note: Cobra's default `completion` command uses bash completion V2. If you are currently using Cobra's legacy dynamic completion solution, you should not use the default `completion` command but continue using your own. The legacy solution allows you to inject bash functions into the bash completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some code that works in kubernetes: ```bash const ( bashcompletionfunc = `kubectlparseget() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } kubectlgetresource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi kubectlparseget ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } kubectlcustomfunc() { case ${last_command} in kubectlget | kubectldescribe | kubectldelete | kubectlstop) kubectlgetresource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bashcompletionfunc, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `kubectl_custom_func()` (`<command-use>customfunc()`) to be called when the built in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod ` the `kubectl_customc_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `kubectlcustomfunc()` will see that the cobra.Command is \"kubectlget\" and will thus call another helper `kubectlgetresource()`. `kubectlgetresource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `kubectlparseget pod`. `kubectlparse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! Similarly, for flags: ```go annotation := make(mapstring) annotation[cobra.BashCompCustom] = []string{\"kubectlgetnamespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition add the `kubectlgetnamespaces` implementation in the `BashCompletionFunction` value, e.g.: ```bash kubectlgetnamespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out}[*]\" -- \"$cur\" ) ) fi } ```" } ]
{ "category": "Runtime", "file_name": "bash_completions.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "Capacity constrained environments, MinIO will work but not recommended for production. | servers | drives (per node) | stripe_size | parity chosen (default) | tolerance for reads (servers) | tolerance for writes (servers) | |--:|:|:|:|:|-:| | 1 | 1 | 1 | 0 | 0 | 0 | | 1 | 4 | 4 | 2 | 0 | 0 | | 4 | 1 | 4 | 2 | 2 | 1 | | 5 | 1 | 5 | 2 | 2 | 2 | | 6 | 1 | 6 | 3 | 3 | 2 | | 7 | 1 | 7 | 3 | 3 | 3 | | servers | drives (per node) | stripe_size | parity chosen (default) | tolerance for reads (servers) | tolerance for writes (servers) | |--:|:|:|:|:|-:| | 4 | 2 | 8 | 4 | 2 | 1 | | 5 | 2 | 10 | 4 | 2 | 2 | | 6 | 2 | 12 | 4 | 2 | 2 | | 7 | 2 | 14 | 4 | 2 | 2 | | 8 | 1 | 8 | 4 | 4 | 3 | | 8 | 2 | 16 | 4 | 2 | 2 | | 9 | 2 | 9 | 4 | 4 | 4 | | 10 | 2 | 10 | 4 | 4 | 4 | | 11 | 2 | 11 | 4 | 4 | 4 | | 12 | 2 | 12 | 4 | 4 | 4 | | 13 | 2 | 13 | 4 | 4 | 4 | | 14 | 2 | 14 | 4 | 4 | 4 | | 15 | 2 | 15 | 4 | 4 | 4 | | 16 | 2 | 16 | 4 | 4 | 4 | If one or more drives are offline at the start of a PutObject or NewMultipartUpload operation the object will have additional data protection bits added automatically to provide the regular safety for these objects up to 50% of the number of drives. This will allow normal write operations to take place on systems that exceed the write tolerance. This means that in the examples above the system will always write 4 parity shards at the expense of slightly higher disk usage." } ]
{ "category": "Runtime", "file_name": "SIZING.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "Antrea supports collecting support bundle tarballs, which include the information from Antrea Controller and Antrea Agent. The collected information can help debugging issues in the Kubernetes cluster. Be aware that the generated support bundle includes a lot of information, including logs, so please review the contents before sharing it on Github and ensure that you do not share any sensitive information. There are two ways of generating support bundles. Firstly, you can run `antctl supportbundle` directly in the Antrea Agent Pod, Antrea Controller Pod, or on a host with a `kubeconfig` file for the target cluster. Secondly, you can also apply `SupportBundleCollection` CRs to create support bundles for K8s Nodes or external Nodes. We name this feature as `SupportBundleCollection` in Antrea. The details are provided in section . <!-- toc --> - <!-- /toc --> The `antctl supportbundle` command is supported in Antrea since version 0.7.0. The `SupportBundleCollection` CRD is introduced in Antrea v1.10.0 as an alpha feature. The feature gate must be enabled in both antrea-controller and antrea-agent configurations. If you plan to collect support bundle on an external Node, you should enable it in the configuration on the external Node as well. ```yaml antrea-agent.conf: | featureGates: SupportBundleCollection: true ``` ```yaml antrea-controller.conf: | featureGates: SupportBundleCollection: true ``` A single Namespace (e.g., default) is created for saving the Secrets that are used to access the support bundle file server, and the permission to read Secrets in this Namespace is given to antrea-controller by modifying and applying the . ```yaml kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: antrea-read-secrets namespace: default # Change the Namespace to where the Secret for file server's authentication credential is created. roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: antrea-secret-reader subjects: kind: ServiceAccount name: antrea-controller namespace: kube-system ``` SupportBundleCollection CRD is introduced to supplement the `antctl` command with three additional features: Allow users to collect support bundle files on external Nodes. Upload all the support bundle files into a user-provided SFTP Server. Support tracking status of a SupportBundleCollection CR. Please refer to the . Note: `antctl supportbundle` only works on collecting the support bundles from Antrea Controller and Antrea Agent that is running on a K8s Node, but it does not work for the Agent on an external Node. In this section, we will create two SupportBundleCollection CRs for K8s Nodes and external Nodes. Note, it is supported to specify Nodes/ExternalNodes by their names or by matching their labels in a SupportBundleCollection CR. Assume we have a cluster with Nodes named \"worker1\" and \"worker2\". In addition, we have set up two external Nodes named \"vm1\" and \"vm2\" in the \"vm-ns\" Namespace by following the instruction of the . In addition, an SFTP server needs to be provided in advance to collect the" }, { "data": "You can host the SFTP server by applying YAML `hack/externalnode/sftp-deployment.yml` or deploy one by yourself. A Secret needs to be created in advance with the username and password of the SFTP Server. The Secret will be referred as `authSecret` in the following YAML examples. 
```bash kubectl create secret generic support-bundle-secret --from-literal=username='your-sftp-username' --from-literal=password='your-sftp-password' ``` Then we can apply the following YAML files. The first one is to collect support bundle on K8s Nodes \"worker1\" and \"worker2\": \"worker1\" is specified by the name, and \"worker2\" is specified by matching label \"role: workers\". The second one is to collect support bundle on external Nodes \"vm1\" and \"vm2\" in Namespace \"vm-ns\": \"vm1\" is specified by the name, and \"vm2\" is specified by matching label \"role: vms\". ```bash cat << EOF | kubectl apply -f - apiVersion: crd.antrea.io/v1alpha1 kind: SupportBundleCollection metadata: name: support-bundle-for-nodes spec: nodes: # All Nodes will be selected if both nodeNames and matchLabels are empty nodeNames: worker1 matchLabels: role: workers expirationMinutes: 10 # expirationMinutes is the requested duration of validity of the SupportBundleCollection. A SupportBundleCollection will be marked as Failed if it does not finish before expiration. sinceTime: 2h # Collect the logs in the latest 2 hours. Collect all available logs if the time is not set. fileServer: url: sftp://yourtestdomain.com:22/root/test authentication: authType: \"BasicAuthentication\" authSecret: name: support-bundle-secret namespace: default # antrea-controller must be given the permission to read Secrets in \"default\" Namespace. EOF ``` ```bash cat << EOF | kubectl apply -f - apiVersion: crd.antrea.io/v1alpha1 kind: SupportBundleCollection metadata: name: support-bundle-for-vms spec: externalNodes: # All ExternalNodes in the Namespace will be selected if both nodeNames and matchLabels are empty nodeNames: vm1 nodeSelector: matchLabels: role: vms namespace: vm-ns # namespace is mandatory when collecting support bundle from external Nodes. fileServer: url: yourtestdomain.com:22/root/test # Scheme sftp can be omitted. The url of \"$controlplanenodeip:30010/upload\" is used if deployed with sftp-deployment.yml. authentication: authType: \"BasicAuthentication\" authSecret: name: support-bundle-secret namespace: default # antrea-controller must be given the permission to read Secrets in \"default\" Namespace. EOF ``` For more information about the supported fields in a \"SupportBundleCollection\" CR, please refer to the You can check the status of `SupportBundleCollection` by running command `kubectl get supportbundlecollections [NAME] -ojson`. The following example shows a successful realization of `SupportBundleCollection`. `desiredNodes` shows the expected number of Nodes/ExternalNodes to collect with this request, while `collectedNodes` shows the number of Nodes/ExternalNodes which have already uploaded bundle files to the target file server. If the collection completes successfully, `collectedNodes` and `desiredNodes`should have an equal value which should match the number of Nodes/ExternalNodes you want to collect support bundle. If the following two conditions are presented, it means a bundle collection succeeded, \"Completed\" is true \"CollectionFailure\" is false. If any expected Node/ExternalNode failed to upload the bundle files in the required time, the \"CollectionFailure\" condition will be set to" }, { "data": "```bash $ kubectl get supportbundlecollections support-bundle-for-nodes -ojson ... 
\"status\": { \"collectedNodes\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-12-08T06:49:35Z\", \"status\": \"True\", \"type\": \"Started\" }, { \"lastTransitionTime\": \"2022-12-08T06:49:41Z\", \"status\": \"True\", \"type\": \"BundleCollected\" }, { \"lastTransitionTime\": \"2022-12-08T06:49:35Z\", \"status\": \"False\", \"type\": \"CollectionFailure\" }, { \"lastTransitionTime\": \"2022-12-08T06:49:41Z\", \"status\": \"True\", \"type\": \"Completed\" } ], \"desiredNodes\": 1 } ``` The collected bundle should include three tarballs. To access these files, you can download the files from the SFTP server `yourtestdomain.com`. There will be two tarballs for `support-bundle-for-nodes`: \"support-bundle-for-nodes_worker1.tar.gz\" and \"support-bundle-for-nodes_worker2.tar.gz\", and two for `support-bundle-for-vms`: \"support-bundle-for-vmsvm1.tar.gz\" and \"support-bundle-for-vmsvm2.tar.gz\", in the `/root/test` folder. Run the `tar xvf $TARBALL_NAME` command to extract the files from the tarballs. Depending on the methods you use to collect the support bundle, the contents in the bundle may differ. The following table shows the differences. We use `agent`,`controller`, `outside` to represent running command `antctl supportbundle` in Antrea Agent, Antrea Controller, out-of-cluster respectively. Also, we use `Node` and `ExternalNode` to represent \"create SupportBundleCollection CR for Nodes\" and \"create SupportBundleCollection CR for external Nodes\". | Collected Item | Supported Collecting Method | Explanation | |--|-|| | Antrea Agent Log | `agent`, `outside`, `Node`, `ExternalNode` | Antrea Agent log files | | Antrea Controller Log | `controller`, `outside` | Antrea Controller log files | | iptables (Linux Only) | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip6tables-save` and `iptable-save` with counters | | OVS Ports | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ovs-ofctl dump-ports-desc` | | NetworkPolicy Resources | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get appliedtogroups` and `antctl get addressgroups` commands | | Heap Pprof | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | Output of | | HNSResources (Windows Only) | `agent`, `outside`, `Node`, `ExternalNode` | Output of `Get-HNSNetwork` and `Get-HNSEndpoint` commands | | Antrea Agent Info | `agent`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get agentinfo` | | Antrea Controller Info | `controller`, `outside` | YAML output of `antctl get controllerinfo` | | IP Address Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip address` command on Linux or `ipconfig /all` command on Windows | | IP Route Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip route` on Linux or `route print` on Windows | | IP Link Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip link` on Linux or `Get-NetAdapter` on Windows | | Cluster Information | `outside` | Dump of resources in the cluster, including: 1. all Pods, Deployments, Replicasets and Daemonsets in all Namespaces with any resourceVersion. 2. all Nodes with any resourceVersion. 3. all ConfigMaps in all Namespaces with any resourceVersion and label `app=antrea`. | | Memberlist State | `agent`, `outside` | YAML output of `antctl get memberlist` | Only SFTP basic authentication is supported for SupportBundleCollection. Other authentication methods will be added in the future." } ]
{ "category": "Runtime", "file_name": "support-bundle-guide.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "In order for WebAssembly to be widely adopted by developers as a runtime, it must support \"easy\" languages like JavaScript. Or, better yet, through its advanced compiler toolchain, WasmEdge could support high performance DSLs (Domain Specific Languages), which are low code solutions designed for specific tasks. WasmEdge can act as a cloud-native JavaScript runtime by embedding a JS execution engine or interpreter. It is faster and lighter than running a JS engine inside Docker. WasmEdge supports JS APIs to access native extension libraries such as network sockets, tensorflow, and user-defined shared libraries. It also allows embedding JS into other high-performance languages (eg, Rust) or using Rust / C to implement JS functions. Tutorials * * The image classification DSL is a YAML format that allows the user to specify a tensorflow model and its parameters. WasmEdge takes an image as the input of the DSL and outputs the detected item name / label. Example: A chatbot DSL function takes an input string and responds with a reply string. The DSL specifies the internal state transitions of the chatbot, as well as AI models for language understanding. This work is in progress." } ]
{ "category": "Runtime", "file_name": "js_or_dsl_runtime.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "The Rook operator currently uses a highly privileged service account with permissions to create namespaces, roles, role bindings, etc. Our approach would not pass a security audit and this design explores an improvement to this. Furthermore given our use of multiple service accounts and namespace, setting policies and quotas is harder than it needs to be. Reduce the number of service accounts and privileges used by Rook Reduce the number of namespaces that are used by Rook Only use services accounts and namespaces used by the cluster admin -- this enables them to set security policies and quotas that rook adheres to Continue to support a least privileged model Today the cluster admin creates the rook system namespace, rook-operator service account and RBAC rules as follows: ```yaml apiVersion: v1 kind: Namespace metadata: name: rook-system apiVersion: v1 kind: ServiceAccount metadata: name: rook-operator namespace: rook-system kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-operator rules: apiGroups: [\"\"] resources: [\"namespaces\", \"serviceaccounts\", \"secrets\", \"pods\", \"services\", \"nodes\", \"nodes/proxy\", \"configmaps\", \"events\", \"persistentvolumes\", \"persistentvolumeclaims\"] verbs: [ \"get\", \"list\", \"watch\", \"patch\", \"create\", \"update\", \"delete\" ] apiGroups: [\"extensions\"] resources: [\"thirdpartyresources\", \"deployments\", \"daemonsets\", \"replicasets\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"delete\" ] apiGroups: [\"apiextensions.k8s.io\"] resources: [\"customresourcedefinitions\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"delete\" ] apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"clusterroles\", \"clusterrolebindings\", \"roles\", \"rolebindings\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"update\", \"delete\" ] apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [ \"get\", \"list\", \"watch\", \"delete\" ] apiGroups: [\"rook.io\"] resources: [\"*\"] verbs: [ \"*\" ] kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: rook-operator subjects: kind: ServiceAccount name: rook-operator namespace: rook-system ``` `rook-operator` is a highly privileged service account with cluster wide scope. It likely has more privileges than is currently needed, for example, the operator does not create namespaces today. Note the name `rook-system` and `rook-operator` are not important and can be set to anything. 
Once the rook operator is up and running it will automatically create the service account for the rook agent and the following RBAC rules: ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: rook-ceph-agent namespace: rook-system kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-ceph-agent rules: apiGroups: [\"\"] resources: [\"pods\", \"secrets\", \"configmaps\", \"persistentvolumes\", \"nodes\", \"nodes/proxy\"] verbs: [ \"get\", \"list\" ] apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [ \"get\" ] apiGroups: [\"rook.io\"] resources: [\"volumeattachment\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"update\" ] kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-ceph-agent roleRef: apiGroup:" }, { "data": "kind: ClusterRole name: rook-ceph-agent subjects: kind: ServiceAccount name: rook-ceph-agent namespace: rook-system ``` When the cluster admin create a new Rook cluster they do so by adding a namespace and the rook cluster spec: ```yaml apiVersion: v1 kind: Namespace metadata: name: mycluster apiVersion: rook.io/v1alpha1 kind: Cluster metadata: name: myrookcluster namespace: mycluster ... ``` At this point the rook operator will notice that a new rook cluster CRD showed up and proceeds to create a service account for the `rook-api` and `rook-ceph-osd`. It will also use the `default` service account in the `mycluster` namespace for some pods. The `rook-api` service account and RBAC rules are as follows: ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: rook-api namespace: mycluster kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-api namespace: mycluster rules: apiGroups: [\"\"] resources: [\"namespaces\", \"secrets\", \"pods\", \"services\", \"nodes\", \"configmaps\", \"events\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"update\" ] apiGroups: [\"extensions\"] resources: [\"thirdpartyresources\", \"deployments\", \"daemonsets\", \"replicasets\"] verbs: [ \"get\", \"list\", \"create\", \"update\" ] apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [ \"get\", \"list\" ] apiGroups: [\"apiextensions.k8s.io\"] resources: [\"customresourcedefinitions\"] verbs: [ \"get\", \"list\", \"create\" ] kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-api namespace: mycluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: rook-api subjects: kind: ServiceAccount name: rook-api namespace: mycluster ``` The `rook-ceph-osd` service account and RBAC rules are as follows: ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: rook-ceph-osd namespace: mycluster kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-ceph-osd namespace: mycluster rules: apiGroups: [\"\"] resources: [\"configmaps\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"update\", \"delete\" ] kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-ceph-osd namespace: mycluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: rook-ceph-osd subjects: kind: ServiceAccount name: rook-ceph-osd namespace: mycluster ``` Just as we do today the cluster admin is responsible for creating the `rook-system` namespace. I propose we have a single service account in this namespace and call it `rook-system` by default. The names used are inconsequential and can be set to something different by the cluster admin. 
```yaml apiVersion: v1 kind: Namespace metadata: name: rook-system apiVersion: v1 kind: ServiceAccount metadata: name: rook-system namespace: rook-system ``` The `rook-system` service account is responsible for launching all pods, services, daemonsets, etc. for Rook and should have enough privilege to do and nothing more. I've not audited all the RBAC rules but a good tool to do is . For example: ```yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-system rules: apiGroups: [\"\"] resources: [\"pods\", \"services\", \"configmaps\"] verbs: [ \"get\", \"list\", \"watch\", \"patch\", \"create\", \"update, \"delete\" ] apiGroups: [\"extensions\"] resources: [\"deployments\", \"daemonsets\", \"replicasets\"] verbs: [ \"get\", \"list\", \"watch\", \"patch\", \"create\", \"update, \"delete\" ] apiGroups: [\"apiextensions.k8s.io\"] resources: [\"customresourcedefinitions\"] verbs: [ \"get\", \"list\", \"watch\", \"patch\", \"create\", \"update, \"delete\" ] apiGroups: [\"rook.io\"] resources: [\"*\"] verbs: [ \"*\" ] kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-system namespace: rook-system roleRef: apiGroup:" }, { "data": "kind: ClusterRole name: rook-system subjects: kind: ServiceAccount name: rook-system namespace: rook-system ``` Notably absent here are privileges to set other RBAC rules and create read cluster-wide secrets and other resources. Because the admin created the `rook-system` namespace and service account they are free to set policies on them using PSP or namespace quotas. Also note that while we use a `ClusterRole` for rook-system we only use a `RoleBinding` to grant it access to the `rook-system` namespace. It does not have cluster-wide privileges. When creating a Rook cluster the cluster admin will continue to define the namespace and cluster CRD as follows: ```yaml apiVersion: v1 kind: Namespace metadata: name: mycluster apiVersion: rook.io/v1alpha1 kind: Cluster metadata: name: myrookcluster namespace: mycluster ... ``` In addition we will require that the cluster-admin define a service account and role binding as follows: ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: rook-cluster namespace: mycluster kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-cluster namespace: mycluster rules: apiGroups: [\"\"] resources: [\"configmaps\"] verbs: [ \"get\", \"list\", \"watch\", \"create\", \"update\", \"delete\" ] kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-cluster namespace: mycluster roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: rook-system subjects: kind: ServiceAccount name: rook-system namespace: rook-system kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rook-system namespace: mycluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: rook-cluster namespace: rook-cluster subjects: kind: ServiceAccount name: rook-cluster namespace: mycluster ``` This will grant the `rook-system` service account access to the new namespace and also setup a least privileged service account `rook-cluster` to be used for pods in this namespace that need K8S api access. With this approach `rook-system` will only have access to namespaces nominated by the cluster admin. Also we will no longer create any service accounts or namespaces enabling admins to set stable policies and quotas. 
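Once the bindings above are applied, the reduced scope can be spot-checked the same way; the expectations in the comments follow directly from the example `ClusterRole` and `RoleBinding` definitions.

```bash
SA=system:serviceaccount:rook-system:rook-system

# Allowed: workload management in the namespaces the admin bound it to.
kubectl auth can-i create deployments -n rook-system --as="$SA"
kubectl auth can-i create pods -n mycluster --as="$SA"

# Denied: cluster-wide RBAC changes and secrets outside the nominated namespaces.
kubectl auth can-i create clusterrolebindings --as="$SA"
kubectl auth can-i get secrets -n kube-system --as="$SA"
```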
Also all rook pods except the rook operator pod should run using `rook-cluster` service account in the namespace they're in. Finally, we should support running multiple rook clusters in the same namespaces. While namespaces are a great organizational unit for pods etc. they are also a unit of policy and quotas. While we can force the cluster admin to go to an approach where they need to manage multiple namespaces, we would be better off if we give the option to cluster admin decide how they use namespace. For example, it should be possible to run rook-operator, rook-agent, and multiple independent rook clusters in a single namespace. This is going to require setting a prefix for pod names and other resources that could collide. The following should be possible: ```yaml apiVersion: v1 kind: Namespace metadata: name: myrook apiVersion: rook.io/v1alpha1 kind: Cluster metadata: name: red namespace: mycluster ... apiVersion: rook.io/v1alpha1 kind: Cluster metadata: name: blue namespace: mycluster ... ```" } ]
{ "category": "Runtime", "file_name": "security-model.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Upgrading to Velero 1.5\" layout: docs Velero , or installed. If you're not yet running at least Velero v1.4, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Install the Velero v1.5 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.5.3 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: If you are upgrading Velero in Kubernetes 1.14.x or earlier, you will need to use `kubectl apply`'s `--validate=false` option when applying the CRD configuration above. See and for more context. Update the container image used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.5.3 \\ --namespace velero kubectl set image daemonset/restic \\ restic=velero/velero:v1.5.3 \\ --namespace velero ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.5.3 Git commit: <git SHA> Server: Version: v1.5.3 ```" } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.5.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "We have already explored and explained the benefit of multi-homed networking, so this document will not rehearse that but simply focus on the implementation for the Ceph backend. If you are interested in learning more about multi-homed networking you can read the . To make the story short, should allow us to get the same performance benefit as `HostNetworking` as well as increasing the security. Using `HostNetworking` results in exposing all the network interfaces (the entire stack) of the host inside the container where Multus allows you to pick the one you want. Also, this minimizes the need of privileged containers (required for `HostNetworking`). We already have a `network` CRD property, which looks like: ```yaml network: provider: selectors: ``` We will expand the `selectors` with the following two hardcoded keys: ```yaml selectors: public: cluster: ``` Each selector represents a object in Multus. At least, one must be provided and by default they will represent: `public`: data daemon public network (binds to `public_network` Ceph option). `cluster`: data daemon cluster network (binds to `cluster_network` Ceph option). If only `public` is set then `cluster` will take the value of `public`. As part of the CNI spec, Multus supports several . Rook will naturally support any of them as they don't fundamentally change the working behavior. This is where things get more complex. Currently there are solutions available. As part of our research we have found that the following IPAM types are not good candidates: host-local: Maintains a local database of allocated IPs, this only works on a per host basis, so not suitable for a distributed environment since we will end up with IP collision To fix this, the looks promising but is not officially supported. static: Allocate a static IPv4/IPv6 addresses to container and it's useful in debugging purpose. This cannot at scale because this means we will have to allocate IPs for all the daemon so it's not scalable. You can find a more detailed analysis at the end of the document in the . The Ceph monitors only need access to the public network. The OSDs needs access to both public and cluster networks. Monitors requirements so far are the following: Predictable IP addresses Keep the same IP address for the life time of the deployment (IP should survive a restart) Only need access to the public network. They use service IP with a load-balancer so we need to be careful. Only need access to the public network. Nothing to do in particular since they don't use any service IPs. Steps must be taken to fix a CSI-with-multus issue documented . To summarize the issue: when a CephFS/RBD volume is mounted in a pod using Ceph CSI and then the CSI CephFS/RBD plugin is restarted or terminated (e.g. by restarting or deleting its DaemonSet), all operations on the volume become blocked, even after restarting the CSI" }, { "data": "The only workaround is to restart the node where the Ceph CSI plugin pod was restarted. When deploying a CephCluster resource configured to use multus networks, a multus-connected network interface will be added to the host network namespace for all nodes that will run CSI plugin pods. This will allow Ceph CSI pods to run using host networking and still access Ceph's public multus network. The design for mitigating the issue is to add a new DaemonSet that will own the network for all CephFS mounts as well as RBD mapped devices. The `csi-{cephfs,rbd}plugin` DaemonSet are left untouched. 
The Rook-Ceph Operator's CSI controller creates a `csi-plugin-holder` DaemonSet configured to use the `network.selectors.public` network specified for the CephCluster. This DaemonSet runs on all the nodes along side the `csi-{cephfs,rbd}plugin` DaemonSet. This Pod should only be stopped and restarted when a node is stopped so that volume operations do not become blocked. The Rook-Ceph Operator's CSI controller should set the DaemonSet's update strategy to `OnDelete` so that the pods do not get deleted if the DaemonSet is updated while also ensuring that the pods will be updated on the next node reboot (or node drain). The new holder DaemonSet only contains a single container called `holder`, the container responsible for pinning the network for filesystem mounts and mapped block devices. It is used as a passthrough by the Ceph-CSI plugin pod which when mounting or mapping will use the network namespace of that holder pod. One task of the Holder container is to pass the required PID into the `csi-plugin` pod. To solve this, the container will create a symlink to an equivalent of `/proc/$$/ns/net` in the `hostPath` volume shared with the `csi-plugin` pod. Then it will sleep forever. In order to support multiple Ceph-CSI consuming multiple Ceph clusters, the name of the symlink should be based on the Ceph cluster FSID. The complete name of the symlink can be decided during the implementation phase of the design. That symlink name will be stored in the Ceph-CSI configmap with a key name that we will define during the implementation. When the Ceph-CSI plugin mounts or maps a block device it will use the network namespace of the Holder pod. This is achieved by using `nsenter` to enter the network namespace of the Holder pod. Some changes are expected as part of the mounting/mapping method in today's `csi-plugin` pod. Instead of invoking `rbd map ...`, invoke `nsenter --net=<net ns file of pause in long-running pod> rbd map ...` (and same for `mount -t ceph ...`). The `csi-plugin` pod is already privileged, it already adds `CAPSYSADMIN`. The only missing piece is to expose the host PID namespace with `hostPID: true` on the `csi-cephfsplugin` DaemonSet (RBD already has it). Cleaning up the symlink is not a requirement nor a big issue. Once the Holder container stops, its network namespace goes away with it. So at worst, we end up with a broken" }, { "data": "When a new node is added, both DaemonSets are started, and the process described above occurs on the node. The initial implementation of this design is limited to supporting a single CephCluster with Multus. Until we stop using HostNetwork entirely it is impossible to support multiple CephClusters with and without Multus enabled. So far, the team has decided to go with the IPAM. It's an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide. If you need a way to assign IP addresses dynamically across your cluster -- Whereabouts is the tool for you. If you've found that you like how the host-local CNI plugin works, but, you need something that works across all the nodes in your cluster (host-local only knows how to assign IPs to pods on the same node) -- Whereabouts is just what you're looking for. Whereabouts can be used for both IPv4 & IPv6 addressing. 
It is under active development and is not ready but ultimately will allow to: Have static IP addresses distributed across the cluster These IPs will survive any deployment restart The allocation will be done by whereabouts and not by Rook /!\\ The only thing that is not solved yet is how can we predict the IPs for the upcoming monitors? We might need to rework the way we bootstrap the monitors a little bit to not require to know the IP in advance. The following proposal were rejected but we keep them here for traceability and knowledge. In this scenario, a DHCP server will distribute an IP address to a pod using a given range. Pros: Pods will get a dedicated IP on a physical network interface No changes required in Ceph, Rook will detect the CIDR via the `NetworkAttachmentDefinition` then populate Ceph's flag `publicnetwork` and `clusternetwork` Cons: IP allocation not predictable, we don't know it until the pod is up and running. So the detection must append inside the monitor container is running, similarly to what the OSD code does today. Requires drastic changes in the monitor bootstrap code This adds a DHCP daemon on every part of the cluster and this has proven to be troublesome Assuming we go with this solution, we might need to change the way the monitors are bootstrapped: let the monitor discovered its own IP based on an interface once the first mon is bootstrapped, we register its IP in a ConfigMap as well as populating clusterInfo boot the second mon, look up in clusterInfo for the initial member (if the op dies in the process, we always `CreateOrLoadClusterInfo()` as each startup based on the cm so no worries) go on and on with the rest of the monitors TBT: if the pod restarts, it keeps the same IP. We could and this is pure theory at this point use a IPAM with DHCP along with service IP. This would require interacting with Kubeproxy and there is no such feature yet. Even if it was there, we decided not to go with DHCP so this is not relevant." } ]
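Coming back to the Whereabouts IPAM that was selected above, here is a sketch of a `NetworkAttachmentDefinition` that pairs a `macvlan` interface with Whereabouts. The Whereabouts plugin must already be installed in the cluster, and the master interface name and address range are assumptions to adapt to your environment.

```bash
cat << EOF | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: public-net
  namespace: rook-ceph
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.20.0/24"
      }
    }
EOF
```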
{ "category": "Runtime", "file_name": "multus-network.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "A GJSON Path is a text string syntax that describes a search pattern for quickly retreiving values from a JSON payload. This document is designed to explain the structure of a GJSON Path through examples. The definitive implemenation is . Use the to experiment with the syntax online. A GJSON Path is intended to be easily expressed as a series of components seperated by a `.` character. Along with `.` character, there are a few more that have special meaning, including `|`, `#`, `@`, `\\`, `*`, `!`, and `?`. Given this JSON ```json { \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"}, \"age\":37, \"children\": [\"Sara\",\"Alex\",\"Jack\"], \"fav.movie\": \"Deer Hunter\", \"friends\": [ {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44, \"nets\": [\"ig\", \"fb\", \"tw\"]}, {\"first\": \"Roger\", \"last\": \"Craig\", \"age\": 68, \"nets\": [\"fb\", \"tw\"]}, {\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47, \"nets\": [\"ig\", \"tw\"]} ] } ``` The following GJSON Paths evaluate to the accompanying values. In many cases you'll just want to retreive values by object name or array index. ```go name.last \"Anderson\" name.first \"Tom\" age 37 children [\"Sara\",\"Alex\",\"Jack\"] children.0 \"Sara\" children.1 \"Alex\" friends.1 {\"first\": \"Roger\", \"last\": \"Craig\", \"age\": 68} friends.1.first \"Roger\" ``` A key may contain the special wildcard characters `*` and `?`. The `*` will match on any zero+ characters, and `?` matches on any one character. ```go child*.2 \"Jack\" c?ildren.0 \"Sara\" ``` Special purpose characters, such as `.`, `*`, and `?` can be escaped with `\\`. ```go fav\\.movie \"Deer Hunter\" ``` You'll also need to make sure that the `\\` character is correctly escaped when hardcoding a path in your source code. ```go // Go val := gjson.Get(json, \"fav\\\\.movie\") // must escape the slash val := gjson.Get(json, `fav\\.movie`) // no need to escape the slash ``` ```rust // Rust let val = gjson::get(json, \"fav\\\\.movie\") // must escape the slash let val = gjson::get(json, r#\"fav\\.movie\"#) // no need to escape the slash ``` The `#` character allows for digging into JSON Arrays. To get the length of an array you'll just use the `#` all by itself. ```go friends.# 3 friends.#.age [44,68,47] ``` You can also query an array for the first match by using `#(...)`, or find all matches with `#(...)#`. Queries support the `==`, `!=`, `<`, `<=`, `>`, `>=` comparison operators, and the simple pattern matching `%` (like) and `!%` (not like) operators. ```go friends.#(last==\"Murphy\").first \"Dale\" friends.#(last==\"Murphy\")#.first [\"Dale\",\"Jane\"] friends.#(age>45)#.last [\"Craig\",\"Murphy\"] friends.#(first%\"D*\").last \"Murphy\" friends.#(first!%\"D*\").last \"Craig\" ``` To query for a non-object value in an array, you can forgo the string to the right of the operator. ```go children.#(!%\"a\") \"Alex\" children.#(%\"a\")# [\"Sara\",\"Jack\"] ``` Nested queries are allowed. ```go friends.#(nets.#(==\"fb\"))#.first >> [\"Dale\",\"Roger\"] ``` *Please note that prior to v1.3.0, queries used the `#[...]` brackets. This was changed in" }, { "data": "as to avoid confusion with the new syntax. For backwards compatibility, `#[...]` will continue to work until the next major release.* The `~` (tilde) operator will convert a value to a boolean before comparison. 
Supported tilde comparison type are: ``` ~true Converts true-ish values to true ~false Converts false-ish and non-existent values to true ~null Converts null and non-existent values to true ~* Converts any existing value to true ``` For example, using the following JSON: ```json { \"vals\": [ { \"a\": 1, \"b\": \"data\" }, { \"a\": 2, \"b\": true }, { \"a\": 3, \"b\": false }, { \"a\": 4, \"b\": \"0\" }, { \"a\": 5, \"b\": 0 }, { \"a\": 6, \"b\": \"1\" }, { \"a\": 7, \"b\": 1 }, { \"a\": 8, \"b\": \"true\" }, { \"a\": 9, \"b\": false }, { \"a\": 10, \"b\": null }, { \"a\": 11 } ] } ``` To query for all true-ish or false-ish values: ``` vals.#(b==~true)#.a >> [2,6,7,8] vals.#(b==~false)#.a >> [3,4,5,9,10,11] ``` The last value which was non-existent is treated as `false` To query for null and explicit value existence: ``` vals.#(b==~null)#.a >> [10,11] vals.#(b==~*)#.a >> [1,2,3,4,5,6,7,8,9,10] vals.#(b!=~*)#.a >> [11] ``` The `.` is standard separator, but it's also possible to use a `|`. In most cases they both end up returning the same results. The cases where`|` differs from `.` is when it's used after the `#` for and . Here are some examples ```go friends.0.first \"Dale\" friends|0.first \"Dale\" friends.0|first \"Dale\" friends|0|first \"Dale\" friends|# 3 friends.# 3 friends.#(last=\"Murphy\")# [{\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44},{\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47}] friends.#(last=\"Murphy\")#.first [\"Dale\",\"Jane\"] friends.#(last=\"Murphy\")#|first <non-existent> friends.#(last=\"Murphy\")#.0 [] friends.#(last=\"Murphy\")#|0 {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44} friends.#(last=\"Murphy\")#.# [] friends.#(last=\"Murphy\")#|# 2 ``` Let's break down a few of these. The path `friends.#(last=\"Murphy\")#` all by itself results in ```json [{\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44},{\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47}] ``` The `.first` suffix will process the `first` path on each array element before returning the results. Which becomes ```json [\"Dale\",\"Jane\"] ``` But the `|first` suffix actually processes the `first` path after the previous result. Since the previous result is an array, not an object, it's not possible to process because `first` does not exist. Yet, `|0` suffix returns ```json {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44} ``` Because `0` is the first index of the previous result. A modifier is a path component that performs custom processing on the JSON. For example, using the built-in `@reverse` modifier on the above JSON payload will reverse the `children` array: ```go children.@reverse [\"Jack\",\"Alex\",\"Sara\"] children.@reverse.0 \"Jack\" ``` There are currently the following built-in modifiers: `@reverse`: Reverse an array or the members of an object. `@ugly`: Remove all whitespace from" }, { "data": "`@pretty`: Make the JSON more human readable. `@this`: Returns the current element. It can be used to retrieve the root element. `@valid`: Ensure the json document is valid. `@flatten`: Flattens an array. `@join`: Joins multiple objects into a single object. `@keys`: Returns an array of keys for an object. `@values`: Returns an array of values for an object. `@tostr`: Converts json to a string. Wraps a json string. `@fromstr`: Converts a string from json. Unwraps a json string. `@group`: Groups arrays of objects. See . `@dig`: Search for a value without providing its entire path. See . A modifier may accept an optional argument. 
The argument can be a valid JSON payload or just characters. For example, the `@pretty` modifier takes a json object as its argument. ``` @pretty:{\"sortKeys\":true} ``` Which makes the json pretty and orders all of its keys. ```json { \"age\":37, \"children\": [\"Sara\",\"Alex\",\"Jack\"], \"fav.movie\": \"Deer Hunter\", \"friends\": [ {\"age\": 44, \"first\": \"Dale\", \"last\": \"Murphy\"}, {\"age\": 68, \"first\": \"Roger\", \"last\": \"Craig\"}, {\"age\": 47, \"first\": \"Jane\", \"last\": \"Murphy\"} ], \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"} } ``` *The full list of `@pretty` options are `sortKeys`, `indent`, `prefix`, and `width`. Please see for more information.* You can also add custom modifiers. For example, here we create a modifier which makes the entire JSON payload upper or lower case. ```go gjson.AddModifier(\"case\", func(json, arg string) string { if arg == \"upper\" { return strings.ToUpper(json) } if arg == \"lower\" { return strings.ToLower(json) } return json }) \"children.@case:upper\" [\"SARA\",\"ALEX\",\"JACK\"] \"children.@case:lower.@reverse\" [\"jack\",\"alex\",\"sara\"] ``` Note: Custom modifiers are not yet available in the Rust version Starting with v1.3.0, GJSON added the ability to join multiple paths together to form new documents. Wrapping comma-separated paths between `[...]` or `{...}` will result in a new array or object, respectively. For example, using the given multipath: ``` {name.first,age,\"the_murphys\":friends.#(last=\"Murphy\")#.first} ``` Here we selected the first name, age, and the first name for friends with the last name \"Murphy\". You'll notice that an optional key can be provided, in this case \"the_murphys\", to force assign a key to a value. Otherwise, the name of the actual field will be used, in this case \"first\". If a name cannot be determined, then \"_\" is used. This results in ```json {\"first\":\"Tom\",\"age\":37,\"the_murphys\":[\"Dale\",\"Jane\"]} ``` Starting with v1.12.0, GJSON added support of json literals, which provides a way for constructing static blocks of json. This is can be particularly useful when constructing a new json document using . A json literal begins with the '!' declaration character. For example, using the given multipath: ``` {name.first,age,\"company\":!\"Happysoft\",\"employed\":!true} ``` Here we selected the first name and age. Then add two new fields, \"company\" and \"employed\". This results in ```json {\"first\":\"Tom\",\"age\":37,\"company\":\"Happysoft\",\"employed\":true} ``` See issue for additional context on JSON Literals." } ]
{ "category": "Runtime", "file_name": "SYNTAX.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Customize Velero Install\" layout: docs During install, Velero requires that at least one plugin is added (with the `--plugins` flag). Please see the documentation under Velero is installed in the `velero` namespace by default. However, you can install Velero in any namespace. See for details. By default, `velero install` expects a credentials file for your `velero` IAM account to be provided via the `--secret-file` flag. If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the `--no-secret` flag instead of `--secret-file`. By default, `velero install` does not install Velero's . To enable it, specify the `--use-restic` flag. If you've already run `velero install` without the `--use-restic` flag, you can run the same command again, including the `--use-restic` flag, to add the restic integration to your existing install. By default, `velero install` does not enable the use of restic to take backups of all pod volumes. You must apply an to every pod which contains volumes for Velero to use restic for the backup. If you are planning to only use restic for volume backups, you can run the `velero install` command with the `--default-volumes-to-restic` flag. This will default all pod volumes backups to use restic without having to apply annotations to pods. Note that when this flag is set during install, Velero will always try to use restic to perform the backup, even want an individual backup to use volume snapshots, by setting the `--snapshot-volumes` flag in the `backup create` command. Alternatively, you can set the `--default-volumes-to-restic` on an individual backup to to make sure Velero uses Restic for each volume being backed up. New features in Velero will be released as beta features behind feature flags which are not enabled by default. A full listing of Velero feature flags can be found . Features on the Velero server can be enabled using the `--features` flag to the `velero install` command. This flag takes as value a comma separated list of feature flags to enable. As an example can be enabled using `EnableCSI` feature flag in the `velero install` command as shown below: ```bash velero install --features=EnableCSI ``` Another example is enabling the support of multiple API group versions, as documented at . Feature flags, passed to `velero install` will be passed to the Velero deployment and also to the `restic` daemon set, if `--use-restic` flag is used. Similarly, features may be disabled by removing the corresponding feature flags from the `--features` flag. Enabling and disabling feature flags will require modifying the Velero deployment and also the restic daemonset. This may be done from the CLI by uninstalling and re-installing Velero, or by editing the `deploy/velero` and `daemonset/restic` resources in-cluster. ```bash $ kubectl -n velero edit deploy/velero $ kubectl -n velero edit daemonset/restic ``` For some features it may be necessary to use the `--features` flag to the Velero client. This may be done by passing the `--features` on every command run using the Velero CLI or the by setting the features in the velero client config file using the `velero client config set` command as shown below: ```bash velero client config set features=EnableCSI ``` This stores the config in a file at `$HOME/.config/velero/config.json`. 
All client side feature flags may be disabled using the below command ```bash velero client config set features= ``` Velero CLI uses colored output for some commands, such as `velero" }, { "data": "If the environment in which Velero is run doesn't support colored output, the colored output will be automatically disabled. However, you can manually disable colors with config file: ```bash velero client config set colorized=false ``` Note that if you specify `--colorized=true` as a CLI option it will override the config file setting. At installation, Velero sets default resource requests and limits for the Velero pod and the restic pod, if you using the . {{< table caption=\"Velero Customize resource requests and limits defaults\" >}} |Setting|Velero pod defaults|restic pod defaults| | | | | |CPU request|500m|500m| |Memory requests|128Mi|512Mi| |CPU limit|1000m (1 CPU)|1000m (1 CPU)| |Memory limit|512Mi|1024Mi| {{< /table >}} Depending on the cluster resources, especially if you are using Restic, you may need to increase these defaults. Through testing, the Velero maintainers have found these defaults work well when backing up and restoring 1000 or less resources and total size of files is 100GB or below. If the resources you are planning to backup or restore exceed this, you will need to increase the CPU or memory resources available to Velero. In general, the Velero maintainer's testing found that backup operations needed more CPU & memory resources but were less time-consuming than restore operations, when comparing backing up and restoring the same amount of data. The exact CPU and memory limits you will need depend on the scale of the files and directories of your resources and your hardware. It's recommended that you perform your own testing to find the best resource limits for your clusters and resources. Due to a , the Restic pod will consume large amounts of memory, especially if you are backing up millions of tiny files and directories. If you are planning to use Restic to backup 100GB of data or more, you will need to increase the resource limits to make sure backups complete successfully. You can customize these resource requests and limit when you first install using the CLI command. ``` velero install \\ --velero-pod-cpu-request <CPU_REQUEST> \\ --velero-pod-mem-request <MEMORY_REQUEST> \\ --velero-pod-cpu-limit <CPU_LIMIT> \\ --velero-pod-mem-limit <MEMORY_LIMIT> \\ [--use-restic] \\ [--default-volumes-to-restic] \\ [--restic-pod-cpu-request <CPU_REQUEST>] \\ [--restic-pod-mem-request <MEMORY_REQUEST>] \\ [--restic-pod-cpu-limit <CPU_LIMIT>] \\ [--restic-pod-mem-limit <MEMORY_LIMIT>] ``` After installation you can adjust the resource requests and limits in the Velero Deployment spec or restic DeamonSet spec, if you are using the restic integration. Velero pod Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the Velero deployment. ```bash kubectl patch deployment velero -n velero --patch \\ '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\": \"velero\", \"resources\": {\"limits\":{\"cpu\": \"1\", \"memory\": \"512Mi\"}, \"requests\": {\"cpu\": \"1\", \"memory\": \"128Mi\"}}}]}}}}' ``` restic pod Update the `spec.template.spec.containers.resources.limits` and `spec.template.spec.containers.resources.requests` values in the restic DeamonSet spec. 
```bash kubectl patch daemonset restic -n velero --patch \\ '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\": \"restic\", \"resources\": {\"limits\":{\"cpu\": \"1\", \"memory\": \"1024Mi\"}, \"requests\": {\"cpu\": \"1\", \"memory\": \"512Mi\"}}}]}}}}' ``` Additionally, you may want to update the the default Velero restic pod operation timeout (default 240 minutes) to allow larger backups more time to complete. You can adjust this timeout by adding the `- --restic-timeout` argument to the Velero Deployment spec. NOTE: Changes made to this timeout value will revert back to the default value if you re-run the Velero install command. Open the Velero Deployment spec. ``` kubectl edit deploy velero -n velero ``` Add `- --restic-timeout` to `spec.template.spec.containers`. ```yaml spec: template: spec: containers: args: --restic-timeout=240m ``` Velero supports any number of backup storage locations and volume snapshot locations. For more details, see . However, `velero install` only supports configuring at most one backup storage location and one volume snapshot location. To configure additional locations after running `velero install`, use the `velero backup-location create` and/or `velero snapshot-location create` commands along with provider-specific" }, { "data": "Use the `--help` flag on each of these commands for more details. When performing backups, Velero needs to know where to backup your data. This means that if you configure multiple locations, you must specify the location Velero should use each time you run `velero backup create`, or you can set a default backup storage location or default volume snapshot locations. If you only have one backup storage llocation or volume snapshot location set for a provider, Velero will automatically use that location as the default. Set a default backup storage location by passing a `--default` flag with when running `velero backup-location create`. ``` velero backup-location create backups-primary \\ --provider aws \\ --bucket velero-backups \\ --config region=us-east-1 \\ --default ``` You can set a default volume snapshot location for each of your volume snapshot providers using the `--default-volume-snapshot-locations` flag on the `velero server` command. ``` velero server --default-volume-snapshot-locations=\"<PROVIDER-NAME>:<LOCATION-NAME>,<PROVIDER2-NAME>:<LOCATION2-NAME>\" ``` If you need to install Velero without a default backup storage location (without specifying `--bucket` or `--provider`), the `--no-default-backup-location` flag is required for confirmation. Velero supports using different providers for volume snapshots than for object storage -- for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots. However, `velero install` only supports configuring a single matching provider for both object storage and volume snapshots. To use a different volume snapshot provider: Install the Velero server components by following the instructions for your object storage provider Add your volume snapshot provider's plugin to Velero (look in 's documentation for the image name): ```bash velero plugin add <registry/image:version> ``` Add a volume snapshot location for your provider, following 's documentation for configuration: ```bash velero snapshot-location create <NAME> \\ --provider <PROVIDER-NAME> \\ [--config <PROVIDER-CONFIG>] ``` By default, `velero install` generates and applies a customized set of Kubernetes configuration (YAML) to your cluster. 
To generate the YAML without applying it to your cluster, use the `--dry-run -o yaml` flags. This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc. If you are installing Velero in Kubernetes 1.14.x or earlier, you need to use `kubectl apply`'s `--validate=false` option when applying the generated configuration to your cluster. See and for more context. If you intend to use Velero with a storage provider that is secured by a self-signed certificate, you may need to instruct Velero to trust that certificate. See for details. Run `velero install --help` or see the for the full set of installation options. Velero CLI provides autocompletion support for `Bash` and `Zsh`, which can save you a lot of typing. Below are the procedures to set up autocompletion for `Bash` (including the difference between `Linux` and `macOS`) and `Zsh`. The Velero CLI completion script for `Bash` can be generated with the command `velero completion bash`. Sourcing the completion script in your shell enables velero autocompletion. However, the completion script depends on , which means that you have to install this software first (you can test if you have bash-completion already installed by running `type initcompletion`). `bash-completion` is provided by many package managers (see ). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc. The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file. To find out, reload your shell and run `type initcompletion`. If the command succeeds, you're already set, otherwise add the following to your" }, { "data": "file: ```shell source /usr/share/bash-completion/bash_completion ``` Reload your shell and verify that bash-completion is correctly installed by typing `type initcompletion`. You now need to ensure that the Velero CLI completion script gets sourced in all your shell sessions. There are two ways in which you can do this: Source the completion script in your `~/.bashrc` file: ```shell echo 'source <(velero completion bash)' >>~/.bashrc ``` Add the completion script to the `/etc/bash_completion.d` directory: ```shell velero completion bash >/etc/bash_completion.d/velero ``` If you have an alias for velero, you can extend shell completion to work with that alias: ```shell echo 'alias v=velero' >>~/.bashrc echo 'complete -F start_velero v' >>~/.bashrc ``` `bash-completion` sources all completion scripts in `/etc/bash_completion.d`. Both approaches are equivalent. After reloading your shell, velero autocompletion should be working. The Velero CLI completion script for Bash can be generated with `velero completion bash`. Sourcing this script in your shell enables velero completion. However, the velero completion script depends on which you thus have to previously install. There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script doesn't work correctly with bash-completion v1 and Bash 3.2. It requires bash-completion v2 and Bash 4.1+. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ (). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer). 
As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work). You can test if you have bash-completion v2 already installed with `type initcompletion`. If not, you can install it with Homebrew: ```shell brew install bash-completion@2 ``` As stated in the output of this command, add the following to your `~/.bashrc` file: ```shell export BASHCOMPLETIONCOMPATDIR=\"/usr/local/etc/bashcompletion.d\" [[ -r \"/usr/local/etc/profile.d/bashcompletion.sh\" ]] && . \"/usr/local/etc/profile.d/bashcompletion.sh\" ``` Reload your shell and verify that bash-completion v2 is correctly installed with `type initcompletion`. You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this: Source the completion script in your `~/.bashrc` file: ```shell echo 'source <(velero completion bash)' >>~/.bashrc ``` Add the completion script to the `/usr/local/etc/bash_completion.d` directory: ```shell velero completion bash >/usr/local/etc/bash_completion.d/velero ``` If you have an alias for velero, you can extend shell completion to work with that alias: ```shell echo 'alias v=velero' >>~/.bashrc echo 'complete -F start_velero v' >>~/.bashrc ``` If you installed velero with Homebrew (as explained ), then the velero completion script should already be in `/usr/local/etc/bash_completion.d/velero`. In that case, you don't need to do anything. The Homebrew installation of bash-completion v2 sources all the files in the `BASHCOMPLETIONCOMPAT_DIR` directory, that's why the latter two methods work. In any case, after reloading your shell, velero completion should be working. The velero completion script for Zsh can be generated with the command `velero completion zsh`. Sourcing the completion script in your shell enables velero autocompletion. To do so in all your shell sessions, add the following to your `~/.zshrc` file: ```shell source <(velero completion zsh) ``` If you have an alias for kubectl, you can extend shell completion to work with that alias: ```shell echo 'alias v=velero' >>~/.zshrc echo 'complete -F start_velero v' >>~/.zshrc ``` After reloading your shell, kubectl autocompletion should be working. If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file: ```shell autoload -Uz compinit compinit ```" } ]
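Pulling together several of the options described earlier in this document, the following is one illustrative install invocation that enables restic, defaults pod volumes to restic, and raises the resource requests and limits. Every value here is an example to adjust for your own provider, bucket, and cluster size.

```bash
velero install \
  --provider aws \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=us-east-1 \
  --snapshot-location-config region=us-east-1 \
  --use-restic \
  --default-volumes-to-restic \
  --velero-pod-cpu-request 1 --velero-pod-mem-request 256Mi \
  --velero-pod-cpu-limit 2 --velero-pod-mem-limit 1Gi \
  --restic-pod-cpu-request 1 --restic-pod-mem-request 1Gi \
  --restic-pod-cpu-limit 2 --restic-pod-mem-limit 2Gi
```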
{ "category": "Runtime", "file_name": "customize-installation.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Run Velero on AWS\" layout: docs To set up Velero on AWS, you: Download an official release of Velero Create your S3 bucket Create an AWS IAM user for Velero Install the server If you do not have the `aws` CLI locally installed, follow the to set it up. Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ``` tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` We'll refer to the directory you extracted to as the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the for more details). Create an S3 bucket, replacing placeholders appropriately: ```bash BUCKET=<YOUR_BUCKET> REGION=<YOUR_REGION> aws s3api create-bucket \\ --bucket $BUCKET \\ --region $REGION \\ --create-bucket-configuration LocationConstraint=$REGION ``` NOTE: us-east-1 does not support a `LocationConstraint`. If your region is `us-east-1`, omit the bucket configuration: ```bash aws s3api create-bucket \\ --bucket $BUCKET \\ --region us-east-1 ``` For more information, see . Create the IAM user: ```bash aws iam create-user --user-name velero ``` If you'll be using Velero to backup multiple clusters with multiple S3 buckets, it may be desirable to create a unique username per cluster rather than the default `velero`. Attach policies to give `velero` the necessary permissions: ``` cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::${BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::${BUCKET}\" ] } ] } EOF ``` ```bash aws iam put-user-policy \\ --user-name velero \\ --policy-name velero \\ --policy-document file://velero-policy.json ``` Create an access key for the user: ```bash aws iam create-access-key --user-name velero ``` The result should look like: ```json { \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWSSECRETACCESS_KEY>, \"AccessKeyId\": <AWSACCESSKEY_ID> } } ``` Create a Velero-specific credentials file (`credentials-velero`) in your local directory: ```bash [default] awsaccesskeyid=<AWSACCESSKEYID> awssecretaccesskey=<AWSSECRETACCESSKEY> ``` where the access key id and secret are the values returned from the `create-access-key` request. Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it. 
```bash velero install \\ --provider aws \\ --bucket $BUCKET \\ --secret-file ./credentials-velero \\ --backup-location-config region=$REGION \\ --snapshot-location-config region=$REGION ``` Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready. (Optional) Specify for the `--backup-location-config` flag. (Optional) Specify for the `--snapshot-location-config`" }, { "data": "(Optional) Specify for the Velero/restic pods. For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation. If you have multiple clusters and you want to support migration of resources between them, you can use `kubectl edit deploy/velero -n velero` to edit your deployment: Add the environment variable `AWSCLUSTERNAME` under `spec.template.spec.env`, with the current cluster's name. When restoring backup, it will make Velero (and cluster it's running on) claim ownership of AWS volumes created from snapshots taken on different cluster. The best way to get the current cluster's name is to either check it with used deployment tool or to read it directly from the EC2 instances tags. The following listing shows how to get the cluster's nodes EC2 Tags. First, get the nodes external IDs (EC2 IDs): ```bash kubectl get nodes -o jsonpath='{.items[*].spec.externalID}' ``` Copy one of the returned IDs `<ID>` and use it with the `aws` CLI tool to search for one of the following: The `kubernetes.io/cluster/<AWSCLUSTERNAME>` tag of the value `owned`. The `<AWSCLUSTERNAME>` is then your cluster's name: ```bash aws ec2 describe-tags --filters \"Name=resource-id,Values=<ID>\" \"Name=value,Values=owned\" ``` If the first output returns nothing, then check for the legacy Tag `KubernetesCluster` of the value `<AWSCLUSTERNAME>`: ```bash aws ec2 describe-tags --filters \"Name=resource-id,Values=<ID>\" \"Name=key,Values=KubernetesCluster\" ``` is a Kubernetes application that allows managing AWS IAM permissions for pod via annotations rather than operating on API keys. This path assumes you have `kube2iam` already running in your Kubernetes cluster. If that is not the case, please install it first, following the docs here: It can be set up for Velero by creating a role that will have required permissions, and later by adding the permissions annotation on the velero deployment to define which role it should use internally. 
Create a Trust Policy document to allow the role being used for EC2 management & assume kube2iam role: ``` cat > velero-trust-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"ec2.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" }, { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::<AWSACCOUNTID>:role/<ROLECREATEDWHENINITIALIZINGKUBE2IAM>\" }, \"Action\": \"sts:AssumeRole\" } ] } EOF ``` Create the IAM role: ```bash aws iam create-role --role-name velero --assume-role-policy-document file://./velero-trust-policy.json ``` Attach policies to give `velero` the necessary permissions: ``` BUCKET=<YOUR_BUCKET> cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::${BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::${BUCKET}\" ] } ] } EOF ``` ```bash aws iam put-role-policy \\ --role-name velero \\ --policy-name velero-policy \\ --policy-document file://./velero-policy.json ``` Use the `--pod-annotations` argument on `velero install` to add the following annotation: ```bash velero install \\ --pod-annotations iam.amazonaws.com/role=arn:aws:iam::<AWSACCOUNTID>:role/<VELEROROLENAME> \\ --provider aws \\ --bucket $BUCKET \\ --backup-location-config region=$REGION \\ --snapshot-location-config region=$REGION \\ --no-secret ```" } ]
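Whichever of the two setups above you use (static IAM user credentials or kube2iam), it can help to sanity-check the deployment before relying on it. The commands below are a minimal sketch; they assume the stock `velero` namespace and the default backup location created by `velero install`, and `test-backup` is just a placeholder name.

```bash
# Confirm the Velero deployment (and any restic/node-agent pods) are running.
kubectl get pods -n velero

# Check that the S3 backup location is reachable and reports "Available".
velero backup-location get

# Optionally run a small test backup of a single namespace and wait for it to finish.
velero backup create test-backup --include-namespaces default --wait
velero backup describe test-backup
```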
{ "category": "Runtime", "file_name": "aws-config.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Kata Containers on VEXXHOST use nested virtualization to provide an installation experience identical to Kata on your preferred Linux distribution. This guide assumes you have an OpenStack public cloud account set up and tools to remotely connect to your virtual machine (SSH). All regions support nested virtualization when using the V2 flavors (those prefixed with v2). The recommended machine type for container workloads is the `v2-highcpu` range. Follow the distribution-specific installation guide." } ]
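Before installing Kata, it is worth confirming that the flavor you picked really exposes nested virtualization inside the instance. A quick check from within the VM could look like the following sketch (the exact CPU flag depends on whether the underlying host is Intel or AMD):

```bash
# Intel CPUs expose the "vmx" flag, AMD CPUs expose "svm"; a non-zero count means
# hardware virtualization is visible inside this instance.
grep -E -c '(vmx|svm)' /proc/cpuinfo

# With the KVM modules loaded, /dev/kvm should exist and be usable by the runtime.
ls -l /dev/kvm
```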
{ "category": "Runtime", "file_name": "vexxhost-installation-guide.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "(instances-troubleshoot)= If your instance fails to start and ends up in an error state, this usually indicates a bigger issue related to either the image that you used to create the instance or the server configuration. To troubleshoot the problem, complete the following steps: Save the relevant log files and debug information: Instance log : Enter the following command to display the instance log: incus info <instance_name> --show-log Console log : Enter the following command to display the console log: incus console <instance_name> --show-log Reboot the machine that runs your Incus server. Try starting your instance again. If the error occurs again, compare the logs to check if it is the same error. If it is, and if you cannot figure out the source of the error from the log information, open a question in the . Make sure to include the log files you collected. In this example, let's investigate a RHEL 7 system in which `systemd` cannot start. ```{terminal} :input: incus console --show-log systemd Console log: Failed to insert module 'autofs4' Failed to insert module 'unix' Failed to mount sysfs at /sys: Operation not permitted Failed to mount proc at /proc: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing. ``` The errors here say that `/sys` and `/proc` cannot be mounted - which is correct in an unprivileged container. However, Incus mounts these file systems automatically if it can. The {doc}`container requirements <../container-environment>` specify that every container must come with an empty `/dev`, `/proc` and `/sys` directory, and that `/sbin/init` must exist. If those directories don't exist, Incus cannot mount them, and `systemd` will then try to do so. As this is an unprivileged container, `systemd` does not have the ability to do this, and it then freezes. So you can see the environment before anything is changed, and you can explicitly change the init system in a container using the `raw.lxc` configuration parameter. This is equivalent to setting `init=/bin/bash` on the Linux kernel command line. incus config set systemd raw.lxc 'lxc.init.cmd = /bin/bash' Here is what it looks like: ```{terminal} :input: incus config set systemd raw.lxc 'lxc.init.cmd = /bin/bash' :input: incus start systemd :input: incus console --show-log systemd Console log: [root@systemd /]# ``` Now that the container has started, you can check it and see that things are not running as well as expected: ```{terminal} :input: incus exec systemd -- bash [root@systemd ~]# ls [root@systemd ~]# mount mount: failed to read mtab: No such file or directory [root@systemd ~]# cd / [root@systemd /]# ls /proc/ sys [root@systemd /]# exit ``` Because Incus tries to auto-heal, it created some of the directories when it was starting up. Shutting down and restarting the container fixes the problem, but the original cause is still there - the template does not contain the required files." } ]
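In a case like this, once you know the template is missing `/dev`, `/proc` and `/sys`, you can repair the container and drop the temporary `raw.lxc` override. A possible cleanup sequence — assuming the missing directories were the only problem — is:

```bash
# Create the directories the template should have shipped with.
incus exec systemd -- mkdir -p /dev /proc /sys

# Remove the temporary init override and restart the container normally.
incus config unset systemd raw.lxc
incus restart systemd

# systemd should now boot; check the console log again if it does not.
incus console --show-log systemd
```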
{ "category": "Runtime", "file_name": "instances_troubleshoot.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(devices)= Devices are attached to an instance (see {ref}`instances-configure-devices`) or to a profile (see {ref}`profiles-edit`). They include, for example, network interfaces, mount points, USB and GPU devices. These devices can have instance device options, depending on the type of the instance device. Incus supports the following device types: | ID (database) | Name | Condition | Description | |:--|:|:-|:--| | 0 | | - | Inheritance blocker | | 1 | | - | Network interface | | 2 | | - | Mount point inside the instance | | 3 | | container | Unix character device | | 4 | | container | Unix block device | | 5 | | - | USB device | | 6 | | - | GPU device | | 7 | | container | InfiniBand device | | 8 | | container | Proxy device | | 9 | | container | Unix hotplug device | | 10 | | - | TPM device | | 11 | | VM | PCI device | Each instance comes with a set of {ref}`standard-devices`. ```{toctree} :maxdepth: 1 :hidden: ../reference/standard_devices.md ../reference/devices_none.md ../reference/devices_nic.md ../reference/devices_disk.md ../reference/devicesunixchar.md ../reference/devicesunixblock.md ../reference/devices_usb.md ../reference/devices_gpu.md ../reference/devices_infiniband.md ../reference/devices_proxy.md ../reference/devicesunixhotplug.md ../reference/devices_tpm.md ../reference/devices_pci.md ```" } ]
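As a quick illustration of how devices are attached in practice, the following adds a disk device to an instance and removes it again; the instance name `c1` and the host path `/opt/data` are placeholders:

```bash
# Attach a host directory to the instance as a disk device named "data".
incus config device add c1 data disk source=/opt/data path=/mnt/data

# List and inspect the devices attached to the instance.
incus config device list c1
incus config device show c1

# Detach the device again.
incus config device remove c1 data
```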
{ "category": "Runtime", "file_name": "devices.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Pull the code: ```shell $ git clone https://github.com/carina-io/carina.git ``` Building requires golang 1.17. Compile carina-controller / carina-node: ```shell $ make docker-build $ make release VERSION=v0.9 ``` Compile carina-scheduler. Carina-scheduler is an independent project, which is just placed under carina for now: ```shell $ cd scheduler $ make docker-build $ make release VERSION=v0.9 ``` How to run the e2e test cases: for local volume projects, it's not ideal to run e2e tests via KIND clusters. It's recommended to test carina on physical or virtual nodes. ```shell $ cd test/e2e $ make e2e ```" } ]
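Before building, it can help to confirm the toolchain matches the requirements above and, afterwards, that the images actually landed in the local Docker daemon (a quick sketch):

```bash
# Check the Go and Docker versions available on the build machine.
go version
docker version --format '{{.Server.Version}}'

# After `make docker-build` / `make release`, the carina images should show up locally.
docker images | grep carina
```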
{ "category": "Runtime", "file_name": "development.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "[TOC] Today, gVisor requires Linux. gVisor currently supports compatible processors. Preliminary support is also available for . No. gVisor is capable of running unmodified Linux binaries. gVisor supports Linux binaries. Binaries run in gVisor should be built for the or CPU architectures. Yes. Please see the . Yes. Please see the . See the [Production guide]. See the . If youre having problems running a container with `runsc` its most likely due to a compatibility issue or a missing feature in gVisor. See . You are using an older version of Linux which doesn't support `memfd_create`. This is tracked in . You're using an old version of Docker. See . For performance reasons, gVisor caches directory contents, and therefore it may not realize a new file was copied to a given directory. To invalidate the cache and force a refresh, create a file under the directory in question and list the contents again. As a workaround, shared root filesystem can be enabled. See . This bug is tracked in . Note that `kubectl cp` works because it does the copy by exec'ing inside the sandbox, and thus gVisor's internal cache is made aware of the new files and directories. Make sure that permissions is correct on the `runsc` binary. ```bash sudo chmod a+rx /usr/local/bin/runsc ``` If your Kernel is configured with YAMA LSM (see https://www.kernel.org/doc/Documentation/security/Yama.txt and https://man7.org/linux/man-pages/man2/ptrace.2.html) gVisor may fail in certain modes (i.e., systrap and/or directfs) with this error if `/proc/sys/kernel/yama/ptrace_scope` is set to 2. If this is the case, try setting `/proc/sys/kernel/yama/ptrace_scope` to max of mode 1: ```bash sudo cat /proc/sys/kernel/yama/ptrace_scope 2 sudo bash -c 'echo 1 > /proc/sys/kernel/yama/ptrace_scope' ``` There is a bug in Linux kernel versions 5.1 to 5.3.15, 5.4.2, and 5.5. Upgrade to a newer kernel or add the following to `/lib/systemd/system/containerd.service` as a" }, { "data": "``` LimitMEMLOCK=infinity ``` And run `systemctl daemon-reload && systemctl restart containerd` to restart containerd. See for more details. This error indicates that the Kubernetes CRI runtime was not set up to handle `runsc` as a runtime handler. Please ensure that containerd configuration has been created properly and containerd has been restarted. See the for more details. If you have ensured that containerd has been set up properly and you used kubeadm to create your cluster please check if Docker is also installed on that system. Kubeadm prefers using Docker if both Docker and containerd are installed. Please recreate your cluster and set the `--cri-socket` option on kubeadm commands. For example: ```bash kubeadm init --cri-socket=/var/run/containerd/containerd.sock ... ``` To fix an existing cluster edit the `/var/lib/kubelet/kubeadm-flags.env` file and set the `--container-runtime` flag to `remote` and set the `--container-runtime-endpoint` flag to point to the containerd socket. e.g. `/var/run/containerd/containerd.sock`. This is normally indicated by errors like `bad address 'container-name'` when trying to communicate to another container in the same network. Docker user defined bridge uses an embedded DNS server bound to the loopback interface on address 127.0.0.10. This requires access to the host network in order to communicate to the DNS server. runsc network is isolated from the host and cannot access the DNS server on the host network without breaking the sandbox isolation. 
There are a few different workarounds you can try: Use the default bridge network with `--link` to connect containers. The default bridge doesn't use embedded DNS. Use the host networking option in runsc, however beware that it will use the host network stack and is less secure. Use IPs instead of container names. Use Kubernetes. Container name lookup works fine in Kubernetes. This error may happen when using `gvisor-containerd-shim` with a `containerd` that does not contain the fix for [CVE-2020-15257]. To resolve the issue, update containerd to 1.3.9 or 1.4.3 (or newer versions respectively)." } ]
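As a rough illustration of the first DNS workaround listed above (default bridge plus `--link`), assuming `runsc` is registered with Docker under the runtime name `runsc`:

```bash
# Start a server container on the default bridge network.
docker run --runtime=runsc -d --name web nginx

# --link adds an /etc/hosts entry for "web", so the name resolves without
# needing the embedded DNS server of a user-defined bridge.
docker run --runtime=runsc --rm --link web:web busybox wget -qO- http://web
```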
{ "category": "Runtime", "file_name": "FAQ.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "This document shows how to deploy containers with Sysbox. It assumes Sysbox is already . To deploy containers with Sysbox, you typically use a container manager or orchestrator (e.g., Docker or Kubernetes) or any higher level tools built on top of these (e.g., Docker compose). Users do not normally interact with Sysbox directly, as it uses a low-level interface (though this is possible as shown for reference). Simply add the `--runtime=sysbox-runc` flag in the `docker run` command: ```console $ docker run --runtime=sysbox-runc --rm -it nestybox/ubuntu-bionic-systemd-docker root@my_cont:/# ``` You can choose any container image of your choice, whether that image carries a single microservice that you wish to run with strong isolation (via the Linux user-namespace), or whether it carries a full OS environment with systemd, Docker, etc. (as the container image in the above example does). The and have several examples. If you wish, you can configure Sysbox as the default runtime for Docker. This way you don't have to use the `--runtime` flag every time. To do this, see . NOTE: Almost all Docker functionality works with Sysbox, but there are a few exceptions. See the for further info. Assuming Sysbox is on the Kubernetes cluster, deploying pods with Sysbox is easy: you only need a couple of things in the pod spec. For example, here is a sample pod spec using the `ubuntu-bionic-systemd-docker` image. It creates a rootless pod that runs systemd as init (pid 1) and comes with Docker (daemon + CLI) inside: ```yaml apiVersion: v1 kind: Pod metadata: name: ubu-bio-systemd-docker annotations: io.kubernetes.cri-o.userns-mode: \"auto:size=65536\" spec: runtimeClassName: sysbox-runc containers: name: ubu-bio-systemd-docker image: registry.nestybox.com/nestybox/ubuntu-bionic-systemd-docker command: [\"/sbin/init\"] restartPolicy: Never ``` There are two key pieces of the pod's spec that tie it to Sysbox: \"runtimeClassName\": Tells K8s to deploy the pod with Sysbox (rather than the default OCI runc). The pods will be scheduled only on the nodes that support Sysbox. \"io.kubernetes.cri-o.userns-mode\": Tells CRI-O to launch this as a rootless pod (i.e., root user in the pod maps to an unprivileged user on the host) and to allocate a range of 65536 Linux user-namespace user and group IDs. This is required for Sysbox pods. Also, for Sysbox pods you typically want to avoid sharing the process (pid) namespace between containers in a pod. Thus, avoid setting `shareProcessNamespace: true` in the pod's spec, especially if the pod carries systemd inside (as otherwise systemd won't be pid 1 in the pod and will" }, { "data": "Depending on the size of the pod's image, it may take several seconds for the pod to deploy on the node. Once the image is downloaded on a node however, deployment should be very quick (few seconds). Here is a similar example, but this time using a Kubernetes Deployment: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: syscont-deployment labels: app: syscont spec: replicas: 4 selector: matchLabels: app: syscont template: metadata: labels: app: syscont annotations: io.kubernetes.cri-o.userns-mode: \"auto:size=65536\" spec: runtimeClassName: sysbox-runc containers: name: ubu-bio-systemd-docker image: registry.nestybox.com/nestybox/ubuntu-bionic-systemd-docker command: [\"/sbin/init\"] ``` See the of the User Guide. The prior examples used the `ubuntu-bionic-systemd-docker`, but you can use any container image you want. Sysbox places no requirements on the container image. 
Nestybox has several images which you can find in the . Those same images are in the . We usually rely on `registry.nestybox.com` as an image front-end so that Docker image pulls are forwarded to the most suitable repository without impacting our users. Refer to the . NOTE: This is not the usual way to deploy containers with Sysbox as the interface is low-level (it's defined by the OCI runtime spec). However, we include it here for reference because it's useful when wishing to control the configuration of the system container at the lowest level. As the root user, follow these steps: Create a rootfs image for the system container: ```console ``` Create the OCI spec (i.e., `config.json` file) for the system container: ```console ``` Launch the system container. Choose a unique name for the container and run: ```console ``` Use `sysbox-runc --help` command for help on all commands supported. A couple of tips: In step (1): If the `config.json` does not specify the Linux user namespace, the container's rootfs should be owned by `root:root`. If the `config.json` does specify the Linux user namespace and associated user-ID and group-ID mappings, the container's rootfs should be owned by the corresponding user-ID and group-ID. In addition, make sure you have support for either or ID-mapped mounts (kernel >= 5.12) in your host. Without these, host files mounted into the container will show up with `nobody:nogroup` ownership. Feel free to modify the system container's `config.json` to your needs. But note that Sysbox ignores a few of the OCI directives in this file (refer to the for details). We currently only support the above methods to run containers with Sysbox. However, other OCI-based container managers will most likely work." } ]
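The low-level `sysbox-runc` steps above only outline the rootfs and `config.json` preparation. One common way to obtain a rootfs for experimentation — a sketch, not the only option — is to export the filesystem of an existing Docker image:

```bash
# Export the filesystem of an existing image into a bundle's rootfs/ directory.
mkdir -p bundle/rootfs
docker create --name tmp-rootfs alpine
docker export tmp-rootfs | tar -C bundle/rootfs -xf -
docker rm tmp-rootfs

# The bundle then needs an OCI config.json next to rootfs/ (step 2 above), with the
# ownership of rootfs/ adjusted to match any user-namespace mappings you configure.
```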
{ "category": "Runtime", "file_name": "deploy.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "First, you need to make sure you have a healthy Kubernetes(1.20+) cluster and have the permissions to create Kata pods. WARNING If you select a `K8S` with lower version, It cannot ensure that it will work well. The `CSI driver` is deployed as a `daemonset` and the pods of the `daemonset` contain 4 containers: `Kata Direct Volume CSI Driver`, which is the key implementation in it The easiest way to deploy the `Direct Volume CSI driver` is to run the `deploy.sh` script for the Kubernetes version used by the cluster as shown below for Kubernetes 1.28.2. ```shell sudo deploy/deploy.sh ``` You'll get an output similar to the following, indicating the application of `RBAC rules` and the successful deployment of `csi-provisioner`, `node-driver-registrar`, `kata directvolume csi driver`(`csi-kata-directvol-plugin`), liveness-probe. Please note that the following output is specific to Kubernetes 1.28.2. ```shell Creating Namespace kata-directvolume ... kubectl apply -f /tmp/tmp.kN43BWUGQ5/kata-directvol-ns.yaml namespace/kata-directvolume created Namespace kata-directvolume created Done ! Applying RBAC rules ... curl https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v3.6.0/deploy/kubernetes/rbac.yaml --output /tmp/tmp.kN43BWUGQ5/rbac.yaml --silent --location kubectl apply -f ./kata-directvolume/kata-directvol-rbac.yaml serviceaccount/csi-provisioner created clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created role.rbac.authorization.k8s.io/external-provisioner-cfg created rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created $ ./directvol-deploy.sh deploying kata directvolume components ./kata-directvolume/csi-directvol-driverinfo.yaml csidriver.storage.k8s.io/directvolume.csi.katacontainers.io created ./kata-directvolume/csi-directvol-plugin.yaml kata-directvolume plugin using image: registry.k8s.io/sig-storage/csi-provisioner:v3.6.0 kata-directvolume plugin using image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0 kata-directvolume plugin using image: localhost/kata-directvolume:v1.0.52 kata-directvolume plugin using image: registry.k8s.io/sig-storage/livenessprobe:v2.8.0 daemonset.apps/csi-kata-directvol-plugin created ./kata-directvolume/kata-directvol-ns.yaml namespace/kata-directvolume unchanged ./kata-directvolume/kata-directvol-rbac.yaml serviceaccount/csi-provisioner unchanged clusterrole.rbac.authorization.k8s.io/external-provisioner-runner configured clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role unchanged role.rbac.authorization.k8s.io/external-provisioner-cfg unchanged rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg unchanged NAMESPACE NAME READY STATUS RESTARTS AGE default pod/kata-driectvol-01 1/1 Running 0 3h57m kata-directvolume pod/csi-kata-directvol-plugin-92smp 4/4 Running 0 4s kube-flannel pod/kube-flannel-ds-vq796 1/1 Running 1 (67d ago) 67d kube-system pod/coredns-66f779496c-9bmp2 1/1 Running 3 (67d ago) 67d kube-system pod/coredns-66f779496c-qlq6d 1/1 Running 1 (67d ago) 67d kube-system pod/etcd-tnt001 1/1 Running 19 (67d ago) 67d kube-system pod/kube-apiserver-tnt001 1/1 Running 5 (67d ago) 67d kube-system pod/kube-controller-manager-tnt001 1/1 Running 8 (67d ago) 67d kube-system pod/kube-proxy-p9t6t 1/1 Running 6 (67d ago) 67d kube-system pod/kube-scheduler-tnt001 1/1 Running 8 (67d ago) 67d NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 
kata-directvolume daemonset.apps/csi-kata-directvol-plugin 1 1 1 1 1 <none> 4s kube-flannel daemonset.apps/kube-flannel-ds 1 1 1 1 1 <none> 67d kube-system daemonset.apps/kube-proxy 1 1 1 1 1" }, { "data": "67d ``` First, ensure all expected pods are running properly, including `csi-provisioner`, `node-driver-registrar`, `kata-directvolume` `csi driver(csi-kata-directvol-plugin)`, liveness-probe: ```shell $ kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE default csi-kata-directvol-plugin-dlphw 4/4 Running 0 68m kube-flannel kube-flannel-ds-vq796 1/1 Running 1 (52d ago) 52d kube-system coredns-66f779496c-9bmp2 1/1 Running 3 (52d ago) 52d kube-system coredns-66f779496c-qlq6d 1/1 Running 1 (52d ago) 52d kube-system etcd-node001 1/1 Running 19 (52d ago) 52d kube-system kube-apiserver-node001 1/1 Running 5 (52d ago) 52d kube-system kube-controller-manager-node001 1/1 Running 8 (52d ago) 52d kube-system kube-proxy-p9t6t 1/1 Running 6 (52d ago) 52d kube-system kube-scheduler-node001 1/1 Running 8 (52d ago) 52d ``` From the root directory, deploy the application pods including a storage class, a `PVC`, and a pod which uses direct block device based volume. The details can be seen in `/examples/pod-with-directvol/*.yaml`: ```shell kubectl apply -f ${BASE_DIR}/csi-storageclass.yaml kubectl apply -f ${BASE_DIR}/csi-pvc.yaml kubectl apply -f ${BASE_DIR}/csi-app.yaml ``` Let's validate the components are deployed: ```shell $ kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE kata-directvolume csi-kata-directvol-plugin-dlphw 4/4 Running 0 68m default kata-driectvol-01 1/1 Running 0 67m $ kubectl get sc,pvc -A NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE storageclass.storage.k8s.io/csi-kata-directvolume-sc directvolume.csi.katacontainers.io Delete Immediate false 71m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/csi-directvolume-pvc Bound pvc-d7644547-f850-4bdf-8c93-aa745c7f31b5 1Gi RWO csi-kata-directvolume-sc 71m ``` Finally, inspect the application pod `kata-driectvol-01` which running with direct block device based volume: ```shell $ kubectl describe po kata-driectvol-01 Name: kata-driectvol-01 Namespace: kata-directvolume Priority: 0 Runtime Class Name: kata Service Account: default Node: node001/10.10.1.19 Start Time: Sat, 09 Dec 2023 23:06:49 +0800 Labels: <none> Annotations: <none> Status: Running IP: 10.244.0.232 IPs: IP: 10.244.0.232 Containers: first-container: Container ID: containerd://c5eec9d645a67b982549321f382d83c56297d9a2a705857e8f3eaa6c6676908e Image: ubuntu:22.04 Image ID: docker.io/library/ubuntu@sha256:2b7412e6465c3c7fc5bb21d3e6f1917c167358449fecac8176c6e496e5c1f05f Port: <none> Host Port: <none> Command: sleep 1000000 State: Running Started: Sat, 09 Dec 2023 23:06:51 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /data from kata-driectvol0-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zs9tm (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kata-driectvol0-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: csi-directvolume-pvc ReadOnly: false kube-api-access-zs9tm: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: 
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: <none> ```" } ]
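Beyond `kubectl describe`, a quick way to confirm that the direct block volume is actually mounted and writable inside the application pod is to poke at `/data` from within it (the pod name is the one used in this example; add `-n <namespace>` if the pod is not in your current namespace):

```bash
# Show the filesystem backing /data inside the pod.
kubectl exec -it kata-driectvol-01 -- df -h /data

# Write and read back a file to verify the volume is usable.
kubectl exec -it kata-driectvol-01 -- sh -c 'echo hello > /data/probe && cat /data/probe'
```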
{ "category": "Runtime", "file_name": "deploy-csi-kata-directvol.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This document provides info on mounting storage into Sysbox containers. Sysbox containers support all Docker storage mount types: . For bind mounts in particular, Sysbox leverages the Linux feature (kernel >= 5.12) or alternatively the kernel module (available on Ubuntu, Debian, and Flatcar) to ensure that the host files that are bind-mounted into the container show up with proper user-ID and group-ID inside the container. See the for more info on this. For example, if we have a host directory called `my-host-dir` where files are owned by users in the range \\[0:65536], and that directory is bind mounted into a system container as follows: ```console $ docker run --runtime=sysbox-runc -it --mount type=bind,source=my-host-dir,target=/mnt/my-host-dir alpine ``` then Sysbox will setup and ID-mapped mount on `my-host-dir` (or alternatively mount shiftfs on it), causing the files to show up with the same ownership (\\[0:65536]) inside the container, even though the container's user-IDs and group-IDs are mapped to a completely different set of IDs on the host (e.g., 100000->165536). This way, users need not worry about what host IDs are mapped into the container via the Linux user-namespace. Sysbox takes care of setting things up so that the bind mounted files show up with proper permissions inside the container. This makes it possible for Sysbox containers to share files with the host or with other containers, even if these have independent user-namespace ID mappings (as assigned by Sysbox-EE for extra isolation). Note that if neither ID-mapped mounts or shiftfs are present in your host, then host files mounted into the Sysbox container will show up as owned by `nobody:nogroup` inside the container. Kubernetes supports several for mounting into pods. Pods launched with Kubernetes + Sysbox (aka Sysbox pods) support several of these volume types, though we've not yet verified" }, { "data": "The following volume types are known to work with Sysbox: ConfigMap EmptyDir gcePersistentDisk hostPath local secret subPath Other volume types may also work, though Nestybox has not tested them. Note that Sysbox must mount the volumes with ID-mapping in order for them to show up with proper permissions inside the Sysbox container, using either the ID-mapped mounts or shiftfs mechanisms (see the prior section). This may create incompatibilities with some Kubernetes volume types (other than those listed above). If you find such an incompatibility, please file an issue in the . The following spec creates a Sysbox pod with ubuntu-bionic + systemd + Docker and mounts host directory `/root/somedir` into the pod's `/mnt/host-dir`. ```yaml apiVersion: v1 kind: Pod metadata: name: ubu-bio-systemd-docker annotations: io.kubernetes.cri-o.userns-mode: \"auto:size=65536\" spec: runtimeClassName: sysbox-runc containers: name: ubu-bio-systemd-docker image: registry.nestybox.com/nestybox/ubuntu-bionic-systemd-docker command: [\"/sbin/init\"] volumeMounts: mountPath: /mnt/host-dir name: host-vol restartPolicy: Never volumes: name: host-vol hostPath: path: /root/somedir type: Directory ``` When this pod is deployed, Sysbox will automatically setup ID shifting on the pod's `/mnt/host-dir` (either with ID-mapped mounts or with shiftfs). As a result that directory will show up with proper user-ID and group-ID ownership inside the pod. To share storage between Sysbox containers, simply mount the storage to the containers. 
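For example — a minimal sketch, where `/path/to/shared` stands in for whatever host directory you want both containers to see:

```bash
# Launch two system containers that bind-mount the same host directory.
docker run --runtime=sysbox-runc -d --name syscont1 \
  --mount type=bind,source=/path/to/shared,target=/mnt/shared \
  nestybox/ubuntu-bionic-systemd-docker

docker run --runtime=sysbox-runc -d --name syscont2 \
  --mount type=bind,source=/path/to/shared,target=/mnt/shared \
  nestybox/ubuntu-bionic-systemd-docker

# A file written from one container is visible in the other (and on the host).
docker exec syscont1 touch /mnt/shared/hello
docker exec syscont2 ls -l /mnt/shared
```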
Even though each Sysbox container may use different user-namespace ID mappings, Sysbox will leverage the ID-mapped mounts or shiftfs mechanisms to ensure the containers see the storage with a consistent set of filesystem user-ID and group-IDs. When mounting storage into a container, note the following: Any files or directories mounted into the container are writable from within the container (unless the mount is \"read-only\"). Furthermore, when the container's root user writes to these mounted files or directories, it will do so as if it were the root user on the host. In other words, be aware that files or directories mounted into the container are not isolated from the container. Thus, make sure to only mount host files / directories into the container when it's safe to do so." } ]
{ "category": "Runtime", "file_name": "storage.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "oep-number: NDM 0001 title: Uniquely identify disk in cluster authors: \"@akhilerm\" owners: \"@gila\" \"@kmova\" \"@vishnuitta\" editor: \"@akhilerm\" creation-date: 2019-07-05 last-updated: 2020-09-01 status: provisional * * [User Stories(#user-stories) * * * This proposal brings out the design details to implement a Disk identification solution which can be used to uniquely identify a disk across the cluster. Uniquely identifying a disk across a cluster is a major challenge which needs to be tackled. The identification is necessary, due to the fact that disk can move from node to node and even attached to different ports on the same node. The unique identification solution will help the consumers of NDM to track the disk and thus the data movement within the cluster. Configurable device detection mechanism A unique disk identification mechanism that will work across the cluster on disks having at least a GPT label Should be able to identify same disk attached at multiple paths. Should be able to generate separate IDs for partitions Identification of disk attached to 2 nodes at the same time NDM should be able to generate a unique id for a physical disk attached to the node and this id should not change even if the disk is attached at a different port on the same host or moved to a different node or altogether moved to a different cluster. NDM should be able to generate a unique id for Virtual Disks(which may not have all relevant information in VPD). This id should also persist between restarts of the node, attaching to a different machine or a different SCSI port of the same machine. The implementation details for the disk identification. The current implementation works well for physical disks which have a unique WWN and model assigned by the manufacturer. But the implementation fails for `Virtual_disk`, because the fields are provider dependent and are not guaranteed to be unique. The following data is fetched from the disk: `ID_WWN` `ID_MODEL` `IDSERIALSHORT` `ID_VENDOR` `ID_TYPE` If the `IDMODEL` is not `Virtualdisk` or `EphemeralDisk`: field 1-4 are appended md5 hash of the result is generated which will be UID for the disk by NDM If the `IDTYPE` is empty or `IDMODEL` is `Virtual_disk` or `EphemeralDisk`: There is a chance that the values in fields 1-4 will be either empty or all the same depending on the provider. fields 1-4 are appended. Along with it, the hostname and DEVNAME(`/dev/sda`) is appended to the result md5 hash of the result is generated which will be UID for the disk by NDM In case of virtual disks, since we are using the DEVNAME for generating UID, every time the same disk comes at a different path, the UID gets changed. This results in getting the device identified as a new disk. This will lead to data corruption as the disk that was being claimed by some users has now came up with a new UID, but with the old data. Also, since this disk is now in an Unclaimed state, it will be available for others to claim possibly leading to data loss If the disk has WWN, then a combination of WWN and Serial will be used, else a GPT partition will be created on the" }, { "data": "This method describes the algorithm used to generate UUID for different device types: `disk` type Check if the device has a WWN, if yes then use the combination of WWN and Serial to generate UUID Else, Check if the disk has a Filesystem label available on it. This is in cases where the complete disk is formatted with a filesystem without a partition table. 
If filesystem label is available we use it for generating the UUID. If WWN and FS label is not available on the disk, we create a GPT partition table on the disk, and then create a single partition which spans the complete disk. This new partition will be consumed by the users. This will help to identify a virtual disk for movements across the cluster also. `partition` type The Partition entry ID (IDPARTENTRY_UUID) will be used to generate the UUID. This will be further configurable, so that users can specify how many partition to be created etc. This partitions will be automatically by NDM during start-up itself. Data Cleanup: NDM takes care of data clean up, after a BD is released from a BDC. i.e. a complete wipe of the disk will be done `wipefs -fa`. Since in case of NDM created GPT labels, only partition is being wiped, it won't cause the labels to be removed. ``` +--+ +-v-+ +--+ +-+ |if DEVTYPE | | | | Check | | Create GPT | | equals | No | If | No | FS label | No | partition | |\"partition\"|->+ WWN is present +->+ +-->+ table and | |use PART | | | | | | one partition | |ENTRY ID | | | | | | spanning the disk | +-++ ++ +--+ +-+ | | | | | Yes |Yes | | | | |Yes | | | | | +-v+ | | | | | | | | |Use md5 of | | | | | generated | | | | | UID |<++- | | | | +--+ ``` The algorithm used for upgrading from devices using old UUID to new UUID is mentioned NDM should be able to uniquely identify the disks, across reboots, across different SCSI ports and anywhere within the cluster. The unique identification can be marked successful, if NDM detects the disk, and the user is able to retrieve his data from the disk. A mechanism to seamlessly upgrade from old UUID to new UUID. Owner acceptance of `Summary` and `Motivation` sections - 2020-02-29 Agreement on `Proposal` section - 2020-02-29 Date implementation started - 2020-02-29 First OpenEBS release where an initial version of this OEP was available - OpenEBS 1.10 Version of OpenEBS where this OEP graduated to general availability - OpenEBS 2.0 If this OEP was retired or superseded - YYYYMMDD Do not have a mechanism to identify disks that are attached in multipath mode. Cannot identify disks and create BDs if the same disk is connected to 2 nodes at the same time. The partition created on the disk for identification will remain even after NDM is uninstalled from the node. Consumers of NDM will not be able to create a partition on blockdevices on which NDM already created a partition for identification. Test setup with different types of disks in different configurations to test and ensure the unique ID generation process." } ]
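To make the identification scheme described above more concrete, the snippet below shows the kind of inputs involved. It is only an illustration of the idea (hashing the WWN/serial udev properties), not the exact concatenation NDM uses internally, and `/dev/sdb` is a placeholder device:

```bash
DISK=/dev/sdb   # example device

# Pull the udev properties that the proposal keys on.
udevadm info --query=property --name="$DISK" | grep -E '^ID_(WWN|SERIAL_SHORT|MODEL|VENDOR|TYPE)='

# Sketch: hash WWN+serial when a WWN exists; virtual disks without a WWN fall back
# to the filesystem-label / GPT-partition path described above.
WWN=$(udevadm info --query=property --name="$DISK" | sed -n 's/^ID_WWN=//p')
SERIAL=$(udevadm info --query=property --name="$DISK" | sed -n 's/^ID_SERIAL_SHORT=//p')
[ -n "$WWN" ] && printf '%s%s' "$WWN" "$SERIAL" | md5sum
```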
{ "category": "Runtime", "file_name": "20190705-ndm-disk-identification.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "A: Each controller should only reconcile one object type. Other affected objects should be mapped to a single type of root object, using the `EnqueueRequestForOwner` or `EnqueueRequestsFromMapFunc` event handlers, and potentially indices. Then, your Reconcile method should attempt to reconcile all state for that given root objects. A: You should not. Reconcile functions should be idempotent, and should always reconcile state by reading all the state it needs, then writing updates. This allows your reconciler to correctly respond to generic events, adjust to skipped or coalesced events, and easily deal with application startup. The controller will enqueue reconcile requests for both old and new objects if a mapping changes, but it's your responsibility to make sure you have enough information to be able clean up state that's no longer referenced. A: There are several different approaches that can be taken, depending on your situation. When you can, take advantage of optimistic locking: use deterministic names for objects you create, so that the Kubernetes API server will warn you if the object already exists. Many controllers in Kubernetes take this approach: the StatefulSet controller appends a specific number to each pod that it creates, while the Deployment controller hashes the pod template spec and appends that. In the few cases when you cannot take advantage of deterministic names (e.g. when using generateName), it may be useful in to track which actions you took, and assume that they need to be repeated if they don't occur after a given time (e.g. using a requeue result). This is what the ReplicaSet controller does. In general, write your controller with the assumption that information will eventually be correct, but may be slightly out of date. Make sure that your reconcile function enforces the entire state of the world each time it runs. If none of this works for you, you can always construct a client that reads directly from the API server, but this is generally considered to be a last resort, and the two approaches above should generally cover most circumstances. A: The fake client , but we generally recommend using to test against a real API server. In our experience, tests using fake clients gradually re-implement poorly-written impressions of a real API server, which leads to hard-to-maintain, complex test code. Use the aforementioned to spin up a real API server instead of trying to mock one out. Structure your tests to check that the state of the world is as you expect it, not that a particular set of API calls were made, when working with Kubernetes APIs. This will allow you to more easily refactor and improve the internals of your controllers without changing your tests. Remember that any time you're interacting with the API server, changes may have some delay between write time and reconcile time. A: You're probably missing a fully-set-up Scheme. Schemes record the mapping between Go types and group-version-kinds in Kubernetes. In general, your application should have its own Scheme containing the types from the API groups that it needs (be they Kubernetes types or your own). See the [scheme builder docs](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/scheme) for more information." } ]
{ "category": "Runtime", "file_name": "FAQ.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "The most reliable way to build flannel is by using Docker. To build flannel in a container, run `make dist/flanneld-amd64`. You will now have a `flanneld-amd64` binary in the `dist` directory. If you're not running `amd64`, then you need to manually set `ARCH` before running `make`. For example, to produce a `flanneld-s390x` binary and image, run `ARCH=s390x make image`. If you want to cross-compile for a different platform (e.g. you're running `amd64` but you want to produce `arm` binaries), then you need the qemu-static binaries to be present in `/usr/bin`. They can be installed on Ubuntu with `sudo apt-get install qemu-user-static`. Then you should be able to set the ARCH as above: `ARCH=arm make image`. To build the multi-arch image of flannel locally, you need to install . Then you can use the following target: ``` make build-multi-arch ``` If you don't already have a builder running locally, you can use this target to start it: ``` make buildx-create-builder ``` See the for more details. To run the end-to-end tests locally, you need to install . Make sure you have the required dependencies installed on your machine. On Ubuntu, run `sudo apt-get install linux-libc-dev golang gcc`. If the installed golang version is not 1.7 or higher, download the newest golang and install it manually. To build flannel.exe on Windows, mingw-w64 is also needed; run `sudo apt-get install mingw-w64`. On Fedora/RedHat, run `sudo yum install kernel-headers golang gcc glibc-static`. Git clone the flannel repo. It MUST be placed in your GOPATH under `github.com/flannel-io/flannel`: `cd $GOPATH/src; git clone https://github.com/flannel-io/flannel.git` Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld` for Linux usage. Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld.exe` for Windows usage." } ]
{ "category": "Runtime", "file_name": "building.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "- - - - - - - MicroVM snapshotting is a mechanism through which a running microVM and its resources can be serialized and saved to an external medium in the form of a `snapshot`. This snapshot can be later used to restore a microVM with its guest workload at that particular point in time. \\[!WARNING\\] The Firecracker snapshot feature is in on all CPU micro-architectures listed in . See for more info. A Firecracker microVM snapshot can be used for loading it later in a different Firecracker process, and the original guest workload is being simply resumed. The original guest which the snapshot is created from, should see no side effects from this process (other than the latency introduced by the snapshot creation process). Both network and vsock packet loss can be expected on guests that are resumed from snapshots in another Firecracker process. It is also not guaranteed that the state of the network connections survives the process. In order to make restoring possible, Firecracker snapshots save the full state of the following resources: the guest memory, the emulated HW state (both KVM and Firecracker emulated HW). The state of the components listed above is generated independently, which brings flexibility to our snapshotting support. This means that taking a snapshot results in multiple files that are composing the full microVM snapshot: the guest memory file, the microVM state file, zero or more disk files (depending on how many the guest had; these are managed by the users). The design allows sharing of memory pages and read only disks between multiple microVMs. When loading a snapshot, instead of loading at resume time the full contents from file to memory, Firecracker creates a of the memory file, resulting in runtime on-demand loading of memory pages. Any subsequent memory writes go to a copy-on-write anonymous memory mapping. This has the advantage of very fast snapshot loading times, but comes with the cost of having to keep the guest memory file around for the entire lifetime of the resumed microVM. The Firecracker snapshot design offers a very simple interface to interact with snapshots but provides no functionality to package or manage them on the host. The states that the host, host/API communication and snapshot files are trusted by Firecracker. To ensure a secure integration with the snapshot functionality, users need to secure snapshot files by implementing authentication and encryption schemes while managing their lifecycle or moving them across the trust boundary, like for example when provisioning them from a repository to a host over the network. Firecracker is optimized for fast load/resume, and it's designed to do some very basic sanity checks only on the vm state file. It only verifies integrity using a 64-bit CRC value embedded in the vm state file, but this is only a partial measure to protect against accidental corruption, as the disk files and memory file need to be secured as well. It is important to note that CRC computation is validated before trying to load the snapshot. Should it encounter failure, an error will be shown to the user and the Firecracker process will be terminated. The Firecracker snapshot create/resume performance depends on the memory size, vCPU count and emulated devices count. The Firecracker CI runs snapshot tests on all . 
The snapshot functionality is still in developer preview due to the following: Poor entropy and replayable randomness when resuming multiple microvms from the same" }, { "data": "We do not recommend to use snapshotting in production if there is no mechanism to guarantee proper secrecy and uniqueness between guests. Please see . Currently on aarch64 platforms only lower 128 bits of any register are saved due to the limitations of `get/setonereg` from `kvm-ioctls` crate that Firecracker uses to interact with KVM. This creates an issue with newer aarch64 CPUs with support for registers with width greater than 128 bits, because these registers will be truncated before being stored in the snapshot. This can lead to uVM failure if restored from such snapshot. Because registers wider than 128 bits are usually used in SVE instructions, the best way to mitigate this issue is to ensure that the software run in uVM does not use SVE instructions during snapshot creation. An alternative way is to use to disable SVE related features in uVM. High snapshot latency on 5.4+ host kernels due to cgroups V1. We strongly recommend to deploy snapshots on cgroups V2 enabled hosts for the implied kernel versions - . Guest network connectivity is not guaranteed to be preserved after resume. For recommendations related to guest network connectivity for clones please see . Vsock device does not have full snapshotting support. Please see . Snapshotting on arm64 works for both GICv2 and GICv3 enabled guests. However, restoring between different GIC version is not possible. If a is not used on x86_64, overwrites of `MSRIA32TSX_CTRL` MSR value will not be preserved after restoring from a snapshot. Resuming from a snapshot that was taken during early stages of the guest kernel boot might lead to crashes upon snapshot resume. We suggest that users take snapshot after the guest microVM kernel has booted. Please see . Fresh Firecracker microVMs are booted using `anonymous` memory, while microVMs resumed from snapshot load memory on-demand from the snapshot and copy-on-write to anonymous memory. Resuming from a snapshot is optimized for speed, while taking a snapshot involves some extra CPU cycles for synchronously writing dirty memory pages to the memory snapshot file. Taking a snapshot of a fresh microVM, on which dirty pages tracking is not enabled, results in the full contents of guest memory being written to the snapshot. The memory file and microVM state file are generated by Firecracker on snapshot creation. The disk contents are not explicitly flushed to their backing files. The API calls exposing the snapshotting functionality have clear Prerequisites that describe the requirements on when/how they should be used. The Firecracker microVM's MMDS config is included in the snapshot. However, the data store is not persisted across snapshots. Configuration information for metrics and logs are not saved to the snapshot. These need to be reconfigured on the restored microVM. The microVM state snapshot file uses a data format that has a version in the form of `MAJOR.MINOR.PATCH`. Each Firecracker binary supports a fixed version of the snapshot data format. When creating a snapshot, Firecracker will use the supported data format version. When loading snapshots, Firecracker will check that the snapshot version is compatible with the version it supports. More information about the snapshot data format and details about snapshot data format versions can be found at . 
Firecracker exposes the following APIs for manipulating snapshots: `Pause`, `Resume` and `CreateSnapshot` can be called only after booting the microVM, while `LoadSnapshot` is allowed only before boot. To create a snapshot, first you have to pause the running microVM and its vCPUs with the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PATCH 'http://localhost/vm' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"state\": \"Paused\" }' ``` Prerequisites: The microVM is" }, { "data": "Successive calls of this request keep the microVM in the `Paused` state. Effects: on success: microVM is guaranteed to be `Paused`. on failure: no side-effects. Now that the microVM is paused, you can create a snapshot, which can be either a `full`one or a `diff` one. Full snapshots always create a complete, resume-able snapshot of the current microVM state and memory. Diff snapshots save the current microVM state and the memory dirtied since the last snapshot (full or diff). Diff snapshots are not resume-able, but can be merged into a full snapshot. In this context, we will refer to the base as the first memory file created by a `/snapshot/create` API call and the layer as a memory file created by a subsequent `/snapshot/create` API call. The order in which the snapshots were created matters and they should be merged in the same order in which they were created. To merge a `diff` snapshot memory file on top of a base, users should copy its content over the base. This can be done using the `rebase-snap` (deprecated) or `snapshot-editor` tools provided with the firecracker release: `rebase-snap` (deprecated) example: ```bash rebase-snap --base-file path/to/base --diff-file path/to/layer ``` `snapshot-editor` example: ```bash snapshot-editor edit-memory rebase \\ --memory-path path/to/base \\ --diff-path path/to/layer ``` After executing the command above, the base would be a resumable snapshot memory file describing the state of the memory at the moment of creation of the layer. More layers which were created later can be merged on top of this base. This process needs to be repeated for each layer until the one describing the desired memory state is merged on top of the base, which is constantly updated with information from previously merged layers. Please note that users should not merge state files which resulted from `/snapshot/create` API calls and they should use the state file created in the same call as the memory file which was merged last on top of the base. For creating a full snapshot, you can use the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/snapshot/create' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"snapshot_type\": \"Full\", \"snapshotpath\": \"./snapshotfile\", \"memfilepath\": \"./mem_file\", }' ``` Details about the required and optional fields can be found in the . Note: If the files indicated by `snapshotpath` and `memfile_path` don't exist at the specified paths, then they will be created right before generating the snapshot. If they exist, the files will be truncated and overwritten. Prerequisites: The microVM is `Paused`. Effects: on success: The file indicated by `snapshotpath` (e.g. `/path/to/snapshotfile`) contains the devices' model state and emulation state. The one indicated by `memfilepath`(e.g. `/path/to/mem_file`) contains a full copy of the guest memory. 
The generated snapshot files are immediately available to be used (current process releases ownership). At this point, the block devices backing files should be backed up externally by the user. Please note that block device contents are only guaranteed to be committed/flushed to the host FS, but not necessarily to the underlying persistent storage (could still live in host FS cache). If diff snapshots were enabled, the snapshot creation resets then the dirtied page bitmap and marks all pages clean (from a diff snapshot point of view). on failure: no side-effects. Notes: The separate block device file components of the snapshot have to be handled by the user. For creating a diff snapshot, you should use the same API command, but with `snapshot_type` field set to `Diff`. Note: If not specified, `snapshot_type` is by default `Full`. ```bash curl --unix-socket" }, { "data": "-i \\ -X PUT 'http://localhost/snapshot/create' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"snapshot_type\": \"Diff\", \"snapshotpath\": \"./snapshotfile\", \"memfilepath\": \"./mem_file\", }' ``` Prerequisites: The microVM is `Paused`. Note: On a fresh microVM, `trackdirtypages` field should be set to `true`, when configuring the `/machine-config` resource, while on a snapshot loaded microVM, `enablediffsnapshots` from `PUT /snapshot/load`request body, should be set. Effects: on success: The file indicated by `snapshot_path` contains the devices' model state and emulation state, same as when creating a full snapshot. The one indicated by `memfilepath` contains this time a diff copy of the guest memory; the diff consists of the memory pages which have been dirtied since the last snapshot creation or since the creation of the microVM, whichever of these events was the most recent. All the other effects mentioned in the Effects paragraph from Creating full snapshots section apply here. on failure: no side-effects. Note: This is an example of an API command that enables dirty page tracking: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/machine-config' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"vcpu_count\": 2, \"memsizemib\": 1024, \"smt\": false, \"trackdirtypages\": true }' ``` Enabling this support enables KVM dirty page tracking, so it comes at a cost (which consists of CPU cycles spent by KVM accounting for dirtied pages); it should only be used when needed. Creating a snapshot will not influence state, will not stop or end the microVM, it can be used as before, so the microVM can be resumed if you still want to use it. At this point, in case you plan to continue using the current microVM, you should make sure to also copy the disk backing files. You can resume the microVM by sending the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PATCH 'http://localhost/vm' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"state\": \"Resumed\" }' ``` Prerequisites: The microVM is `Paused`. Successive calls of this request are ignored (microVM remains in the running state). Effects: on success: microVM is guaranteed to be `Resumed`. on failure: no side-effects. 
If you want to load a snapshot, you can do that only before the microVM is configured (the only resources that can be configured prior are the Logger and the Metrics systems) by sending the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/snapshot/load' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"snapshotpath\": \"./snapshotfile\", \"mem_backend\": { \"backendpath\": \"./memfile\", \"backend_type\": \"File\" }, \"enablediffsnapshots\": true, \"resume_vm\": false }' ``` The `backend_type` field represents the memory backend type used for loading the snapshot. Accepted values are: `File` - rely on the kernel to handle page faults when loading the contents of the guest memory file into memory. `Uffd` - use a dedicated user space process to handle page faults that occur for the guest memory range. Please refer to for more details on handling page faults in the user space. The meaning of `backendpath` depends on the `backendtype` chosen: if using `File`, then `backend_path` should contain the path to the snapshot's memory file to be loaded. when using `Uffd`, `backend_path` refers to the path of the unix domain socket used for communication between Firecracker and the user space process that handles page faults. When relying on the OS to handle page faults, the command below is also accepted. Note that `memfilepath` field is currently under the deprecation policy. `memfilepath` and `mem_backend` are mutually exclusive, therefore specifying them both at the same time will return an error. ```bash curl --unix-socket" }, { "data": "-i \\ -X PUT 'http://localhost/snapshot/load' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"snapshotpath\": \"./snapshotfile\", \"memfilepath\": \"./mem_file\", \"enablediffsnapshots\": true, \"resume_vm\": false }' ``` Details about the required and optional fields can be found in the . Prerequisites: A full memory snapshot and a microVM state file must be provided. The disk backing files, network interfaces backing TAPs and/or vsock backing socket that were used for the original microVM's configuration should be set up and accessible to the new Firecracker process (in which the microVM is resumed). These host-resources need to be accessible at the same relative paths to the new Firecracker process as they were to the original one. Effects: on success: The complete microVM state is loaded from snapshot into the current Firecracker process. It then resets the dirtied page bitmap and marks all pages clean (from a diff snapshot point of view). The loaded microVM is now in the `Paused` state, so it needs to be resumed for it to run. The memory file (pointed by `backend_path` when using `File` backend type, or pointed by `memfilepath`) must be considered immutable from Firecracker and host point of view. It backs the guest OS memory for read access through the page cache. External modification to this file corrupts the guest memory and leads to undefined behavior. The file indicated by `snapshot_path`, that is used to load from, is released and no longer used by this process. If `enablediffsnapshots` is set, then diff snapshots can be taken afterwards. If `resume_vm` is set, the vm is automatically resumed if load is successful. on failure: A specific error is reported and then the current Firecracker process is ended (as it might be in an invalid state). 
Notes: Please keep in mind that you can only create a `diff` snapshot if you set `enable_diff_snapshots` to true when loading a snapshot, or `track_dirty_pages` to true when configuring the machine on a fresh microVM. Also, `track_dirty_pages` is not saved when creating a snapshot, so you need to explicitly set `enable_diff_snapshots` when sending the `LoadSnapshot` command if you want to be able to take diff snapshots from a loaded microVM. Another thing to be aware of: on a microVM that can create diff snapshots, a full snapshot produces a memory file that contains the whole guest memory, while a diff snapshot produces a sparse file that only contains the guest's dirtied pages. With these in mind, some possible snapshotting scenarios are the following: `Boot from a fresh microVM` -> `Pause` -> `Create snapshot` -> `Resume` -> `Pause` -> `Create snapshot` -> ... ; `Boot from a fresh microVM` -> `Pause` -> `Create snapshot` -> `Resume` -> `Pause` -> `Resume` -> ... -> `Pause` -> `Create snapshot` -> ... ; `Load snapshot` -> `Resume` -> `Pause` -> `Create snapshot` -> `Resume` -> `Pause` -> `Create snapshot` -> ... ; `Load snapshot` -> `Resume` -> `Pause` -> `Create snapshot` -> `Resume` -> `Pause` -> `Resume` -> ... -> `Pause` -> `Create snapshot` -> ... ; where `Create snapshot` can refer to either a full or a diff snapshot for all the aforementioned flows. It is also worth knowing that a microVM restored from a snapshot will be resumed with the guest OS wall-clock continuing from the moment of the snapshot creation. For this reason, the wall-clock should be updated to the current time on the guest side. More details on how you could do this can be found at a Depending on VM memory size, snapshots can consume a lot of disk space. Firecracker integrators must ensure that the provisioned disk space is sufficient for normal operation of their service as well as during failure scenarios. If the service exposes the snapshot triggers to customers, integrators must enforce proper disk quotas to avoid any DoS threats that would cause the service to fail or function abnormally. For recommendations related to continued network connectivity for multiple clones created from a single Firecracker microVM snapshot, please see . When snapshots are used in such a manner that a given guest's state is resumed from more than once, guest information assumed to be unique may in fact not be; this information can include identifiers, random numbers and random number seeds, the guest OS entropy pool, as well as cryptographic tokens. Without a strong mechanism that enables users to guarantee that unique things stay unique across snapshot restores, we consider resuming execution from the same state more than once insecure. For more information please see ```console Boot microVM A -> ... -> Create snapshot S -> Terminate -> Load S in microVM B -> Resume -> ... ``` Here, microVM A terminates after creating the snapshot without ever resuming work, and a single microVM B resumes execution from snapshot S. In this case, unique identifiers, random numbers, and cryptographic tokens that are meant to be used once are indeed only used once. In this example, we consider microVM B secure. ```console Boot microVM A -> ... -> Create snapshot S -> Resume -> ... -> Load S in microVM B -> Resume -> ... ``` Here, both microVM A and B do work starting from the state stored in snapshot S. 
Unique identifiers, random numbers, and cryptographic tokens that are meant to be used once may be used twice. It doesn't matter if microVM A is terminated before microVM B resumes execution from snapshot S or not. In this example, we consider both microVMs insecure as soon as microVM A resumes execution. ```console Boot microVM A -> ... -> Create snapshot S -> ... -> Load S in microVM B -> Resume -> ... -> Load S in microVM C -> Resume -> ... [...] ``` Here, both microVM B and C do work starting from the state stored in snapshot S. Unique identifiers, random numbers, and cryptographic tokens that are meant to be used once may be used twice. It doesn't matter at which points in time microVMs B and C resume execution, or if microVM A terminates or not after the snapshot is created. In this example, we consider microVMs B and C insecure, and we also consider microVM A insecure if it resumes execution. Virtual Machine Generation ID (VMGenID) is a virtual device that allows VM guests to detect when they have resumed from a snapshot. It works by exposing a cryptographically random 16-byte identifier to the guest. The VMM ensures that the value of the identifier changes every time a time shift happens in the lifecycle of the VM, e.g. when it resumes from a snapshot. Linux supports VMGenID since version 5.18. When Linux detects a change in the identifier, it uses its value to reseed its internal PRNG. Moreover, the Linux VMGenID driver also emits a uevent to user space. User space processes can monitor this uevent to detect snapshot resume events. Firecracker supports the VMGenID device on x86 platforms. Firecracker will always enable the device. During snapshot resume, Firecracker will update the 16-byte generation ID and inject a notification in the guest before resuming its vCPUs. As a result, guests that run Linux versions >= 5.18 will re-seed their in-kernel PRNG upon snapshot resume. User space applications can rely on the guest kernel for randomness. State other than the guest kernel entropy pool, such as unique identifiers, cached random numbers, cryptographic tokens, etc., will still be replicated across multiple microVMs resumed from the same snapshot. Users need to implement mechanisms for ensuring de-duplication of such state, where needed. On guests that run Linux versions >= 6.8, users can make use of the uevent that the VMGenID driver emits upon resuming from a snapshot, to be notified about snapshot resume events. Vsock must be inactive during snapshot creation. The Vsock device can break if snapshotted while having active connections. Firecracker snapshots do not capture any in-flight network or vsock (through the Linux unix domain socket backend) traffic that has left or not yet entered Firecracker. The above, coupled with the fact that the Vsock control protocol is not resilient to vsock packet loss, leads to Vsock device breakage when doing a snapshot while there are active Vsock connections. As a solution to the above issue, active Vsock connections prior to snapshotting the VM are forcibly closed by sending a specific event called `VIRTIO_VSOCK_EVENT_TRANSPORT_RESET`. The event is sent on `SnapshotCreate`. On `SnapshotResume`, when the VM becomes active again, the vsock driver closes all existing connections. Listen sockets still remain active. Users wanting to build vsock applications that use the snapshot capability have to take this into consideration. More details about this event can be found in the official Virtio document, section 5.10.6.6 Device Events. 
Firecracker handles sending the `reset` event to the vsock driver, thus customers are no longer responsible for closing active connections. During snapshot resume, Firecracker updates the 16-byte generation ID of the VMGenID device and injects an interrupt in the guest before resuming vCPUs. If the snapshot was taken at the very early stages of the guest kernel boot process, proper interrupt handling might not be in place yet. As a result, the kernel might not be able to handle the injected notification and might crash. We suggest that users take snapshots only after the guest kernel has completed booting, to avoid this issue. We have a mechanism in place to experiment with snapshot compatibility across supported host kernel versions by generating snapshot artifacts through and checking devices' functionality using . The microVM snapshotted is built from . The test restores the snapshot and ensures that all the devices set up in the configuration file (network devices, disk, vsock, balloon and MMDS) are operational post-load. In those tests the instance is fixed, except for some combinations where we also test across the same CPU family (Intel x86, Gravitons). In general, cross-CPU snapshots are not supported. The table below reflects the snapshot compatibility observed on the AWS instances we support. all means all currently supported Intel/AMD/ARM metal instances (m6g, m7g, m5n, c5n, m6i, m6a). It does not mean cross-instance, i.e. a snapshot taken on m6i won't work on an m6g instance. | CPU family | taken on host kernel | restored on host kernel | working? | | - | - | - | - | | x86_64 | 4.14 | 5.10 | Y | | all | 5.10 | 4.14 | N | | all | 5.10 | 6.1 | Y | | all | 6.1 | 5.10 | Y | What doesn't work: Graviton 4.14 \<-> 5.10 does not restore due to register incompatibility. Intel 5.10 -> 4.14 does not restore because of unresponsive net devices. AMD m6a 5.10 -> 4.14 does not restore due to a mismatch in MSRs" } ]
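Because restore compatibility depends on the host kernel and CPU, one possible convention (an integrator-side sketch, not a Firecracker feature) is to record host details next to the snapshot artifacts so an orchestrator can refuse to restore on an incompatible host:

```bash
# Record host kernel and CPU model alongside the snapshot artifacts.
uname -r > ./snapshot_file.host-kernel
grep -m1 'model name' /proc/cpuinfo > ./snapshot_file.host-cpu   # x86; adjust for ARM hosts

# Before restoring elsewhere, compare the recorded kernel against the target host.
diff <(uname -r) ./snapshot_file.host-kernel \
    || echo "host kernel differs - consult the compatibility table above"
```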
{ "category": "Runtime", "file_name": "snapshot-support.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "The `config` subcommand prints the configuration of each rkt stage in JSON on the standard output. The general structure is a simple hierarchy consisting of the following top-level element: ``` { \"stage0\": [...] } ``` The entry \"stage0\" refers to stage-specific configuration; \"stage1\" is currently left out intentionally because its configuration subsystem is subject to change. The generated output are valid configuration entries as specified in the configuration documentation. The \"stage0\" entry contains subentries of rktKind \"auth\", \"dockerAuth\", \"paths\", and \"stage1\". Note that the `config` subcommand will output separate entries per \"auth\" domain and separate entries per \"dockerAuth\" registry. While it is possible to specify an array of strings in the input configuration rkt internally merges configuration state from different directories potentially creating multiple entries. Consider the following system configuration: ``` $ cat /etc/rkt/auth.d/basic.json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [ \"foo.com\", \"bar.com\", \"baz.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"sysUser\", \"password\": \"sysPassword\" } } ``` And the following user configuration: ``` $ ~/.config/rkt/auth.d/basic.json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [ \"foo.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"user\", \"password\": \"password\" } } ``` The `config` subcommand would generate the following separate merged entries: ``` { \"stage0\": [ { \"rktVersion\": \"v1\", \"rktKind\": \"auth\", \"domains\": [ \"bar.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"sysUser\", \"password\": \"sysPassword\" } }, { \"rktVersion\": \"v1\", \"rktKind\": \"auth\", \"domains\": [ \"baz.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"sysUser\", \"password\": \"sysPassword\" } }, { \"rktVersion\": \"v1\", \"rktKind\": \"auth\", \"domains\": [ \"foo.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"user\", \"password\": \"password\" } } ] } ``` In the example given above the user configuration entry for the domain \"foo.com\" overrides the system configuration entry leaving the entries \"bar.com\" and \"baz.com\" unchanged. The `config` subcommand output creates three separate entries for \"foo.com\", \"bar.com\", and \"baz.com\". Note: While the \"bar.com\", and \"baz.com\" entries in the example given above could be merged into one entry they are still being printed separate. This behavior is subject to change, future implementations may provide a merged output. ``` $ rkt config { \"stage0\": [ { \"rktVersion\": \"v1\", \"rktKind\": \"auth\", \"domains\": [ \"bar.com\" ], \"type\": \"oauth\", \"credentials\": { \"token\": \"someToken\" } }, { \"rktVersion\": \"v1\", \"rktKind\": \"auth\", \"domains\": [ \"foo.com\" ], \"type\": \"basic\", \"credentials\": { \"user\": \"user\", \"password\": \"userPassword\" } }, { \"rktVersion\": \"v1\", \"rktKind\": \"paths\", \"data\": \"/var/lib/rkt\", \"stage1-images\": \"/usr/lib/rkt\" }, { \"rktVersion\": \"v1\", \"rktKind\": \"stage1\", \"name\": \"coreos.com/rkt/stage1-coreos\", \"version\": \"0.15.0+git\", \"location\": \"/usr/libexec/rkt/stage1-coreos.aci\" } ] } ```" } ]
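Because the command prints plain JSON, its output can be post-processed with standard tools. For example, a small sketch using `jq` (assuming it is installed) to extract the merged credentials for a single domain:

```bash
rkt config | jq '
  .stage0[]
  | select(.rktKind == "auth")
  | select(.domains | index("foo.com"))
  | .credentials'
```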
{ "category": "Runtime", "file_name": "config.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "This document defines project governance for the project. The CNI project employs \"organization voting\" to ensure no single organization can dominate the project. Individuals not associated with or employed by a company or organization are allowed one organization vote. Each company or organization (regardless of the number of maintainers associated with or employed by that company/organization) receives one organization vote. In other words, if two maintainers are employed by Company X, two by Company Y, two by Company Z, and one maintainer is an un-affiliated individual, a total of four \"organization votes\" are possible; one for X, one for Y, one for Z, and one for the un-affiliated individual. Any maintainer from an organization may cast the vote for that organization. For formal votes, a specific statement of what is being voted on should be added to the relevant github issue or PR, and a link to that issue or PR added to the maintainers meeting agenda document. Maintainers should indicate their yes/no vote on that issue or PR, and after a suitable period of time, the votes will be tallied and the outcome noted. New maintainers are proposed by an existing maintainer and are elected by a 2/3 majority organization vote. Maintainers can be removed by a 2/3 majority organization vote. Non-specification-related PRs may be merged after receiving at least two organization votes. Changes to the CNI Specification also follow the normal PR approval process (eg, 2 organization votes), but any maintainer can request that the approval require a 2/3 majority organization vote. Maintainers will be added to the containernetworking GitHub organization and added to the GitHub cni-maintainers team, and made a GitHub maintainer of that team. After 6 months a maintainer will be made an \"owner\" of the GitHub organization. All changes in Governance require a 2/3 majority organization vote. Unless specified above, all other changes to the project require a 2/3 majority organization vote. Additionally, any maintainer may request that any change require a 2/3 majority organization vote." } ]
{ "category": "Runtime", "file_name": "GOVERNANCE.md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "The following document guides you through the upgrade process for Piraeus Operator from version 1 (\"v1\") to version 2 (\"v2\"). Piraeus Operator v2 offers improved convenience and customization. This however made it necessary to make significant changes to the way Piraeus Operator manages Piraeus Datastore. As such, upgrading from v1 to v2 is a procedure requiring manual oversight. Upgrading Piraeus Operator is done in four steps: [Step 1]: (Optional) Migrate the LINSTOR database to use the `k8s` backend. [Step 2]: Collect information about the current deployment. [Step 3]: Remove the Piraeus Operator v1 deployment, keeping existing volumes untouched. [Step 4]: Deploy Piraeus Operator v2 using the information gathered in step 2. This guide assumes: You used Helm to create the original deployment and are familiar with upgrading Helm deployments. Your Piraeus Datastore deployment is up-to-date with the latest v1 release. Check the releases . You have the following command line tools available: - You are familiar with the `linstor` command line utility, specifically to verify the cluster state. The resources `LinstorController`, `LinstorSatelliteSet` and `LinstorCSIDriver` have been replaced by and . The default deployment runs the LINSTOR Satellite in the . The migration script will propose changing to the host network. In Operator v1, all labels on the Kubernetes node resource were replicated on the satellite, making them usable in the `replicasOnSame` and `replicasOnDifferent` parameters on the storage class. In Operator v2, only the following labels are automatically synchronized. `kubernetes.io/hostname` `topology.kubernetes.io/region` `topology.kubernetes.io/zone` Use to synchronize additional labels. The following settings are applied by Operator v2 cluster-wide: DrbdOptions/Net/rr-conflict: retry-connect DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary DrbdOptions/Resource/on-no-data-accessible: suspend-io DrbdOptions/auto-quorum: suspend-io Operator v2 also includes a deployment to prevent stuck nodes caused by suspended DRBD devices." } ]
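Before removing the v1 deployment, it can help to save the v1 resources referenced above for later inspection. A sketch using `kubectl`; the plural resource names are assumed from the v1 CRD names and may differ in your cluster:

```shell
kubectl get linstorcontrollers,linstorsatellitesets,linstorcsidrivers \
    --all-namespaces -o yaml > piraeus-v1-resources.yaml
kubectl get storageclasses -o yaml > piraeus-v1-storageclasses.yaml
```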
{ "category": "Runtime", "file_name": "index.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "See for more information. Traffic that gets rejected due to network policy enforcements gets logged by kube-router using the iptables NFLOG target under group 100. The simplest way to observe the packets dropped by kube-router is to run tcpdump on the `nflog:100` interface, e.g. `tcpdump -i nflog:100 -n`. You can also configure ulogd to monitor dropped packets in the desired output format. Please see for an example configuration to set up a stack to log packets." } ]
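A minimal `ulogd2` configuration sketch for writing NFLOG group 100 to a plain-text file; the stack and plugin names follow common ulogd2 conventions and the log path is an assumption:

```
[global]
stack=log1:NFLOG,base1:BASE,ifi1:IFINDEX,ip2str1:IP2STR,print1:PRINTPKT,emu1:LOGEMU

[log1]
group=100

[emu1]
file="/var/log/ulogd-kube-router-drops.log"
sync=1
```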
{ "category": "Runtime", "file_name": "observability.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "All iam policy controlled tests go under `nodetests/iampolicies`, then tests are split into different projects like `cloudserver` and `backbeat` which need to interact with `Vault` for iam policy check. ``` node_tests backbeat cloudserver iam_policies cloudserver AssumeRole.js AssumeRoleWithWebIdentity.js IAMUser.js backbeat ... smoke_tests ui utils ... ``` Under each project ex. cloudserver, we have 3 test files that represent 3 different entities that need iam permissions to perform operations. IAM user AssumeRole session user AssumeRoleWithWebIdentity session user In each test file, the test frameworks are defined generically for all s3 APIs' permission tests. For example, in `AssumeRoleWithWebIdentity.js`, the test process is like this: Create an account, use this account to create a bucket, and put an object into it. Get web token from keycloak for Storage Manager user, Storage Account Owner user or Data Consumer user. Assume Storage Manager role, Storage Account Owner role or Data Consumer role using the web token. Use the temporary credentials returned by AssumeRoleWWI to perform the action you want to test and check if the response is expected. Whenever we need to add iam permission tests for a new API, we just add API in , and also define its with minimum required parameters(query and body). We follow the AWS standard, so we can always refer to for more details about API request syntax. Note: Instead of checking if the response is a success, we only check code not equal to AccessDenied. Because permission check is the first error returned except for missing required parameters error, if we can provide a minimum required query and body for requests, we don't necessarily have to provide the exact correct context and params to get a successful response. It doesn't matter if we get other errors like MalFormedACLError or NoSuchCORSConfiguration etc., because these errors always happen after checking its permission. For Backbeat tests please see All with their latest version Docker Kubectl Helm Tilt Kind m4 git Zenko-operator deployment will need to access private images from the scality registry. 
You will need to connect to Harbor, then go into your user profile to generate a CLI secret, which you will user as password Then login from CLI, using your registry Username and the CLI secret you just generated as password: ```shell $ docker login ghcr.io User name: <Username on github> Password: <Github personal access token> ``` ```shell $ git clone https://github.com/scality/zenko-operator.git $ cd zenko-operator ``` Edit file" }, { "data": "by adding this at the end of the file ```yaml initialConfiguration: locations: name: us-east-1 type: location-file-v1 s3API: endpoints: hostname: s3.zenko.local locationName: us-east-1 ``` Edit file `/doc/examples/zenkoversion-1.2-dev.yaml` by changing the versions For example: ```diff vault: image: ghcr.io/scality/vault2 tag: '8.3.6' tag: '8.4.0' shell: image: busybox ``` Run the script: `./hack/scripts/bootstrap-kind-dev.sh` <hr style=\"border:2px solid gray\"/> ```shell $ export ADMINACCESSKEY=$(kubectl get secret dev-management-vault-admin-creds.v1 -n default -o jsonpath='{.data.accessKey}' | base64 -d) ``` ```shell $ export ADMINSECRETKEY=$(kubectl get secret dev-management-vault-admin-creds.v1 -n default -o jsonpath='{.data.secretKey}' | base64 -d) ``` ```shell $ sudo vi /etc/hosts 127.0.0.1 iam.zenko.local ui.zenko.local s3-local-file.zenko.local keycloak.zenko.local sts.zenko.local management.zenko.local s3.zenko.local ``` ```shell $ ADMINACCESSKEYID=$ADMINACCESSKEY ADMINSECRETACCESSKEY=$ADMINSECRETKEY ./bin/vaultclient create-account --name account --email acc@ount.fr --host iam.zenko.local --port 80 $ ADMINACCESSKEYID=$ADMINACCESSKEY ADMINSECRETACCESSKEY=$ADMINSECRETKEY ./bin/vaultclient generate-account-access-key --name account --host iam.zenko.local --port 80 ``` ```shell $ export ZENKOACCESSKEY=<access key generated previously> $ export ZENKOSECRETKEY=<secret key generated previously> $ export CLOUDSERVER_ENDPOINT=http://s3.zenko.local:80 $ export VAULT_ENDPOINT=http://iam.zenko.local:80 $ export VAULTSTSENDPOINT=http://sts.zenko.local:80 $ export CLOUDSERVER_HOST=s3.zenko.local #No http and port here ``` ```shell $ git clone git@github.com:scality/zenko $ cd zenko/tests/node_tests $ yarn install $ yarn run testiampolicies ``` ```shell $ kubectl get secret mongodb-db-creds -o jsonpath={.data.mongodb-username} | base64 -d $ kubectl get secret mongodb-db-creds -o jsonpath={.data.mongodb-password} | base64 -d ``` ```shell $ kubectl port-forward dev-db-mongodb-primary-0 27021:27017 ``` Connect to `localhost:27021` with database `admin` and `username/password` got from above using your local MongoDB GUI (Robo3T, MongoDB Compass, etc...) docker redis mongodb image from scality/ci-mongo A local Vault cloned repository with your ongoing modifications ```shell $ docker run -d --net=host --name ci-mongo scality/ci-mongo ``` First, cd to the Vault repository ```shell $ cd <vaultrepositoryfolder>/.github/docker/keycloak ``` Then build your Keycloak image: ```shell $ docker build -t keycloak . 
``` Create a configuration file for Keycloak: ```shell $ cat <<EOF > env.list KEYCLOAK_REALM=myrealm KEYCLOAKCLIENTID=myclient KEYCLOAK_USERNAME=bartsimpson KEYCLOAK_PASSWORD=123 KEYCLOAKUSERFIRSTNAME=Bart KEYCLOAKUSERLASTNAME=Simpson EOF ``` Finally, you can run your keycloak image locally: ```shell $ docker run -p 8443:8443 -p 8080:8080 --env-file env.list -it -e KEYCLOAKUSER=admin -e KEYCLOAKPASSWORD=admin keycloak ``` add the following in `config.json` file under Vault root folder: ```json \"jwks\": { \"interval\": 300, \"issuer\": \"http://localhost:8080/auth/realms/myrealm\" }, \"oidcProvider\": \"http://localhost:8080/auth/realms/myrealm\" ``` ```shell $ VAULTDBBACKEND=MONGODB yarn start ``` ```shell $ S3METADATA=mongodb REMOTEMANAGEMENTDISABLE=1 S3BACKEND=mem S3VAULT=multiple node index.js ``` Under vault root folder ```shell $ ADMINACCESSKEYID=\"D4IT2AWSB588GO5J9T00\" ADMINSECRETACCESSKEY=\"UEEu8tYlsOGGrgf4DAiSZD6apVNPUWqRiPG0nTB6\" ./node_modules/vaultclient/bin/vaultclient create-account -name account --email acc@ount.fr --port 8600 $ ADMINACCESSKEYID=\"D4IT2AWSB588GO5J9T00\" ADMINSECRETACCESSKEY=\"UEEu8tYlsOGGrgf4DAiSZD6apVNPUWqRiPG0nTB6\" ./node_modules/vaultclient/bin/vaultclient generate-account-access-key --name account --port 8600 ``` ```shell $ KEYCLOAKTESTHOST=http://localhost \\ KEYCLOAKTESTPORT=8080 \\ KEYCLOAKTESTREALM_NAME=myrealm \\ KEYCLOAKTESTCLIENT_ID=myclient \\ CLOUDSERVER_ENDPOINT=http://127.0.0.1:8000 \\ CLOUDSERVER_HOST=127.0.0.1 \\ CLOUDSERVER_PORT=8000 \\ VAULTSTSENDPOINT=http://127.0.0.1:8800 \\ VAULT_ENDPOINT=http://127.0.0.1:8600 \\ ZENKOACCESSKEY=<account access key generated previously> \\ ZENKOSECRETKEY=<account secret key generated previously> \\ ADMINACCESSKEY_ID=D4IT2AWSB588GO5J9T00 \\ ADMINSECRETACCESS_KEY=UEEu8tYlsOGGrgf4DAiSZD6apVNPUWqRiPG0nTB6 \\ yarn testiampolicies ``` Make sure the keycloak host and port envs are the same as what configured before in Vault , either http://localhost:8080 or https://localhost:8443" } ]
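Tying this back to the framework description at the top of this document, where each S3 API is declared with its minimum required query and body parameters, the following is a purely hypothetical sketch of such an entry; the file layout and names are assumptions, not the actual repository code:

```js
// Hypothetical API definition entry consumed by the generic permission tests.
// The query and body are the minimum required parameters for the request,
// following the AWS request syntax for the API under test.
const apiDefinitions = {
    PutBucketCors: {
        method: 'PUT',
        query: { cors: '' },
        body: '<CORSConfiguration><CORSRule><AllowedMethod>GET</AllowedMethod>' +
            '<AllowedOrigin>*</AllowedOrigin></CORSRule></CORSConfiguration>',
    },
};

module.exports = apiDefinitions;
```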
{ "category": "Runtime", "file_name": "README.md", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "A new feature for loading kernel modules was introduced in Kata Containers 1.9. The list of kernel modules and their parameters can be provided using the configuration file or OCI annotations. The gives that information to the through gRPC when the sandbox is created. The will insert the kernel modules using `modprobe(8)`, hence modules dependencies are resolved automatically. The sandbox will not be started when: A kernel module is specified and the `modprobe(8)` command is not installed in the guest or it fails loading the module. The module is not available in the guest or it doesn't meet the guest kernel requirements, like architecture and version. In the following sections are documented the different ways that exist for loading kernel modules in Kata Containers. ``` NOTE: Use this method, only if you need to pass the kernel modules to all containers. Please use annotations described below to set per pod annotations. ``` The list of kernel modules and parameters can be set in the `kernel_modules` option as a coma separated list, where each entry in the list specifies a kernel module and its parameters. Each list element comprises one or more space separated fields. The first field specifies the module name and subsequent fields specify individual parameters for the module. The following example specifies two modules to load: `e1000e` and `i915`. Two parameters are specified for the `e1000` module: `InterruptThrottleRate` (which takes an array of integer values) and `EEE` (which requires a single integer value). ```toml kernel_modules=[\"e1000e InterruptThrottleRate=3000,3000,3000 EEE=1\", \"i915\"] ``` Not all the container managers allow users provide custom annotations, hence this is the only way that Kata Containers provide for loading modules when custom annotations are not supported. There are some limitations with this approach: Write access to the Kata configuration file is required. The configuration file must be updated when a new container is created, otherwise the same list of modules is used, even if they are not needed in the container. As was mentioned above, not all containers need the same modules, therefore using the configuration file for specifying the list of kernel modules per can be a pain. Unlike the configuration file, provide a way to specify custom configurations per POD. The list of kernel modules and parameters can be set using the annotation `io.katacontainers.config.agent.kernel_modules` as a semicolon separated list, where the first word of each element is considered as the module name and the rest as its parameters. In the following example two PODs are created, but the kernel modules `e1000e` and `i915` are inserted only in the POD `pod1`. ```yaml apiVersion: v1 kind: Pod metadata: name: pod1 annotations: io.katacontainers.config.agent.kernel_modules: \"e1000e EEE=1; i915\" spec: runtimeClassName: kata containers: name: c1 image: busybox command: sh stdin: true tty: true apiVersion: v1 kind: Pod metadata: name: pod2 spec: runtimeClassName: kata containers: name: c2 image: busybox command: sh stdin: true tty: true ``` Note: To pass annotations to Kata containers," } ]
{ "category": "Runtime", "file_name": "how-to-load-kernel-modules-with-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "Calico uses Kubernetes style APIs. To add a new API to Calico, or to add a new field to an existing API, use the following steps. For most cases, a new API field on an existing API is all that is needed. To add a new API field: Update the structures in the Run `make generate` to update generated code and CRDs. Add the new logic for your field, including . Add unit tests for your field. Start by opening a GitHub issue or design document to design the feature. Consider the following: What component(s) need to know about this resource? What is the correct abstraction for this feature? Is there an existing API that makes sense for this feature instead? Agree on a design for the new API. Read and follow the for new APIs. Get your proposed API reviewed. Add the new structure to the in its own go file. Include kubebuilder where appropriate. Run code and CRD generation - `make generate` Add client code to libcalico-go for the new API, using existing resources as a template. https://github.com/projectcalico/calico/tree/master/libcalico-go/lib/clientv3 https://github.com/projectcalico/calico/blob/master/libcalico-go/lib/backend/k8s/k8s.go Add unit tests for the API, using existing ones as a template. Add CRUD commands and tests to calicoctl using existing ones as a template. If felix or confd needs the new resource, add it to either the or respectively." } ]
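As an illustration of the first step, here is a sketch of a kubebuilder-annotated field; the struct and field below are hypothetical and only show the style used by the v3 API types, not actual Calico code:

```go
package v3

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ExampleSpec is a hypothetical spec struct; in practice you would add the
// field to an existing structure under libcalico-go/lib/apis/v3.
type ExampleSpec struct {
	// NewFeatureTimeout is a hypothetical example field. The marker below is
	// picked up when regenerating code and CRDs with `make generate`.
	// +optional
	NewFeatureTimeout *metav1.Duration `json:"newFeatureTimeout,omitempty" validate:"omitempty"`
}
```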
{ "category": "Runtime", "file_name": "adding-an-api.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "Cobra can generate shell completions for multiple shells. The currently supported shells are: Bash Zsh fish PowerShell Cobra will automatically provide your program with a fully functional `completion` command, similarly to how it provides the `help` command. If you do not wish to use the default `completion` command, you can choose to provide your own, which will take precedence over the default one. (This also provides backwards-compatibility with programs that already have their own `completion` command.) If you are using the `cobra-cli` generator, which can be found at , you can create a completion command by running ```bash cobra-cli add completion ``` and then modifying the generated `cmd/completion.go` file to look something like this (writing the shell script to stdout allows the most flexible use): ```go var completionCmd = &cobra.Command{ Use: \"completion [bash|zsh|fish|powershell]\", Short: \"Generate completion script\", Long: fmt.Sprintf(`To load completions: Bash: $ source <(%[1]s completion bash) $ %[1]s completion bash > /etc/bash_completion.d/%[1]s $ %[1]s completion bash > $(brew --prefix)/etc/bash_completion.d/%[1]s Zsh: $ echo \"autoload -U compinit; compinit\" >> ~/.zshrc $ %[1]s completion zsh > \"${fpath[1]}/_%[1]s\" fish: $ %[1]s completion fish | source $ %[1]s completion fish > ~/.config/fish/completions/%[1]s.fish PowerShell: PS> %[1]s completion powershell | Out-String | Invoke-Expression PS> %[1]s completion powershell > %[1]s.ps1 `,cmd.Root().Name()), DisableFlagsInUseLine: true, ValidArgs: []string{\"bash\", \"zsh\", \"fish\", \"powershell\"}, Args: cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs), Run: func(cmd *cobra.Command, args []string) { switch args[0] { case \"bash\": cmd.Root().GenBashCompletion(os.Stdout) case \"zsh\": cmd.Root().GenZshCompletion(os.Stdout) case \"fish\": cmd.Root().GenFishCompletion(os.Stdout, true) case \"powershell\": cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout) } }, } ``` Note: The cobra generator may include messages printed to stdout, for example, if the config file is loaded; this will break the auto-completion script so must be removed. Cobra provides a few options for the default `completion` command. To configure such options you must set the `CompletionOptions` field on the root command. To tell Cobra not to provide the default `completion` command: ``` rootCmd.CompletionOptions.DisableDefaultCmd = true ``` To tell Cobra to mark the default `completion` command as hidden: ``` rootCmd.CompletionOptions.HiddenDefaultCmd = true ``` To tell Cobra not to provide the user with the `--no-descriptions` flag to the completion sub-commands: ``` rootCmd.CompletionOptions.DisableNoDescFlag = true ``` To tell Cobra to completely disable descriptions for completions: ``` rootCmd.CompletionOptions.DisableDescriptions = true ``` The generated completion scripts will automatically handle completing commands and flags. However, you can make your completions much more powerful by providing information to complete your program's nouns and flag values. Cobra allows you to provide a pre-defined list of completion choices for your nouns using the `ValidArgs` field. For example, if you want `kubectl get ` to show a list of valid \"nouns\" you have to set them. Some simplified code from `kubectl get` looks like: ```go validArgs = []string{ \"pod\", \"node\", \"service\", \"replicationcontroller\" } cmd := &cobra.Command{ Use: \"get [(-o|--output=)json|yaml|template|...] 
(RESOURCE [NAME] | RESOURCE/NAME ...)\", Short: \"Display one or many resources\", Long: get_long, Example: get_example, Run: func(cmd *cobra.Command, args []string) { cobra.CheckErr(RunGet(f, out, cmd, args)) }, ValidArgs: validArgs, } ``` Notice we put the `ValidArgs` field on the `get`" }, { "data": "Doing so will give results like: ```bash $ kubectl get node pod replicationcontroller service ``` If your nouns have aliases, you can define them alongside `ValidArgs` using `ArgAliases`: ```go argAliases = []string { \"pods\", \"nodes\", \"services\", \"svc\", \"replicationcontrollers\", \"rc\" } cmd := &cobra.Command{ ... ValidArgs: validArgs, ArgAliases: argAliases } ``` The aliases are shown to the user on tab completion only if no completions were found within sub-commands or `ValidArgs`. In some cases it is not possible to provide a list of completions in advance. Instead, the list of completions must be determined at execution-time. In a similar fashion as for static completions, you can use the `ValidArgsFunction` field to provide a Go function that Cobra will execute when it needs the list of completion choices for the nouns of a command. Note that either `ValidArgs` or `ValidArgsFunction` can be used for a single cobra command, but not both. Simplified code from `helm status` looks like: ```go cmd := &cobra.Command{ Use: \"status RELEASE_NAME\", Short: \"Display the status of the named release\", Long: status_long, RunE: func(cmd *cobra.Command, args []string) { RunGet(args[0]) }, ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { if len(args) != 0 { return nil, cobra.ShellCompDirectiveNoFileComp } return getReleasesFromCluster(toComplete), cobra.ShellCompDirectiveNoFileComp }, } ``` Where `getReleasesFromCluster()` is a Go function that obtains the list of current Helm releases running on the Kubernetes cluster. Notice we put the `ValidArgsFunction` on the `status` sub-command. Let's assume the Helm releases on the cluster are: `harbor`, `notary`, `rook` and `thanos` then this dynamic completion will give results like: ```bash $ helm status harbor notary rook thanos ``` You may have noticed the use of `cobra.ShellCompDirective`. These directives are bit fields allowing to control some shell completion behaviors for your particular completion. You can combine them with the bit-or operator such as `cobra.ShellCompDirectiveNoSpace | cobra.ShellCompDirectiveNoFileComp` ```go // Indicates that the shell will perform its default behavior after completions // have been provided (this implies none of the other directives). ShellCompDirectiveDefault // Indicates an error occurred and completions should be ignored. ShellCompDirectiveError // Indicates that the shell should not add a space after the completion, // even if there is a single completion provided. ShellCompDirectiveNoSpace // Indicates that the shell should not provide file completion even when // no completion is provided. ShellCompDirectiveNoFileComp // Indicates that the returned completions should be used as file extension filters. // For example, to complete only files of the form .json or .yaml: // return []string{\"yaml\", \"json\"}, ShellCompDirectiveFilterFileExt // For flags, using MarkFlagFilename() and MarkPersistentFlagFilename() // is a shortcut to using this directive explicitly. // ShellCompDirectiveFilterFileExt // Indicates that only directory names should be provided in file completion. 
// For example: // return nil, ShellCompDirectiveFilterDirs // For flags, using MarkFlagDirname() is a shortcut to using this directive explicitly. // // To request directory names within another directory, the returned completions // should specify a single directory name within which to search. For example, // to complete directories within \"themes/\": // return []string{\"themes\"}, ShellCompDirectiveFilterDirs // ShellCompDirectiveFilterDirs // ShellCompDirectiveKeepOrder indicates that the shell should preserve the order // in which the completions are provided ShellCompDirectiveKeepOrder ``` *Note*: When using the `ValidArgsFunction`, Cobra will call your registered function after having parsed all flags and arguments provided in the command-line. You therefore don't need to do this parsing" }, { "data": "For example, when a user calls `helm status --namespace my-rook-ns `, Cobra will call your registered `ValidArgsFunction` after having parsed the `--namespace` flag, as it would have done when calling the `RunE` function. Cobra achieves dynamic completion through the use of a hidden command called by the completion script. To debug your Go completion code, you can call this hidden command directly: ```bash $ helm complete status har<ENTER> harbor :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` *Important:* If the noun to complete is empty (when the user has not yet typed any letters of that noun), you must pass an empty parameter to the `complete` command: ```bash $ helm complete status \"\"<ENTER> harbor notary rook thanos :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` Calling the `complete` command directly allows you to run the Go debugger to troubleshoot your code. You can also add printouts to your code; Cobra provides the following functions to use for printouts in Go completion code: ```go // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and optionally prints to stderr. cobra.CompDebug(msg string, printToStdErr bool) { cobra.CompDebugln(msg string, printToStdErr bool) // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and to stderr. cobra.CompError(msg string) cobra.CompErrorln(msg string) ``` *Important: You should not* leave traces that print directly to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned above. Most of the time completions will only show sub-commands. But if a flag is required to make a sub-command work, you probably want it to show up when the user types . You can mark a flag as 'Required' like so: ```go cmd.MarkFlagRequired(\"pod\") cmd.MarkFlagRequired(\"container\") ``` and you'll get something like ```bash $ kubectl exec -c --container= -p --pod= ``` As for nouns, Cobra provides a way of defining dynamic completion of flags. To provide a Go function that Cobra will execute when it needs the list of completion choices for a flag, you must register the function using the `command.RegisterFlagCompletionFunc()` function. 
```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"json\", \"table\", \"yaml\"}, cobra.ShellCompDirectiveDefault }) ``` Notice that calling `RegisterFlagCompletionFunc()` is done through the `command` with which the flag is associated. In our example this dynamic completion will give results like so: ```bash $ helm status --output json table yaml ``` You can also easily debug your Go completion code for flags: ```bash $ helm complete status --output \"\" json table yaml :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` *Important: You should not* leave traces that print to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned further" }, { "data": "To limit completions of flag values to file names with certain extensions you can either use the different `MarkFlagFilename()` functions or a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterFileExt`, like so: ```go flagName := \"output\" cmd.MarkFlagFilename(flagName, \"yaml\", \"json\") ``` or ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"yaml\", \"json\"}, ShellCompDirectiveFilterFileExt}) ``` To limit completions of flag values to directory names you can either use the `MarkFlagDirname()` functions or a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterDirs`, like so: ```go flagName := \"output\" cmd.MarkFlagDirname(flagName) ``` or ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return nil, cobra.ShellCompDirectiveFilterDirs }) ``` To limit completions of flag values to directory names within another directory you can use a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterDirs` like so: ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"themes\"}, cobra.ShellCompDirectiveFilterDirs }) ``` Cobra provides support for completion descriptions. Such descriptions are supported for each shell (however, for bash, it is only available in the ). For commands and flags, Cobra will provide the descriptions automatically, based on usage information. For example, using zsh: ``` $ helm s[tab] search -- search for a keyword in charts show -- show information of a chart status -- displays the status of the named release ``` while using fish: ``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) ``` Cobra allows you to add descriptions to your own completions. Simply add the description text after each completion, following a `\\t` separator. This technique applies to completions returned by `ValidArgs`, `ValidArgsFunction` and `RegisterFlagCompletionFunc()`. 
For example: ```go ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"harbor\\tAn image registry\", \"thanos\\tLong-term metrics\"}, cobra.ShellCompDirectiveNoFileComp }} ``` or ```go ValidArgs: []string{\"bash\\tCompletions for bash\", \"zsh\\tCompletions for zsh\"} ``` If you don't want to show descriptions in the completions, you can add `--no-descriptions` to the default `completion` command to disable them, like: ```bash $ source <(helm completion bash) $ helm completion bash (generate autocompletion script for bash) powershell (generate autocompletion script for powershell) fish (generate autocompletion script for fish) zsh (generate autocompletion script for zsh) $ source <(helm completion bash --no-descriptions) $ helm completion bash fish powershell zsh ``` The bash completion script generated by Cobra requires the `bashcompletion` package. You should update the help text of your completion command to show how to install the `bashcompletion` package () You can also configure `bash` aliases for your program and they will also support completions. ```bash alias aliasname=origcommand complete -o default -F start_origcommand aliasname $ aliasname <tab><tab> completion firstcommand secondcommand ``` For backward compatibility, Cobra still supports its bash legacy dynamic completion solution. Please refer to for details. Cobra provides two versions for bash completion. The original bash completion (which started it all!) can be used by calling `GenBashCompletion()` or `GenBashCompletionFile()`. A new V2 bash completion version is also available. This version can be used by calling `GenBashCompletionV2()` or `GenBashCompletionFileV2()`. The V2 version does not support the legacy dynamic completion (see ) but instead works only with the Go dynamic completion solution described in this" }, { "data": "Unless your program already uses the legacy dynamic completion solution, it is recommended that you use the bash completion V2 solution which provides the following extra features: Supports completion descriptions (like the other shells) Small completion script of less than 300 lines (v1 generates scripts of thousands of lines; `kubectl` for example has a bash v1 completion script of over 13K lines) Streamlined user experience thanks to a completion behavior aligned with the other shells `Bash` completion V2 supports descriptions for completions. When calling `GenBashCompletionV2()` or `GenBashCompletionFileV2()` you must provide these functions with a parameter indicating if the completions should be annotated with a description; Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. ``` $ helm s search (search for a keyword in charts) status (display the status of the named release) show (show information of a chart) $ helm s search show status ``` Note: Cobra's default `completion` command uses bash completion V2. If for some reason you need to use bash completion V1, you will need to implement your own `completion` command. Cobra supports native zsh completion generated from the root `cobra.Command`. The generated completion script should be put somewhere in your `$fpath` and be named `_<yourProgram>`. You will need to start a new shell for the completions to become available. Zsh supports descriptions for completions. Cobra will provide the description automatically, based on usage information. 
Cobra provides a way to completely disable such descriptions by using `GenZshCompletionNoDesc()` or `GenZshCompletionFileNoDesc()`. You can choose to make this a configurable option to your users. ``` $ helm s[tab] search -- search for a keyword in charts show -- show information of a chart status -- displays the status of the named release $ helm s[tab] search show status ``` Note: Because of backward-compatibility requirements, we were forced to have a different API to disable completion descriptions between `zsh` and `fish`. Custom completions implemented in Bash scripting (legacy) are not supported and will be ignored for `zsh` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `zsh`. You should instead use `RegisterFlagCompletionFunc()`. Cobra 1.1 standardized its zsh completion support to align it with its other shell completions. Although the API was kept backward-compatible, some small changes in behavior were introduced. Please refer to for details. Cobra supports native fish completions generated from the root `cobra.Command`. You can use the `command.GenFishCompletion()` or `command.GenFishCompletionFile()` functions. You must provide these functions with a parameter indicating if the completions should be annotated with a description; Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. ``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) $ helm s[tab] search show status ``` Note: Because of backward-compatibility requirements, we were forced to have a different API to disable completion descriptions between `zsh` and" }, { "data": "Custom completions implemented in bash scripting (legacy) are not supported and will be ignored for `fish` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `fish`. You should instead use `RegisterFlagCompletionFunc()`. The following flag completion annotations are not supported and will be ignored for `fish`: `BashCompFilenameExt` (filtering by file extension) `BashCompSubdirsInDir` (filtering by directory) The functions corresponding to the above annotations are consequently not supported and will be ignored for `fish`: `MarkFlagFilename()` and `MarkPersistentFlagFilename()` (filtering by file extension) `MarkFlagDirname()` and `MarkPersistentFlagDirname()` (filtering by directory) Similarly, the following completion directives are not supported and will be ignored for `fish`: `ShellCompDirectiveFilterFileExt` (filtering by file extension) `ShellCompDirectiveFilterDirs` (filtering by directory) Cobra supports native PowerShell completions generated from the root `cobra.Command`. You can use the `command.GenPowerShellCompletion()` or `command.GenPowerShellCompletionFile()` functions. To include descriptions use `command.GenPowerShellCompletionWithDesc()` and `command.GenPowerShellCompletionFileWithDesc()`. Cobra will provide the description automatically based on usage information. 
You can choose to make this option configurable by your users. The script is designed to support all three PowerShell completion modes: TabCompleteNext (default windows style - on each key press the next option is displayed) Complete (works like bash) MenuComplete (works like zsh) You set the mode with `Set-PSReadLineKeyHandler -Key Tab -Function <mode>`. Descriptions are only displayed when using the `Complete` or `MenuComplete` mode. Users need PowerShell version 5.0 or above, which comes with Windows 10 and can be downloaded separately for Windows 7 or 8.1. They can then write the completions to a file and source this file from their PowerShell profile, which is referenced by the `$Profile` environment variable. See `Get-Help about_Profiles` for more info about PowerShell profiles. ``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) $ helm s[tab] search show status search for a keyword in charts $ helm s[tab] search show status ``` You can also configure `powershell` aliases for your program and they will also support completions. ``` $ sal aliasname origcommand $ Register-ArgumentCompleter -CommandName 'aliasname' -ScriptBlock $origcommandCompleterBlock $ aliasname <tab> completion firstcommand secondcommand ``` The name of the completer block variable is of the form `$<programName>CompleterBlock` where every `-` and `:` in the program name have been replaced with `_`, to respect powershell naming syntax. Custom completions implemented in bash scripting (legacy) are not supported and will be ignored for `powershell` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `powershell`. You should instead use `RegisterFlagCompletionFunc()`. The following flag completion annotations are not supported and will be ignored for `powershell`: `BashCompFilenameExt` (filtering by file extension) `BashCompSubdirsInDir` (filtering by directory) The functions corresponding to the above annotations are consequently not supported and will be ignored for `powershell`: `MarkFlagFilename()` and `MarkPersistentFlagFilename()` (filtering by file extension) `MarkFlagDirname()` and `MarkPersistentFlagDirname()` (filtering by directory) Similarly, the following completion directives are not supported and will be ignored for `powershell`: `ShellCompDirectiveFilterFileExt` (filtering by file extension) `ShellCompDirectiveFilterDirs` (filtering by directory)" } ]
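To tie the fragments above together, here is a minimal, self-contained sketch of a program that wires up both noun and flag completion; names such as `helmish` and `getReleases` are placeholders:

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

// getReleases is a placeholder for code that fetches completion candidates.
func getReleases(toComplete string) []string {
	return []string{"harbor\tAn image registry", "thanos\tLong-term metrics"}
}

func main() {
	rootCmd := &cobra.Command{Use: "helmish"}

	statusCmd := &cobra.Command{
		Use:   "status RELEASE_NAME",
		Short: "Display the status of the named release",
		RunE:  func(cmd *cobra.Command, args []string) error { return nil },
		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			if len(args) != 0 {
				return nil, cobra.ShellCompDirectiveNoFileComp
			}
			return getReleases(toComplete), cobra.ShellCompDirectiveNoFileComp
		},
	}

	statusCmd.Flags().String("output", "table", "output format")
	_ = statusCmd.RegisterFlagCompletionFunc("output",
		func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			return []string{"json", "table", "yaml"}, cobra.ShellCompDirectiveNoFileComp
		})

	rootCmd.AddCommand(statusCmd)

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```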
{ "category": "Runtime", "file_name": "shell_completions.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.7.0 `velero/velero:v1.7.0` https://velero.io/docs/v1.7/ https://velero.io/docs/v1.7/upgrade-to-1.7/ The Velero container images now use . Using distroless images as the base ensures that only the packages and programs necessary for running Velero are included. Unrelated libraries and OS packages, that often contain security vulnerabilities, are now excluded. This change reduces the size of both the server and restic restore helper image by approximately 62MB. As the images do not contain a shell, it will no longer be possible to exec into Velero containers using these images. This release introduces the new `velero debug` command. This command collects information about a Velero installation, such as pod logs and resources managed by Velero, in a tarball which can be provided to the Velero maintainer team to help diagnose issues. Distinguish between different unnamed node ports when preserving (#4026, @sseago) Validate namespace in Velero backup create command (#4057, @codegold79) Empty the \"ClusterIPs\" along with \"ClusterIP\" when \"ClusterIP\" isn't \"None\" (#4101, @ywk253100) Add a RestoreItemAction plugin (`velero.io/apiservice`) which skips the restore of any `APIService` which is managed by Kubernetes. These are identified using the `kube-aggregator.kubernetes.io/automanaged` label. (#4028, @zubron) Change the base image to distroless (#4055, @ywk253100) Updated the version of velero/velero-plugin-for-aws version from v1.2.0 to v1.2.1 (#4064, @kahirokunn) Skip the backup and restore of DownwardAPI volumes when using restic. (#4076, @zubron) Bump up Go to 1.16 (#3990, @reasonerjt) Fix restic error when volume is emptyDir and Pod not running (#3993, @mahaupt) Select the velero deployment with both label and container name (#3996, @ywk253100) Wait for the namespace to be deleted before removing the CRDs during uninstall. This deprecates the `--wait` flag of the `uninstall` command (#4007, @ywk253100) Use the cluster preferred CRD API version when polling for Velero CRD readiness. (#4015, @zubron) Implement velero debug (#4022, @reasonerjt) Skip the restore of volumes that originally came from a projected volume when using" }, { "data": "(#3877, @zubron) Run the E2E test with kind(provision various versions of k8s cluster) and MinIO on Github Action (#3912, @ywk253100) Fix -install-velero flag for e2e tests (#3919, @jaidevmane) Upgrade Velero ClusterRoleBinding to use v1 API (#3926, @jenting) enable e2e tests to choose crd apiVersion (#3941, @sseago) Fixing multipleNamespaceTest bug - Missing expect statement in test (#3983, @jaidevmane) Add --client-page-size flag to server to allow chunking Kubernetes API LIST calls across multiple requests on large clusters (#3823, @dharmab) Fix CR restore regression introduced in 1.6 restore progress. (#3845, @sseago) Use region specified in the BackupStorageLocation spec when getting restic repo identifier. Originally fixed by @jala-dx in #3617. (#3857, @zubron) skip backuping projected volume when using restic (#3866, @alaypatel07) Install Kubernetes preferred CRDs API version (v1beta1/v1). (#3614, @jenting) Add Label to BackupSpec so that labels can explicitly be provided to Schedule.Spec.Template.Metadata.Labels which will be reflected on the backups created. 
(#3641, @arush-sal) Add PVC UID label to PodVolumeRestore (#3792, @sseago) Support pulling plugin images by digest (#3803, @2uasimojo) Added BackupPhaseUploading and BackupPhaseUploadingPartialFailure backup phases as part of Upload Progress Monitoring. (#3805, @dsmithuchida) Uploading (new) The \"Uploading\" phase signifies that the main part of the backup, including snapshotting has completed successfully and uploading is continuing. In the event of an error during uploading, the phase will change to UploadingPartialFailure. On success, the phase changes to Completed. The backup cannot be restored from when it is in the Uploading state. UploadingPartialFailure (new) The \"UploadingPartialFailure\" phase signifies that the main part of the backup, including snapshotting has completed, but there were partial failures either during the main part or during the uploading. The backup cannot be restored from when it is in the UploadingPartialFailure state. Fix plugin name derivation from image name (#3711, @ashish-amarnath) Remove CSI volumesnapshot artifact deletion This change requires https://github.com/vmware-tanzu/velero-plugin-for-csi/pull/86 for Velero to continue deleting of CSI volumesnapshots when the corresponding backups are deleted. (#3734, @ashish-amarnath) use unstructured to marshal selective fields for service restore action (#3789, @alaypatel07)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.7.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Architecture of the library === ```mermaid graph RL Program --> ProgramSpec --> ELF btf.Spec --> ELF Map --> MapSpec --> ELF Links --> Map & Program ProgramSpec -.-> btf.Spec MapSpec -.-> btf.Spec subgraph Collection Program & Map end subgraph CollectionSpec ProgramSpec & MapSpec & btf.Spec end ``` ELF BPF is usually produced by using Clang to compile a subset of C. Clang outputs an ELF file which contains program byte code (aka BPF), but also metadata for maps used by the program. The metadata follows the conventions set by libbpf shipped with the kernel. Certain ELF sections have special meaning and contain structures defined by libbpf. Newer versions of clang emit additional metadata in . The library aims to be compatible with libbpf so that moving from a C toolchain to a Go one creates little friction. To that end, the is tested against the Linux selftests and avoids introducing custom behaviour if possible. The output of the ELF reader is a `CollectionSpec` which encodes all of the information contained in the ELF in a form that is easy to work with in Go. The returned `CollectionSpec` should be deterministic: reading the same ELF file on different systems must produce the same output. As a corollary, any changes that depend on the runtime environment like the current kernel version must happen when creating . Specifications `CollectionSpec` is a very simple container for `ProgramSpec`, `MapSpec` and `btf.Spec`. Avoid adding functionality to it if possible. `ProgramSpec` and `MapSpec` are blueprints for in-kernel objects and contain everything necessary to execute the relevant `bpf(2)` syscalls. They refer to `btf.Spec` for type information such as `Map` key and value types. The package provides an assembler that can be used to generate `ProgramSpec` on the fly. Objects `Program` and `Map` are the result of loading specifications into the kernel. Features that depend on knowledge of the current system (e.g kernel version) are implemented at this point. Sometimes loading a spec will fail because the kernel is too old, or a feature is not enabled. There are multiple ways the library deals with that: Fallback: older kernels don't allow naming programs and maps. The library automatically detects support for names, and omits them during load if necessary. This works since name is primarily a debug aid. Sentinel error: sometimes it's possible to detect that a feature isn't available. In that case the library will return an error wrapping `ErrNotSupported`. This is also useful to skip tests that can't run on the current kernel. Once program and map objects are loaded they expose the kernel's low-level API, e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer wrappers on top of the low-level API, like `MapIterator`. The low-level API is useful when our higher-level API doesn't support a particular use case. Links Programs can be attached to many different points in the kernel and newer BPF hooks tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope. Each bpflinktype has one corresponding Go type, e.g. `link.tracing` corresponds to BPFLINKTRACING. In general, these types should be unexported as long as they don't export methods outside of the Link interface. Each Go type may have multiple exported constructors. 
For example `AttachTracing` and `AttachLSM` create a tracing link, but are distinct functions since they may require different arguments." } ]
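To make the flow described above concrete — ELF is parsed into a `CollectionSpec`, which is then loaded into in-kernel objects — here is a minimal sketch. It assumes the library is the cilium/ebpf package (the import path is inferred from the API names in this document) and uses a placeholder object file name:

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/cilium/ebpf"
)

func main() {
	// Parse the clang-produced ELF into a CollectionSpec. This step is
	// deterministic and does not touch the kernel. "bpf_prog.o" is a placeholder.
	spec, err := ebpf.LoadCollectionSpec("bpf_prog.o")
	if err != nil {
		log.Fatalf("reading ELF: %v", err)
	}

	// Loading the spec performs the bpf(2) syscalls and creates Program and
	// Map objects; runtime-dependent behaviour happens at this point.
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		if errors.Is(err, ebpf.ErrNotSupported) {
			// Sentinel error: the running kernel lacks a required feature.
			log.Fatalf("kernel feature missing: %v", err)
		}
		log.Fatalf("loading collection: %v", err)
	}
	defer coll.Close()

	fmt.Printf("loaded %d programs and %d maps\n", len(coll.Programs), len(coll.Maps))
}
```

Note how the `ErrNotSupported` sentinel only comes into play at `NewCollection`, while reading the ELF stays independent of the running kernel.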
{ "category": "Runtime", "file_name": "ARCHITECTURE.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "A GJSON Path is a text string syntax that describes a search pattern for quickly retreiving values from a JSON payload. This document is designed to explain the structure of a GJSON Path through examples. The definitive implemenation is . Use the to experiment with the syntax online. A GJSON Path is intended to be easily expressed as a series of components seperated by a `.` character. Along with `.` character, there are a few more that have special meaning, including `|`, `#`, `@`, `\\`, `*`, `!`, and `?`. Given this JSON ```json { \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"}, \"age\":37, \"children\": [\"Sara\",\"Alex\",\"Jack\"], \"fav.movie\": \"Deer Hunter\", \"friends\": [ {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44, \"nets\": [\"ig\", \"fb\", \"tw\"]}, {\"first\": \"Roger\", \"last\": \"Craig\", \"age\": 68, \"nets\": [\"fb\", \"tw\"]}, {\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47, \"nets\": [\"ig\", \"tw\"]} ] } ``` The following GJSON Paths evaluate to the accompanying values. In many cases you'll just want to retreive values by object name or array index. ```go name.last \"Anderson\" name.first \"Tom\" age 37 children [\"Sara\",\"Alex\",\"Jack\"] children.0 \"Sara\" children.1 \"Alex\" friends.1 {\"first\": \"Roger\", \"last\": \"Craig\", \"age\": 68} friends.1.first \"Roger\" ``` A key may contain the special wildcard characters `*` and `?`. The `*` will match on any zero+ characters, and `?` matches on any one character. ```go child*.2 \"Jack\" c?ildren.0 \"Sara\" ``` Special purpose characters, such as `.`, `*`, and `?` can be escaped with `\\`. ```go fav\\.movie \"Deer Hunter\" ``` You'll also need to make sure that the `\\` character is correctly escaped when hardcoding a path in your source code. ```go // Go val := gjson.Get(json, \"fav\\\\.movie\") // must escape the slash val := gjson.Get(json, `fav\\.movie`) // no need to escape the slash ``` ```rust // Rust let val = gjson::get(json, \"fav\\\\.movie\") // must escape the slash let val = gjson::get(json, r#\"fav\\.movie\"#) // no need to escape the slash ``` The `#` character allows for digging into JSON Arrays. To get the length of an array you'll just use the `#` all by itself. ```go friends.# 3 friends.#.age [44,68,47] ``` You can also query an array for the first match by using `#(...)`, or find all matches with `#(...)#`. Queries support the `==`, `!=`, `<`, `<=`, `>`, `>=` comparison operators, and the simple pattern matching `%` (like) and `!%` (not like) operators. ```go friends.#(last==\"Murphy\").first \"Dale\" friends.#(last==\"Murphy\")#.first [\"Dale\",\"Jane\"] friends.#(age>45)#.last [\"Craig\",\"Murphy\"] friends.#(first%\"D*\").last \"Murphy\" friends.#(first!%\"D*\").last \"Craig\" ``` To query for a non-object value in an array, you can forgo the string to the right of the operator. ```go children.#(!%\"a\") \"Alex\" children.#(%\"a\")# [\"Sara\",\"Jack\"] ``` Nested queries are allowed." }, { "data": "friends.#(nets.#(==\"fb\"))#.first >> [\"Dale\",\"Roger\"] ``` *Please note that prior to v1.3.0, queries used the `#[...]` brackets. This was changed in v1.3.0 as to avoid confusion with the new syntax. For backwards compatibility, `#[...]` will continue to work until the next major release.* The `~` (tilde) operator will convert a value to a boolean before comparison. 
For example, using the following JSON: ```json { \"vals\": [ { \"a\": 1, \"b\": true }, { \"a\": 2, \"b\": true }, { \"a\": 3, \"b\": false }, { \"a\": 4, \"b\": \"0\" }, { \"a\": 5, \"b\": 0 }, { \"a\": 6, \"b\": \"1\" }, { \"a\": 7, \"b\": 1 }, { \"a\": 8, \"b\": \"true\" }, { \"a\": 9, \"b\": false }, { \"a\": 10, \"b\": null }, { \"a\": 11 } ] } ``` You can now query for all true(ish) or false(ish) values: ``` vals.#(b==~true)#.a >> [1,2,6,7,8] vals.#(b==~false)#.a >> [3,4,5,9,10,11] ``` The last value which was non-existent is treated as `false` The `.` is standard separator, but it's also possible to use a `|`. In most cases they both end up returning the same results. The cases where`|` differs from `.` is when it's used after the `#` for and . Here are some examples ```go friends.0.first \"Dale\" friends|0.first \"Dale\" friends.0|first \"Dale\" friends|0|first \"Dale\" friends|# 3 friends.# 3 friends.#(last=\"Murphy\")# [{\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44},{\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47}] friends.#(last=\"Murphy\")#.first [\"Dale\",\"Jane\"] friends.#(last=\"Murphy\")#|first <non-existent> friends.#(last=\"Murphy\")#.0 [] friends.#(last=\"Murphy\")#|0 {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44} friends.#(last=\"Murphy\")#.# [] friends.#(last=\"Murphy\")#|# 2 ``` Let's break down a few of these. The path `friends.#(last=\"Murphy\")#` all by itself results in ```json [{\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44},{\"first\": \"Jane\", \"last\": \"Murphy\", \"age\": 47}] ``` The `.first` suffix will process the `first` path on each array element before returning the results. Which becomes ```json [\"Dale\",\"Jane\"] ``` But the `|first` suffix actually processes the `first` path after the previous result. Since the previous result is an array, not an object, it's not possible to process because `first` does not exist. Yet, `|0` suffix returns ```json {\"first\": \"Dale\", \"last\": \"Murphy\", \"age\": 44} ``` Because `0` is the first index of the previous result. A modifier is a path component that performs custom processing on the JSON. For example, using the built-in `@reverse` modifier on the above JSON payload will reverse the `children` array: ```go children.@reverse [\"Jack\",\"Alex\",\"Sara\"] children.@reverse.0 \"Jack\" ``` There are currently the following built-in modifiers: `@reverse`: Reverse an array or the members of an object. `@ugly`: Remove all whitespace from JSON. `@pretty`: Make the JSON more human readable. `@this`: Returns the current" }, { "data": "It can be used to retrieve the root element. `@valid`: Ensure the json document is valid. `@flatten`: Flattens an array. `@join`: Joins multiple objects into a single object. `@keys`: Returns an array of keys for an object. `@values`: Returns an array of values for an object. `@tostr`: Converts json to a string. Wraps a json string. `@fromstr`: Converts a string from json. Unwraps a json string. `@group`: Groups arrays of objects. See . A modifier may accept an optional argument. The argument can be a valid JSON payload or just characters. For example, the `@pretty` modifier takes a json object as its argument. ``` @pretty:{\"sortKeys\":true} ``` Which makes the json pretty and orders all of its keys. 
```json { \"age\":37, \"children\": [\"Sara\",\"Alex\",\"Jack\"], \"fav.movie\": \"Deer Hunter\", \"friends\": [ {\"age\": 44, \"first\": \"Dale\", \"last\": \"Murphy\"}, {\"age\": 68, \"first\": \"Roger\", \"last\": \"Craig\"}, {\"age\": 47, \"first\": \"Jane\", \"last\": \"Murphy\"} ], \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"} } ``` *The full list of `@pretty` options are `sortKeys`, `indent`, `prefix`, and `width`. Please see for more information.* You can also add custom modifiers. For example, here we create a modifier which makes the entire JSON payload upper or lower case. ```go gjson.AddModifier(\"case\", func(json, arg string) string { if arg == \"upper\" { return strings.ToUpper(json) } if arg == \"lower\" { return strings.ToLower(json) } return json }) \"children.@case:upper\" [\"SARA\",\"ALEX\",\"JACK\"] \"children.@case:lower.@reverse\" [\"jack\",\"alex\",\"sara\"] ``` Note: Custom modifiers are not yet available in the Rust version Starting with v1.3.0, GJSON added the ability to join multiple paths together to form new documents. Wrapping comma-separated paths between `[...]` or `{...}` will result in a new array or object, respectively. For example, using the given multipath: ``` {name.first,age,\"the_murphys\":friends.#(last=\"Murphy\")#.first} ``` Here we selected the first name, age, and the first name for friends with the last name \"Murphy\". You'll notice that an optional key can be provided, in this case \"the_murphys\", to force assign a key to a value. Otherwise, the name of the actual field will be used, in this case \"first\". If a name cannot be determined, then \"_\" is used. This results in ```json {\"first\":\"Tom\",\"age\":37,\"the_murphys\":[\"Dale\",\"Jane\"]} ``` Starting with v1.12.0, GJSON added support of json literals, which provides a way for constructing static blocks of json. This is can be particularly useful when constructing a new json document using . A json literal begins with the '!' declaration character. For example, using the given multipath: ``` {name.first,age,\"company\":!\"Happysoft\",\"employed\":!true} ``` Here we selected the first name and age. Then add two new fields, \"company\" and \"employed\". This results in ```json {\"first\":\"Tom\",\"age\":37,\"company\":\"Happysoft\",\"employed\":true} ``` See issue for additional context on JSON Literals." } ]
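As a quick end-to-end check of the path syntax, the sketch below exercises a plain path, a multipath with an optional key, and a custom modifier using the Go package; the sample JSON is trimmed from the document's example and the modifier name is arbitrary:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tidwall/gjson"
)

const doc = `{"name":{"first":"Tom","last":"Anderson"},"age":37,"children":["Sara","Alex","Jack"]}`

func main() {
	// Plain path lookup.
	fmt.Println(gjson.Get(doc, "name.last").String()) // Anderson

	// Multipath: build a new object from several paths, with an optional key.
	fmt.Println(gjson.Get(doc, `{name.first,"kids":children.#}`).Raw) // {"first":"Tom","kids":3}

	// Custom modifier (the name "case" is arbitrary): upper-case the matched JSON.
	gjson.AddModifier("case", func(json, arg string) string {
		if arg == "upper" {
			return strings.ToUpper(json)
		}
		return json
	})
	fmt.Println(gjson.Get(doc, "children.@case:upper").Raw) // ["SARA","ALEX","JACK"]
}
```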
{ "category": "Runtime", "file_name": "SYNTAX.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "We as members, contributors, and leaders pledge to make participation in the Velero project and our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline" }, { "data": "Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at oss-coc@vmware.com. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. 
Violating these terms may lead to a temporary or permanent ban. Community Impact: A serious violation of community standards, including sustained inappropriate behavior. Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. This Code of Conduct is adapted from the , version 2.0, available at https://www.contributor-covenant.org/version/2/0/codeofconduct.html. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity). For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations." } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List SRv6 policy entries List SRv6 policy entries. ``` cilium-dbg bpf srv6 policy [flags] ``` ``` -h, --help help for policy -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the SRv6 routing rules" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_srv6_policy.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "To provide an API for imperative application operations (e.g. start|stop|add) inside a pod for finer-grained control over rkt containerization concepts and debugging needs, this proposal introduces new stage1 entrypoints and a subcommand CLI API that will be used for manipulating applications inside pods. The primary motivation behind this change is to facilitate the new direction orchestration systems are taking in how they integrate with container runtimes. For more details, see . The envisioned workflow for the app-level API is that after a pod has been started, users will invoke the rkt CLI to manipulate the pod. The implementation behaviour consists of application logic on top of the aforementioned stage1 entrypoints. The proposed app-level commands are described below. Initializes an empty pod having no applications. This returns a single line containing the `pod-uuid` which can be used to perform application operations specified below. This also implies the started pod will be injectable. ```bash rkt app sandbox ``` Injects an application image into a running pod. After this has been called, the app is prepared and ready to be run via `rkt app start`. It first prepares an application rootfs for the application image, creates a runtime manifest for the app, and then injects the prepared app via an entrypoint. ```bash rkt app add <pod-uuid> --app=<app-name> <image-name/hash/address/registry-URL> <arguments> ``` Note: Not every pod will be injectable; it will be configured through an option when the pod is created.. Starts an application that was previously added (injected) to a pod. This operation is idempotent; if the specified application is already started, it will have no effect. ```bash rkt app start <pod-uuid> --app=<app-name> <arguments> ``` Stops a running application gracefully. Grace is defined in the `app/stop` entrypoint section. This does not remove associated resources (see `app rm`). ```bash rkt app stop <pod-uuid> --app=<app-name> ``` Removes a stopped application from a running pod, including all associated resources. ```bash rkt app rm <pod-uuid> --app=<app-name> <arguments> ``` Note: currently, when a pod becomes empty (no apps are running), it will terminate. This proposal will introduce a `--mutable` or `--allow-empty` or `--dumb` flag to be used when starting pods, so that the lifecycle management of the pod is configurable by the user (i.e. it will be possible to create a pod that won't be terminated when it is empty). Rootfs (e.g. `/opt/stage2/<app-name>`) Mounts from volumes (e.g. `/opt/stage2/<app-name>/<volume-name>`) Mounts related to rkt operations (e.g. `/opt/stage2/<app-name>/dev/null`) systemd service files (e.g. `<app-name>.service` and `reaper-<app-name>.service`) Miscellaneous files (e.g. `/rkt/<app-name>.env`, `/rkt/status...`) Lists the applications that are inside a pod, running or stopped. ```bash rkt app list <pod-uuid> <arguments> ``` Note: The information returned by list should consist of an app specifier and status at the very least, the rest is up for discussion. Returns the execution status of an application inside a pod. 
```bash rkt app status <pod-uuid> --app=<app-name> <arguments> ``` The returned status information for an application would contain the following details (output format is up for discussion): ```go type AppStatus struct { Name string State AppState CreatedAt time.Time StartedAt time.Time FinishedAt" }, { "data": "ExitCode int64 } ``` Note: status will be obtained from an annotated JSON file residing in stage1 that contains the required information. **OPEN QUESTION**: what is responsible for updating this file? How is concurrent access handled? Executes a command inside an application. ```bash rkt app exec <pod-uuid> --app=<app-name> <arguments> -- <command> <command-arguments> ``` In order to facilitate the app-level operations API, four new stage1 entrypoints are introduced. Entrypoints are resolved via annotations found within a pod's stage1 manifest (e.g. `/var/lib/rkt/pods/run/$uuid/stage1/manifest`). The responsibility of this entrypoint is to receive a prepared app and inject it into the pod, where it will be started using the `app/start` entrypoint. The entrypoint should receive a reference to a runtime manifest of the prepared app, and perform any necessary setup based on that runtime manifest. The responsibility of this entrypoint is to remove an app from a pod. After `rm`, starting the application again is not possible - the app must be re-injected to be re-used. receive a reference to an application that resides inside the pod (running or stopped) stop the application if its running. remove the contents of the application (rootfs) from the pod (keep the logs?) and delete references to it (e.g. service files). The responsibility of this entrypoint is to start an application that is in the `Prepared` state, which is an app that was recently injected. The responsibility of this entrypoint is to stop an application that is in the `Running` state, by instructing the stage1. rkt will attempt a graceful shutdown: sending a termination signal (i.e. `SIGTERM`) to application and waiting for a grace period for the application to exit. If the application does not terminate by the end of the grace period, rkt will forcefully shut it down (i.e. `SIGKILL`). Expected set of app states are listed below: ```go type AppState string const ( UnknownAppState AppState = \"unknown\" PreparingAppState AppState = \"preparing\" // Apps that are ready to be used by `app start`. PreparedAppState AppState = \"prepared\" RunningAppState AppState = \"running\" // Apps stopped by `app stop`. StoppingAppState AppState = \"stopping\" // Apps that finish their execution naturally. ExitedAppState AppState = \"exited\" // Once an app is marked for removal, while the removal is being // performed, no further operations can be done on that app. DeletingAppState AppState = \"deleting\" ) ``` Note: State transitions are linear; an app that is in state `Exited` cannot transition into `Running` state. **OPEN QUESTION** can a stopped app not be restarted? Grant granular access to pods for orchestration systems and allow orchestration systems to develop their own pod concept on top of the exposed app-level operations. Create an empty pod. Inject applications into the pod. Orchestrate the workflow of applications (e.g. app1 has to terminate successfully before app2). Enable in-place updates of a pod without disrupting the operations of the pod. Remove old applications without disturbing/restarting the whole pod. Inject updated applications. Start the updated applications. 
Allow users to inject debug applications into a pod in production: deploy an application containing only a Go web service binary; encounter an error that cannot be deciphered from the available information (e.g. status info, logs); add a debug app image containing binaries (e.g. `lsof`) for debugging the service; then enter the pod namespace and use the debug binaries." } ]
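The linear state progression described earlier in this proposal can be made explicit with a small sketch; the `canTransition` helper below is purely illustrative and not part of the proposal or of rkt itself:

```go
package main

import "fmt"

// AppState values as defined in the proposal.
type AppState string

const (
	UnknownAppState   AppState = "unknown"
	PreparingAppState AppState = "preparing"
	PreparedAppState  AppState = "prepared"
	RunningAppState   AppState = "running"
	StoppingAppState  AppState = "stopping"
	ExitedAppState    AppState = "exited"
	DeletingAppState  AppState = "deleting"
)

// order encodes the linear progression of states.
var order = map[AppState]int{
	PreparingAppState: 0,
	PreparedAppState:  1,
	RunningAppState:   2,
	StoppingAppState:  3,
	ExitedAppState:    4,
	DeletingAppState:  5,
}

// canTransition is a hypothetical helper: a transition is valid only if it
// moves forward in the linear ordering, so Exited can never go back to Running.
func canTransition(from, to AppState) bool {
	a, okA := order[from]
	b, okB := order[to]
	return okA && okB && b > a
}

func main() {
	fmt.Println(canTransition(PreparedAppState, RunningAppState)) // true
	fmt.Println(canTransition(ExitedAppState, RunningAppState))   // false
}
```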
{ "category": "Runtime", "file_name": "app-level-api.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: Understanding Weave Net menu_order: 10 search_type: Documentation A Weave network consists of a number of 'peers' - Weave Net routers residing on different hosts. Each peer has a name, which tends to remain the same over restarts, a human friendly nickname for use in status and logging output and a unique identifier (UID) that is different each time it is run. These are opaque identifiers as far as the router is concerned, although the name defaults to a MAC address. Weave Net routers establish TCP connections with each other, over which they perform a protocol handshake and subsequently exchange information. These connections are encrypted if so configured. Peers also establish UDP \"connections\", possibly encrypted, which carry encapsulated network packets. These \"connections\" are duplex and can traverse firewalls. Weave Net creates a network bridge on the host. Each container is connected to that bridge via a veth pair, the container side of which is given an IP address and netmask supplied either by the user or by Weave Net's IP address allocator. Weave Net routes packets between containers on different hosts via two methods: a method, which operates entirely in kernel space, and a fallback `sleeve` method, in which packets destined for non-local containers are captured by the kernel and processed by the Weave Net router in user space, forwarded over UDP to weave router peers running on other hosts, and there injected back into the kernel which in turn passes them to local destination containers. Weave Net routers learn which peer host a particular MAC address resides on. They combine this knowledge with topology information in order to make routing decisions and thus avoid forwarding every packet to every peer. Weave Net can route packets in partially connected networks with changing topology. For example, in this network, peer 1 is connected directly to 2 and 3, but if 1 needs to send a packet to 4 or 5 it must first send it to peer 3: See Also *" } ]
{ "category": "Runtime", "file_name": "how-it-works.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "MinIO creates FIPS builds using a patched version of the Go compiler (that uses BoringCrypto, from BoringSSL, which is ) published by the Golang Team . MinIO FIPS executables are available at <http://dl.min.io> - they are only published for `linux-amd64` architecture as binary files with the suffix `.fips`. We also publish corresponding container images to our official image repositories. We are not making any statements or representations about the suitability of this code or build in relation to the FIPS 140-2 standard. Interested users will have to evaluate for themselves whether this is useful for their own purposes." } ]
{ "category": "Runtime", "file_name": "README.fips.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage runtime config ``` -h, --help help for config ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List all runtime config entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_config.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This release includes some small command line tweaks and adds support for CRI logging in iottymux which is required by rktlet. It also fixes a number of bugs, adds a lot of new documentation, and updates some dependencies. status: added read from uuid-file (). stage0/run: relax '--hosts-entry' parser (). iottymux: store logs for kubelet in the appropriate location (). This change is made for rktlet. iottymux will store the logs directly in the CRI format. rkt: add AWS auth headerer support to `rkt config` (). kvm: solve certain routing issues by using the same default bridge as CNI (). networking/portfwd: fix compare routeLocalnetValue (). list: add ip of non-running pods to status output (). stage1: execute pre-start/post-stop hooks as privileged (). Even if we run the container as an unprivileged user. stage1-fly/run: allow non absolute commands to be run (). rkt: prevent skipping some images in image gc (). rkt: skip parsing in case of an empty string (). Fix issue where `rkt app add` fails with an error message like `must give only one app`, even when only one app name is given. scripts: Add libfdt to install deps (). libfdt-dev is needed when building kernels for architectures that support a device tree. makelib: Fix go-find-directories symlink problem (). scripts: adding missing dependecies to debian dependency installer (). scripts/build-pkgs: use RPM file dependency for shadow tools (). Lots of . selinux: Update to latest (). travis: update go versions (). vendor: bump docker2aci to v0.17.1 (). It fixes an image pulling bug for some images in GCR. Fixes all the misspell (). stage1/usrfromcoreos: add new image signing subkey 0638EB2F (). tests: Use semaphore install-package (). tests: Add verbose flag to build-and-run-tests.sh (). This release contains a number of bugfixes, new features like the ability to share the host IPC namespace, dependency updates, and build system improvements. app/add: Use the image name as a default name for app (). Make the `--name` flag optional like stated in the help message. stage1/init: activate systemd-journal-flush.service (). It's needed to make systemd-journald write to /var/log/journal instead of /run/log/journal. stage0/gc: try to avoid double overlay mounts (). Before Linux 4.13, it used to be possible to perform double overlayfs mounts and now it's not, handle this case. api: add CreatedAt to v1.Pod (). It might happen that the pod is created but we can't get its start time so we add a CreatedAt field to the API. lib: don't error out if we can't get the app exit code (). This can happen if the pod dies but we don't have time to register the app exit code. image: set the header instead of adding it (). The go changes its behavior for redirect and header's copy since the go 1.8: lib/app: check in upper/ if the pod uses overlay (). Getting creation/start time and status of applications will fail for pods using overlay if stage1 was unmounted (e.g. when rebooting). stage1: handle docker group semantics (). Docker uses the UID as GID if you only specify the \"user\". stage1: support hybrid cgroup hierarchy (). systemd introduced the hybrid cgroup hierarchy in v233, which was breaking the host flavor of rkt. pkg/keystore: ensure correct permissions on path creation (). Allow writing to `/etc/rkt/trustedkeys` as a user in the rkt group in systems with restrictive umask. networking: ensure the netns directory is mounted (). Allows using rktnetes and rkt on the same host. 
stage1: fix systemd version fmt in error message" }, { "data": "The previous version caused cryptic error messages. app/add: Allow to define annotations for app from CLI (). app/sandbox: Allow to define annotations for sandbox from CLI (). stage0,rkt: don't require the pod to be running to remove apps (). stage1: enable host IPC namespace (). rkt normally creates a new IPC namespace for the pod. In order to stay in the host IPC namespace, a new option `--ipc=` was added. rkt: bash completion code (). This patch provides an implementation of the command used to generate completion code for the bash shell. vendor: bump docker2aci to v0.17.0 (). vendor: update pborman/uuid to v1.1 (). vendor: bump appc/spec to v0.8.11 (). rkt\\seccomp\\test: Fix arm64 stat tests (). build: sort stage1 manifest files (). To ease maintenance. build/stage1: support local systemd source for offline builds (). RPM/deb package can upgrade even if running pods (). src flavor: copy the real libnss\\_files.so.2 file from the host (). It was copying a symbolic link instead. This is a minor bugfix release. It does not contain any changes to the rkt code, but it updates dependencies and runtime versions for bugfixes: vendor: update go-systemd to v15 (). rkt stopped working when running in a service with systemd v234. This update fixes it. scripts: update rkt-builder version to 1.3.0 (). This updates the default Go runtime to 1.8, fixing . This release contains changes to the behavior of `rkt run`, `rkt status`, and `rkt fly` to make them more consistent. Two of them need particular attention: `rkt status` can now omit the pid field when non-existent. Use `--wait[-ready]` to ensure a pid will be available. the `default[-restricted]` network is not added by default when a custom network is specified with `--net`. There are also some improvements on documentation and tests working on arm64. stage0/status: fix failure when systemd never runs in stage1 (). This changes the behavior of `rkt status` when a PID is not available: instead of crashing, it will now omit the pid field. Users that need to read the PID shortly after an invocation of `rkt run` should now use the `--wait[-ready]` flag explicitly. BREAKING network: do not automatically add `default*` networks when custom ones are specified (). stage1/fly: preserve environment between run and enter (). Fly run now writes the app env file, and `fly enter` reads it. stage1/fly: make run/enter honour uid/gid/suppGids (). Refactored common functionality out of run. stage1/init/units: keep journald running while apps are shutting down (). This prevents a race when apps are writing to their stdout/err (and output is being sent to stage1's journal) while shutting down. If journald terminates before the apps finish shutting down, their output will be lost. tests: get functional tests working on arm64 (). Various arch fixups to get `make check` with a coreos stage1 working on arm64 machines. Fix `--user --group` on arm64 (). Fixes issue https://github.com/rkt/rkt/issues/3714 (`rkt run --user` fails on arm64). docs: update CLI flags in run.md (). Also added rkt-run options present in rkt 1.27.0 but not present in the run.md markdown. The entries in markdown have been sorted. tests/net: skip TestNetCustomBridge on semaphore (). Reference https://github.com/rkt/rkt/issues/3739 doc: mention external stage1s (). This was discussed on: https://github.com/rkt/rkt/pull/3645#issuecomment-296865635 rkt/pubkeys: print debug logs on discovery errors (). 
Thisreorders log-printing and error-returning when pubkeys discovery fails, in order to print useful debugging information on error. docs: correct rkt pronunciation (). `rkt` has an icon of a rocket but previously the official pronunciation was \"rock-it\" which is incompatible with the logo. This change fixes" }, { "data": "stage0: fix message formatting errors, stale forward-vars (). This minor release contains bugfixes and other improvements related to the tests and the documentation. stage1/kvm: add arm64 build (). stage0: list|status --format=json panics: RuntimeApp.Mounts.AppVolume is optional (). When it is nil, the Volume info at the Pod level (with the same name) should be used. Without this patch `rkt list --format=json` panics on a nil pointer when Apps reference Volumes from the Pod level. imagestore: Fix sql resource leaks (). When using sql queries the rows iterator needs to be closed if the entire query result is not iterated over. Failure to close the iterator results in resource leakage. networking: change the default-restricted subnet (). Previously, we were using 172.17/16, which conflicts with the default Docker networking. Change it to 172.31/16. scripts/pkg: improved detection of active mounts (). On systems which have /var/lib/rkt as a separate partition, the active mount detection in before-remove needs to not get confused by the presence of /var/lib/rkt itself as a mount. Therefore a longer path is used for active mount detection. stage1/usrfromcoreos: add new image signing sub-key EF4B4ED9 (). See coreos/init#236. scripts: skip nonexistent stage1 images when packaging (). Not all builds will generate all stage1 images. It depends on what `./configure` flags (`--with-stage1-flavors`) were used. tests: Only run race test on supported arch (). Fixes build errors like these when run on non amd64 machines: functional test: Fix manifest arch error (). The manifest contains values for the ACI arch and OS, not the go language values. Documentation updates: , , , This minor release contains bugfixes and other improvements. It also adds better support for the arm architecture to rkt, so that you can now fetch images via autodiscovery and have the correct seccomp whitelist to run them. Also notable is the new possibilty to pass extra kernel parameters to kvm, and last but not least a significant prepare/run speedup in stage0. This also introduces stricter validation on volume names, now rejecting duplicate ones. stage1: improve duplicate mount-volume detection (). Breaking change: volumes with duplicate names are now rejected. stage0/{run,prepare}: remove ondisk verification (). For backwards compatibility, specifying 'insecure-options=ondisk' will still run without error, however it will also not do anything. kvm/qemu: add extra kernel parameters (). seccomp: add arch-specific syscalls on ARM (). fetch: use proper appc os/arch labels (). tests/caps: skip if overlayfs support is missing (). build/stage1: transfer user xattr data (). stage1: include <sys/sysmacros.h> for makedev function (). Add code of conduct (). Required by CNCF. rkt list|status: app state info (i.e. exit codes) in --format=json (). integrations: add mesos (). Documentation: add container linux and tectonic as production users (). Documentation: add Gentoo to the list of distributions that have rkt (). Documentation: add some individual blog posts (). Documentation: cleanup stage1 stuff (). dist: use multi-user.target instead of default.target (). added production-users and integrations pages (). 
scripts: update rkt-builder version (). This minor release contains bugfixes and other improvements related to the KVM flavour, which is now using qemu-kvm by default. Switch default kvm flavour from lkvm to qemu (). stage1/kvm: Change RAM calculation, and increase minimum (). stage1: Ensure ptmx device usable by non-root for all flavours (). tests: fix TestNonRootReadInfo when $HOME is only accessible by current user (). glide: bump grpc to 1.0.4 (). vendor: bump docker2aci to 0.16.0 (). This release includes experimental support for attaching to a running application's input and output. It also introduces a more finely grained pull-policy" }, { "data": "rkt: add experimental support for attachable applications (). It consists of: a new `attach` subcommand a set of per-app flags to control stdin/stdout/stderr modes a stage1 `iottymux` binary for multiplexing and attaching two new templated stage1 services, `iomux` and `ttymux` run/prepare/fetch: replace --no-store and --store-only with --pull-policy (). Replaces the `--no-store` and `--store-only` flags with a singular flag `--pull-policy`. can accept one of three things, `never`, `new`, and `update`. `--no-store` has been aliased to `--pull-policy=update` `--store-only` has been aliased to `--pull-policy=never` image gc: don't remove images that currently running pods were made from (). stage1/fly: evaluate symlinks in mount targets (). lib/app: use runtime app mounts and appVolumes rather than mountpoints (). kvm/qemu: Update QEMU to v2.8.0 (). stage0/app-add: CLI args should override image ones (). lib/app: use runtime app mounts and appVolumes rather than mountpoints (). kvm/lkvm: update lkvm version to HEAD (). vendor: bump appc to v0.8.10 (). docs: () tests: remove gexpect from TestAppUserGroup (). travis: remove \"gimme.local\" script (). tests: fix when $HOME is only accessible by current user (). makelib: introduce --enable-incremental-build, enabling \"go install\" (). This release adds a lot of bugfixes around the rkt fly flavor, garbage collection, kvm, and the sandbox. The new experimental `app` subcommand now follows the semantic of CRI of not quitting prematurely if apps fail or exit. Finally docker2aci received an important update fixing issues with os/arch labels which caused issues on arm architectures, a big thanks here goes to @ybubnov for this contribution. sandbox: don't exit if an app fails (). In contrast to regular `rkt run` behavior, the sandbox now does not quit if all or single apps fail or exit. stage1: fix incorrect splitting function (). sandbox/app-add: fix mount targets with absolute symlink targets (). namefetcher: fix nil pointer dereference (). Bump appc/docker2aci library version to 0.15.0 (). This supports the conversion of images with various os/arch labels. stage1: uid shift systemd files (). stage1/kvm/lkvm: chown files and dirs on creation (). stage1/fly: record pgid and let stop fallback to it (). common/overlay: allow data directory name with colon character (). api-service: stop erroring when a pod is running (). stage1/fly: clear FD_CLOEXEC only once (). stage1: Add hostname to /etc/hosts (). gc: avoid erroring in race to deletion (). tests/rkt_stop: Wait for 'stop' command to complete (). pkg/pod: avoid nil panic for missing pods (). stage1: move more logic out of AppUnit (). tests: use appc schema instead of string templates (). stage1: kvm: Update kernel to 4.9.2 (). stage1: remount entire subcgroup r/w, instead of each knob (). tests: update AWS CI setup (). 
pkg/fileutil: helper function to get major, minor numbers of a device file (). pkg/log: correctly handle var-arg printf params (). Documentation/stop: describe --uuid-file option (). This is a stabilization release which includes better support for environments without systemd, improvements to GC behavior in complex scenarios, and several additional fixes. rkt/cat-manifest: add support for --uuid-file (). stage1: fallback if systemd cgroup doesn't exist (). vendor: bump gocapability (). This change renames `syspsacct` to `syspacct`. stage0/app: pass debug flag to entrypoints (). gc: fix cleaning mounts and files (). This improves GC behavior in case of busy mounts and other complex scenarios. mount: ensure empty volume paths exist for copy-up (). rkt stop/rm: a pod must be closed after PodFromUUIDString() (). stage1/kvm: add a dash in kernel LOCALVERSION (). stage1/kvm: Improve QEMU Makefile rules (). pkg/pod: use IncludeMostDirs bitmask instead of constructing it (). pkg/pod: add WaitReady, dry Sandbox methods" }, { "data": "vendor: bump gexpect to 0.1.1 (). common: fix 'the the' duplication in comment (). docs: multiple updates (, , , ). This release includes bugfixes for the experimental CRI support, more stable integration tests, and some other interesting changes: The `default-restricted` network changed from 172.16.28.0/24 to 172.17.0.0/26. The detailed roadmap for OCI support has been finalized. Change the subnet for the default-restricted network (), (). Prepare for writable /proc/sys, and /sys (). Documentation/proposals: add OCI Image Format roadmap (). stage1: app add, status didn't work with empty vols (). stage1: properly run defer'd umounts in app add (). cri: correct 'created' timestamp (). fly: ensure the target bin directory exists before building (). rkt: misc systemd-related fixes (). pkg/mountinfo: move mountinfo parser to its own package (). stage1: persist runtime parameters (), (). stage1: signal supervisor readiness (), (). sandbox: add missing flagDNSDomain and flagHostsEntries parameters (). pkg/tar: fix variable name in error (). tests: fix TestExport for the KVM+overlay case (). tests: fix some potential gexpect hangs (). tests: add smoke test for app sandbox (). tests: tentative fixes for sporadic host and kvm failures (). rkt: remove empty TODO (). Documentation updates: , (), (). This release contains additional bug fixes for the new experimental `app` subcommand, following the path towards the Container Runtime Interface (CRI). It also adds first step towards OCI by introducing an internal concept called \"distribution points\", which will allow rkt to recognize multiple image formats internally. Finally the rkt fly flavor gained support for `rkt enter`. stage1/fly: Add a working `rkt enter` implementation (). tests/build-and-run-test.sh: fix systemd revision parameter (). namefetcher: Use ETag in fetchVerifiedURL() (). rkt/run: validates pod manifest to make sure it contains at least one app (). rkt/app: multiple bugfixes (). glide: deduplicate cni entries and update go-systemd (). stage0: improve list --format behavior and flags (). pkg/pod: flatten the pod state if-ladders (). tests: adjust security tests for systemd v232 (). image: export `ImageListEntry` type for image list (). glide: bump gopsutil to v2.16.10 (). stage1: update coreos base to alpha 1235.0.0 (). rkt: Implement distribution points (). This is the implementation of the distribution concept proposed in . build: add --with-stage1-systemd-revision option for src build (). 
remove isReallyNil() (). This is cleanup PR, removing some reflection based code. vendor: update appc/spec to 0.8.9 (). vendor: Remove direct k8s dependency (). Documentation updates: , , , , . This release contains multiple changes to rkt core, bringing it more in line with the new Container Runtime Interface (CRI) from Kubernetes. A new experimental `app` subcommand has been introduced, which allows creating a \"pod sandbox\" and dynamically mutating it at runtime. This feature is not yet completely stabilized, and is currently gated behind an experimental flag. rkt: experimental support for pod sandbox (). This PR introduces an experimental `app` subcommand and many additional app-level options. rkt/image: align image selection behavior for the rm subcommand (). stage1/init: leave privileged pods without stage2 mount-ns (). stage0/image: list images output in JSON format (). stage0/arch: initial support for ppc64le platform (). gc: make sure `CNI_PATH` is same for gc and init (). gc: clean up some GC leaks (). stage0: minor wording fixes (). setup-data-dir.sh: fallback to the `mkdir/chmod`s if the rkt.conf doesn't exist (). scripts: add gpg to Debian dependencies (). kvm: fix for breaking change in Debian Sid GCC default options (). image/list: bring back field filtering in plaintext mode (). cgroup/v1: introduce mount flags to mountFsRO" }, { "data": "kvm: update QEMU version to 2.7.0 (). kvm: bump kernel version to 4.8.6, updated config (). vendor: introduce kr/pretty and bump go-systemd (). vendor: update docker2aci to 0.14.0 (). tests: add the --debug option to more tests (). scripts/build-rir: bump rkt-builder version to 1.1.1 (). Documentation updates: , , . This minor release contains bugfixes, UX enhancements, and other improvements. rkt: gate diagnostic output behind `--debug` (). rkt: Change exit codes to 254 (). stage1/kvm: correctly bind-mount read-only volumes (). stage0/cas: apply xattr attributes (). scripts/install-rkt: add iptables dependency (). stage0/image: set proxy if InsecureSkipVerify is set (). vendor: update docker2aci to 0.13.0 (). This fixes multiple fetching and conversion bugs, including two security issues. scripts: update glide vendor script (). vendor: update appc/spec to v0.8.8 (). stage1: update to CoreOS 1192.0.0 (and update sanity checks) (). cgroup: introduce proper cgroup/v1, cgroup/v2 packages (). Documentation updates: (), (), (). This is a minor release packaging rkt-api systemd service units, and fixing a bug caused by overly long lines in generated stage1 unit files. dist: Add systemd rkt-api service and socket (). dist: package rkt-api unit files (). stage1: break down overlong property lines (). stage0: fix typo and some docstring style (). stage0: Create an mtab symlink if not present (). stage1: use systemd protection for kernel tunables (). Documentation updates: (, , , , , ) This release contains an important bugfix for the stage1-host flavor, as well as initial internal support for cgroup2 and pod sandboxes as specified by kubernetes CRI (Container Runtime Interface). stage1/host: fix systemd-nspawn args ordering (). Fixes https://github.com/rkt/rkt/issues/3215. rkt: support for unified cgroups (cgroup2) (). This implements support for cgroups v2 along support for legacy version. cri: initial implementation of stage1 changes (). This PR pulls the stage1-based changes from the CRI branch back into master, leaving out the changes in stage0 (new app subcommands). doc/using-rkt-with-systemd: fix the go app example (). 
rkt: refactor app-level flags handling (). This is in preparation for https://github.com/rkt/rkt/pull/3205 docs/distributions: rearrange, add centos (). rkt: Correct typos listed by the tool misspell (). This relase brings some expanded DNS configuration options, beta support for QEMU, recursive volume mounts, and improved sd_notify support. DNS configuration improvements (): Respect DNS results from CNI Add --dns=host mode to bind-mount the host's /etc/resolv.conf Add --dns=none mode to ignore CNI DNS Add --hosts-entry (IP=HOSTNAME) to tweak the pod's /etc/hosts Add --hosts-entry=host to bind-mount the host's /etc/hosts Introduce QEMU support as an alternative KVM hypervisor () add support for recursive volume/mounts () stage1: allow sd_notify from the app in the container to the host (). rkt-monitor: bunch of improvements () makefile/kvm: add dependency for copied files () store: refactor GetRemote (). build,stage1: include systemd dir when checking libs () tests: volumes: add missing test `volumeMountTestCasesNonRecursive` () kvm/pod: disable insecure-options=paths for kvm flavor () stage0: don't copy image annotations to pod manifest RuntimeApp annotations () stage1: shutdown.service: don't use /dev/console () build: build simple .deb and .rpm packages (). Add a simple script to build .deb and .rpm packages. This is not a substitute for a proper distro-maintained package. Documentation updates: () () () () () () () proposals/app-level-api: add rkt app sandbox subcommand (). This adds a new subcommand `app init` to create an initial empty pod. This release updates the coreos and kvm flavors, bringing in a newer stable systemd (v231). Several fixes and cgroups-related changes landed in `api-service`, and better heuristics have been introduced to avoid using overlays in non-supported" }, { "data": "Finally, `run-prepared` now honors options for insecure/privileged pods too. stage1: update to CoreOS 1151.0.0 and systemd v231 (). common: fall back to non-overlay with ftype=0 (). rkt: honor insecure-options in run-prepared (). stage0: fix golint warnings (). rkt: avoid possible panic in api-server (). rkt/run: allow --set-env-file files with comments (). scripts/install-rkt: add wget as dependency (). install-rkt.sh: scripts: Fix missing files in .deb when using install-rkt.sh (). tests: check for run-prepared with insecure options (). seccomp/docker: update docker whitelist to include mlock (). This updates the `@docker/default-whitelist` to include mlock-related syscalls (mlock, mlock2, mlockall). build: add PowerPC (). scripts: install-rkt.sh: fail install-pak on errors (). When install-pak (called from install-rkt.sh) fails at some point abort packaging. api_service: Rework cgroup detection (). Use the `subcgroup` file hint provided by some stage1s rather than machined registration. Documentation/devel: add make images target (). This introduces the possibility to generate graphivz based PNG images using a new `images` make target. vendor: update appc/spec to 0.8.7 (). stage1/kvm: avoid writing misleading subcgroup (). vendor: update go-systemd to v12 (). scripts: bump coreos.com/rkt/builder image version (). This bumps rkt-builder version to 1.0.2, in order to work with seccomp filtering. export: test export for multi-app pods (). Documentation updates: (, , , , , , , , , ) This release introduces support for exporting single applications out of multi-app pods. Moreover, it adds additional support to control device manipulation inside pods. 
Finally all runtime security features can now be optionally disabled at the pod level via new insecure options. This version also contains multiple bugfixes and supports Go 1.7. export: name flag for exporting multi-app pods (). stage1: limit device node creation/reading/writing with DevicePolicy= and DeviceAllow= (, ). rkt: implements --insecure-options={capabilities,paths,seccomp,run-all} (). kvm: use a properly formatted comment for iptables chains (). rkt was using the chain name as comment, which could lead to confusion. pkg/label: supply mcsdir as function argument to InitLabels() (). api_service: improve machined call error output (). general: fix old appc/spec version in various files (). rkt/pubkey: use custom http client including timeout (). dist: remove quotes from rkt-api.service ExecStart (). build: multiple fixes (, , ). configure: disable tests on host flavor with systemd <227 (). travis: add go 1.7, bump go 1.5/1.6 (). api_service: Add lru cache to cache image info (). scripts: add curl as build dependency (). vendor: use appc/spec 0.8.6 and k8s.io/kubernetes v1.3.0 (). common: use fileutil.IsExecutable() (). build: Stop printing irrelevant invalidation messages (). build: Make generating clean files simpler to do (). Documentation: misc changes (, , , , , , , , , , ). functional tests: misc fixes (). This release introduces support for seccomp filtering via two new seccomp isolators. It also gives a boost to api-service performance by introducing manifest caching. Finally it fixes several regressions related to Docker images handling. cli: rename `--cap-retain` and `--cap-remove` to `--caps-*` (). stage1: apply seccomp isolators (). This introduces support for appc seccomp isolators. scripts: add /etc/rkt owned by group rkt-admin in setup-data-dir.sh (). rkt: add `--caps-retain` and `--caps-remove` to prepare (). store: allow users in the rkt group to delete images (). api_service: cache pod manifest (). Manifest caching considerably improves api-service performances. store: tell the user to run as root on db update (). stage1: disabling cgroup namespace in systemd-nspawn (). For more information see . fly: copy rkt-resolv.conf in the app (). store: decouple aci store and treestore implementations (). store: record ACI fetching information (). stage1/init: fix writing of /etc/machine-id (). rkt-monitor: multiple fixes (," }, { "data": "rkt: don't errwrap cli_apps errors (). pkg/tar/chroot: avoid errwrap in function called by multicall (). networking: apply CNI args to the default networks as well (). trust: provide InsecureSkipTLSCheck to pubkey manager (). api_service: update grpc version (). fetcher: httpcaching fixes (). build,stage1/init: set interpBin at build time for src flavor (). common: introduce RemoveEmptyLines() (). glide: update docker2aci to v0.12.3 (). This fixes multiple bugs in layers ordering for Docker images. glide: update go-systemd to v11 (). This fixes a buggy corner-case in journal seeking (implicit seek to head). docs: document capabilities overriding (, ). issue template: add '\\n' to the end of environment output (). functional tests: multiple fixes (, , ). This release sets the ground for the new upcoming KVM qemu flavor. It adds support for exporting a pod to an ACI including all modifications. The rkt API service now also supports systemd socket activation. Finally we have diagnostics back, helping users to find out why their app failed to execute. KVM: Hypervisor support for KVM flavor focusing on qemu (). 
This provides a generic mechanism to use different kvm hypervisors (such as lkvm, qemu-kvm). rkt: add command to export a pod to an aci (). Adds a new `export` command to rkt which generates an ACI from a pod; saving any changes made to the pod. rkt/api: detect when run as a `systemd.socket(5)` service (). This allows rkt to run as a systemd socket-based unit. rkt/stop: implement `--uuid-file` (). So the user can use the value saved on rkt run with `--uuid-file-save`. scripts/glide-update: ensure running from $GOPATH (). glide is confused when it's not running with the rkt repository inside $GOPATH. store: fix missing shared storelock acquisition on NewStore (). store,rkt: fix fd leaks (). Close db lock on store close. If we don't do it, there's a fd leak everytime we open a new Store, even if it was closed. stage1/enterexec: remove trailing `\\n` in environment variables (). Loading environment retained the new line character (`\\n`), this produced an incorrect evaluation of the environment variables. stage1/gc: skip cleaning our own cgroup (). api_service/log: fix file descriptor leak in GetLogs() (). protobuf: fix protoc-gen-go build with vendoring (). build: fix x86 builds (). This PR fixes a minor issue which leads to x86 builds failing. functional tests: add some more volume/mount tests (). stage1/init: link pod's journal in kvm flavor (). In nspawn flavors, nspawn creates a symlink from `/var/log/journal/${machine-id}` to the pod's journal directory. In kvm we need to do the link ourselves. build: Build system fixes (). This should fix the `expr: syntax error` and useless rebuilds of network plugins. stage1: diagnostic functionality for rkt run (). If the app exits with `ExecMainStatus == 203`, the app's reaper runs the diagnostic tool and prints the output on stdout. systemd sets `ExecMainstatus` to EXIT_EXEC (203) when execve() fails. build: add support for more architectures at configure time (). stage1: update coreos image to 1097.0.0 (). This is needed for a recent enough version of libseccomp (2.3.0), with support for new syscalls (eg. getrandom). api: By adding labels to the image itself, we don't need to pass the manifest to filter function (). api: Add labels to pod and image type. api: optionally build systemd-journal support (). This introduces a 'sdjournal' tag and corresponding stubs in api_service, turning libsystemd headers into a soft-dependency. store: simplify db locking and functions (). Instead of having a file lock to handle inter process locking and a" }, { "data": "to handle locking between multiple goroutines, just create, lock and close a new file lock at every db.Do function. stage1/enterexec: Add entry to ASSCBEXTRAHEADERS (). Added entry to ASSCBEXTRAHEADERS for better change tracking. build: use rkt-builder ACI (). Add hidden 'image fetch' next to the existing 'fetch' option (). stage1: prepare-app: don't mount /sys if path already used (). When users mount /sys or a sub-directory of /sys as a volume, prepare-app should not mount /sys: that would mask the volume provided by users. build,stage1/init: set interpBin at build time to fix other architecture builds (e.g. x86) (). functional tests: re-purpose aws.sh for generating AMIs (). rkt: Add `--cpuprofile` `--memprofile` for profiling rkt (). Adds two hidden global flags and documentation to enable profiling rkt. functional test: check PATH variable for trailer `\\n` character (). functional tests: disable TestVolumeSysfs on kvm (). Documentation updates () glide: update docker2aci to v0.12.1 (). 
Includes support for the docker image format v2.2 and OCI image format and allows fetching via digest. This is a minor bug fix release. rkt/run: handle malformed environment files () stage1/enterexec: remove trailing `\\n` in environment variables () This release introduces a number of important features and improvements: ARM64 support A new subcommand `rkt stop` to gracefully stop running pods native Go vendoring with Glide rkt is now packaged for openSUSE Tumbleweed and Leap Add ARM64 support (). This enables ARM64 cross-compliation, fly, and stage1-coreos. Replace Godep with Glide, introduce native Go vendoring (). rkt: rkt stop (). Cleanly stops a running pod. For systemd-nspawn, sends a SIGTERM. For kvm, executes `systemctl halt`. stage1/fly: respect runtimeApp App's MountPoints (). Fixes #2846. run: fix sandbox-side metadata service to comply to appc v0.8.1 (). Fixes #2621. build directory layout change (): The rkt binary and stage1 image files have been moved from the 'bin' sub-directory to the 'target/bin' sub-directory. networking/kvm: add flannel default gateway parsing (). stage1/enterexec: environment file with '\\n' as separator (systemd style) (). pkg/tar: ignore global extended headers (). pkg/tar: remove errwrap (). tests: fix abuses of appc types.Isolator (). common: remove unused GetImageIDs() (). common/cgroup: add mountFsRO() helper function (). Documentation updates (, , , , , , ) glide: bump ql to v1.0.4 (). It fixes an occassional panic when doing GC. glide: bump gopsutils to 2.1 (). To include https://github.com/shirou/gopsutil/pull/194 (this adds ARM aarch64 support) vendor: update appc/spec to 0.8.5 (). This is a minor bug fix release. Godeps: update go-systemd (). go-systemd v10 fixes a panic-inducing bug due to returning incorrect Read() length values. stage1/fly: use 0755 to create mountpaths (). This will allow any user to list the content directories. It does not have any effect on the permissions on the mounted files itself. This release focuses on bug fixes and developer tooling and UX improvements. rkt/run: added --set-env-file switch and priorities for environments (). --set-env-file gets an environment variables file path in the format \"VAR=VALUE\\n...\". run: add --cap-retain and --cap-remove (). store: print more information on rm as non-root (). Documentation/vagrant: use rkt binary for getting started (). docs: New file in documentation - instruction for new developers in rkt (). stage0/trust: change error message if prefix/root flag missing (). rkt/uuid: fix match when uuid is an empty string (). rkt/api_service: fix fly pods (). api/clientexample: fix panic if pod has no apps (). Fixes the concern expressed in https://github.com/rkt/rkt/pull/2763#discussionr66409260 api_service: wait until a pod regs with machined (). stage1: update coreos image to 1068.0.0" }, { "data": "KVM: Update LKVM patch to mount with mmap mode (). stage1: always write /etc/machine-id (). Prepare rkt for systemd-v230 in stage1. stage1/prepare-app: always adjust /etc/hostname (). This release focuses on stabilizing the API service, fixing multiple issues in the logging subsystem. api: GetLogs: improve client example with 'Follow' (). kvm: add proxy arp support to macvtap (). stage0/config: add a CLI flag to pretty print json (). stage1: make /proc/bus/ read-only (). api: GetLogs: use the correct type in LogsStreamWriter (). api: fix service panic on incomplete pods (). api: Fix the GetLogs() when appname is given (). pkg/selinux: various fixes (). 
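As a quick illustration of the `--set-env-file` switch described in the notes above, here is a minimal sketch; the file path, variable names and image name are made up for the example, and the file simply follows the documented `VAR=VALUE` format:

```
$ cat /tmp/app-env
LOG_LEVEL=debug
GREETING=hello
$ sudo rkt run --set-env-file=/tmp/app-env example.com/myapp
```

How values from such a file combine with the environment defined in the image manifest follows the environment priorities mentioned in the same release notes.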
pkg/fileutil: don't remove the cleanSrc if it equals '.' (). stage0: remove superfluous error verbs (). Godeps: bump go-systemd (). Fixes a panic on the api-service when calling GetLogs(). Documentation updates (, , , , , ) Test improvements (). This release introduces some new security features, including a \"no-new-privileges\" isolator and initial (partial) restrictions on /proc and /sys access. Cgroups handling has also been improved with regards to setup and cleaning. Many bugfixes and new documentation are included too. stage1: implement no-new-privs linux isolator (). stage0: disable OverlayFS by default when working on ZFS (). stage1: (partially) restrict access to procfs and sysfs paths (). stage1: clean up pod cgroups on GC (). stage1/prepare-app: don't mount /sys/fs/cgroup in stage2 (). stage0: complain and abort on conflicting CLI flags (). stage1: update CoreOS image signing key (). api_service: Implement GetLogs RPC request (). networking: update to CNI v0.3.0 (). api: fix image size reporting (). build: fix build failures on manpages/bash-completion target due to missing GOPATH (). dist: fix \"other\" permissions so rkt list can work without root/rkt-admin (). kvm: fix logging network plugin type (). kvm: transform flannel network to allow teardown (). rkt: fix panic on rm a non-existing pod with uuid-file (). stage1/init: work around `cgroup/SCM_CREDENTIALS` race (). gc: mount stage1 on GC (). stage1: fix network files leak on GC (). deps: remove unused dependencies (). deps: appc/spec, k8s, protobuf updates (). deps: use tagged release of github.com/shirou/gopsutil (). deps: bump docker2aci to v0.11.1 (). Documentation updates (, , , , , , ). Test improvements (, , , , , , , , , , ). This release focuses on security enhancements. It provides additional isolators, creating a new mount namespace per app. Also a new version of CoreOS 1032.0.0 with systemd v229 is being used in stage1. stage1: implement read-only rootfs (). Using the Pod manifest readOnlyRootFS option mounts the rootfs of the app as read-only using systemd-exec unit option ReadOnlyDirectories, see . stage1: capabilities: implement both remain set and remove set (). It follows the , as modified by . stage1/init: create a new mount ns for each app (). Up to this point, you could escape the app's chroot easily by using a simple program downloaded from the internet . To avoid this, we now create a new mount namespace per each app. api: Return the pods even when we failed getting information about them (). stage1/usrfromcoreos: use CoreOS 1032.0.0 with systemd v229 (). kvm: fix flannel network info (). It wasn't saving the network information on disk. stage1: Machine name wasn't being populated with the full UUID (). rkt: Some simple arg doc string fixes (). Remove some unnecessary indefinite articles from the start of argument doc strings and fixes the arg doc string for run-prepared's --interactive flag. stage1: Fix segfault in enterexec" }, { "data": "This happened if rkt enter was executed without the TERM environment variable set. net: fix port forwarding behavior with custom CNI ipMasq'ed networks and allow different hostPort:podPort combinations (). stage0: check and create /etc (). Checks '/etc' before writing to '/etc/rkt-resolv.conf' and creates it with default permissions if it doesn't exist. godep: update cni to v0.2.3 (). godep: update appc/spec to v0.8.1 (, ). dist: Update tmpfiles to create /etc/rkt (). 
By creating this directory, users can run `rkt trust` without being root, if the user is in the rkt group. Invoke gofmt with simplify-code flag (). Enables code simplification checks of gofmt. Implement composable uid/gid generators (). This cleans up the code a bit and implements uid/gid functionality for rkt fly. stage1: download CoreOS over HTTPS (). Documentation updates (, , , , , , ). Test improvements (, , ). This release is a minor bug fix release. rkt: fix bug where rkt errored out if the default data directory didn't exist . kvm: fix docker volume semantics (). When a Docker image exposes a mount point that is not mounted by a host volume, Docker volume semantics expect the files in the directory to be available to the application. This was not working properly in the kvm flavor and it's fixed now. kvm: fix net long names (). Handle network names that are longer than the maximum allowed by iptables in the kvm flavor. minor tests and clean-ups (). This release switches to pure systemd for running apps within a pod. This lays the foundation to implement enhanced isolation capabilities. For example, starting with v1.5.0, apps are started with more restricted capabilities. User namespace support and the KVM stage1 are not experimental anymore. Resource usage can be benchmarked using the new rkt-monitor tool. stage1: replace appexec with pure systemd (). Replace functionality implemented in appexec with equivalent systemd options. This allows restricting the capabilities granted to apps in a pod and makes enabling other security features (per-app mount namespaces, seccomp filters...) easier. stage1: restrict capabilities granted to apps (). Apps in a pod receive now a . rkt/image: render images on fetch (). On systems with overlay fs support, rkt was delaying rendering images to the tree store until they were about to run for the first time which caused that first run to be slow for big images. When fetching as root, render the images right away so the first run is faster. kvm: fix mounts regression (). Cause - AppRootfsPath called with local \"root\" value was adding stage1/rootfs twice. After this change this is made properly. rkt/image: strip \"Authorization\" on redirects to a different host (). We now don't pass the \"Authorization\" header if the redirect goes to a different host, it can leak sensitive information to unexpected third parties. stage1/init: interpret the string \"root\" as UID/GID 0 (). This is a special case and it should work even if the image doesn't have /etc/passwd or /etc/group. added benchmarks folder, benchmarks for v1.4.0 (). Added the `Documentation/benchmarks` folder which includes a README that describes how rkt-monitor works and how to use it, and a file detailing the results of running rkt-monitor on each current workload with rkt v1.4.0. minor documentation fixes (, , ). kvm: enable functional tests for kvm (). This includes initial support for running functional tests on the `kvm` flavor. benchmarks: added rkt-monitor benchmarks (). This includes the code for a golang binary that can start rkt and watch its resource usage and bash scripts for generating a handful of test" }, { "data": "scripts: generate a Debian Sid ACI instead of using the Docker hub image (). This is the first step to having an official release builder. pkg/sys: add SYSSYNCFS definition for ppc64/ppc64le (). Added missing SYSSYNCFS definition for ppc64 and ppc64le, fixing build failures on those architectures. userns: not experimental anymore (). 
Although it requires doing a recursive chown for each app, user namespaces work fine and shouldn't be marked as experimental. kvm: not experimental anymore (). The kvm flavor was initially introduced in rkt v0.8.0, no reason to mark it as experimental. This release includes a number of new features and bugfixes like a new config subcommand, man page, and bash completion generation during build time. config: add config subcommand (). This new subcommand prints the current rkt configuration. It can be used to get i.e. authentication credentials. See rkt's documentation. run: add `--user`/`--group` app flags to `rkt run` and `rkt prepare` allowing to override the user and group specified in the image manifest (). gc: Add flag 'mark-only' to mark garbage pods without deleting them (, ). This new flag moves exited/aborted pods to the exited-garbage/garbage directory but does not delete them. A third party application can use `rkt gc --mark-only=true` to mark exited pods as garbage without deleting them. kvm: Add support for app capabilities limitation (). By default kvm flavor has got enabled every capability inside pod. This patch adds support for a restricted set of capabilities inside a kvm flavor of rkt. stage1/init: return exit code 1 on error (). On error, stage1/init was returning a non-zero value between 1 and 7. This change makes it return status code 1 only. api: Add 'CreatedAt', 'StartedAt' in pod's info returned by api service. (). Minor documentation fixes (, , ). functional tests: Add new test with systemd-proxyd (). Adds a new test and documentation how to use systemd-proxyd with rkt pods. kvm: refactor volumes support (). This allows users to share regular files as volumes in addition to directories. kvm: fix rkt status (). Fixes a regression bug were `rkt status` was no longer reporting the pid of the pod when using the kvm flavor. Build actool for the build architecture (). Fixes a cross compilation issue with acbuild. rkt: calculate real dataDir path (). Fixes garbage collection when the data directory specified by `--dir` contains a symlink component. stage1/init: fix docker volume semantics (). Fixes a bug in docker volume semantics when rkt runs with the option `--pod-manifest`. When a Docker image exposes a mount point that is not mounted by a host volume, Docker volume semantics expect the files in the directory to be available to the application. This was partially fixed in rkt 1.3.0 via but the bug remained when rkt runs with the option `--pod-manifest`. This is now fully fixed. rkt/image: check that discovery labels match manifest labels (). store: fix multi process with multi goroutines race on db (). This was a bug when multiple `rkt fetch` commands were executed concurrently. kvm: fix pid vs ppid usage (). Fixes a bug in `rkt enter` in the kvm flavor causing an infinite loop. kvm: Fix connectivity issue in macvtap networks caused by macvlan NICs having incorrect names (). tests: TestRktListCreatedStarted: fix timing issue causing the test to fail on slow machines (). rkt/image: remove redundant quotes in an error message (). prepare: Support 'ondisk' verification skip as documented by" }, { "data": "Prior to this commit, rkt prepare would check the ondisk image even if the `--insecure-options=ondisk` flag was provided. This corrects that. tests: skip TestSocketProxyd when systemd-socket-proxyd is not installed (). tests: TestDockerVolumeSemantics: more tests with symlinks (). rkt: Improve build shell script used in (). protobuf: generate code using a script (). 
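To make the `--user`/`--group` overrides and the `gc --mark-only` flag from the notes above more concrete, a minimal sketch is shown here; the numeric IDs and the image name are illustrative only:

```
$ sudo rkt run --user=1000 --group=1000 example.com/myapp
$ sudo rkt gc --mark-only=true
```

The second command only moves exited or aborted pods to the garbage directories without deleting them, as described above, so a third-party tool can inspect them before they are removed.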
Generate manpages (). This adds support for generating rkt man pages using `make manpages` and the bash completion file using `make bash-completion`, see the note for packagers below. tests/aws.sh: add test for Fedora 24 (). Files generated from sources are no longer checked-in the git repository. Instead, packagers should build them: Bash completion file, generated by `make bash-completion` Man pages, generated by `make manpages` This release includes a number of new features and bugfixes like the long-awaited propagation of apps' exit status. Propagate exit status from apps inside the pod to rkt (). Previously, if an app exited with a non-zero exit status, rkt's exit status would still be 0. Now, if an app fails, its exit status will be propagated to the outside. While this was partially implemented in some stage1 flavors since rkt v1.1.0, it now works in the default coreos flavor. Check signatures for stage1 images by default, especially useful when stage1 images are downloaded from the Internet (). This doesn't affect the following cases: The stage1 image is already in the store The stage1 image is in the default directory configured at build time The stage1 image is the default one and it is in the same directory as the rkt binary Allow downloading of insecure public keys with the `pubkey` insecure option (). Implement Docker volume semantics (). Docker volumes are initialized with the files in the image if they exist, unless a host directory is mounted there. Implement that behavior in rkt when it runs a Docker converted image. Return the cgroup when getting information about running pods and add a new cgroup filter (). Avoid configuring more CPUs than the host has in the kvm flavor (). Fix a bug where the proxy configuration wasn't forwarded to docker2aci (). This release drops support for go1.4. This release fixes a couple of bugs we missed in 1.2.0. Do not error out if `/dev/ptmx` or `/dev/log` exist (). Vendor a release of go-systemd instead of current master (). This release is an incremental release with numerous bug fixes. Add `--hostname` option to rkt run/run-prepared (). This option allows setting the pod host name. Fix deadlock while exiting a lkvm rkt pod (). SELinux fixes preparating rkt to work on Fedora with SELinux enabled ( and ). Fix bug that occurs for some types of on-disk image corruption, making it impossible for the user run or garbage collect them (). Fix authentication issue when fetching from a private quay.io repository (). Allow concurrent image fetching (). Fix issue mounting volumes on images if the target path includes an absolute symlink (). Clean up dangling symlinks in `/var/log/journal` on garbage collection if running on systemd hosts (). The stage1 command line interface is versioned now. See the for more information. This release is the first incremental release since 1.0. It includes bugfixes and some UX improvements. Add support for non-numerical UID/GID as specified in the appc spec (). rkt can now start apps as the user and group specified in the with three different possible formats: a numeric UID/GID, a username and group name referring to the ACI's /etc/passwd and /etc/group, or a file path in the ACI whose owner will determine the" }, { "data": "When an application terminates with a non-zero exit status, `rkt run` should return that exit status (). This is now fixed in the with but not yet in the shipped coreos flavor. Use exit status 2 to report usage errors (). Add support for tuning pod's network via the (). 
For example, this allows increasing the size of the listen queue for accepting new TCP connections (`net.core.somaxconn`) in the rkt pod. Keep $TERM from the host when entering a pod (). This fixes the command \"clear\" which previously was not working. Socket activation was not working if the port on the host is different from the app port as set in the image manifest (). Fix an authentication failure when fetching images from private repositories in the official Docker registry (). Set /etc/hostname in kvm pods (). This marks the first release of rkt recommended for use in production. The command-line UX and on-disk format are considered stable and safe to develop against. Any changes to these interfaces will be backwards compatible and subject to formal deprecation. The API is not yet completely stabilized, but is functional and suitable for use by early adopters. Add pod creation and start times to `rkt list` and `rkt status` (). See and documentation. The DNS configuration can now be passed to the pod via the command line (). See documentation. Errors are now structured, allowing for better control of the output (). See for how a developer should use it. All output now uses the new log package in `pkg/log` to provide a more clean and consistent output format and more helpful debug output (). Added configuration for stage1 image. Users can drop a configuration file to `/etc/rkt/stage1.d` (or to `stage1.d` in the user configuration directory) to tell rkt to use a different stage1 image name, version and location instead of build-time defaults (). Replaced the `--stage1-image` flag with a new set of flags. `--stage1-url`, `--stage-path`, `--stage1-name` do the usual fetching from remote if the image does not exist in the store. `--stage1-hash` takes the stage1 image directly from the store. `--stage1-from-dir` works together with the default stage1 images directory and is described in the next point (). Added default stage1 images directory. User can use the newly added `--stage1-from-dir` parameter to avoid typing the full path. `--stage1-from-dir` behaves like `--stage1-path` (). Removed the deprecated `--insecure-skip-verify` flag (). Fetched keys are no longer automatically trusted by default, unless `--trust-keys-from-https` is used. Additionally, newly fetched keys have to be explicitly trusted with `rkt trust` if a previous key was trusted for the same image prefix (). Use NAT loopback to make ports forwarded in pods accessible from localhost (). Show a clearer error message when unprivileged users execute commands that require root privileges (). Add a rkt tmpfiles configuration file to make the creation of the rkt data directory on first boot easier (). Remove `rkt install` command. It was replaced with a `setup-data-dir.sh` script (. Fix regression when authenticating to v2 Docker registries (). Don't link to libacl, but dlopen it (). This means that rkt will not crash if libacl is not present on the host, but it will just print a warning. Only suppress diagnostic messages, not error messages in stage1 (). Trusted Platform Module logging (TPM) is now enabled by default (). This ensures that rkt benefits from security features by default. See rkt's documentation. Added long descriptions to all rkt commands (). 
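For the stage1 selection flags listed above, a hedged sketch of how they might be invoked follows; the file and directory names depend on how rkt was built and packaged, so treat the paths below as placeholders:

```
$ sudo rkt run --stage1-path=/usr/lib/rkt/stage1-images/stage1-coreos.aci example.com/myapp
$ sudo rkt run --stage1-from-dir=stage1-fly.aci example.com/myapp
```

The first form takes an explicit path to a stage1 ACI, while the second resolves the file name against the default stage1 images directory configured at build time, as explained above.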
The `--stage1-image` flag was" }, { "data": "Scripts using it should be updated to use one of `--stage1-url`, `--stage1-path`, `--stage1-name`, `--stage1-hash` or `--stage1-from-dir` All uses of the deprecated `--insecure-skip-verify` flag should be replaced with the `--insecure-options` flag which allows user to selectively disable security features. The `rkt install` command was removed in favor of the `dist/scripts/setup-data-dir.sh` script. With this release, `rkt` RPM/dpkg packages should have the following updates: Pass `--enable-tpm=no` to configure script, if `rkt` should not use TPM. Use the `--with-default-stage1-images-directory` configure flag, if the default is not acceptable and install the built stage1 images there. Distributions using systemd: install the new file `dist/init/systemd/tmpfiles.d/rkt.conf` in `/usr/lib/tmpfiles.d/rkt.conf` and then run `systemd-tmpfiles --create rkt.conf`. This can replace running `rkt install` to set the correct ownership and permissions. Explicitly allow http connections via a new 'http' option to `--insecure-options` (). Any data and credentials will be sent in the clear. When using `bash`, `rkt` commands can be auto-completed (). The executables given on the command line via the `--exec` parameters don't need to be absolute paths anymore (). This change reflects an update in the appc spec since . See rkt's documentation. Add a `--full` flag to rkt fetch so it returns full hash of the image (). There is a new global flag for specifying the user configuration directory, `--user-config`. It overrides whatever is configured in system and local configuration directories. It can be useful for specifying different credentials for fetching images without putting them in a globally visible directory like `/etc/rkt`. See rkt's documentation (). As a temporary fix, search for network plugins in the local configuration directory too (). Pass the environment defined in the image manifest to the application when using the fly stage1 image (). Fix vagrant rkt build (). Switch to using unrewritten imports, this will allow rkt packages to be cleanly vendored by other projects (). Allow filtering images by name (). Fix bug where the wrong image signature was checked when using dependencies (). A new script to run test on AWS makes it easier to test under several distributions: CentOS, Debian, Fedora, Ubuntu (). The functional tests now skip user namespace tests when user namespaces do not work (). Check that rkt is not built with go 1.5.{0,1,2} to make sure it's not vulnerable to CVE-2015-8618 (). Cleanups in the kvm stage1 (). Document stage1 filesystem layout for developers (). With this release, `rkt` RPM/dpkg packages should have the following updates: Install the new file `dist/bashcompletion/rkt.bash` in `/etc/bashcompletion.d/`. rkt v0.15.0 is an incremental release with UX improvements, bug fixes, API service enhancements and new support for Go 1.5. Images can now be deleted from the store by both ID and name (). See rkt's documentation. The journals of rkt pods can now be accessed by members of the Unix group rkt (). See rkt's documentation. Mention (). Document and add a explaining how the api service can be used to integrate rkt with other programs (). Programs using rkt's API service are now provided with the size of the images stored in rkt's store (). Programs using rkt's API service are now provided with any annotations found in the and (). 
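As a rough sketch of the `http` insecure option, the `--full` fetch flag and the `--user-config` directory mentioned above — the URL, image name and directory here are hypothetical, not taken from the release notes:

```
$ rkt --insecure-options=http fetch --full http://example.com/hello-0.0.1-linux-amd64.aci
$ rkt --user-config=$HOME/.config/rkt fetch example.com/hello
```

With the `http` option enabled, any data and credentials are sent in the clear, as the notes above point out.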
Fix a panic in the API service by making the store database thread-safe () and by refactoring the API service functions to get the pod state (). Add support for building rkt with Go 1.5, which is now the preferred version. rkt can still be built with Go 1.4 as best effort (). As part of the move to Go" }, { "data": "rkt now has a godep-save script to support Go 1.5 (). Continuous Integration on Travis now builds with both Go 1.4.2 and Go 1.5.2. Go 1.4.3 is avoided to workaround recent problems with go vet (). Fix regression issue when downloading image signatures from quay.io (). Properly cleanup the tap network interface that were not cleaned up in some error cases when using the kvm stage1 (). Fix a bug in the 9p filesystem used by the kvm stage1 that were preventing `apt-get` from working propertly (). rkt v0.14.0 brings new features like resource isolators in the kvm stage1, a new stage1 flavor called fly, bug fixes and improved documentation. The appc spec version has been updated to v0.7.4 The data directory that rkt uses can now be configured with a config file (). See rkt's documentation. CPU and memory resource isolators can be specified on the command line to override the limits specified in the image manifest (, ). See rkt's documentation. CPU and memory resource isolators can now be used within the kvm stage1 () The `rkt image list` command can now display the image size (). A new stage1 flavor has been added: fly; and it represents the first experimental implementation of the upcoming rkt fly feature. () It is now possible to build rkt inside rkt (). This should improve the reproducibility of builds. This release does not use it yet but it is planned for future releases. Linux distribution packagers can override the version of stage1 during the build (). This is needed for any Linux distributions that might carry distro-specific patches along the upstream release. See rkt's documentation about . Smaller build improvements with dep generation (), error messages on `make clean` (), dependency checks in the kvm flavor () rkt is now able to override the application command with `--exec` when the application manifest didn't specify any command (). In some cases, user namespaces were not working in Linux distributions without systemd, such as Ubuntu 14.04 LTS. This is fixed by creating a unique cgroup for each pod when systemd is not used () rkt's tar package didn't prefix the destination file correctly when using hard links in images. This was not a issue in rkt itself but was causing acbuild to misbehave (). ACIs with multiple dependencies can end up depending on the same base image through multiple paths. In some of those configuration with multiple dependencies, fetching the image via image discovery was not working. This is fixed and a new test ensures it will keep working (). The pod cgroups were misconfigured when systemd-devel is not installed. This was causing per-app CPU and memory isolators to be ineffective on those systems. This is now fixed but will require an additional fix for NixOS (). During the garbage collection of pods (`rkt gc`), all mounts will be umounted even when the pod is in an inconsistent state (, ) New documentation about configure flags (). This also includes formatting and typos fixes and updates. The examples about rkt's configuration files are also clarified (). New documentation explaining (). This should make it easier for software developers to integrate rkt with monitoring software. 
The API service is meant to be used by orchestration tools like Kubernetes. The performance of the API service was improved by reducing the round-trips in the ListPods and ListImages requests (). Those requests also gained multiple filters for more flexibility" }, { "data": "The primary motivation for this release is to add support for fetching images on the Docker Registry 2.0. It also includes other small improvements. docker2aci: support Docker Registry 2.0 () always use https:// when fetching docker images () stage0: add container hash data into TPM () host flavor: fix systemd copying into stage1 for Debian packaging () clarify network error messages () documentation: add more build-time requirements () rkt v0.12.0 is an incremental release with UX improvements like fine-grained security controls and implicit generation of empty volumes, performance improvements, bug fixes and testing enhancements. implement `rkt cat-manifest` for pods () generate an empty volume if a required one is not provided () make disabling security features granular; `--insecure-skip-verify` is now `--insecure-options={feature(s)-to-disable}` (). See rkt's documentation. allow skipping the on-disk integrity check using `--insecure-options=ondisk`. This greatly speeds up start time. () set empty volumes' permissions following the () flannel networking support in kvm flavor () store used MCS contexts on the filesystem () fix Docker images with whiteout-ed hard links () fix Docker images relying on /dev/stdout () use authentication for discovery and trust () fix build in Docker () fix kvm networking () add functional tests for rkt api service () fix TestSocketActivation on systemd-v219 () fix the ACE validator test () Bumped appc spec to 0.7.3 () rkt v0.11.0 is an incremental release with mostly bug fixes and testing improvements. support resuming ACI downloads () `rkt image gc` now also removes images from the store () handle building multiple flavors () verbosity control (, ) fix bugs in `make clean` () nicer output in tests () refactor test code () skip CI tests when the source was not modified () better output when tests fail () fix tests in `10.*` IP range () document how to run functional tests () add some help on how to run rkt as a daemon () do not return manifest in `ListPods()` and `ListImages()` () parameter `--mount` fixed in kvm flavour () fix rkt leaking containers in machinectl on CoreOS (, ) `rkt status` now returns the stage1 pid () fix crash in `rkt status` when an image is removed () fix fd leak in store () fix exec line parsing in ACI manifest () fix build on 32-bit systems () rkt v0.10.0 is an incremental release with numerous bug fixes and a few small new features and UX improvements. added implementation for basic API service (`rkt api-service`) () mount arbitrary volumes with `--mount` (, ) `--net=none` only exposes the loopback interface () better formatting for rkt help () metadata service registration (`--mds-register`) disabled by default () () () () new test for user namespaces (`--private-users`) () fix races in tests () suppress unnecessary output when `--debug` is not used () fix permission of rootfs with overlayfs () allow relative path in parameters () fix pod garbage collection failure in some cases () fix `rkt list` when an image was removed () user namespace (`--private-users`) regression with rkt group fixed () rkt v0.9.0 is a significant milestone release with a number of internal and user-facing changes. 
There are several notable breaking changes from the previous release: The on-disk format for pod trees has changed slightly, meaning that `rkt gc` and `rkt run-prepared` may not work for pods created by previous versions of rkt. To work around this, we recommend removing the pods with an older version of rkt. The `--private-net` flag has been renamed to `--net` and its semantic has changed (in particular, it is now enabled by default) - see below for" }, { "data": "Several changes to CLI output (e.g. column names) from the `rkt list` and `rkt image list` subcommands. The image fetching behaviour has changed, with the introduction of new flags to `rkt run` and `rkt fetch` and the removal of `--local` - see below for details. The `--private-net` flag has been changed to `--net`, and has been now made the default behaviour. (, ) That is, a `rkt run` command will now by default set up a private network for the pod. To achieve the previous default behaviour of the pod sharing the networking namespace of the host, use `--net=host`. The flag still allows the specification of multiple networks via CNI plugins, and overriding plugin configuration on a per-network basis. For more details, see the . When fetching images during `rkt fetch` or `rkt run`, rkt would previously behave inconsistently for different formats (e.g when performing discovery or when retrieving a Docker image) when deciding whether to use a cached version or not. `rkt run` featured a `--local` flag to adjust this behaviour but it provided an unintuitive semantic and was not available to the `rkt fetch` command. Instead, rkt now features two new flags, `--store-only` and `--no-store`, on both the `rkt fetch` and `rkt run` commands, to provide more consistent, controllable, and predictable behaviour regarding when images should be retrieved. For full details of the new behaviour see the . A number of changes were made to the permissions of rkt's internal store to facilitate unprivileged users to access information about images and pods on the system (, ). In particular, the set-group-ID bit is applied to the directories touched by `rkt install` so that the `rkt` group (if it exists on the system) can retain read-access to information about pods and images. This will be used by the rkt API service (targeted for the next release) so that it can run as an unprivileged user on the system. This support is still considered partially experimental. Some tasks like `rkt image gc` remain a root-only operation. If no `/etc/hosts` exists in an application filesystem at the time it starts running, rkt will now provide a basic default version of this file. If rkt detects one already in the app's filesystem (whether through being included in an image, or a volume mounted in), it will make no changes. () rkt now supports setting supplementary group IDs on processes (). rkt's use of cgroups has been reworked to facilitate rkt running on a variety of operating systems like Void and older non-systemd distributions (, , , ) If `rkt run` is used with an image that does not have an app section, rkt will now create one if the user provides an `--exec` flag () A new `rkt image gc` command adds initial support for garbage collecting images from the store (). This removes treeStores not referenced by any non-GCed rkt pod. `rkt list` now provides more information including image version and hash () `rkt image list` output now shows shortened hash identifiers by default, and human readable date formats. 
To use the previous output format, use the `--full` flag. () `rkt prepare` gained the `--exec` flag, which restores flag-parity with `rkt run` () lkvm stage1 backend has experimental support for `rkt enter` () rkt now supports empty volume types () An early, experimental read-only API definition has been added (," }, { "data": "Fixed bug in `--stage1-image` option which prevented it from using URLs () Fixed bug in `rkt trust`'s handling of `--root` () Fixed bug when decompressing xz-compressed images (, ) In earlier versions of rkt, hooks had an implicit timeout of 30 seconds, causing some pre-start jobs which took a long time to be killed. This implicit timeout has been removed. () When running with the lkvm stage1, rkt now sets `$HOME` if it is not already set, working around a bug in the lkvm tool (, ) Fixed bug preventing `run-prepared` from working if the metadata service was not available () Bumped appc spec to 0.7.1 () Bumped CNI and netlink dependencies () Bumped ioprogress to a version which prevents the download bar from being drawn when rkt is not drawing to a terminal (, ) Significantly reworked rkt's internal use of systemd to orchestrate apps, which should facilitate more granular control over pod lifecycles () Reworked rkt's handling of images with non-deterministically dependencies (, ). rkt functional tests now run appc's ACE validator, which should ensure that rkt is always compliant with the specification. () A swathe of improvements to the build system `make clean` should now work Different rkt stage1 images are now built with different names () rkt can now build on older Linux distributions (like CentOS 6) () Various internal improvements to the functional test suite to improve coverage and consolidate code The \"ACI\" field header in `rkt image` output has been changed to \"IMAGE NAME\" `rkt image rm` now exits with status 1 on any failure () Fixed permissions in the default stage1 image () Added documentation for `prepare` and `run-prepared` subcommands () rkt should now report more helpful errors when encountering manifests it does not understand () rkt v0.8.1 is an incremental release with numerous bug fixes and clean-up to the build system. It also introduces a few small new features and UX improvements. New features and UX changes: `rkt rm` is now variadic: it can now remove multiple pods in one command, by UUID The `APPNAME` column in `rkt image list` output has been changed to the more accurate `NAME`. This involves a schema change in rkt's on-disk datastore, but this should be upgraded transparently. 
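A short, hedged sketch of the image and pod housekeeping commands discussed above; the pod UUIDs are placeholders:

```
$ rkt image list --full
$ rkt rm 919e5b76 8dc8f02d
$ sudo rkt image gc
```

`rkt image gc` removes tree stores not referenced by any non-garbage-collected pod and, as noted above, remains a root-only operation.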
Headers are now sent when following HTTP redirects while trying to retrieve an image The default metadata service port number was changed from a registered/reserved IANA port to an arbitrary port in the non-dynamic range Added the ability to override arguments for network plugins rkt will now error out if someone attempts to use `--private-users` with the lkvm backend Bug fixes: Fixed creation of /tmp in apps' root filesystems with correct permissions Fixed garbage collection after umounts (for example, if a system reboots before a pod is cleanly destroyed) Fixed a race in interactive mode when using the lkvm backend that could cause a deadlock or segfault Fixed bad parameter being passed to the metadata service (\"uid\" -> \"uuid\") Fixed setting of file permissions during stage1 set up Fixed a potential race condition during simultaneous `iptables` invocation Fixed ACI download progress being sent to stderr instead of stdout, now consistent with the output during retrieval of Docker images `rkt help prepare` will now show the correct default stage1 image rkt will refuse to add isolators with nil Limits, preventing a panic caused by an ambiguity in upstream appc schema Other changes: Reworked the SELinux implementation to use `systemd-nspawn`'s native context-switching feature Added a workaround for a bug in Docker <1.8 when it is run on the same system as rkt" }, { "data": "https://github.com/rkt/rkt/issues/1210#issuecomment-132793300) Added a `rkt-xxxx-tapN` name to tap devices that rkt creates Functional tests now clean intermediate images between tests Countless improvements and cleanup to the build system Numerous documentation improvements, including splitting out all top-level `rkt` subcommands into their own documents rkt 0.8.0 includes support for running containers under an LKVM hypervisor and experimental user namespace support. Full changelog: Documentation improvements Better integration with systemd: journalctl -M machinectl {reboot,poweroff} Update stage1's systemd to v222 Add more functional tests Build system improvements Fix bugs with garbage-collection LKVM stage1 support with network and volumes Smarter image discovery: ETag and Cache-Control support Add CNI DHCP plugin Support systemd socket activation Backup CAS database when migrating Improve error messages Add the ability to override ACI exec Optimize rkt startup times when a stage1 is present in the store Trust keys fetched via TLS by default Add the ability to garbage-collect a specific pod Add experimental user namespace support Bugfixes rkt 0.7.0 includes new subcommands for `rkt image` to manipulate images from the local store. It also has a new build system based on autotools and integration with SELinux. Full changelog: New subcommands for `rkt image`: extract, render and export Metadata service: Auth now based on tokens Registration done by default, unless --mds-register=false is passed Build: Remove support for Go 1.3 Replace build system with autoconf and make Network: fixes for plugins related to mnt namespace Signature: clearer error messages Security: Support for SELinux Check signature before downloading Commands: fix error messages and parameter parsing Output: reduce output verbosity Systemd integration: fix stop bug Tests: Improve tests output The highlight of this release is the support of per-app memory and CPU isolators. This means that, in addition to restricting a pod's CPU and memory usage, individual apps inside a pod can also be restricted now. 
rkt 0.6.1 also includes a new CLI/subcommand framework, more functional testing and journalctl integration by default. Full changelog: Updated to v0.6.1 of the appc spec support per-app memory and CPU isolators allow network selection to the --private-net flag which can be useful for grouping certain pods together while separating others move to the Cobra CLI/subcommand framework per-app logging via journalctl now supported by default stage1 runs an unpatched systemd v220 to help packagers, rkt can generate stage1 from the binaries on the host at runtime more functional tests bugfixes rkt 0.5.6 includes better integration with systemd on the host, some minor bug fixes and a new ipvlan network plugin. Updated to v0.5.2 of the appc spec support running from systemd unit files for top-level isolation support per-app logging via journalctl. This is only supported if stage1 has systemd v219 or v220 add ipvlan network plugin new rkt subcommand: cat-manifest extract ACI in a chroot to avoid malformed links modifying the host filesystem improve rkt error message if the user doesn't provide required volumes fix rkt status when using overlayfs support for some arm architectures documentation improvements rkt 0.5.5 includes a move to network plugins, a number of minor bug fixes and two new experimental commands for handling images: `rkt images` and `rkt" }, { "data": "Full changelog: switched to using based network plugins fetch images dependencies recursively when ACIs have dependent images fix the progress bar used when downloading images with no content-length building the initial stage1 can now be done on various versions of systemd support retrying signature downloads in the case of a 202 remove race in doing a rkt enter various documentation fixes to getting started and other guides improvements to the functional testing using a new gexpect, testing for non-root apps, run context, port test, and more rkt 0.5.4 introduces a number of new features - repository authentication, per-app arguments + local image signature verification, port forwarding and more. Further, although we aren't yet guaranteeing API/ABI stability between releases, we have added important work towards this goal including functional testing and database migration code. This release also sees the removal of the `--spawn-metadata-svc` flag to `rkt run`. The flag was originally provided as a convenience, making it easy for users to get started with the metadata service. In rkt v0.5.4 we removed it in favor of explicitly starting it via `rkt metadata-service` command. Full changelog: added configuration support for repository authentication (HTTP Basic Auth, OAuth, and Docker repositories). Full details in `Documentation/configuration.md` `rkt run` now supports per-app arguments and per-image `--signature` specifications `rkt run` and `rkt fetch` will now verify signatures for local image files `rkt run` with `--private-net` now supports port forwarding (using `--port=NAME:1234`) `rkt run` now supports a `--local` flag to use only local images (i.e. no discovery or remote image retrieval will be performed) added initial support for running directly from a pod manifest the store DB now supports migrations for future versions systemd-nspawn machine names are now set to pod UUID removed the `--spawn-metadata-svc` option from `rkt run`; this mode was inherently racy and really only for convenience. A separate `rkt metadata-service` invocation should be used instead. 
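Tying the last point together with the port forwarding support mentioned above, a minimal sketch might look like the following; the image is an example, and the port name has to match a port declared in the image manifest:

```
$ sudo rkt metadata-service &
$ sudo rkt run --private-net --port=http:8080 example.com/myapp
```

On the newer releases described earlier in this changelog the same ideas appear as the `--net` flag and the optional `--mds-register` flag.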
various internal codebase refactoring: \"cas\" renamed to \"store\", tasks to encapsulate image fetch operations, etc bumped docker2aci to support authentication for Docker registries and fix a bug when retrieving images from Google Container Registry fixed a bug where `--interactive` did not work with arguments garbage collection for networking is now embedded in the stage1 image when rendering images into the treestore, a global syncfs() is used instead of a per-file sync(). This should significantly improve performance when first extracting large images added extensive functional testing on semaphoreci.com/coreos/rkt added a test-auth-server to facilitate testing of fetching images This release contains minor updates over v0.5.2, notably finalising the move to pods in the latest appc spec and becoming completely name consistent on `rkt`. {Container,container} changed globally to {Pod,pod} {Rocket,rocket} changed globally to `rkt` `rkt install` properly sets permissions for all directories `rkt fetch` leverages the cas.Store TmpDir/TmpFile functions (now exported) to generate temporary files for downloads Pod lifecycle states are now exported for use by other packages Metadata service properly synchronizes access to pod state This release is a minor update over v0.5.1, incorporating several bug fixes and a couple of small new features: `rkt enter` works when overlayfs is not available `rkt run` now supports the `--no-overlay` option referenced (but not implemented!) in the previous release the appc-specified environment variables (PATH, HOME, etc) are once again set correctly during `rkt run` metadata-service no longer manipulates IP tables rules as it connects over a unix socket by default pkg/lock has been improved to also support regular (non-directory) files images in the cas are now locked at runtime (as described in ) This release updates Rocket to follow the latest version of the appc spec, v0.5.1. This involves the major change of moving to pods and Pod Manifests (which enhance and supplant the previous Container Runtime Manifest). The Rocket codebase has been updated across the board to reflect the schema/spec change, as well as changing various terminology in other human-readable places: for example, the previous ambiguous (unqualified) \"container\" is now replaced everywhere with" }, { "data": "This release also introduces a number of key features and minor changes: overlayfs support, enabled for `rkt run` by default (disable with `--no-overlayfs`) to facilitate overlayfs, the CAS now features a tree store which stores expanded versions of images the default stage1 (based on systemd) can now be built from source, instead of only derived from an existing binary distribution as previously. This is configurable using the new `RKTSTAGE1USR_FROM` environment variable when invoking the build script - see fdcd64947 the metadata service now uses a Unix socket for registration; this limits who can register/unregister pods by leveraging filesystem permissions on the socket `rkt list` now abbreviates UUIDs by default (configurable with `--full`) the ImageManifest's `readOnly` field (for volume mounts) is now overridden by the rkt command line a simple debug script (in scripts/debug) to facilitate easier debugging of applications running under Rocket by injecting Busybox into the pod documentation for the metadata service, as well as example systemd unit files First support for interactive containers, with the `rkt run --interactive` flag. 
This is currently only supported if a container has one app. # Add container IP address information to `rkt list` Provide `/sys` and `/dev/shm` to apps (per spec) Introduce \"latest\" pattern handling for local image index Implement FIFO support in tar package Restore atime and mtime during tar extraction Bump docker2aci dependency This is primarily a bug fix release with the addition of the `rkt install` subcommand to help people setup a unprivileged `rkt fetch` based on unix users. Fix marshalling error when running containers with resource isolators Fixup help text on run/prepare about volumes Fixup permissions in `rkt trust` created files Introduce the `rkt install` subcommand This release is mostly a milestone release and syncs up with the latest release of the yesterday. Note that due to the introduction of a database for indexing the local CAS, users upgrading from previous versions of Rocket on a system may need to clear their local cache by removing the `cas` directory. For example, using the standard Rocket setup, this would be accomplished with `rm -fr /var/lib/rkt/cas`. Major changes since v0.3.2: Updated to v0.4.0 of the appc spec Introduced a database for indexing local images in the CAS (based on github.com/cznic/ql) Refactored container lifecycle to support a new \"prepared\" state, to pre-allocate a container UUID without immediately running the application Added support for passing arguments to apps through the `rkt run` CLI Implemented ACI rendering for dependencies Renamed `rkt metadatasvc` -> `rkt metadata-service` Added documentation around networking, container lifecycle, and rkt commands This release introduces much improved documentation and a few new features. The highlight of this release is that Rocket can now natively run Docker images. To do this, it leverages the appc/docker2aci library which performs a straightforward conversion between images in the Docker format and the appc format. A simple example: ``` $ rkt --insecure-skip-verify run docker://redis docker://tenstartups/redis-commander rkt: fetching image from docker://redis rkt: warning: signature verification has been disabled Downloading layer: 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 ``` Note that since Docker images do not support image signature verifications, the `-insecure-skip-verify` must be used. Another important change in this release is that the default location for the stage1 image used by `rkt run` can now be set at build time, by setting the `RKTSTAGE1IMAGE` environment variable when invoking the build script. (If this is not set, `rkt run` will continue with its previous behaviour of looking for a stage1.aci in the same directory as the binary" }, { "data": "This makes it easier for distributions to package Rocket and include the stage1 wherever they choose (for example, `/usr/lib/rkt/stage1.aci`). For more information, see https://github.com/coreos/rocket/pull/520 The primary motivation for this release is to resynchronise versions with the appc spec. To minimise confusion in the short term we intend to keep the major/minor version of Rocket aligned with the version of spec it implements; hence, since yesterday v0.3.0 of the appc spec was released, today Rocket becomes v0.3.1. After the spec (and Rocket) reach v1.0.0, we may relax this restriction. This release also resolves an upstream bug in the appc discovery code which was causing rkt trust to fail in certain cases. 
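The "prepared" state mentioned in the notes above splits pod creation from execution; a hedged sketch of that flow, assuming `rkt prepare` writes the new pod UUID to stdout (an assumption, not something stated in the notes), is:

```
$ uuid=$(sudo rkt prepare example.com/myapp)
$ sudo rkt run-prepared $uuid
```

This pre-allocates the pod UUID without immediately running the application, which is exactly the motivation given above.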
This is largely a momentum release but it does introduce a few new user-facing features and some important changes under the hood which will be of interest to developers and distributors. First, the CLI has a couple of new commands: `rkt trust` can be used to easily add keys to the public keystore for ACI signatures (introduced in the previous release). This supports retrieving public keys directly from a URL or using discovery to locate public keys - a simple example of the latter is `rkt trust --prefix coreos.com/etcd`. See the commit for other examples. `rkt list` is an extremely simple tool to list the containers on the system As mentioned, v0.3.0 includes two significant changes to the Rocket build process: Instead of embedding the (default) stage1 using go-bindata, Rocket now consumes a stage1 in the form of an actual ACI, containing a rootfs and stage1 init/exec binaries. By default, Rocket will look for a `stage1.aci` in the same directory as the location of the binary itself, but the stage1 can be explicitly specified with the new `-stage1-image` flag (which deprecates `-stage1-init` and `-stage1-rootfs`). This makes it much more straightforward to use alternative stage1 images with rkt and facilitates packing it for different distributions like Fedora. Rocket now vendors a copy of the appc/spec instead of depending on HEAD. This means that Rocket can be built in a self-contained and reproducible way and that master will no longer break in response to changes to the spec. It also makes explicit the specific version of the spec against which a particular release of Rocket is compiled. As a consequence of these two changes, it is now possible to use the standard Go workflow to build the Rocket CLI (e.g. `go get github.com/coreos/rocket/rkt` will build rkt). Note however that this does not implicitly build a stage1, so that will still need to be done using the included ./build script, or some other way for those desiring to use a different stage1. This introduces countless features and improvements over v0.1.1. Highlights include several new commands (`rkt status`, `rkt enter`, `rkt gc`) and signature validation. The most significant change in this release is that the spec has been split into its own repository (https://github.com/appc/spec), and significantly updated since the last release - so many of the changes were to update to match the latest spec. Numerous improvements and fixes over v0.1.0: Rocket builds on non-Linux (in a limited capacity) Fix bug handling uncompressed images More efficient image handling in CAS mkrootfs now caches and GPG checks images stage1 is now properly decoupled from host runtime stage1 supports socket activation stage1 no longer warns about timezones cas now logs download progress to stdout rkt run now acquires an exclusive lock on the container directory and records the PID of the process tons of documentation improvements added actool introduced along with documentation image discovery introduced to rkt run and rkt fetch Initial release." } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "is an easy way to try out a Kubernetes (k8s) cluster locally. It creates a single node Kubernetes stack in a local VM. can be installed into a Minikube cluster using . This document details the pre-requisites, installation steps, and how to check the installation has been successful. This installation guide has only been verified under a Minikube Linux installation, using the driver. Notes: This installation guide may not work for macOS installations of Minikube, due to the lack of nested virtualization support on that platform. This installation guide has not been tested on a Windows installation. Kata under Minikube does not currently support Kata Firecracker (`kata-fc`). Although the `kata-fc` binary will be installed as part of these instructions, via `kata-deploy`, pods cannot be launched with `kata-fc`, and will fail to start. Before commencing installation, it is strongly recommended you read the . For Kata Containers to work under a Minikube VM, your host system must support nested virtualization. If you are using a Linux system utilizing Intel VT-x and the `kvm_intel` driver, you can perform the following check: ```sh $ cat /sys/module/kvm_intel/parameters/nested ``` If your system does not report `Y` from the `nested` parameter, then details on how to enable nested virtualization can be found on the Alternatively, and for other architectures, the Kata Containers built in command can be used inside Minikube once Kata has been installed, to check for compatibility. To enable Kata Containers under Minikube, you need to add a few configuration options to the default Minikube setup. You can easily accomplish this as Minikube supports them on the setup commandline. Minikube can be set up to use either CRI-O or containerd. Here are the features to set up a CRI-O based Minikube, and why you need them: | what | why | | - | | | `--bootstrapper=kubeadm` | As recommended for | | `--container-runtime=cri-o` | Using CRI-O for Kata | | `--enable-default-cni` | As recommended for | | `--memory 6144` | Allocate sufficient memory, as Kata Containers default to 1 or 2Gb | | `--network-plugin=cni` | As recommended for | | `--vm-driver kvm2` | The host VM driver | To use containerd, modify the `--container-runtime` argument: | what | why | | - | | | `--container-runtime=containerd` | Using containerd for Kata | Notes: Adjust the `--memory 6144` line to suit your environment and requirements. Kata Containers default to requesting 2048MB per container. We recommended you supply more than that to the Minikube node. The full command is therefore: ```sh $ minikube start --vm-driver kvm2 --memory 6144 --network-plugin=cni --enable-default-cni --container-runtime=cri-o --bootstrapper=kubeadm ``` Note: For Kata Containers later than v1.6.1, the now default `tcfilter` networking of Kata Containers does not work for Minikube versions less than v1.1.1. Please ensure you use Minikube version v1.1.1 or above. Before you install Kata Containers, check that your Minikube is operating. On your guest: ```sh $ kubectl get nodes ``` You should see your `control-plane` node listed as being `Ready`. Check you have virtualization enabled inside your Minikube. 
The following should return a number larger than `0` if you have either of the `vmx` or `svm` nested virtualization features available: ```sh $ minikube ssh \"egrep -c 'vmx|svm' /proc/cpuinfo\" ``` You can now install the Kata Containers runtime" }, { "data": "You will need a local copy of some Kata Containers components to help with this, and then use `kubectl` on the host (that Minikube has already configured for you) to deploy them: ```sh $ git clone https://github.com/kata-containers/kata-containers.git $ cd kata-containers/tools/packaging/kata-deploy $ kubectl apply -f kata-rbac/base/kata-rbac.yaml $ kubectl apply -f kata-deploy/base/kata-deploy.yaml ``` This installs the Kata Containers components into `/opt/kata` inside the Minikube node. It can take a few minutes for the operation to complete. You can check the installation has worked by checking the status of the `kata-deploy` pod, which will be executing , and will be executing a `sleep infinity` once it has successfully completed its work. You can accomplish this by running the following: ```sh $ podname=$(kubectl -n kube-system get pods -o=name | fgrep kata-deploy | sed 's?pod/??') $ kubectl -n kube-system exec ${podname} -- ps -ef | fgrep infinity ``` NOTE: This check only works for single node clusters, which is the default for Minikube. For multi-node clusters, the check would need to be adapted to check `kata-deploy` had completed on all nodes. Now you have installed the Kata Containers components in the Minikube node. Next, you need to configure Kubernetes `RuntimeClass` to know when to use Kata Containers to run a pod. Now register the `kata qemu` runtime with that class. This should result in no errors: ```sh $ cd kata-containers/tools/packaging/kata-deploy/runtimeclasses $ kubectl apply -f kata-runtimeClasses.yaml ``` The Kata Containers installation process should be complete and enabled in the Minikube cluster. Launch a container that has been defined to run on Kata Containers. The enabling is configured by the following lines in the YAML file. See the Kubernetes for more details. ```yaml spec: runtimeClassName: kata-qemu ``` Perform the following action to launch a Kata Containers based Apache PHP pod: ```sh $ cd kata-containers/tools/packaging/kata-deploy/examples $ kubectl apply -f test-deploy-kata-qemu.yaml ``` This may take a few moments if the container image needs to be pulled down into the cluster. Check progress using: ```sh $ kubectl rollout status deployment php-apache-kata-qemu ``` There are a couple of ways to verify it is running with Kata Containers. In theory, you should not be able to tell your pod is running as a Kata Containers container. Careful examination can verify your pod is in fact a Kata Containers pod. First, look on the node for a `qemu` running. You should see a QEMU command line output here, indicating that your pod is running inside a Kata Containers VM: ```sh $ minikube ssh -- pgrep -a qemu ``` Another way to verify Kata Containers is running is to look in the container itself and check which kernel is running there. For a normal software container you will be running the same kernel as the node. For a Kata Container you will be running a Kata Containers kernel inside the Kata Containers VM. 
First, examine which kernel is running inside the Minikube node itself: ```sh $ minikube ssh -- uname -a ``` And then compare that against the kernel that is running inside the container: ```sh $ podname=$(kubectl get pods -o=name | fgrep php-apache-kata-qemu | sed 's?pod/??') $ kubectl exec ${podname} -- uname -a ``` You should see the node and pod are running different kernel versions. This guide has shown an easy way to setup Minikube with Kata Containers. Be aware, this is only a small single node Kubernetes cluster running under a nested virtualization setup. As such, it has limitations, but as a first introduction to Kata Containers, and how to install it under Kubernetes, it should suffice for initial learning and experimentation." } ]
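For reference, a complete pod manifest that selects the Kata runtime class only needs the `runtimeClassName` field quoted earlier; the sketch below is illustrative (names and image are placeholders, and the repository's `test-deploy-kata-qemu.yaml` remains the canonical example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata              # illustrative name
spec:
  runtimeClassName: kata-qemu   # run this pod inside a Kata QEMU microVM
  containers:
  - name: nginx
    image: nginx:latest         # any ordinary container image works unchanged
```

After applying a manifest like this with `kubectl apply -f`, the same checks described above (`minikube ssh -- pgrep -a qemu` on the node, and comparing `uname -a` inside and outside the pod) can be used to confirm the pod is VM-isolated.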
{ "category": "Runtime", "file_name": "minikube-installation-guide.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "() This section describes how to plan for and deploy a new Manta. <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - - - - - - - - <!-- END doctoc generated TOC please keep comment here to allow auto update --> Before even starting to deploy Manta, you must decide: the number of datacenters the number of metadata shards the number of storage and non-storage compute nodes how to lay out the non-storage zones across the fleet You can deploy Manta across any odd number of datacenters in the same region (i.e., having a reliable low-latency, high-bandwidth network connection among all datacenters). We've only tested one- and three-datacenter configurations. Even-numbered configurations are not supported. See \"Other configurations\" below for details. A single-datacenter installation can be made to survive server failure, but obviously cannot survive datacenter failure. The us-east deployment uses three datacenters. Recall that each metadata shard has the storage and load capacity of a single postgres instance. If you want more capacity than that, you need more shards. Shards can be added later without downtime, but it's a delicate operation. The us-east deployment uses three metadata shards, plus a separate shard for the compute and storage capacity data. We recommend at least two shards so that the compute and storage capacity information can be fully separated from the remaining shards, which would be used for metadata. The two classes of node (storage nodes and non-storage nodes) usually have different hardware configurations. The number of storage nodes needed is a function of the expected data footprint and (secondarily) the desired compute capacity. The number of non-storage nodes required is a function of the expected load on the metadata tier. Since the point of shards is to distribute load, each shard's postgres instance should be on a separate compute node. So you want at least as many compute nodes as you will have shards. The us-east deployment distributes the other services on those same compute nodes. For information on the latest recommended production hardware, see [Joyent Manufacturing Matrix](http://eng.joyent.com/manufacturing/matrix.html) and [Joyent Manufacturing Bill of Materials](http://eng.joyent.com/manufacturing/bom.html). The us-east deployment uses older versions of the Tenderloin-A for service nodes and Mantis Shrimps for storage nodes. Since there are so many different Manta components, and they're all deployed redundantly, there are a lot of different pieces to think about. (The production deployment in us-east has 20+ zones in each of the three datacenters.) So when setting up a Manta deployment, it's very important to think ahead of time about which components will run where! The `manta-adm genconfig` tool (when used with the --from-file option) can be very helpful in laying out zones for Manta. See the `manta-adm` manual page for details. `manta-adm genconfig --from-file` takes as input a list of physical servers and information about each one. Large deployments that use Device 42 to manage hardware inventory may find the tool useful for constructing the input for `manta-adm genconfig`. The most important production configurations are described below, but for reference, here are the principles to keep in mind: Storage* zones should only be located in dedicated storage nodes. We do not recommend co-locating them with other services. 
All other zones should be deployed onto non-storage compute nodes. Nameservice*: There must be an odd number of \"nameservice\" zones in order to achieve consensus, and there should be at least three of them to avoid a single point of" }, { "data": "There must be at least one in each DC to survive any combination of datacenter partitions, and it's recommended that they be balanced across DCs as much as possible. For the non-sharded, non-ops-related zones (which is everything except moray, postgres, ops, madtom), there should be at least two of each kind of zone in the entire deployment (for availability), and they should not be in the same datacenter (in order to survive a datacenter loss). For single-datacenter deployments, they should at least be on separate compute nodes. Only one madtom* zone is considered required. It would be good to provide more than one in separate datacenters (or at least separate compute nodes) for maintaining availability in the face of a datacenter failure. There should only be one ops* zone. If it's unavailable for any reason, that will only temporarily affect metering, garbage collection, and reports. For postgres*, there should be at least three instances in each shard. For multi-datacenter configurations, these instances should reside in different datacenters. For single-datacenter configurations, they should be on different compute nodes. (Postgres instances from different shards can be on the same compute node, though for performance reasons it would be better to avoid that.) For moray*, there should be at least two instances per shard in the entire deployment, and these instances should reside on separate compute nodes (and preferably separate datacenters). garbage-collector* zones should not be configured to poll more than 6 shards. Further, they should not be co-located with instances of other CPU-intensive Manta components (e.g. loadbalancer) to avoid interference with the data path. Most of these constraints are required in order to maintain availability in the event of failure of any component, server, or datacenter. Below are some example configurations. On each storage node, you should deploy one \"storage\" zone. If you have N metadata shards, and assuming you'll be deploying 3 postgres instances in each shard, you'd ideally want to spread these over 3N compute nodes. If you combine instances from multiple shards on the same host, you'll defeat the point of splitting those into shards. If you combine instances from the same shard on the same host, you'll defeat the point of using replication for improved availability. You should deploy at least two Moray instances for each shard onto separate compute nodes. The remaining services can be spread over the compute nodes in whatever way, as long as you avoid putting two of the same thing onto the same compute node. Here's an example with two shards using six compute nodes: | CN1 | CN2 | CN3 | CN4 | CN5 | CN6 | | - | -- | -- | -- | | -- | | postgres 1 | postgres 1 | postgres 1 | postgres 2 | postgres 2 | postgres 2 | | moray 1 | moray 1 | electric-moray | moray 2 | moray 2 | electric-moray | | nameservice | webapi | nameservice | webapi | nameservice | webapi | | ops | madtom | garbage-collector | loadbalancer | loadbalancer | loadbalancer | | storinfo | authcache | storinfo | | | authcache | In this notation, \"postgres 1\" and \"moray 1\" refer to an instance of \"postgres\" or \"moray\" for shard 1. 
All three datacenters should be in the same region, meaning that they share a reliable, low-latency, high-bandwidth network connection. On each storage node, you should deploy one \"storage\" zone. As with the single-datacenter configuration, you'll want to spread the postgres instances for N shards across 3N compute nodes, but you'll also want to deploy at least one postgres instance in each" }, { "data": "For four shards, we recommend the following in each datacenter: | CN1 | CN2 | CN3 | CN4 | | - | -- | -- | | | postgres 1 | postgres 2 | postgres 3 | postgres 4 | | moray 1 | moray 2 | moray 3 | moray 4 | | nameservice | nameservice | electric-moray | authcache | | webapi | webapi | loadbalancer | loadbalancer | | ops | garbage-collector | madtom | storinfo | In this notation, \"postgres 1\" and \"moray 1\" refer to an instance of \"postgres\" or \"moray\" for shard 1. For testing purposes, it's fine to deploy all of Manta on a single system. Obviously it won't survive server failure. This is not supported for a production deployment. It's not supported to run Manta in an even number of datacenters since there would be no way to maintain availability in the face of an even split. More specifically: A two-datacenter configuration is possible but cannot survive datacenter failure or partitioning. That's because the metadata tier would require synchronous replication across two datacenters, which cannot be maintained in the face of any datacenter failure or partition. If we relax the synchronous replication constraint, then data would be lost in the event of a datacenter failure, and we'd also have no way to resolve the split-brain problem where both datacenters accept conflicting writes after the partition. For even numbers N >= 4, we could theoretically survive datacenter failure, but any N/2 -- N/2 split would be unresolvable. You'd likely be better off dividing the same hardware into N - 1 datacenters. It's not supported to run Manta across multiple datacenters not in the same region (i.e., not having a reliable, low-latency, high-bandwidth connection between all pairs of datacenters). Before you get started for anything other than a COAL or lab deployment, be sure to read and fully understand the section on \"Planning a Manta deployment\" above. The two main commands used to deploy and maintain Manta, `manta-init` and `manta-adm` operate on software delivered as Triton images. These images are retrieved from `https://updates.tritondatacenter.com` by default. Both `manta-init` and `manta-adm` are \"channel aware\", taking `-C` options to override the Triton image channel to download images from. The default channel for a Manta deployment can be set using `sdcadm channel set ...` within a given datacenter. These general instructions should work for anything from COAL to a multi-DC, multi-compute-node deployment. The general process is: Set up Triton in each datacenter, including the headnode, all Triton services, and all compute nodes you intend to use. For easier management of hosts, we recommend that the hostname reflect the type of server and, possibly, the intended purpose of the host. For example, we use the \"RA\" or \"RM\" prefix for \"Richmond-A\" hosts and \"MS\" prefix for \"Mantis Shrimp\" hosts. In the global zone of each Triton headnode, set up a manta deployment zone using: sdcadm post-setup common-external-nics # enable downloading service images sdcadm post-setup manta In each datacenter, generate a Manta networking configuration file. a. 
For COAL, from the GZ, use: headnode$ /zones/$(vmadm lookup alias=manta0)/root/opt/smartdc/manta-deployment/networking/gen-coal.sh > /var/tmp/netconfig.json b. For those using the internal Joyent Engineering lab, run this from the : lab.git$ node bin/genmanta.js -r RIGNAME LABNAME and copy that to the headnode GZ. c. For other deployments, see \"Networking configuration\" below. Once you've got the networking configuration file, configure networks by running this in the global zone of each Triton headnode: headnode$ ln -s /zones/$(vmadm lookup alias=manta0)/root/opt/smartdc/manta-deployment/networking /var/tmp/networking headnode$ cd /var/tmp/networking headnode$ ./manta-net.sh CONFIG_FILE This step is" }, { "data": "Note that if you are setting up a multi-DC Manta, ensure that (1) your Triton networks have cross datacenter connectivity and routing set up and (2) the Triton firewalls allow TCP and UDP traffic cross- datacenter. For multi-datacenter deployments, you must [link the datacenters within Triton](https://docs.joyent.com/private-cloud/install/headnode-installation/linked-data-centers) so that the UFDS database is replicated across all three datacenters. For multi-datacenter deployments, you must [configure SAPI for multi-datacenter support](https://github.com/TritonDataCenter/sdc-sapi/blob/master/docs/index.md#multi-dc-mode). If you'll be deploying a loadbalancer on any compute nodes other than a headnode, then you'll need to create the \"external\" NIC tag on those CNs. For common single-system configurations (for dev and test systems), you don't usually need to do anything for this step. For multi-CN configurations, you probably will need to do this. See the Triton documentation for [how to add a NIC tag to a CN](https://docs.joyent.com/sdc7/nic-tags#AssigningaNICTagtoaComputeNode). In each datacenter's manta deployment zone, run the following: manta$ manta-init -s SIZE -e YOUR_EMAIL `manta-init` must not be run concurrently in multiple datacenters. `SIZE` must be one of \"coal\", \"lab\", or \"production\". `YOUR_EMAIL` is used to create an Amon contact for alarm notifications. This step runs various initialization steps, including downloading all of the zone images required to deploy Manta. This can take a while the first time you run it, so you may want to run it in a screen session. It's idempotent. In each datacenter's manta deployment zone, deploy Manta components. a. In COAL, just run `manta-deploy-coal`. This step is idempotent. b. For a lab machine, just run `manta-deploy-lab`. This step is idempotent. c. For any other installation (including a multi-CN installation), you'll need to run several more steps: assign shards for storage and object metadata with \"manta-shardadm\"; create a hash ring with \"manta-adm create-topology\"; generate a \"manta-adm\" configuration file (see \"manta-adm configuration\" below); and finally run \"manta-adm update config.json\" to deploy those zones. Your best bet is to examine the \"manta-deploy-dev\" script to see how it uses these tools. See \"manta-adm configuration\" below for details on the input file to \"manta-adm update\". Each of these steps is idempotent, but the shard and hash ring must be set up before deploying any zones. If desired, set up connectivity to the \"ops\" and \"madtom\" zones. See \"Overview of Operating Manta\" below for details. For multi-datacenter deployments, set the MUSKIE\\MULTI\\DC SAPI property. This is required to enforce that object writes are distributed to multiple datacenters. 
In the SAPI master datacenter: headnode $ app_uuid=\"$(sdc-sapi /applications?name=manta | json -Ha uuid)\" headnode $ echo '{ \"metadata\": { \"MUSKIEMULTIDC\": true } }' | \\ sapiadm update \"$app_uuid\" Repeat the following in each datacenter. headnode $ manta-oneach -s webapi 'svcadm restart \"muskie\"' If you wish to enable basic monitoring, run the following in each datacenter: manta-adm alarm config update to deploy Amon probes and probe groups shipped with Manta. This will cause alarms to be opened when parts of Manta are not functioning. Email notifications are enabled by default using the address provided to `manta-init` above. (Email notifications only work after you have configured the Amon service for sending email.) If you want to be notified about alarm events via XMPP, see \"Changing alarm contact methods\" below. In development environments with more than one storage zone on a single system, it may be useful to apply quotas to storage zones so that if the system fills up, there remains space in the storage pool to address the problem. You can do this by finding the total size of the storage pool using `zfs list zones` in the global zone: NAME USED AVAIL REFER MOUNTPOINT zones" }, { "data": "395G 612K /zones Determine how much you want to allow the storage zones to use. In this case, we'll allow the zones to use 100 GiB each, making up 300 GiB, or 75% of the available storage. Now, find the storage zones: SERVICE SH ZONENAME GZ ADMIN IP storage 1 15711409-ca77-4204-b733-1058f14997c1 172.25.10.4 storage 1 275dd052-5592-45aa-a371-5cd749dba3b1 172.25.10.4 storage 1 b6d2c71f-ec3d-497f-8b0e-25f970cb2078 172.25.10.4 and for each one, update the quota using `vmadm update`. You can apply a 100 GiB quota to all of the storage zones on a single-system Manta using: manta-adm show -H -o zonename storage | while read zonename; do vmadm update $zonename quota=100; done Note: This only prevents Manta storage zones from using more disk space than you've budgeted for them. If the rest of the system uses more than expected, you could still run out of disk space. To avoid this, make sure that all zones have quotas and the sum of quotas does not exceed the space available on the system. Background: Manta operators are responsible for basic monitoring of components, including monitoring disk usage to avoid components running out of disk space. Out of the box, Manta stops using storage zones that are nearly full. This mechanism relies on storage zones reporting how full they are, which they determine by dividing used space by available space. However, Manta storage zones are typically deployed without quotas, which means their available space is limited by the total space in the ZFS storage pool. This accounting is not correct when there are multiple storage zones on the same system. To make this concrete, consider a system with 400 GiB of total ZFS pool space. Suppose there are three storage zones, each using 100 GiB of space, and suppose that the rest of the system uses negligible storage space. In this case, there are 300 GiB in use overall, so there's 100 GiB available in the pool. As a result, each zone reports that it's using 100 GiB and has 100 GiB available, so it's 50% full. In reality, though, the system is 75% full. Each zone reports that it has 100 GiB free, but if we were to write just 33 GiB to each zone, the whole system would be full. This problem only affects deployments that place multiple storage zones on the same system, which is not typical outside of development. 
In development, the problem can be worked around by applying appropriate quotas in each zone (as described above). Once the above steps have been completed, there are a few steps you should consider doing to ensure a working deployment. If you haven't already done so, you will need to . To test Manta with the Manta CLI tools, you will need an account configured in Triton. You can either use one of the default configured accounts or setup your own. The most common method is to test using the `poseidon` account which is created by the Manta install. In either case, you will need access to the Operations Portal. [See the instructions here](https://docs.joyent.com/private-cloud/install/headnode-installation#adding-external-access-to-adminui-and-imgapi) on how to find the IP address of the Operations Portal from your headnode. Log into the Operations Portal: COAL users should use login `admin` and the password . Lab users will also use `admin`, but need to ask whoever provisioned your lab account for the password. Once in, follow to add ssh keys to the account of your choice. Once you have setup an account on Manta or added your ssh keys added to an existing account, you can test your Manta install with the Manta CLI tools you installed above in" }, { "data": "There are complete instructions on how to get started with the CLI tools . Some things in that guide will not be as clear for users of custom deployments. The biggest difference will be the setting of the `MANTA_URL` variable. You will need to find the IP address of your API endpoint. To do this from your headnode: headnode$ manta-adm show -H -o primary_ip loadbalancer Multiple addresses will be returned. Choose any one and set `MANTA_URL` to `https://$that_ip`. `MANTA_USER` will be the account you setup in \"Set up a Manta Account\" section. `MANTAKEYID` will be the ssh key id you added in \"Set up a Manta Account\" section. If the key you used is in an environment that has not installed a certificate signed by a recognized authority you might see `Error: self signed certificate` errors. To fix this, add `MANTATLSINSECURE=true` to your environment or shell config. 
A final `~/.bashrc` or `~/.bash_profile` might look something like: export MANTA_USER=poseidon export MANTA_URL=https://<your-loadbalancer-ip> export MANTATLSINSECURE=true export MANTAKEYID=`ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}' | tr -d '\\n'` Lastly test the CLI tools from your development machine: $ echo \"Hello, Manta\" > /tmp/hello.txt $ mput -f /tmp/hello.txt ~~/stor/hello-foo .../stor/hello-foo [=======================================================>] 100% 13B $ mls ~~/stor/ hello-foo $ mget ~~/stor/hello-foo Hello, Manta The networking configuration file is a per-datacenter JSON file with several properties: | Property | Kind | Description | | -- | -- | - | | `azs` | array&nbsp;of&nbsp;strings | list of all availability zones (datacenters) participating in Manta in this region | | `this_az` | string | string (in `azs`) denoting this availability zone | | `manta_nodes` | array&nbsp;of&nbsp;strings | list of server uuid's for all servers participating in Manta in this AZ | | `admin` | object | describes the \"admin\" network in this datacenter (see below) | | `manta` | object | describes the \"manta\" network in this datacenter (see below) | | `nicmappings` | object | maps each server in `mantanodes` to an object mapping each network name to the network interface on the server that should be tagged | | `macmappings` | object | (deprecated) maps each server uuid from `mantanodes` to an object mapping each network name (\"admin\", \"manta\") to the MAC address on that server over which that network should be created. | | `distribute_svcs` | boolean | control switch over boot-time networking detection performed by `manta-net.sh` (see below) | \"admin\" and \"manta\" describe the networks that are built into Manta: `admin`: the Triton administrative network `manta`: the Manta administrative network, used for high-volume communication between all Manta services. Each of these is an object with several properties: | Property | Kind | Description | | | | - | | `network` | string | Name for the Triton network object (usually the same as the network name) | | `nic_tag` | string | NIC tag name for this network (usually the same as the network name) | Besides those two, each of these blocks has a property for the current availability zone that describes the \"subnet\", \"gateway\", \"vlan_id\", and \"start\" and \"end\" provisionable addresses. `nic_mappings` is a nested object structure defining the network interface to be tagged for each server defined in `manta_nodes`, and for each of Manta's required networks. See below for an example of this section of the configuration. Note: If aggregations are used, they must already exist in NAPI, and updating NIC tags on aggregations will require a reboot of the server in question. The optional boolean `distribute_svcs` gives control to the operator over the boot-time networking detection that happens each time" }, { "data": "is executed (which determines if the global zone SMF services should be distributed). For example, an operator has enabled boot-time networking in a datacenter after installing Manta, and subsequently would like to add some more storage nodes. For consistency, the operator can set `distribute_svcs` to `true` in order to force distribution of these global zone services. Note: For global zone network changes handled by boot-time networking to take effect, a reboot of the node must be performed. See for more information on boot-time networking. 
For reference, here's an example multi-datacenter configuration with one service node (aac3c402-3047-11e3-b451-002590c57864) and one storage node (445aab6c-3048-11e3-9816-002590c3f3bc): { \"this_az\": \"staging-1\", \"manta_nodes\": [ \"aac3c402-3047-11e3-b451-002590c57864\", \"445aab6c-3048-11e3-9816-002590c3f3bc\" ], \"azs\": [ \"staging-1\", \"staging-2\", \"staging-3\" ], \"admin\": { \"nic_tag\": \"admin\", \"network\": \"admin\", \"staging-1\": { \"subnet\": \"172.25.3.0/24\", \"gateway\": \"172.25.3.1\" }, \"staging-2\": { \"subnet\": \"172.25.4.0/24\", \"gateway\": \"172.25.4.1\" }, \"staging-3\": { \"subnet\": \"172.25.5.0/24\", \"gateway\": \"172.25.5.1\" } }, \"manta\": { \"nic_tag\": \"manta\", \"network\": \"manta\", \"staging-1\": { \"vlan_id\": 3603, \"subnet\": \"172.27.3.0/24\", \"start\": \"172.27.3.4\", \"end\": \"172.27.3.254\", \"gateway\": \"172.27.3.1\" }, \"staging-2\": { \"vlan_id\": 3604, \"subnet\": \"172.27.4.0/24\", \"start\": \"172.27.4.4\", \"end\": \"172.27.4.254\", \"gateway\": \"172.27.4.1\" }, \"staging-3\": { \"vlan_id\": 3605, \"subnet\": \"172.27.5.0/24\", \"start\": \"172.27.5.4\", \"end\": \"172.27.5.254\", \"gateway\": \"172.27.5.1\" } }, \"nic_mappings\": { \"aac3c402-3047-11e3-b451-002590c57864\": { \"manta\": { \"mac\": \"90:e2:ba:4b:ec:d1\" } }, \"445aab6c-3048-11e3-9816-002590c3f3bc\": { \"manta\": { \"mac\": \"90:e2:ba:4a:32:71\" }, \"mantanat\": { \"aggr\": \"aggr0\" } } } } The deprecated `macmappings` can also be used in place of `nicmappings`. Only one of `nicmappings` or `macmappings` is supported per network configuration file. In a multi-datacenter configuration, this would be used to configure the \"staging-1\" datacenter. There would be two more configuration files, one for \"staging-2\" and one for \"staging-3\". \"manta-adm\" is the tool we use both to deploy all of the Manta zones and then to provision new zones, deprovision old zones, or reprovision old zones with a new image. \"manta-adm\" also has commands for viewing what's deployed, showing information about compute nodes, and more, but this section only discusses the configuration file format. A manta-adm configuration file takes the form: { \"COMPUTENODEUUID\": { \"SERVICE_NAME\": { \"IMAGEUUID\": COUNTOF_ZONES }, \"SHARDEDSERVICENAME\": { \"SHARD_NUMBER\": { \"IMAGEUUID\": COUNTOF_ZONES }, } }, } The file specifies how many of each kind of zone should be deployed on each compute node. For most zones, the \"kind\" of zone is just the service name (e.g., \"storage\"). For sharded zones, you also have to specify the shard number. After you've run `manta-init`, you can generate a sample configuration for a single-system install using \"manta-adm genconfig\". 
Use that to give you an idea of what this looks like: $ manta-adm genconfig coal { \"<any>\": { \"nameservice\": { \"197e905a-d15d-11e3-90e2-6bf8f0ea92b3\": 1 }, \"postgres\": { \"1\": { \"92782f28-d236-11e3-9e6c-5f7613a4df37\": 2 } }, \"moray\": { \"1\": { \"ef659002-d15c-11e3-a5f6-4bf577839d16\": 1 } }, \"electric-moray\": { \"e1043ddc-ca82-11e3-950a-ff14d493eebf\": 1 }, \"storage\": { \"2306b44a-d15d-11e3-8f87-6b9768efe5ae\": 2 }, \"authcache\": { \"5dff63a4-d15c-11e3-a312-5f3ea4981729\": 1 }, \"storinfo\": { \"2ef81a09-ad04-445f-b4fe-1aa87ce4e54c\": 1 }, \"webapi\": { \"319afbfa-d15e-11e3-9aa9-33ebf012af8f\": 1 }, \"loadbalancer\": { \"7aac4c88-d15c-11e3-9ea6-dff0b07f5db1\": 1 }, \"ops\": { \"ab253aae-d15d-11e3-8f58-3fb986ce12b3\": 1 } } } This file effectively specifies all of the Manta components except for the platforms. You can generate a configuration file that describes your current deployment with `manta-adm show -s -j`. For a coal or lab deployment, your best bet is to save the output of `manta-adm genconfig coal` or `manta-adm genconfig lab` to a file and use that. This is what the `manta-deploy-coal` and `manta-deploy-lab` scripts do, and you may as well just use those. Once you have a file like this, you can pass it to `manta-adm update`, which will show you what it will do in order to make the deployment match the configuration file, and then it will go ahead and do it. For more information, see \"manta-adm" } ]
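To make the schema concrete, a hand-written configuration for a hypothetical two-node deployment might look like the sketch below. The server and image UUIDs are placeholders only; real values come from your compute node inventory and from the images downloaded by `manta-init`:

```json
{
    "SERVER-UUID-METADATA-NODE": {
        "nameservice": { "IMAGE-UUID-NS": 1 },
        "postgres": { "1": { "IMAGE-UUID-PG": 1 } },
        "moray": { "1": { "IMAGE-UUID-MORAY": 1 } },
        "electric-moray": { "IMAGE-UUID-EMORAY": 1 },
        "webapi": { "IMAGE-UUID-WEBAPI": 1 },
        "loadbalancer": { "IMAGE-UUID-LB": 1 }
    },
    "SERVER-UUID-STORAGE-NODE": {
        "storage": { "IMAGE-UUID-STOR": 2 }
    }
}
```

Passing a file like this to `manta-adm update` first reports the zones it intends to provision or remove before making the changes, which makes it a reasonably safe way to iterate on a layout.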
{ "category": "Runtime", "file_name": "deployment.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": ". Examples of user facing changes: <!-- Select one or more options that fit this PR. --> Features Bug fixes Docs Tests <!-- Describe your changes here, ideally you can get that description straight from your descriptive commit message(s)! --> Fixes #(issue-number)" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Kube-OVN", "subcategory": "Cloud Native Network" }
[ { "data": "Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the primary CNI. Antrea is designed to work as NetworkPolicy plug-in to work together with a routed CNIs. For as long as a CNI implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that CNI's environment using Open vSwitch(OVS). In addition, Antrea working as NetworkPolicy plug-in automatically enables Antrea-proxy, because it requires Antrea-proxy to load balance Pod-to-Service traffic. <img src=\"../assets/policy-only-cni.svg\" width=\"600\" alt=\"Antrea Switched CNI\"> The above diagram depicts a routed CNI network topology on the left, and what it looks like after Antrea inserts the OVS bridge into the data path. The diagram on the left illustrates a routed CNI network topology such as AWS EKS. In this topology a Pod connects to the host network via a point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a host route with corresponding Pod's IP address as destination is created on each PtP device. Within each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod traffic, even within the same worker Node must traverse first to the host network and be routed by it. When the container runtime instantiates a Pod, it first calls the primary CNI to configure Pod's IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a veth-pair. When Antrea is chained with this primary CNI, container runtime then calls Antrea Agent, and the Antrea Agent attaches Pod's PtP device to the OVS bridge, and moves the host route to the Pod to local host gateway(`antrea-gw0`) interface from the PtP device. This is illustrated by the diagram on the right. Antrea needs to satisfy that All IP packets, sent on ``antrea-gw0`` in the host network, are received by the Pods exactly the same as if the OVS bridge had not been inserted. All IP packets, sent by Pods, are received by other Pods or the host network exactly the same as if OVS bridge had not been inserted. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge. To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge: A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod can resolve its neighbors, and the Pod therefore can generate traffic to these neighbors. A L3 flow for each local Pod that routes IP packets to that Pod if packets' destination IP matches that of the Pod. A L3 flow that routes all other IP packets to host network via `antrea-gw0` interface. These flows together handle all Pod traffic patterns." } ]
{ "category": "Runtime", "file_name": "policy-only.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Firecracker relies on KVM and on the processor virtualization features for workload isolation. The host and guest kernels and host microcode must be regularly patched in accordance with your distribution's security advisories such as for Amazon Linux. Security guarantees and defense in depth can only be upheld, if the following list of recommendations are implemented in production. Firecracker uses filters to limit the system calls allowed by the host OS to the required minimum. By default, Firecracker uses the most restrictive filters, which is the recommended option for production usage. Production usage of the `--seccomp-filter` or `--no-seccomp` parameters is not recommended. Firecracker implements the 8250 serial device, which is visible from the guest side and is tied to the Firecracker/non-daemonized jailer process stdout. Without proper handling, because the guest has access to the serial device, this can lead to unbound memory or storage usage on the host side. Firecracker does not offer users the option to limit serial data transfer, nor does it impose any restrictions on stdout handling. Users are responsible for handling the memory and storage usage of the Firecracker process stdout. We suggest using any upper-bounded forms of storage, such as fixed-size or ring buffers, using programs like `journald` or `logrotate`, or redirecting to `/dev/null` or a named pipe. Furthermore, we do not recommend that users enable the serial device in production. To disable it in the guest kernel, use the `8250.nr_uarts=0` boot argument when configuring the boot source. Please be aware that the device can be reactivated from within the guest even if it was disabled at boot. If Firecracker's `stdout` buffer is non-blocking and full (assuming it has a bounded size), any subsequent writes will fail, resulting in data loss, until the buffer is freed. Firecracker outputs logging data into a named pipe, socket, or file using the path specified in the `log_path` field of logger configuration. Firecracker can generate log data as a result of guest operations and therefore the guest can influence the volume of data written in the logs. Users are responsible for consuming and storing this data safely. We suggest using any upper-bounded forms of storage, such as fixed-size or ring buffers, programs like `journald` or `logrotate`, or redirecting to a named pipe. We recommend adding `quiet loglevel=1` to the host kernel command line to limit the number of messages written to the serial console. This is because some host configurations can have an effect on Firecracker's performance as the process will generate host kernel logs during normal operations. The most recent example of this was the addition of `console=ttyAMA0` host kernel command line argument on one of our testing setups. This enabled console logging, which degraded the snapshot restore time from 3ms to 8.5ms on `aarch64`. In this case, creating the tap device for snapshot restore generated host kernel logs, which were very slow to write. Firecracker installs custom signal handlers for some of the POSIX signals, such as SIGSEGV, SIGSYS, etc. The custom signal handlers used by Firecracker are not async-signal-safe, since they write logs and flush the metrics, which use locks for" }, { "data": "While very unlikely, it is possible that the handler will intercept a signal on a thread which is already holding a lock to the log or metrics buffer. This can result in a deadlock, where the specific Firecracker thread becomes unresponsive. 
While there is no security impact caused by the deadlock, we recommend that customers have an overwatcher process on the host, that periodically looks for Firecracker processes that are unresponsive, and kills them, by SIGKILL. For assuring secure isolation in production deployments, Firecracker should be started using the `jailer` binary that's part of each Firecracker release, or executed under process constraints equal or more restrictive than those in the jailer. For more about Firecracker sandboxing please see The Jailer process applies , namespace isolation and drops privileges of the Firecracker process. To set up the jailer correctly, you'll need to: Create a dedicated non-privileged POSIX user and group to run Firecracker under. Use the created POSIX user and group IDs in Jailer's `--uid <uid>` and `--gid <gid>` flags, respectively. This will run the Firecracker as the created non-privileged user and group. All file system resources used for Firecracker should be owned by this user and group. Apply least privilege to the resource files owned by this user and group to prevent other accounts from unauthorized file access. When running multiple Firecracker instances it is recommended that each runs with its unique `uid` and `gid` to provide an extra layer of security for their individually owned resources in the unlikely case where any one of the jails is broken out of. Firecracker's customers are strongly advised to use the provided `resource-limits` and `cgroup` functionalities encapsulated within jailer, in order to control Firecracker's resource consumption in a way that makes the most sense to their specific workload. While aiming to provide as much control as possible, we cannot enforce aggressive default constraints resources such as memory or CPU because these are highly dependent on the workload type and usecase. Here are some recommendations on how to limit the process's resources: `cgroup` provides a which allows users to control I/O operations through the following files: `blkio.throttle.io_serviced` - bounds the number of I/Os issued to disk `blkio.throttle.ioservicebytes` - sets a limit on the number of bytes transferred to/from the disk Jailer's `resource-limit` provides control on the disk usage through: `fsize` - limits the size in bytes for files created by the process `no-file` - specifies a value greater than the maximum file descriptor number that can be opened by the process. If not specified, it defaults to 4096\\. `cgroup` provides a to allow setting upper limits to memory usage: `memory.limitinbytes` - bounds the memory usage `memory.memsw.limitinbytes` - limits the memory+swap usage `memory.softlimitin_bytes` - enables flexible sharing of memory. Under normal circumstances, control groups are allowed to use as much of the memory as needed, constrained only by their hard limits set with the `memory.limitinbytes` parameter. However, when the system detects memory contention or low memory, control groups are forced to restrict their consumption to their soft" }, { "data": "`cgroup`s can guarantee a minimum number of CPU shares when a system is busy and provides CPU bandwidth control through: `cpu.shares` - limits the amount of CPU that each group it is expected to get. The percentage of CPU assigned is the value of shares divided by the sum of all shares in all `cgroups` in the same level `cpu.cfsperiodus` - bounds the duration in us of each scheduler period, for bandwidth decisions. 
This defaults to 100ms `cpu.cfsquotaus` - sets the maximum time in microseconds during each `cfsperiodus` for which the current group will be allowed to run `cpuacct.usage_percpu` - limits the CPU time, in ns, consumed by the process in the group, separated by CPU Additional details of Jailer features can be found in the . The current implementation results in host CPU usage increase on x86 CPUs when a guest injects timer interrupts with the help of kvm-pit kernel thread. kvm-pit kthread is by default part of the root cgroup. To mitigate the CPU overhead we recommend two system level configurations. Use an external agent to move the `kvm-pit/<pid of firecracker>` kernel thread in the microVMs cgroup (e.g., created by the Jailer). This cannot be done by Firecracker since the thread is created by the Linux kernel after guest start, at which point Firecracker is de-privileged. Configure the kvm limit to a lower value. This is a system-wide configuration available to users without Firecracker or Jailer changes. However, the same limit applies to APIC timer events, and users will need to test their workloads in order to apply this mitigation. To modify the kvm limit for interrupts that can be injected in a second. `sudo modprobe -r (kvmintel|kvmamd) kvm` `sudo modprobe kvm mintimerperiodus={newvalue}` `sudo modprobe (kvmintel|kvmamd)` To have this change persistent across boots we can append the option to `/etc/modprobe.d/kvm.conf`: `echo \"options kvm mintimerperiod_us=\" >> /etc/modprobe.d/kvm.conf` Network can be flooded by creating connections and sending/receiving a significant amount of requests. This issue can be mitigated either by configuring rate limiters for the network interface as explained within , or by using one of the tools presented below: `tc qdisc` - manipulate traffic control settings by configuring filters. When traffic enters a classful qdisc, the filters are consulted and the packet is enqueued into one of the classes within. Besides containing other qdiscs, most classful qdiscs perform rate control. `netnamespace` and `iptables` `--pid-owner` - can be used to match packets based on the PID that was responsible for them `connlimit` - restricts the number of connections for a destination IP address/from a source IP address, as well as limit the bandwidth Data written to storage devices is managed in Linux with a page cache. Updates to these pages are written through to their mapped storage devices asynchronously at the host operating system's discretion. As a result, high storage output can result in this cache being filled quickly resulting in a backlog which can slow down I/O of other guests on the" }, { "data": "To protect the resource access of the guests, make sure to tune each Firecracker process via the following tools: : A wrapper environment designed to contain Firecracker and strictly control what the process and its guest has access to. Take note of the , paying particular note to the `--resource-limit` parameter. Rate limiting: Rate limiting functionality is supported for both networking and storage devices and is configured by the operator of the environment that launches the Firecracker process and its associated guest. See the for examples of calling the API to configure rate limiting. Memory pressure on a host can cause memory to be written to drive storage when swapping is enabled. Disabling swap mitigates data remanence issues related to having guest memory contents on microVM storage devices. 
Verify that swap is disabled by running: ```bash grep -q \"/dev\" /proc/swaps && \\ echo \"swap partitions present (Recommendation: no swap)\" \\ || echo \"no swap partitions (OK)\" ``` \\[!CAUTION\\] Firecracker is not able to mitigate host's hardware vulnerabilities. Adequate mitigations need to be put in place when configuring the host. \\[!CAUTION\\] Firecracker is designed to provide isolation boundaries between microVMs running in different Firecracker processes. It is strongly recommended that each Firecracker process corresponds to a workload of a single tenant. \\[!CAUTION\\] For security and stability reasons it is highly recommended to load updated microcode as soon as possible. Aside from keeping the system firmware up-to-date, when the kernel is used to load updated microcode of the CPU this should be done as early as possible in the boot process. For the purposes of this document we assume a workload that involves arbitrary code execution in a multi-tenant context where each Firecracker process corresponds to a single tenant. Specific mitigations for side channel issues are constantly evolving as researchers find additional issues on a regular basis. Firecracker itself has no control over many lower-level software and hardware behaviors and capabilities and is not able to mitigate all these issues. Thus, it is strongly recommended that users follow the very latest as well as hardware/processor-specific recommendations and firmware updates (see below) when configuring mitigations against side channel attacks including \"Spectre\" and \"Meltdown\" attacks. However, some generic recommendations are also provided in what follows. Simultaneous Multi-Threading (SMT) is frequently a precondition for speculation issues utilized in side channel attacks such as Spectre variants, MDS, and others, where one tenant could leak information to another tenant or the host. As such, our recommendation is to disable SMT in production scenarios that require tenant separation. Users should disable to mitigate that rely on page deduplication for revealing what memory pages are accessed by another process. Rowhammer is a memory side-channel issue that can lead to unauthorized cross- process memory changes. Using DDR4 memory that supports Target Row Refresh (TRR) with error-correcting code (ECC) is recommended. Use of pseudo target row refresh (pTRR) for systems with pTRR-compliant DDR3 memory can help mitigate the issue, but it also incurs a performance penalty. For vendor-specific recommendations, please consult the resources below: Intel: AMD: ARM: On ARM, the physical counter (i.e `CNTPCT`) it is returning the . From the discussions before merging this change , this seems like a conscious design decision of the ARM code contributors, giving precedence to performance over the ability to trap and control this in the" }, { "data": "can be used to assess host's resilience against several transient execution CVEs and receive guidance on how to mitigate them. The script is used in integration tests by the Firecracker team. It can be downloaded and executed like: ```bash wget -O - https://meltdown.ovh | bash ``` Linux 6.1 introduced some regressions in the time it takes to boot a VM, for the x86_64 architecture. They can be mitigated depending on the CPU and the version of cgroups in use. 
The regression happens in the `KVMCREATEVM` ioctl and there are two factors that cause the issue: In the implementation of the mitigation for the iTLB multihit vulnerability, KVM creates a worker thread called `kvm-nx-lpage-recovery`. This thread is responsible for recovering huge pages split when the mitigation kicks-in. In the process of creating this thread, KVM calls `cgroupattachtask_all()` to move it to the same cgroup used by the hypervisor thread In kernel v4.4, upstream converted a cgroup per process read-write semaphore into a per-cpu read-write semaphore to allow to perform operations across multiple processes (). It was found that this conversion introduced high latency for write paths, which mainly includes moving tasks between cgroups. This was fixed in kernel v4.9 by which chose to favor writers over readers since moving tasks between cgroups is a common operation for Android. However, In kernel 6.0, upstream decided to revert back again and favor readers over writers re-introducing the original behavior of the rw semaphore (). At the same time, this commit provided an option called favordynmods to favor writers over readers. Since the `kvm-nx-lpage-recovery` thread creation and its cgroup change is done in the `KVMCREATEVM` call, the high latency we observe in 6.1 is due to the upstream decision to favor readers over writers for this per-cpu rw semaphore. While the 4.14 and 5.10 kernels favor writers over readers. The first step is to check if the host is vulnerable to iTLB multihit. Look at the value of `cat /sys/devices/system/cpu/vulnerabilities/itlb_multihit`. If it does says `Not affected`, the host is not vulnerable and you can apply mitigation 2, and optionally 1 for best results. Otherwise it is vulnerable and you can only apply mitigation 1. The mitigation in this case is to enable `favordynmods` in cgroupsv1 or cgroupsv2. This changes the behavior of all cgroups in the host, and makes it closer to the performance of Linux 5.10 and 4.14. For cgroupsv2, run this command: ```sh sudo mount -o remount,favordynmods /sys/fs/cgroup ``` For cgroupsv1, remounting with `favordynmods` is not supported, so it has to be done at boot time, through a kernel command line option. Add `cgroup_favordynmods=true` to your kernel command line in GRUB. Refer to your distribution's documentation for where to make this change[^1] This mitigation is preferred to the previous one as it is less invasive (it doesn't affect other cgroups), but it can also be combined with the cgroups mitigation. ```sh KVMVENDORMOD=$(lsmod |grep -P \"^kvm_(amd|intel)\" | awk '{print $1}') sudo modprobe -r $KVMVENDORMOD kvm sudo modprobe kvm nxhugepages=never sudo modprobe $KVMVENDORMOD ``` To validate that the change took effect, the file `/sys/module/kvm/parameters/nxhugepages` should say `never`. systems, and ." } ]
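Pulling together the jailer guidance above (a dedicated unprivileged `uid`/`gid`, `--resource-limit` bounds, and cgroup limits), a launch could look roughly like the following sketch. The numeric values are illustrative only and must be sized for your workload, and any files passed to Firecracker must be reachable inside the jail's chroot:

```sh
# Illustrative sketch: uid/gid 123 is a dedicated, unprivileged user that owns
# the jail's resources; fsize and no-file bound created-file size and open
# descriptors; cpu.shares sets a relative CPU weight (cgroups v1 file name).
sudo jailer \
  --id my-microvm-001 \
  --exec-file /usr/bin/firecracker \
  --uid 123 \
  --gid 123 \
  --chroot-base-dir /srv/jailer \
  --resource-limit fsize=268435456 \
  --resource-limit no-file=2048 \
  --cgroup cpu.shares=512 \
  --daemonize \
  -- --config-file vm_config.json
```

When several microVMs share a host, each should be launched with its own `--id` and its own user and group, as recommended above.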
{ "category": "Runtime", "file_name": "prod-host-setup.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "This document describes how to set up a development environment for Calico, as well as how to build and test development code. This guide is broken into the following main sections: Additional developer docs can be found in . These build instructions assume you have a Linux build environment with: Docker git make To build all of Calico, run the following command from the root of the repository. ``` make image DEV_REGISTRY=my-registry ``` This will produce several container images, each tagged with the registry provided. To build just one image, you can run the same command in a particular sub-directory. For example, to build `calico/node`, run the following: ``` make -C node image ``` By default, images will be produced for the build machine's architecture. To cross-compile, pass the `ARCH` environment variable. For example, to build for arm64, run the following: ``` make image ARCH=arm64 ``` Now, build the YAML manifests that are used to install calico: ``` make dev-manifests ``` The build uses the go package cache and local vendor caching to increase build speed. To perform a clean build, use the `dev-clean` target. ``` make dev-clean dev-image ``` The following are the standard `Makefile` targets that are in every project directory. `make build`: build the binary for the current architecture. Normally will be in `bin/` or `dist/` and named `NAME-ARCH`, e.g. `felix-arm64` or `typha-amd64`. If there are multiple OSes available, then named `NAME-OS-ARCH`, e.g. `calicoctl-darwin-amd64`. `make build ARCH=<ARCH>`: build the binary for the given `ARCH`. Output binary will be in `bin/` or `dist/` and follows the naming convention listed above. `make build-all`: build binaries for all supported architectures. Output binaries will be in `bin/` or `dist/` and follow the naming convention listed above. `make image`: create a docker image for the current architecture. It will be named `NAME:latest-ARCH`, e.g. `calico/felix:latest-amd64` or `calico/typha:latest-s390x`. If multiple operating systems are available, will be named `NAME:latest-OS-ARCH`, e.g. `calico/ctl:latest-linux-ppc64le` `make image ARCH=<ARCH>`: create a docker image for the given `ARCH`. Images will be named according to the convention listed above. `make image-all`: create docker images for all supported architectures. Images will be named according to the convention listed above in `make image`. `make test`: run all tests `make ci`: run all CI steps for build and test, likely other targets. WARNING: It is not* recommended to run `make ci` locally, as the actions it takes may be destructive. `make cd`: run all CD steps, normally pushing images out to registries. WARNING: It is not* recommended to run `make cd` locally, as the actions it takes may be destructive, e.g. pushing out images. For your safety, it only will work if you run `make cd CONFIRM=true`, which only should be run by the proper CI system. Each directory has its own set of automated tests that live in-tree and can be run without the need to deploy an end-to-end Kubernetes system. The easiest way to run the tests is to submit a PR with your changes, which will trigger a build on the CI system. If you'd like to run them locally we recommend running each directory's test suite individually, since running the tests for the entire codebase can take a very long time. Use the `test` target in a particular directory to run that directory's tests. 
``` make test ``` For information on how to run a subset of a directory's tests, refer to the documentation and Makefile in that directory." } ]
{ "category": "Runtime", "file_name": "DEVELOPER_GUIDE.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "title: FAQ menu_order: 200 search_type: Documentation <a name=\"container-ip\"></a> Q: How do I obtain the IP of a specific container when I'm using Weave? You can use `weave ps <container>` to see the allocated address of a container on a Weave network. See . <a name=\"specific-ip\"></a> Q: My dockerized app needs to check the request of an application that uses a static IP. Is it possible to manually change the IP of a container? You can manually change the IP of a container using . For more information, refer to . <a name=\"expose-container\"></a> Q: How do I expose one of my containers to the outside world? Exposing a container to the outside world is described in . <a name=\"legacy-network\"></a> Q: Can I connect my existing 'legacy' network with a Weave container network? Yes you can. For example, you have a Weave network that runs on hosts A, B, C. and you have an additional host, that we'll call P, where neither Weave nor Docker are running. However, you need to connect from a process running on host P to a specific container running on host B on the Weave network. Since the Weave network is completely separate from any network that P is connected to, you cannot connect the container using the container's IP address. A simple way to accomplish this would be to run Weave on the host and then run, `weave expose` to expose the network to any running containers. Or you set up a route from P to one of A, B or C. See . Yet another option is to expose a port from the container on host B and then connect to it. You can read about exposing ports in . <a name=\"duplicate-peer\"></a> Q: Why am I seeing \"peer names collision\" and failed connections? This sometimes happens when machines are cloned; we require each machine in your cluster to have a unique identity. For more information see . Depending on your Linux distribution you may need to [set up machine-id](https://www.freedesktop.org/software/systemd/man/machine-id.html) or . <a name=\"duplicate-ip\"></a> Q: Why am I seeing the same IP address assigned to two different containers on different hosts? Under normal circumstances, this should never happen, but it can occur if `weave rmpeer` was run on more than one host. For more information see . <a name=\"dead-node\"></a> Q: What is the best practice for resetting a node that goes out of service? When a node goes out of service, the best option is to call `weave rmpeer` on one host and then `weave forget` on all the other" }, { "data": "See for an in-depth discussion. <a name=\"performance\"></a> Q: What about Weave's performance? Are software defined network overlays just as fast as native networking? All virtualization techniques have some overhead, and Weave's overhead is typically around 2-3%. Unless your system is completely bottlenecked on the network, you won't notice this during normal operation. Weave Net also automatically uses the fastest datapath between two hosts. When Weave Net can't use the fast datapath between two hosts, it falls back to the slower packet forwarding approach. Selecting the fastest forwarding approach is automatic, and is determined on a connection-by-connection basis. For example, a Weave network spanning two data centers might use fast datapath within the data centers, but not for the more constrained network link between them. For more information about fast datapath see . <a name=\"query-fastdp\"></a> Q: How can I tell if Weave is using fast datapath (fastdp) or not? 
To view whether Weave is using fastdp or not, you can run, `weave status connections` For more information on this command, see . <a name=\"encrypted-fastdp\"></a> Q: Does encryption work with fastdp? Yes, 1.9 version of Weave Net added the encryption feature to fastdp. See for more information. <a name=\"app-isolation\"></a> Q: Can I create multiple networks where containers can communicate on one network, but are isolated from containers on other networks? Yes, of course! Weave allows you to run isolated networks and still allow open communications between individual containers from those isolated networks. You can find information on how to do this in . <a name=\"ports\"></a>Q: Which ports does Weave Net use (e.g. if I am configuring a firewall) ? You must permit traffic to flow through TCP 6783 and UDP 6783/6784, which are Weaves control and data ports. The daemon also uses TCP 6781/6782 for , but you would only need to open up this port if you wish to collect metrics from another host. The Weave Net daemon listens on localhost (127.0.0.1) TCP port 6784 for commands from other Weave Net components. This port should not be opened to other hosts. When using encrypted fast datapath, make sure that underlying network does not block ESP traffic (IP protocol 50). For instance on Google Cloud Platform a firewall rule for allowing ESP traffic has to be installed. <a name=\"own-image\"></a>Q: Why do you use your own Docker image `weaveworks/ubuntu`? The official Ubuntu image does not contain the `ping` and `nc` commands which are used in many of our examples throughout the documentation. The `weaveworks/ubuntu` image is simply the official Ubuntu image with those two commands added. See Also *" } ]
{ "category": "Runtime", "file_name": "faq.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "This document has troubleshooting tips when installing / using Sysbox in Docker hosts. For troubleshooting in Kubernetes clusters, see . When installing the Sysbox package with the `apt-get install` command (see the ), the expected output is: ```console $ sudo apt-get install ./sysbox-ce0.5.0-0.linuxamd64.deb Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'sysbox-ce' instead of './sysbox-ce0.5.0-0.linuxamd64.deb' The following NEW packages will be installed: sysbox-ce 0 upgraded, 1 newly installed, 0 to remove and 178 not upgraded. Need to get 0 B/11.2 MB of archives. After this operation, 40.8 MB of additional disk space will be used. Get:1 /home/rmolina/wsp/02-26-2020/sysbox/sysbox-ce0.5.0-0.linuxamd64.deb sysbox-ce amd64 0.5.0-0.linux [11.2 MB] Selecting previously unselected package sysbox-ce. (Reading database ... 327292 files and directories currently installed.) Preparing to unpack .../sysbox-ce0.5.0-0.linuxamd64.deb ... Unpacking sysbox-ce (0.5.0-0.linux) ... Setting up sysbox-ce (0.5.0-0.linux) ... Created symlink /etc/systemd/system/sysbox.service.wants/sysbox-fs.service /lib/systemd/system/sysbox-fs.service. Created symlink /etc/systemd/system/sysbox.service.wants/sysbox-mgr.service /lib/systemd/system/sysbox-mgr.service. Created symlink /etc/systemd/system/multi-user.target.wants/sysbox.service /lib/systemd/system/sysbox.service. ``` If there is any missing software dependency, the 'apt-get' tool will take care of installing it accordingly during the installation process. Alternatively, you can manually execute the following instructions to install Sysbox's missing dependencies: ```console $ sudo apt-get update $ sudo apt-get install -f -y ``` There may be other issues observed during installation. For example, in Docker environments, the Sysbox installer may complain if there are active docker containers during the installation process. In this case, proceed to execute the action suggested by the installer and re-launch the installation process again. ``` $ sudo apt-get install ./deb/build/amd64/ubuntu-impish/sysbox-ce0.5.0-0.linuxamd64.deb Reading package lists... Done Building dependency tree ... The Sysbox installer requires a docker service restart to configure network parameters, but it cannot proceed due to existing Docker containers. Please remove them as indicated below and re-launch the installation process. Refer to Sysbox installation documentation for details. \"docker rm $(docker ps -a -q) -f\" dpkg: error processing package sysbox-ce (--configure): installed sysbox-ce package post-installation script subprocess returned error exit status 1 ``` Upon successful completion of the installation task, verify that Sysbox's systemd units have been properly installed, and associated daemons are properly running: ```console $ systemctl list-units -t service --all | grep sysbox sysbox-fs.service loaded active running sysbox-fs component sysbox-mgr.service loaded active running sysbox-mgr component sysbox.service loaded active exited Sysbox General Service ``` The sysbox.service is ephemeral (it exits once it launches the other sysbox services), so the `active exited` status above is expected. When creating a system container, Docker may report the following error: ```console $ docker run --runtime=sysbox-runc -it ubuntu:latest docker: Error response from daemon: Unknown runtime specified sysbox-runc. ``` This indicates that the Docker daemon is not aware of the Sysbox runtime. 
This is likely due to one of the following reasons: 1) Docker is installed via a Ubuntu snap package. 2) Docker is installed natively, but it's daemon configuration file (`/etc/docker/daemon.json`) has an error. For (1): At this time, Sysbox does not support Docker installations via snap. See for info on how to overcome" }, { "data": "For (2): The `/etc/docker/daemon.json` file should have an entry for `sysbox-runc` as follows: ```console { \"runtimes\": { \"sysbox-runc\": { \"path\": \"/usr/bin/sysbox-runc\" } } } ``` Double check that this is the case. If not, change the file and restart Docker: ```console $ sudo systemctl restart docker.service ``` NOTE: The Sysbox installer automatically does this configuration and restarts Docker. Thus this error is uncommon. When creating a system container, Docker may report the following error: ```console docker run --runtime=sysbox-runc -it ubuntu:latest docker: Error response from daemon: OCI runtime create failed: host is not configured properly: kernel is not configured to allow unprivileged users to create namespaces: /proc/sys/kernel/unprivilegedusernsclone: want 1, have 0: unknown. ``` This means that the host's kernel is not configured to allow unprivileged users to create user namespaces. For Ubuntu, fix this with: ```console sudo sh -c \"echo 1 > /proc/sys/kernel/unprivilegedusernsclone\" ``` Note: The Sysbox package installer automatically executes this instruction, so normally there is no need to do this configuration manually. The host's `/etc/subuid` and `/etc/subgid` files contain the host user-id and group-id ranges that Sysbox assigns to the containers. These files should have a single entry for user `sysbox` that looks similar to this: ``` $ more /etc/subuid sysbox:165536:65536 ``` If for some reason this file has more than one entry for user `sysbox`, you'll see the following error when creating a container: ``` docker: Error response from daemon: OCI runtime create failed: error in the container spec: invalid user/group ID config: sysbox-runc requires user namespace uid mapping array have one element; found [{0 231072 65536} {65536 296608 65536}]: unknown. ``` When running a system container with a bind mount, you may see that the files and directories associated with the mount have `nobody:nogroup` ownership when listed from within the container. This typically occurs when the source of the bind mount is owned by a user on the host that is different from the user on the host to which the system container's root user maps. Recall that Sysbox containers always use the Linux user namespace and thus map the root user in the system container to a non-root user on the host. See for info on how to overcome this. When creating a system container, Docker may report the following error: ```console docker run --runtime=sysbox-runc -it ubuntu:latest docker: Error response from daemon: OCI runtime create failed: failed to setup docker volume manager: host dir for docker store /var/lib/sysbox/docker can't be on ...\" ``` This means that Sysbox's `/var/lib/sysbox` directory is on a filesystem not supported by Sysbox. This directory must be on one of the following filesystems: ext4 btrfs The same requirement applies to the `/var/lib/docker` directory. 
This is normally the case for vanilla Ubuntu installations, so this error is not" }, { "data": "While creating a system container, Docker may report the following error: ```console $ docker run --runtime=sysbox-runc -it alpine docker: Error response from daemon: OCI runtime create failed: failed to register with sysbox-mgr: failed to invoke Register via grpc: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/sysbox/sysmgr.sock: connect: connection refused\": unknown. ``` or ```console docker run --runtime=sysbox-runc -it alpine docker: Error response from daemon: OCI runtime create failed: failed to pre-register with sysbox-fs: failed to register container with sysbox-fs: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/sysbox/sysfs.sock: connect: connection refused\": unknown. ``` This likely means that the sysbox-mgr and/or sysbox-fs daemons are not running (for some reason). Check that these are running via systemd: $ systemctl status sysbox-mgr $ systemctl status sysbox-fs If either of these services are not running, use Systemd to restart them: ```console $ sudo systemctl restart sysbox ``` Normally Systemd ensures these services are running and restarts them automatically if for some reason they stop. The following error may be reported within a system container or any of its inner (child) containers: ``` ls: cannot access '/proc/sys': Transport endpoint is not connected ``` This error usually indicates that sysbox-fs daemon (and potentially sysbox-mgr too) has been restarted after the affected system container was initiated. In this scenario user is expected to recreate (stop and start) all the active Sysbox containers. When creating a system container with Docker + Sysbox, if Docker reports an error such as: ```console docker: Error response from daemon: OCI runtime create failed: containerlinux.go:364: starting container process caused \"processlinux.go:533: container init caused \\\"rootfs_linux.go:67: setting up ptmx caused \\\\\\\"remove dev/ptmx: device or resource busy\\\\\\\"\\\"\": unknown. ``` It likely means the system container was launched with the Docker `--privileged` flag (and this flag is not compatible with Sysbox as described ). You may hit this problem when doing an `docker exec -it my-syscont bash`: OCI runtime exec failed: exec failed: containerlinux.go:364: starting container process caused \"processlinux.go:94: executing setns process caused \\\"exit status 2\\\"\": unknown This occurs if the `/proc` mount inside the system container is set to \"read-only\". For example, if you launched the system container and run the following command in it: $ mount -o remount,ro /proc The Sysbox daemons (i.e. sysbox-fs and sysbox-mgr) will log information related to their activities in `/var/log/sysbox-fs.log` and `/var/log/sysbox-mgr.log` respectively. These logs should be useful during troubleshooting exercises. You can modify the log file location, log level, and log format. See and for more info. For sysbox-runc, logging is handled as follows: When running Docker + sysbox-runc, the sysbox-runc logs are actually stored in a containerd directory such as: `/run/containerd/io.containerd.runtime.v1.linux/moby/<container-id>/log.json` where `<container-id>` is the container ID returned by Docker. 
When running sysbox-runc directly, sysbox-runc will not produce any logs by default. Use the `sysbox-runc --log` option to change" }, { "data": "Sysbox stores some container state under the `/var/lib/sysbox` directory (which for security reasons is only accessible to the host's root user). When no system containers are running, this directory should be clean and look like this: ```console /var/lib/sysbox containerd docker baseVol cowVol imgVol kubelet ``` When a system container is running, this directory holds state for the container: ```console /var/lib/sysbox containerd f29711b54e16ecc1a03cfabb16703565af56382c8f005f78e40d6e8b28b5d7d3 docker baseVol f29711b54e16ecc1a03cfabb16703565af56382c8f005f78e40d6e8b28b5d7d3 cowVol imgVol kubelet f29711b54e16ecc1a03cfabb16703565af56382c8f005f78e40d6e8b28b5d7d3 ``` If the system container is stopped and removed, the directory goes back to it's clean state: ```console /var/lib/sysbox containerd docker baseVol cowVol imgVol kubelet ``` If you have no system containers created yet `/var/lib/sysbox` is not clean, it means Sysbox is in a bad state. This is very uncommon as Sysbox is well tested. To overcome this, you'll need to follow this procedure: 1) Stop and remove all system containers (e.g., all Docker containers created with the sysbox-runc runtime). There is a bash script to do this . 2) Restart Sysbox: ```console $ sudo systemctl restart sysbox ``` 3) Verify that `/var/lib/sysbox` is back to a clean state: ```console /var/lib/sysbox containerd docker baseVol cowVol imgVol kubelet ``` When running , if you see pods failing to deploy, we suggest starting by inspecting the kubelet log inside the K8s node where the failure occurs. $ docker exec -it <k8s-node> bash This log often has useful information on why the failure occurred. One common reason for failure is that the host is lacking sufficient storage. In this case you'll see messages like these ones in the kubelet log: Disk usage on image filesystem is at 85% which is over the high threshold (85%). Trying to free 1284963532 bytes down to the low threshold (80%). evictionmanager.go:168] Failed to admit pod kube-flannel-ds-amd64-6wkdkkube-system(e3f4c428-ab15-48af-92eb-f07ce06aa4af) - node has conditions: [DiskPressure] To overcome this, make some more storage room in your host and redeploy the pods. If problem cannot be explained by any of the previous bullets, then it may be helpful to obtain core-dumps for both of the Sysbox daemons (i.e. `sysbox-fs` and `sysbox-mgr`). As an example, find below the instructions to generate a core-dump for `sysbox-fs` process. Enable core-dump creation by making use of the `ulimit` command: ```console $ ulimit -c unlimited ``` We make use of the `gcore` tool to create core-dumps, which is usually included as part of the `gdb` package in most of the Linux distros. Install `gdb` if not already present in the system: For Debian / Ubuntu distros: ```console $ sudo apt-get install gdb ``` For Fedora / CentOS / Redhat / rpm-based distros: ```console $ sudo yum install gdb ``` Create core-dump file. Notice that Sysbox containers will continue to operate as usual during (and after) the execution of this instruction, so no service impact is expected. ```console $ sudo gcore `pidof sysbox-fs` ... Saved corefile core.195835 ... ``` Compress created core file: ```console $ sudo tar -zcvf core.195835.tar.gz core.195835 $ ls -lrth core.195835.tar.gz -rw-r--r-- 1 root root 8.4M Apr 20 15:36 core.195835.tar.gz ``` Create a Sysbox and provide a link to the generated core-dump." 
} ]
{ "category": "Runtime", "file_name": "troubleshoot.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "This enhancement will remove the dependency of filesystem ID in the DiskStatus, because we found there is no guarantee that filesystem ID won't change after the node reboots for e.g. XFS. https://github.com/longhorn/longhorn/issues/972 Previously Longhorn is using filesystem ID as keys to the map of disks on the node. But we found there is no guarantee that filesystem ID won't change after the node reboots for certain filesystems e.g. XFS. We want to enable the ability to configure CRD directly, prepare for the CRD based API access in the future We also need to make sure previously implemented safe guards are not impacted by this change: If a disk was accidentally unmounted on the node, we should detect that and stop replica from scheduling into it. We shouldn't allow user to add two disks pointed to the same filesystem For this enhancement, we will not proactively stop replica from starting if the disk it resides in is NotReady. Lack of `replicas` directory should stop replica from starting automatically. We will generate UUID for each disk called `diskUUID` and store it as a file `longhorn-disk.cfg` in the filesystem on the disk. If the filesystem already has the `diskUUID` stored, we will retrieve and verify the `diskUUID` and make sure it doesn't change when we scan the disks. The disk name can be customized by user as well. Filesystem ID was a good identifier for the disk: Different filesystems on the same node will have different filesystem IDs. It's built-in in the filesystem. Only need one command(`stat`) to retrieve it. But there is another assumption we had which turned out not to be true. We assumed filesystem ID won't change during the lifecycle of the filesystem. But we found that some filesystem ID can change after a remount. It caused an issue on XFS. Besides that, there is another problem we want to address: currently API server is forwarding the request of updateDisks to the node of the disks, since only that node has access to the filesystem so it can fill in the FilesystemID(`fsid`). As long as we're using the `fsid` as the key of the disk map, we cannot create new disks without let the node handling the request. This become an issue when we want to allow direct editing CRDs as API. Before the enhancement, if the users add more disks to the node, API gateway will forward the request to the responsible node, which will validate the input on the fly for cases like two disks point to the same" }, { "data": "After the enhancement, when the users add more disks to the node, API gateway will only validate the basic input. The other error cases will be reflected in the disk's Condition field. If different disks point to the same directory, then: If all the disks are added new, both disks will get condition `ready = false`, with the message indicating that they're pointing to the same filesystem. If one of the disks already exists, the other disks will get condition `ready = false`, with the message indicating that they're pointing to the same filesystem as one of the existing disks. If there is more than one disk exists and pointing to the same filesystem. Longhorn will identify which disk is the valid one using `diskUUID` and set the condition of other disks to `ready = false`. API input for the diskUpdate call will be a map[string]DiskSpec instead of []DiskSpec. API no longer validates duplicate filesystem ID. UI can let the user customize the disk name. By default UI can generate name like `disk-<random>` for the disks. 
The validation of will be done in the node controller `syncDiskStatus`. syncDiskStatus process: Scan through the disks, and record disks in the FSID to disk map Check for each FSID after the scanning is done. If there is only one disk in for a FSID If the disk already has `status.diskUUID` Check for file `longhorn-disk.cfg` file exists: parse the value. If it doesn't match status.diskUUID, mark the disk as NotReady case: mount the wrong disk. file doesn't exist: mark the disk as NotReady case: Reboot and forget to mount. If the disk has empty `status.diskUUID` check for file `longhorn-disk.cfg`. if exists, parse uuid. If there is no duplicate UUID in the disk list, then record the uuid Otherwise mark as NotReady `duplicate UUID`. if not exists, generate the uuid, record it in the file, then fill in `status.diskUUID`. Creating new disk. If there are more than one disks with the same FSID if the disk has `status.diskUUID` follow 2.i.a If the disk doesn't have `status.diskUUID` mark as NotReady due to duplicate FSID. The default disks of the node will be called `default-disk-<fsid>`. That includes the default disks created using node labels/annotations. Update existing test plan on node testing will be enough for the first step, since it's already covered the case for changing filesystem. No change for previous disks since they all used the FSID which is at least unique on the node. Node controller will fill `diskUUID` field and create `longhorn-disk.cfg` automatically on the disk once it processed it." } ]
{ "category": "Runtime", "file_name": "20200331-replace-filesystem-id-key-in-disk-map.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Owners: Sajay Antony (@sajayantony) Shiwei Zhang (@shizhMSFT) Steve Lasker (@stevelasker) Emeritus: Avi Deitcher (@deitch) Jimmy Zelinskie (@jzelinskie) Josh Dolitsky (@jdolitsky) Vincent Batts (@vbatts)" } ]
{ "category": "Runtime", "file_name": "OWNERS.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "Translator development ====================== Setting the Stage -- This is the first post in a series that will explain some of the details of writing a GlusterFS translator, using some actual code to illustrate. Before we begin, a word about environments. GlusterFS is over 300K lines of code spread across a few hundred files. That's no Linux kernel or anything, but you're still going to be navigating through a lot of code in every code-editing session, so some kind of cross-referencing is essential. I use cscope with the vim bindings, and if I couldn't do Crtl+G and such to jump between definitions all the time my productivity would be cut in half. You may prefer different tools, but as I go through these examples you'll need something functionally similar to follow on. OK, on with the show. The first thing you need to know is that translators are not just bags of functions and variables. They need to have a very definite internal structure so that the translator-loading code can figure out where all the pieces are. The way it does this is to use dlsym to look for specific names within your shared-object file, as follow (from `xlator.c`): ``` if (!(xl->fops = dlsym (handle, \"fops\"))) { gflog (\"xlator\", GFLOG_WARNING, \"dlsym(fops) on %s\", dlerror ()); goto out; } if (!(xl->cbks = dlsym (handle, \"cbks\"))) { gflog (\"xlator\", GFLOG_WARNING, \"dlsym(cbks) on %s\", dlerror ()); goto out; } if (!(xl->init = dlsym (handle, \"init\"))) { gflog (\"xlator\", GFLOG_WARNING, \"dlsym(init) on %s\", dlerror ()); goto out; } if (!(xl->fini = dlsym (handle, \"fini\"))) { gflog (\"xlator\", GFLOG_WARNING, \"dlsym(fini) on %s\", dlerror ()); goto out; } ``` In this example, `xl` is a pointer to the in-memory object for the translator we're loading. As you can see, it's looking up various symbols by name in the shared object it just loaded, and storing pointers to those symbols. Some of them (e.g. init) are functions, while others (e.g. fops) are dispatch tables containing pointers to many functions. Together, these make up the translator's public interface. Most of this glue or boilerplate can easily be found at the bottom of one of the source files that make up each translator. We're going to use the `rot-13` translator just for fun, so in this case you'd look in `rot-13.c` to see this: ``` struct xlator_fops fops = { .readv = rot13_readv, .writev = rot13_writev }; struct xlator_cbks cbks = { }; struct volume_options options[] = { { .key = {\"encrypt-write\"}, .type = GFOPTIONTYPE_BOOL }, { .key = {\"decrypt-read\"}, .type = GFOPTIONTYPE_BOOL }, { .key = {NULL} }, }; ``` The `fops` table, defined in `xlator.h`, is one of the most important pieces. This table contains a pointer to each of the filesystem functions that your translator might implement -- `open`, `read`, `stat`, `chmod`, and so on. There are 82 such functions in all, but don't worry; any that you don't specify here will be see as null and filled with defaults from `defaults.c` when your translator is loaded. In this particular example, since `rot-13` is an exceptionally simple translator, we only fill in two entries for `readv` and" }, { "data": "There are actually two other tables, also required to have predefined names, that are also used to find translator functions: `cbks` (which is empty in this snippet) and `dumpops` (which is missing entirely). The first of these specify entry points for when inodes are forgotten or file descriptors are released. 
In other words, they're destructors for objects in which your translator might have an interest. Mostly you can ignore them, because the default behavior handles even the simpler cases of translator-specific inode/fd context automatically. However, if the context you attach is a complex structure requiring complex cleanup, you'll need to supply these functions. As for dumpops, that's just used if you want to provide functions to pretty-print various structures in logs. I've never used it myself, though I probably should. What's noteworthy here is that we don't even define dumpops. That's because all of the functions that might use these dispatch functions will check for `xl->dumpops` being `NULL` before calling through it. This is in sharp contrast to the behavior for `fops` and `cbks`, which must be present. If they're not, translator loading will fail because these pointers are not checked every time and if they're `NULL` then we'll segfault. That's why we provide an empty definition for cbks; it's OK for the individual function pointers to be NULL, but not for the whole table to be absent. The last piece I'll cover today is options. As you can see, this is a table of translator-specific option names and some information about their types. GlusterFS actually provides a pretty rich set of types (`volumeoptiontype_t` in `options.`h) which includes paths, translator names, percentages, and times in addition to the obvious integers and strings. Also, the `volumeoptiont` structure can include information about alternate names, min/max/default values, enumerated string values, and descriptions. We don't see any of these here, so let's take a quick look at some more complex examples from afr.c and then come back to `rot-13`. ``` { .key = {\"data-self-heal-algorithm\"}, .type = GFOPTIONTYPE_STR, .default_value = \"\", .description = \"Select between \\\"full\\\", \\\"diff\\\". The \" \"\\\"full\\\" algorithm copies the entire file from \" \"source to sink. The \\\"diff\\\" algorithm copies to \" \"sink only those blocks whose checksums don't match \" \"with those of source.\", .value = { \"diff\", \"full\", \"\" } }, { .key = {\"data-self-heal-window-size\"}, .type = GFOPTIONTYPE_INT, .min = 1, .max = 1024, .default_value = \"1\", .description = \"Maximum number blocks per file for which \" \"self-heal process would be applied simultaneously.\" }, ``` When your translator is loaded, all of this information is used to parse the options actually provided in the volfile, and then the result is turned into a dictionary and stored as `xl->options`. This dictionary is then processed by your init function, which you can see being looked up in the first code fragment above. We're only going to look at a small part of the `rot-13`'s init for now. ``` priv->decrypt_read = 1; priv->encrypt_write = 1; data = dict_get (this->options, \"encrypt-write\"); if (data) { if (gfstring2boolean (data->data, &priv->encryptwrite == -1) { gflog (this->name, GFLOG_ERROR, \"encrypt-write takes only boolean options\"); return -1; } } ``` What we can see here is that we're setting some defaults in our priv structure, then looking to see if an `encrypt-write` option was actually provided. If so, we convert and store" }, { "data": "This is a pretty classic use of dict_get to fetch a field from a dictionary, and of using one of many conversion functions in `common-utils.c` to convert `data->data` into something we can use. 
So far we've covered the basic of how a translator gets loaded, how we find its various parts, and how we process its options. In my next Translator 101 post, we'll go a little deeper into other things that init and its companion fini might do, and how some other fields in our `xlator_t` structure (commonly referred to as this) are commonly used. `init`, `fini`, and private context -- In the previous Translator 101 post, we looked at some of the dispatch tables and options processing in a translator. This time we're going to cover the rest of the \"shell\" of a translator -- i.e. the other global parts not specific to handling a particular request. Let's start by looking at the relationship between a translator and its shared library. At a first approximation, this is the relationship between an object and a class in just about any object-oriented programming language. The class defines behaviors, but has to be instantiated as an object to have any kind of existence. In our case the object is an `xlator_t`. Several of these might be created within the same daemon, sharing all of the same code through init/fini and dispatch tables, but sharing no data. You could implement shared data (as static variables in your shared libraries) but that's strongly discouraged. Every function in your shared library will get an `xlator_t` as an argument, and should use it. This lack of class-level data is one of the points where the analogy to common OOP systems starts to break down. Another place is the complete lack of inheritance. Translators inherit behavior (code) from exactly one shared library -- looked up and loaded using the `type` field in a volfile `volume ... end-volume` block -- and that's it -- not even single inheritance, no subclasses or superclasses, no mixins or prototypes, just the relationship between an object and its class. With that in mind, let's turn to the init function that we just barely touched on last time. ``` int32_t init (xlator_t *this) { data_t *data = NULL; rot13private_t *priv = NULL; if (!this->children || this->children->next) { gflog (\"rot13\", GFLOG_ERROR, \"FATAL: rot13 should have exactly one child\"); return -1; } if (!this->parents) { gflog (this->name, GFLOG_WARNING, \"dangling volume. check volfile \"); } priv = GFCALLOC (sizeof (rot13privatet), 1, 0); if (!priv) return -1; ``` At the very top, we see the function signature -- we get a pointer to the `xlatort` object that we're initializing, and we return an `int32t` status. As with most functions in the translator API, this should be zero to indicate success. In this case it's safe to return -1 for failure, but watch out: in dispatch-table functions, the return value means the status of the *function call rather than the request*. A request error should be reflected as a callback with a non-zero `op_re`t value, but the dispatch function itself should still return zero. In fact, the handling of a non-zero return from a dispatch function is not all that robust (we recently had a bug report in HekaFS related to this) so it's something you should probably avoid" }, { "data": "This only underscores the difference between dispatch functions and `init`/`fini` functions, where non-zero returns are expected and handled logically by aborting the translator setup. We can see that down at the bottom, where we return -1 to indicate that we couldn't allocate our private-data area (more about that later). The first thing this init function does is check that the translator is being set up in the right kind of environment. 
Translators are called by parents and in turn call children. Some translators are \"initial\" translators that inject requests into the system from elsewhere -- e.g. mount/fuse injecting requests from the kernel, protocol/server injecting requests from the network. Those translators don't need parents, but `rot-13` does and so we check for that. Similarly, some translators are \"final\" translators that (from the perspective of the current process) terminate requests instead of passing them on -- e.g. `protocol/client` passing them to another node, `storage/posix` passing them to a local filesystem. Other translators \"multiplex\" between multiple children -- passing each parent request on to one (`cluster/dht`), some (`cluster/stripe`), or all (`cluster/afr`) of those children. `rot-13` fits into none of those categories either, so it checks that it has exactly one child. It might be more convenient or robust if translator shared libraries had standard variables describing these requirements, to be checked in a consistent way by the translator-loading infrastructure itself instead of by each separate init function, but this is the way translators work today. The last thing we see in this fragment is allocating our private data area. This can literally be anything we want; the infrastructure just provides the priv pointer as a convenience but takes no responsibility for how it's used. In this case we're using `GFCALLOC` to allocate our own `rot13privatet` structure. This gets us all the benefits of GlusterFS's memory-leak detection infrastructure, but the way we're calling it is not quite ideal. For one thing, the first two arguments -- from `calloc(3)` -- are kind of reversed. For another, notice how the last argument is zero. That can actually be an enumerated value, to tell the GlusterFS allocator what type we're allocating. This can be very useful information for memory profiling and leak detection, so it's recommended that you follow the example of any x`xx-mem-types.h` file elsewhere in the source tree instead of just passing zero here (even though that works). To finish our tour of standard initialization/termination, let's look at the end of `init` and the beginning of `fini`: ``` this->private = priv; gflog (\"rot13\", GFLOG_DEBUG, \"rot13 xlator loaded\"); return 0; } void fini (xlator_t *this) { rot13private_t *priv = this->private; if (!priv) return; this->private = NULL; GF_FREE (priv); ``` At the end of init we're just storing our private-data pointer in the `priv` field of our `xlator_t`, then returning zero to indicate that initialization succeeded. As is usually the case, our fini is even simpler. All it really has to do is `GF_FREE` our private-data pointer, which we do in a slightly roundabout way here. Notice how we don't even have a return value here, since there's nothing obvious and useful that the infrastructure could do if `fini` failed. That's practically everything we need to know to get our translator through loading, initialization, options processing, and" }, { "data": "If we had defined no dispatch functions, we could actually configure a daemon to use our translator and it would work as a basic pass-through from its parent to a single child. In the next post I'll cover how to build the translator and configure a daemon to use it, so that we can actually step through it in a debugger and see how it all fits together before we actually start adding functionality. 
This Time For Real In the first two parts of this series, we learned how to write a basic translator skeleton that can get through loading, initialization, and option processing. This time we'll cover how to build that translator, configure a volume to use it, and run the glusterfs daemon in debug mode. Unfortunately, there's not much direct support for writing new translators. You can check out a GlusterFS tree and splice in your own translator directory, but that's a bit painful because you'll have to update multiple makefiles plus a bunch of autoconf garbage. As part of the HekaFS project, I basically reverse engineered the truly necessary parts of the translator-building process and then pestered one of the Fedora glusterfs package maintainers (thanks daMaestro!) to add a `glusterfs-devel` package with the required headers. Since then the complexity level in the HekaFS tree has crept back up a bit, but I still remember the simple method and still consider it the easiest way to get started on a new translator. For the sake of those not using Fedora, I'm going to describe a method that doesn't depend on that header package. What it does depend on is a GlusterFS source tree, much as you might have cloned from GitHub or the Gluster review site. This tree doesn't have to be fully built, but you do need to run `autogen.sh` and configure in it. Then you can take the following simple makefile and put it in a directory with your actual source. ``` TARGET = rot-13.so OBJECTS = rot-13.o GLFS_SRC = /srv/glusterfs GLFS_LIB = /usr/lib64 HOSTOS = GFLINUXHOSTOS CFLAGS = -fPIC -Wall -O0 -g \\ -DHAVECONFIGH -DFILEOFFSETBITS=64 -DGNU_SOURCE \\ -D$(HOSTOS) -I$(GLFSSRC) -I$(GLFS_SRC)/contrib/uuid \\ -I$(GLFS_SRC)/libglusterfs/src LDFLAGS = -shared -nostartfiles -L$(GLFS_LIB) LIBS = -lglusterfs -lpthread $(TARGET): $(OBJECTS) $(CC) $(OBJECTS) $(LDFLAGS) -o $(TARGET) $(OBJECTS) $(LIBS) ``` Yes, it's still Linux-specific. Mea culpa. As you can see, we're sticking with the `rot-13` example, so you can just copy the files from `xlators/encryption/rot-13/src` in your GlusterFS tree to follow on. Type `make` and you should be rewarded with a nice little `.so` file. ``` xlator_example$ ls -l rot-13.so -rwxr-xr-x. 1 jeff jeff 40784 Nov 16 16:41 rot-13.so ``` Notice that we've built with optimization level zero and debugging symbols included, which would not typically be the case for a packaged version of GlusterFS. Let's put our version of `rot-13.so` into a slightly different file on our system, so that it doesn't stomp on the installed version (not that you'd ever want to use that anyway). ``` xlator_example# ls /usr/lib64/glusterfs/3git/xlator/encryption/ crypt.so crypt.so.0 crypt.so.0.0.0 rot-13.so rot-13.so.0 rot-13.so.0.0.0 xlator_example# cp rot-13.so \\ /usr/lib64/glusterfs/3git/xlator/encryption/my-rot-13.so ``` These paths represent the current Gluster filesystem layout, which is likely to be deprecated in favor of the Fedora layout; your paths may vary. At this point we're ready to configure a volume using our new" }, { "data": "To do that, I'm going to suggest something that's strongly discouraged except during development (the Gluster guys are going to hate me for this): write our own volfile. Here's just about the simplest volfile you'll ever see. 
``` volume my-posix type storage/posix option directory /srv/export end-volume volume my-rot13 type encryption/my-rot-13 subvolumes my-posix end-volume ``` All we have here is a basic brick using `/srv/export` for its data, and then an instance of our translator layered on top -- no client or server is necessary for what we're doing, and the system will automatically push a mount/fuse translator on top if there's no server translator. To try this out, all we need is the following command (assuming the directories involved already exist). ``` xlator_example$ glusterfs --debug -f my.vol /srv/import ``` You should be rewarded with a whole lot of log output, including the text of the volfile (this is very useful for debugging problems in the field). If you go to another window on the same machine, you can see that you have a new filesystem mounted. ``` ~$ df /srv/import Filesystem 1K-blocks Used Available Use% Mounted on /srv/xlator_example/my.vol 114506240 2706176 105983488 3% /srv/import ``` Just for fun, write something into a file in `/srv/import`, then look at the corresponding file in `/srv/export` to see it all `rot-13`'ed for you. ``` ~$ echo hello > /srv/import/a_file ~$ cat /srv/export/a_file uryyb ``` There you have it -- functionality you control, implemented easily, layered on top of local storage. Now you could start adding functionality -- real encryption, perhaps -- and inevitably having to debug it. You could do that the old-school way, with `gf_log` (preferred) or even plain old `printf`, or you could run daemons under `gdb` instead. Alternatively, you could wait for the next Translator 101 post, where we'll be doing exactly that. Debugging a Translator Now that we've learned what a translator looks like and how to build one, it's time to run one and actually watch it work. The best way to do this is good old-fashioned `gdb`, as follows (using some of the examples from last time). ``` xlator_example# gdb glusterfs GNU gdb (GDB) Red Hat Enterprise Linux (7.2-50.el6) ... (gdb) r --debug -f my.vol /srv/import Starting program: /usr/sbin/glusterfs --debug -f my.vol /srv/import ... [2011-11-23 11:23:16.495516] I [fuse-bridge.c:2971:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.13 ``` If you get to this point, your glusterfs client process is already running. You can go to another window to see the mountpoint, do file operations, etc. ``` ~# df /srv/import Filesystem 1K-blocks Used Available Use% Mounted on /root/xlator_example/my.vol 114506240 2643968 106045568 3% /srv/import ~# ls /srv/import a_file ~# cat /srv/import/a_file hello ``` Now let's interrupt the process and see where we are. ``` Program received signal SIGINT, Interrupt. 0x0000003a0060b3dc in pthreadcondwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 (gdb) info threads 5 Thread 0x7fffeffff700 (LWP 27206) 0x0000003a002dd8c7 in readv () from /lib64/libc.so.6 4 Thread 0x7ffff50e3700 (LWP 27205) 0x0000003a0060b75b in pthreadcondtimedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 3 Thread 0x7ffff5f02700 (LWP 27204) 0x0000003a0060b3dc in pthreadcondwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 2 Thread 0x7ffff6903700 (LWP 27203) 0x0000003a0060f245 in sigwait () from /lib64/libpthread.so.0 1 Thread 0x7ffff7957700 (LWP 27196) 0x0000003a0060b3dc in pthreadcondwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 ``` Like any non-toy server, this one has multiple threads. What are they all doing? 
Honestly, even I don't" }, { "data": "Thread 1 turns out to be in `eventdispatchepoll`, which means it's the one handling all of our network I/O. Note that with socket multi-threading patch this will change, with one thread in `socketpoller` per connection. Thread 2 is in `glusterfssigwaiter` which means signals will be isolated to that thread. Thread 3 is in `syncenv_task`, so it's a worker process for synchronous requests such as those used by the rebalance and repair code. Thread 4 is in `janitorgetnext_fd`, so it's waiting for a chance to close no-longer-needed file descriptors on the local filesystem. (I admit I had to look that one up, BTW.) Lastly, thread 5 is in `fusethreadproc`, so it's the one fetching requests from our FUSE interface. You'll often see many more threads than this, but it's a pretty good basic set. Now, let's set a breakpoint so we can actually watch a request. ``` (gdb) b rot13_writev Breakpoint 1 at 0x7ffff50e4f0b: file rot-13.c, line 119. (gdb) c Continuing. ``` At this point we go into our other window and do something that will involve a write. ``` ~# echo goodbye > /srv/import/another_file (back to the first window) [Switching to Thread 0x7fffeffff700 (LWP 27206)] Breakpoint 1, rot13_writev (frame=0x7ffff6e4402c, this=0x638440, fd=0x7ffff409802c, vector=0x7fffe8000cd8, count=1, offset=0, iobref=0x7fffe8001070) at rot-13.c:119 119 rot13privatet *priv = (rot13privatet *)this->private; ``` Remember how we built with debugging symbols enabled and no optimization? That will be pretty important for the next few steps. As you can see, we're in `rot13_writev`, with several parameters. `frame` is our always-present frame pointer for this request. Also, `frame->local` will point to any local data we created and attached to the request ourselves. `this` is a pointer to our instance of the `rot-13` translator. You can examine it if you like to see the name, type, options, parent/children, inode table, and other stuff associated with it. `fd` is a pointer to a file-descriptor object* (`fd_t`, not just a file-descriptor index which is what most people use \"fd\" for). This in turn points to an inode object (`inode_t`) and we can associate our own `rot-13`-specific data with either of these. `vector` and `count` together describe the data buffers for this write, which we'll get to in a moment. `offset` is the offset into the file at which we're writing. `iobref` is a buffer-reference object, which is used to track the life cycle of buffers containing read/write data. If you look closely, you'll notice that `vector[0].iov_base` points to the same address as `iobref->iobrefs[0].ptr`, which should give you some idea of the inter-relationships between vector and iobref. OK, now what about that `vector`? We can use it to examine the data being written, like this. ``` (gdb) p vector[0] $2 = {iovbase = 0x7ffff7936000, iovlen = 8} (gdb) x/s 0x7ffff7936000 0x7ffff7936000: \"goodbye\\n\" ``` It's not always safe to view this data as a string, because it might just as well be binary data, but since we're generating the write this time it's safe and convenient. With that knowledge, let's step through things a bit. 
``` (gdb) s 120 if (priv->encrypt_write) (gdb) 121 rot13_iovec (vector, count); (gdb) rot13_iovec (vector=0x7fffe8000cd8, count=1) at rot-13.c:57 57 for (i = 0; i < count; i++) { (gdb) 58 rot13 (vector[i].iovbase, vector[i].iovlen); (gdb) rot13 (buf=0x7ffff7936000 \"goodbye\\n\", len=8) at" }, { "data": "45 for (i = 0; i < len; i++) { (gdb) 46 if (buf[i] >= 'a' && buf[i] <= 'z') (gdb) 47 buf[i] = 'a' + ((buf[i] - 'a' + 13) % 26); ``` Here we've stepped into `rot13_iovec`, which iterates through our vector calling `rot13`, which in turn iterates through the characters in that chunk doing the `rot-13` operation if/as appropriate. This is pretty straightforward stuff, so let's skip to the next interesting bit. ``` (gdb) fin Run till exit from #0 rot13 (buf=0x7ffff7936000 \"goodbye\\n\", len=8) at rot-13.c:47 rot13_iovec (vector=0x7fffe8000cd8, count=1) at rot-13.c:57 57 for (i = 0; i < count; i++) { (gdb) fin Run till exit from #0 rot13_iovec (vector=0x7fffe8000cd8, count=1) at rot-13.c:57 rot13_writev (frame=0x7ffff6e4402c, this=0x638440, fd=0x7ffff409802c, vector=0x7fffe8000cd8, count=1, offset=0, iobref=0x7fffe8001070) at rot-13.c:123 123 STACK_WIND (frame, (gdb) b 129 Breakpoint 2 at 0x7ffff50e4f35: file rot-13.c, line 129. (gdb) b rot13writevcbk Breakpoint 3 at 0x7ffff50e4db3: file rot-13.c, line 106. (gdb) c ``` So we've set breakpoints on both the callback and the statement following the `STACK_WIND`. Which one will we hit first? ``` Breakpoint 3, rot13writevcbk (frame=0x7ffff6e4402c, cookie=0x7ffff6e440d8, this=0x638440, opret=8, operrno=0, prebuf=0x7fffefffeca0, postbuf=0x7fffefffec30) at rot-13.c:106 106 STACKUNWINDSTRICT (writev, frame, opret, operrno, prebuf, postbuf); (gdb) bt cookie=0x7ffff6e440d8, this=0x638440, opret=8, operrno=0, prebuf=0x7fffefffeca0, postbuf=0x7fffefffec30) at rot-13.c:106 this=<value optimized out>, fd=<value optimized out>, vector=<value optimized out>, count=1, offset=<value optimized out>, iobref=0x7fffe8001070) at posix.c:2217 this=0x638440, fd=0x7ffff409802c, vector=0x7fffe8000cd8, count=1, offset=0, iobref=0x7fffe8001070) at rot-13.c:123 ``` Surprise! We're in `rot13writevcbk` now, called (indirectly) while we're still in `rot13writev` before `STACKWIND` returns (still at rot-13.c:123). If you did any request cleanup here, then you need to be careful about what you do in the remainder of `rot13_writev` because data may have been freed etc. It's tempting to say you should just do the cleanup in `rot13_writev` after the `STACK_WIND,` but that's not valid because it's also possible that some other translator returned without calling `STACK_UNWIND` -- i.e. before `rot13_writev` is called, so then it would be the one getting null-pointer errors instead. To put it another way, the callback and the return from `STACK_WIND` can occur in either order or even simultaneously on different threads. Even if you were to use reference counts, you'd have to make sure to use locking or atomic operations to avoid races, and it's not worth it. Unless you really understand the possible flows of control and know what you're doing, it's better to do cleanup in the callback and nothing after `STACK_WIND.` At this point all that's left is a `STACK_UNWIND` and a return. The `STACK_UNWIND` invokes our parent's completion callback, and in this case our parent is FUSE so at that point the VFS layer is notified of the write being complete. Finally, we return through several levels of normal function calls until we come back to fusethreadproc, which waits for the next request. 
So that's it. For extra fun, you might want to repeat this exercise by stepping through some other call -- stat or setxattr might be good choices -- but you'll have to use a translator that actually implements those calls to see much that's interesting. Then you'll pretty much know everything I knew when I started writing my first for-real translators, and probably even a bit more. I hope you've enjoyed this series, or at least found it useful, and if you have any suggestions for other topics I should cover please let me know (via comments or email, IRC or Twitter). Other versions -- Original author's site: * * Gluster community site:" } ]
{ "category": "Runtime", "file_name": "translator-development.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Stretch Storage Cluster For environments that only have two failure domains available where data can be replicated, consider the case where one failure domain is down and the data is still fully available in the remaining failure domain. To support this scenario, Ceph has integrated support for \"stretch\" clusters. Rook requires three zones. Two zones (A and B) will each run all types of Rook pods, which we call the \"data\" zones. Two mons run in each of the two data zones, while two replicas of the data are in each zone for a total of four data replicas. The third zone (arbiter) runs a single mon. No other Rook or Ceph daemons need to be run in the arbiter zone. For this example, we assume the desired failure domain is a zone. Another failure domain can also be specified with a known which is already being used for OSD failure domains. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: dataDirHostPath: /var/lib/rook mon: count: 5 allowMultiplePerNode: false stretchCluster: failureDomainLabel: topology.kubernetes.io/zone subFailureDomain: host zones: name: a arbiter: true name: b name: c cephVersion: image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: true storage: useAllNodes: true useAllDevices: true deviceFilter: \"\" placement: osd: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: topology.kubernetes.io/zone operator: In values: b c ``` For more details, see the ." } ]
{ "category": "Runtime", "file_name": "stretch-cluster.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "English Due to StatefulSet being commonly used for stateful services, there is a higher demand for stable network identifiers. Spiderpool ensures that StatefulSet Pods consistently retain the same IP address, even in scenarios such as restarts or rebuilds. StatefulSet utilizes fixed addresses in the following scenarios: When a StatefulSet Pod fails and needs to be reconstructed. Once a Pod is deleted and needs to be restarted and the replicas of the StatefulSet remains unchanged. The requirements for fixed IP address differ between StatefulSet and Deployment: For StatefulSet, the Pod's name remains the same throughout Pod restarts despite its changed UUID. As the Pod is stateful, application administrators hope that each Pod continues to be assigned the same IP address after restarts. For Deployment, both the Pod name and its UUID change after restarts. Deployment Pods are stateless, so there is no need to maintain the same IP address between Pod restarts. Instead, administrators often prefer IP addresses to be allocated within a specified range for all replicas in Deployment. Many open-source CNI solutions provide limited support for fixing IP addresses for StatefulSet. However, Spiderpool's StatefulSet solution guarantees consistent allocation of the same IP address to the Pods during restarts and rebuilds. This feature is enabled by default. When it is enabled, StatefulSet Pods can be assigned fixed IP addresses from a specified IP pool range. Whether or not using a fixed IP pool, StatefulSet Pods will consistently receive the same IP address. StatefulSet applications will be treated as stateless if the feature is disabled. You can disable it during the installation of Spiderpool using Helm via the flag `--set ipam.enableStatefulSet=false`. During the transition from scaling down to scaling up StatefulSet replicas, Spiderpool does not guarantee that new Pods will inherit the IP addresses previously used by the scaled-down Pods. Currently, when a StatefulSet Pod is running, modifying the StatefulSet annotation to specify a different IP pool and restarting the Pod will not cause the Pod IP addresses to be allocated from the new IP pool range. Instead, the Pod will continue using their existing fixed IP addresses. A ready Kubernetes cluster. has already been installed. Refer to to install Spiderpool. And make sure that the helm installs the option `ipam.enableStatefulSet=true`. To simplify the creation of JSON-formatted Multus CNI configuration, Spiderpool introduces the SpiderMultusConfig CR, which automates the management of Multus NetworkAttachmentDefinition CRs. Here is an example of creating a Macvlan SpiderMultusConfig: master: the interface `ens192` is used as the spec for master. ```bash MACVLANMASTERINTERFACE=\"ens192\" MACVLANMULTUSNAME=\"macvlan-$MACVLANMASTERINTERFACE\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${MACVLANMULTUSNAME} namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: ${MACVLANMASTERINTERFACE} EOF ``` With the provided configuration, we create the following Macvlan SpiderMultusConfig that will automatically generate a Multus NetworkAttachmentDefinition CR. 
```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE macvlan-ens192 26m ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system NAME AGE macvlan-ens192 27m ```" }, { "data": "~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: test-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.101-10.6.168.110 EOF ``` The following YAML example creates a StatefulSet application with 2 replicas: `ipam.spidernet.io/ippool`: specify the IP pool for Spiderpool. Spiderpool automatically selects some IP addresses from this pool and bind them to the application, ensuring that the IP addresses remain fixed for the StatefulSet application. `v1.multus-cni.io/default-network`: create a default network interface for the application. ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: StatefulSet metadata: name: test-sts spec: replicas: 2 selector: matchLabels: app: test-sts template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-sts spec: containers: name: test-sts image: nginx imagePullPolicy: IfNotPresent EOF ``` When the StatefulSet application is created, Spiderpool will select a random set of IP addresses from the specified IP pool and bind them to the application. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT test-ippool 4 10.6.0.0/16 2 10 false ~# kubectl get po -l app=test-sts -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-sts-0 1/1 Running 0 3m13s 10.6.168.105 node2 <none> <none> test-sts-1 1/1 Running 0 3m12s 10.6.168.102 node1 <none> <none> ``` Upon restarting StatefulSet Pods, it is observed that each Pod retains its assigned IP address. ```bash ~# kubectl get pod | grep \"test-sts\" | awk '{print $1}' | xargs kubectl delete pod pod \"test-sts-0\" deleted pod \"test-sts-1\" deleted ~# kubectl get po -l app=test-sts -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-sts-0 1/1 Running 0 18s 10.6.168.105 node2 <none> <none> test-sts-1 1/1 Running 0 17s 10.6.168.102 node1 <none> <none> ``` Upon scaling up or down the StatefulSet Pods, the IP addresses of each Pod change as expected. 
```bash ~# kubectl scale deploy test-sts --replicas 3 statefulset.apps/test-sts scaled ~# kubectl get po -l app=test-sts -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-sts-0 1/1 Running 0 4m58s 10.6.168.105 node2 <none> <none> test-sts-1 1/1 Running 0 4m57s 10.6.168.102 node1 <none> <none> test-sts-2 1/1 Running 0 4s 10.6.168.109 node2 <none> <none> ~# kubectl get pod | grep \"test-sts\" | awk '{print $1}' | xargs kubectl delete pod pod \"test-sts-0\" deleted pod \"test-sts-1\" deleted pod \"test-sts-2\" deleted ~# kubectl get po -l app=test-sts -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-sts-0 1/1 Running 0 6s 10.6.168.105 node2 <none> <none> test-sts-1 1/1 Running 0 4s 10.6.168.102 node1 <none> <none> test-sts-2 1/1 Running 0 3s 10.6.168.109 node2 <none> <none> ~# kubectl scale sts test-sts --replicas 2 statefulset.apps/test-sts scaled ~# kubectl get po -l app=test-sts -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-sts-0 1/1 Running 0 88s 10.6.168.105 node2 <none> <none> test-sts-1 1/1 Running 0 86s 10.6.168.102 node1 <none> <none> ``` Spiderpool ensures that StatefulSet Pods maintain a consistent IP address even during scenarios like restarts or rebuilds, satisfying the requirement for fixed IP addresses in StatefulSet." } ]
{ "category": "Runtime", "file_name": "statefulset.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark schedule create\" layout: docs Create a schedule The --schedule flag is required, in cron notation: | Character Position | Character Period | Acceptable Values | | -|:-:| --:| | 1 | Minute | 0-59,* | | 2 | Hour | 0-23,* | | 3 | Day of Month | 1-31,* | | 4 | Month | 1-12,* | | 5 | Day of Week | 0-7,* | ``` ark schedule create NAME --schedule [flags] ``` ``` ark create schedule NAME --schedule=\"0 /6 \" ``` ``` --exclude-namespaces stringArray namespaces to exclude from the backup --exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io -h, --help help for create --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the backup --include-namespaces stringArray namespaces to include in the backup (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the backup -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. --schedule string a cron expression specifying a recurring schedule for this backup to run -l, --selector labelSelector only back up resources matching this label selector (default <none>) --show-labels show labels in the last column --snapshot-volumes optionalBool[=true] take snapshots of PersistentVolumes as part of the backup --ttl duration how long before the backup can be garbage collected (default 720h0m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with schedules" } ]
{ "category": "Runtime", "file_name": "ark_schedule_create.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 7 sidebar_label: \"Exporter\" Exporter is HwameiStor's metrics server which will collect the metrics for the system resources, such as Disk, Volumes, Operations, etc.., and supply for the Prometheus module." } ]
{ "category": "Runtime", "file_name": "exporter.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "Velero should provide a way to trigger actions before and after each backup and restore. Important: These proposed plugin hooks are fundamentally different from the existing plugin hooks, BackupItemAction and RestoreItemAction, which are triggered per resource item during backup and restore, respectively. The proposed plugin hooks are to be executed only once: pre-backup (before backup starts), post-backup (after the backup is completed and uploaded to object storage, including volumes snapshots), pre-restore (before restore starts) and post-restore (after the restore is completed, including volumes are restored). For the backup, the sequence of events of Velero backup are the following (these sequence depicted is prior upcoming changes for ): ``` New Backup Request |--> Validation of the request |--> Set Backup Phase \"In Progress\" | --> Start Backup | --> Discover all Plugins |--> Check if Backup Exists |--> Backup all K8s Resource Items |--> Perform all Volumes Snapshots |--> Final Backup Phase is determined |--> Persist Backup and Logs on Object Storage ``` We propose the pre-backup and post-backup plugin hooks to be executed in this sequence: ``` New Backup Request |--> Validation of the request |--> Set Backup Phase \"In Progress\" | --> Start Backup | --> Discover all Plugins |--> Check if Backup Exists |--> PreBackupActions are executed, logging actions on existent backup log file |--> Backup all K8s Resource Items |--> Perform all Volumes Snapshots |--> Final Backup Phase is determined |--> Persist Backup and logs on Object Storage |--> PostBackupActions are executed, logging to its own file ``` These plugin hooks will be invoked: PreBackupAction: plugin actions are executed after the backup object is created and validated but before the backup is being processed, more precisely before function . If the PreBackupActions return an err, the backup object is not processed and the Backup phase will be set as `FailedPreBackupActions`. PostBackupAction: plugin actions are executed after the backup is finished and persisted, more precisely after function . The proposed plugin hooks will execute actions that will have statuses on their own: `Backup.Status.PreBackupActionsStatuses` and `Backup.Status.PostBackupActionsStatuses` which will be an array of a proposed struct `ActionStatus` with PluginName, StartTimestamp, CompletionTimestamp and Phase. 
For the restore, the sequence of events of Velero restore are the following (these sequence depicted is prior upcoming changes for ): ``` New Restore Request |--> Validation of the request |--> Checks if restore is from a backup or a schedule |--> Fetches backup |--> Set Restore Phase \"In Progress\" |--> Start Restore |--> Discover all Plugins |--> Download backup file to temp |--> Fetch list of volumes snapshots |--> Restore K8s items, including PVs |--> Final Restore Phase is determined |--> Persist Restore logs on Object Storage ``` We propose the pre-restore and post-restore plugin hooks to be executed in this sequence: ``` New Restore Request |--> Validation of the request |--> Checks if restore is from a backup or a schedule |--> Fetches backup |--> Set Restore Phase \"In Progress\" |--> Start Restore |--> Discover all Plugins |--> Download backup file to temp |--> Fetch list of volumes snapshots |--> PreRestoreActions are executed, logging actions on existent backup log file |--> Restore K8s items, including PVs |--> Final Restore Phase is determined |--> Persist Restore logs on Object Storage |--> PostRestoreActions are executed, logging to its own file ``` These plugin hooks will be invoked: PreRestoreAction: plugin actions are executed after the restore object is created and validated and before the backup object is fetched, more precisely in function `runValidatedRestore` after function" }, { "data": "If the PreRestoreActions return an err, the restore object is not processed and the Restore phase will be set a `FailedPreRestoreActions`. PostRestoreAction: plugin actions are executed after the restore finishes processing all items and volumes snapshots are restored and logs persisted, more precisely in function `processRestore` after setting . The proposed plugin hooks will execute actions that will have statuses on their own: `Restore.Status.PreRestoreActionsStatuses` and `Restore.Status.PostRestoreActionsStatuses` which will be an array of a proposed struct `ActionStatus` with PluginName, StartTimestamp, CompletionTimestamp and Phase. Increasingly, Velero is employed for workload migrations across different Kubernetes clusters. Using Velero for migrations requires an atomic operation involving a Velero backup on a source cluster followed by a Velero restore on a destination cluster. It is common during these migrations to perform many actions inside and outside Kubernetes clusters. Attention: these actions are not per resource item, but they are actions to be executed once before and/or after the migration itself (remember, migration in this context is Velero Backup + Velero Restore). One important use case driving this proposal is migrating stateful workloads at scale across different clusters/storage backends. Today, Velero's Restic integration is the response for such use cases, but there are some limitations: Quiesce/unquiesce workloads: Pod hooks are useful for quiescing/unquiescing workloads, but platform engineers often do not have the luxury/visibility/time/knowledge to go through each pod in order to add specific commands to quiesce/unquiesce workloads. Orphan PVC/PV pairs: PVCs/PVs that do not have associated running pods are not backed up and consequently, are not migrated. Aiming to address these two limitations, and separate from this proposal, we would like to write a Velero plugin that takes advantage of the proposed Pre-Backup plugin hook. This plugin will be executed once (not per resource item) prior backup. 
It will scale down the applications setting `.spec.replicas=0` to all deployments, statefulsets, daemonsets, replicasets, etc. and will start a small-footprint staging pod that will mount all PVC/PV pairs. Similarly, we would like to write another plugin that will utilize the proposed Post-Restore plugin hook. This plugin will unquiesce migrated applications by killing the staging pod and reinstating original `.spec.replicas` values after the Velero restore is completed. Other examples of plugins that can use the proposed plugin hooks are: PostBackupAction: trigger a Velero Restore after a successful Velero backup (and complete the migration operation). PreRestoreAction: pre-expand the cluster's capacity via Cluster API to avoid starvation of cluster resources before the restore. PostRestoreAction: call actions to be performed outside Kubernetes clusters, such as configure a global load balancer (GLB) that enables the new cluster. The post backup actions will be executed after the backup is uploaded (persisted) on the disk. The logs of post-backup actions will be uploaded on the disk once the actions are completed. The post restore actions will be executed after the restore is uploaded (persisted) on the disk. The logs of post-restore actions will be uploaded on the disk once the actions are completed. This design seeks to provide missing extension points. This proposal's scope is to only add the new plugin hooks, not the plugins themselves. Provide PreBackupAction, PostBackupAction, PreRestoreAction, and PostRestoreAction APIs for plugins to implement. Update Velero backup and restore creation logic to invoke registered PreBackupAction and PreRestoreAction plugins before processing the backup and restore respectively. Update Velero backup and restore complete logic to invoke registered PostBackupAction and PostRestoreAction plugins the objects are uploaded on disk. Create one `ActionStatus` struct to keep track of execution of the plugin hooks. This struct has PluginName, StartTimestamp, CompletionTimestamp and Phase. Add sub statuses for the plugins on Backup object: `Backup.Status.PreBackupActionsStatuses` and `Backup.Status.PostBackupActionsStatuses`. They will be flagged as optional and" }, { "data": "They will be populated only each plugin registered for the PreBackup and PostBackup hooks, respectively. Add sub statuses for the plugins on Restore object: `Backup.Status.PreRestoreActionsStatuses` and `Backup.Status.PostRestoreActionsStatuses`. They will be flagged as optional and nullable. They will be populated only each plugin registered for the PreRestore and PostRestore hooks, respectively. that will be populated optionally if Pre/Post Backup/Restore. Specific implementations of the PreBackupAction, PostBackupAction, PreRestoreAction and PostRestoreAction API beyond test cases. For migration specific actions (Velero Backup + Velero Restore), add disk synchronization during the validation of the Restore (making sure the newly created backup will show during restore) The Velero backup controller package will be modified for `PreBackupAction` and `PostBackupAction`. The PreBackupAction plugin API will resemble the BackupItemAction plugin hook design, but with the fundamental difference that it will receive only as input the Velero `Backup` object created. It will not receive any resource list items because the backup is not yet running at that stage. 
In addition, the `PreBackupAction` interface will only have an `Execute()` method since the plugin will be executed once per Backup creation, not per item. The Velero backup controller will be modified so that if there are any PreBackupAction plugins registered, they will be The PostBackupAction plugin API will resemble the BackupItemAction plugin design, but with the fundamental difference that it will receive only as input the Velero `Backup` object without any resource list items. By this stage, the backup has already been executed, with items backed up and volumes snapshots processed and persisted. The `PostBackupAction` interface will only have an `Execute()` method since the plugin will be executed only once per Backup, not per item. If there are any PostBackupAction plugins registered, they will be executed after the backup is finished and persisted, more precisely after function . The Velero restore controller package will be modified for `PreRestoreAction` and `PostRestoreAction`. The PreRestoreAction plugin API will resemble the RestoreItemAction plugin design, but with the fundamental difference that it will receive only as input the Velero `Restore` object created. It will not receive any resource list items because the restore has not yet been running at that stage. In addition, the `PreRestoreAction` interface will only have an `Execute()` method since the plugin will be executed only once per Restore creation, not per item. The Velero restore controller will be modified so that if there are any PreRestoreAction plugins registered, they will be executed after the restore object is created and validated and before the backup object is fetched, more precisely in function `runValidatedRestore` after function . If the PreRestoreActions return an err, the restore object is not processed and the Restore phase will be set a `FailedPreRestoreActions`. The PostRestoreAction plugin API will resemble the RestoreItemAction plugin design, but with the fundamental difference that it will receive only as input the Velero `Restore` object without any resource list items. At this stage, the restore has already been executed. The `PostRestoreAction` interface will only have an `Execute()` method since the plugin will be executed only once per Restore, not per item. If any PostRestoreAction plugins are registered, they will be executed after the restore finishes processing all items and volumes snapshots are restored and logs persisted, more precisely in function `processRestore` after setting . To keep the status of the plugins, we propose the following struct: ```go type ActionStatus struct { // PluginName is the name of the registered plugin // retrieved by the PluginManager as id.Name // +optional // +nullable PluginName string `json:\"pluginName,omitempty\"` // StartTimestamp records the time the plugin started. // +optional // +nullable StartTimestamp *metav1.Time `json:\"startTimestamp,omitempty\"` // CompletionTimestamp records the time the plugin was" }, { "data": "// +optional // +nullable CompletionTimestamp *metav1.Time `json:\"completionTimestamp,omitempty\"` // Phase is the current state of the Action. // +optional // +nullable Phase ActionPhase `json:\"phase,omitempty\"` } // ActionPhase is a string representation of the lifecycle phase of an action being executed by a plugin // of a Velero backup. 
// +kubebuilder:validation:Enum=InProgress;Completed;Failed type ActionPhase string const ( // ActionPhaseInProgress means the action has being executed ActionPhaseInProgress ActionPhase = \"InProgress\" // ActionPhaseCompleted means the action finished successfully ActionPhaseCompleted ActionPhase = \"Completed\" // ActionPhaseFailed means the action failed ActionPhaseFailed ActionPhase = \"Failed\" ) ``` The `Backup` Status section will have the follow: ```go type BackupStatus struct { (...) // PreBackupActionsStatuses contains information about the pre backup plugins's execution. // Note that this information is will be only populated if there are prebackup plugins actions // registered // +optional // +nullable PreBackupActionsStatuses *[]ActionStatus `json:\"preBackupActionsStatuses,omitempty\"` // PostBackupActionsStatuses contains information about the post backup plugins's execution. // Note that this information is will be only populated if there are postbackup plugins actions // registered // +optional // +nullable PostBackupActionsStatuses *[]ActionStatus `json:\"postBackupActionsStatuses,omitempty\"` } ``` The `Restore` Status section will have the follow: ```go type RestoreStatus struct { (...) // PreRestoreActionsStatuses contains information about the pre Restore plugins's execution. // Note that this information is will be only populated if there are preRestore plugins actions // registered // +optional // +nullable PreRestoreActionsStatuses *[]ActionStatus `json:\"preRestoreActionsStatuses,omitempty\"` // PostRestoreActionsStatuses contains information about the post restore plugins's execution. // Note that this information is will be only populated if there are postrestore plugins actions // registered // +optional // +nullable PostRestoreActionsStatuses *[]ActionStatus `json:\"postRestoreActionsStatuses,omitempty\"` } ``` In case the PreBackupActionsStatuses has at least one `ActionPhase` = `Failed`, it means al least one of the plugins returned an error and consequently, the backup will not move forward. The final status of the Backup object will be set as `FailedPreBackupActions`: ```go // BackupPhase is a string representation of the lifecycle phase // of a Velero backup. // +kubebuilder:validation:Enum=New;FailedValidation;FailedPreBackupActions;InProgress;Uploading;UploadingPartialFailure;Completed;PartiallyFailed;Failed;Deleting type BackupPhase string const ( (...) // BackupPhaseFailedPreBackupActions means one or more the Pre Backup Actions has failed // and therefore backup will not run. BackupPhaseFailedPreBackupActions BackupPhase = \"FailedPreBackupActions\" (...) ) ``` In case the PreRestoreActionsStatuses has at least one `ActionPhase` = `Failed`, it means al least one of the plugins returned an error and consequently, the restore will not move forward. The final status of the Restore object will be set as `FailedPreRestoreActions`: ```go // RestorePhase is a string representation of the lifecycle phase // of a Velero restore // +kubebuilder:validation:Enum=New;FailedValidation;FailedPreRestoreActions;InProgress;Completed;PartiallyFailed;Failed type RestorePhase string const ( (...) // RestorePhaseFailedPreRestoreActions means one or more the Pre Restore Actions has failed // and therefore restore will not run. RestorePhaseFailedPreRestoreActions BackupPhase = \"FailedPreRestoreActions\" (...) ) ``` The `PreBackupAction` interface is as follows: ```go // PreBackupAction provides a hook into the backup process before it begins. 
type PreBackupAction interface { // Execute the PreBackupAction plugin providing it access to the Backup that // is being executed Execute(backup *api.Backup) error } ``` `PreBackupAction` will be defined in `pkg/plugin/velero/prebackupaction.go`. The `PostBackupAction` interface is as follows: ```go // PostBackupAction provides a hook into the backup process after it completes. type PostBackupAction interface { // Execute the PostBackupAction plugin providing it access to the Backup that // has been completed Execute(backup *api.Backup) error } ``` `PostBackupAction` will be defined in `pkg/plugin/velero/postbackupaction.go`. The `PreRestoreAction` interface is as follows: ```go // PreRestoreAction provides a hook into the restore process before it begins. type PreRestoreAction interface { // Execute the PreRestoreAction plugin providing it access to the Restore that // is being executed Execute(restore *api.Restore) error } ``` `PreRestoreAction` will be defined in" }, { "data": "The `PostRestoreAction` interface is as follows: ```go // PostRestoreAction provides a hook into the restore process after it completes. type PostRestoreAction interface { // Execute the PostRestoreAction plugin providing it access to the Restore that // has been completed Execute(restore *api.Restore) error } ``` `PostRestoreAction` will be defined in `pkg/plugin/velero/postrestoreaction.go`. For the persistence of the logs originated from the PostBackup and PostRestore plugins, create two additional methods on `BackupStore` interface: ```go type BackupStore interface { (...) PutPostBackuplog(backup string, log io.Reader) error PutPostRestoreLog(backup, restore string, log io.Reader) error (...) ``` The implementation of these new two methods will go hand-in-hand with the changes of uploading phases rebase. In `pkg/plugin/proto`, add the following: Protobuf definitions will be necessary for PreBackupAction in `pkg/plugin/proto/PreBackupAction.proto`. ```protobuf message PreBackupActionExecuteRequest { ... } service PreBackupAction { rpc Execute(PreBackupActionExecuteRequest) returns (Empty) } ``` Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/prebackupactionclient.go` and `pkg/plugin/framework/prebackupactionserver.go`, respectively. Protobuf definitions will be necessary for PostBackupAction in `pkg/plugin/proto/PostBackupAction.proto`. ```protobuf message PostBackupActionExecuteRequest { ... } service PostBackupAction { rpc Execute(PostBackupActionExecuteRequest) returns (Empty) } ``` Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/postbackupactionclient.go` and `pkg/plugin/framework/postbackupactionserver.go`, respectively. Protobuf definitions will be necessary for PreRestoreAction in `pkg/plugin/proto/PreRestoreAction.proto`. ```protobuf message PreRestoreActionExecuteRequest { ... } service PreRestoreAction { rpc Execute(PreRestoreActionExecuteRequest) returns (Empty) } ``` Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/prerestoreactionclient.go` and `pkg/plugin/framework/prerestoreactionserver.go`, respectively. Protobuf definitions will be necessary for PostRestoreAction in `pkg/plugin/proto/PostRestoreAction.proto`. ```protobuf message PostRestoreActionExecuteRequest { ... 
} service PostRestoreAction { rpc Execute(PostRestoreActionExecuteRequest) returns (Empty) } ``` Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/postrestoreactionclient.go` and `pkg/plugin/framework/postrestoreactionserver.go`, respectively. Similar to the `RestoreItemAction` and `BackupItemAction` plugins, restartable processes will need to be implemented (with the difference that there is no `AppliedTo()` method). In `pkg/plugin/clientmgmt/`, add `restartableprebackup_action.go`, creating the following unexported type: ```go type restartablePreBackupAction struct { key kindAndName sharedPluginProcess RestartableProcess } func newRestartablePreBackupAction(name string, sharedPluginProcess RestartableProcess) *restartablePreBackupAction { // ... } func (r *restartablePreBackupAction) getPreBackupAction() (velero.PreBackupAction, error) { // ... } func (r *restartablePreBackupAction) getDelegate() (velero.PreBackupAction, error) { // ... } // Execute restarts the plugin's process if needed, then delegates the call. func (r restartablePreBackupAction) Execute(input velero.PreBackupActionInput) (error) { // ... } ``` `restartablepostbackup_action.go`, creating the following unexported type: ```go type restartablePostBackupAction struct { key kindAndName sharedPluginProcess RestartableProcess } func newRestartablePostBackupAction(name string, sharedPluginProcess RestartableProcess) *restartablePostBackupAction { // ... } func (r *restartablePostBackupAction) getPostBackupAction() (velero.PostBackupAction, error) { // ... } func (r *restartablePostBackupAction) getDelegate() (velero.PostBackupAction, error) { // ... } // Execute restarts the plugin's process if needed, then delegates the call. func (r restartablePostBackupAction) Execute(input velero.PostBackupActionInput) (error) { // ... } ``` `restartableprerestore_action.go`, creating the following unexported type: ```go type restartablePreRestoreAction struct { key kindAndName sharedPluginProcess RestartableProcess } func newRestartablePreRestoreAction(name string, sharedPluginProcess RestartableProcess) *restartablePreRestoreAction { // ... } func (r *restartablePreRestoreAction) getPreRestoreAction() (velero.PreRestoreAction, error) { // ... } func (r *restartablePreRestoreAction) getDelegate() (velero.PreRestoreAction, error) { // ... } // Execute restarts the plugin's process if needed, then delegates the call. func (r restartablePreRestoreAction) Execute(input velero.PreRestoreActionInput) (error) { // ... } ``` `restartablepostrestore_action.go`, creating the following unexported type: ```go type restartablePostRestoreAction struct { key kindAndName sharedPluginProcess RestartableProcess } func newRestartablePostRestoreAction(name string, sharedPluginProcess RestartableProcess) *restartablePostRestoreAction { // ... } func (r *restartablePostRestoreAction) getPostRestoreAction() (velero.PostRestoreAction, error) { // ... } func (r *restartablePostRestoreAction) getDelegate() (velero.PostRestoreAction, error) { // ... } // Execute restarts the plugin's process if needed, then delegates the call. func (r restartablePostRestoreAction) Execute(input velero.PostRestoreActionInput) (error) { // ... } ``` Add the following methods to the `Manager` interface in `pkg/plugin/clientmgmt/manager.go`: ```go type Manager interface {" }, { "data": "// Get PreBackupAction returns a PreBackupAction plugin for name. 
GetPreBackupAction(name string) (PreBackupAction, error) // Get PreBackupActions returns the all PreBackupAction plugins. GetPreBackupActions() ([]PreBackupAction, error) // Get PostBackupAction returns a PostBackupAction plugin for name. GetPostBackupAction(name string) (PostBackupAction, error) // GetPostBackupActions returns the all PostBackupAction plugins. GetPostBackupActions() ([]PostBackupAction, error) // Get PreRestoreAction returns a PreRestoreAction plugin for name. GetPreRestoreAction(name string) (PreRestoreAction, error) // Get PreRestoreActions returns the all PreRestoreAction plugins. GetPreRestoreActions() ([]PreRestoreAction, error) // Get PostRestoreAction returns a PostRestoreAction plugin for name. GetPostRestoreAction(name string) (PostRestoreAction, error) // GetPostRestoreActions returns the all PostRestoreAction plugins. GetPostRestoreActions() ([]PostRestoreAction, error) } ``` `GetPreBackupAction` and `GetPreBackupActions` will invoke the `restartablePreBackupAction` implementations. `GetPostBackupAction` and `GetPostBackupActions` will invoke the `restartablePostBackupAction` implementations. `GetPreRestoreAction` and `GetPreRestoreActions` will invoke the `restartablePreRestoreAction` implementations. `GetPostRestoreAction` and `GetPostRestoreActions` will invoke the `restartablePostRestoreAction` implementations. Getting Actions on `backup_controller.go` in `runBackup`: ```go backupLog.Info(\"Getting PreBackup actions\") preBackupActions, err := pluginManager.GetPreBackupActions() if err != nil { return err } backupLog.Info(\"Getting PostBackup actions\") postBackupActions, err := pluginManager.GetPostBackupActions() if err != nil { return err } ``` Calling the Pre Backup actions: ```go for _, preBackupAction := range preBackupActions { err := preBackupAction.Execute(backup.Backup) if err != nil { backup.Backup.Status.Phase = velerov1api.BackupPhaseFailedPreBackupActions return err } } ``` Calling the Post Backup actions: ```go for _, postBackupAction := range postBackupActions { err := postBackupAction.Execute(backup.Backup) if err != nil { postBackupLog.Error(err) } } ``` Getting Actions on `restore_controller.go` in `runValidatedRestore`: ```go restoreLog.Info(\"Getting PreRestore actions\") preRestoreActions, err := pluginManager.GetPreRestoreActions() if err != nil { return errors.Wrap(err, \"error getting pre-restore actions\") } restoreLog.Info(\"Getting PostRestore actions\") postRestoreActions, err := pluginManager.GetPostRestoreActions() if err != nil { return errors.Wrap(err, \"error getting post-restore actions\") } ``` Calling the Pre Restore actions: ```go for _, preRestoreAction := range preRestoreActions { err := preRestoreAction.Execute(restoreReq.Restore) if err != nil { restoreReq.Restore.Status.Phase = velerov1api.RestorePhaseFailedPreRestoreActions return errors.Wrap(err, \"error executing pre-restore action\") } } ``` Calling the Post Restore actions: ```go for _, postRestoreAction := range postRestoreActions { err := postRestoreAction.Execute(restoreReq.Restore) if err != nil { postRestoreLog.Error(err.Error()) } } ``` Velero plugins are loaded as init containers. If plugins are unloaded, they trigger a restart of the Velero controller. Not mentioning if one plugin does get loaded for any reason (i.e., docker hub image pace limit), Velero does not start. In other words, the constant load/unload of plugins can disrupt the Velero controller, and they cannot be the only method to run the actions from these plugins selectively. 
As part of this proposal, we want to give the velero user the ability to skip the execution of the plugins via annotations on the Velero CR backup and restore objects. If one of these exists, the given plugin, referenced below as `plugin-name`, will be skipped. Backup Object Annotations: ``` <plugin-name>/prebackup=skip <plugin-name>/postbackup=skip ``` Restore Object Annotations: ``` <plugin-name>/prerestore=skip <plugin-name>/postrestore=skip ``` An alternative to these plugin hooks is to implement all the pre/post backup/restore logic outside Velero. In this case, one would need to write an external controller that works similar to what does today when quiescing applications. We find this a viable way, but we think that Velero users can benefit from Velero having greater embedded capabilities, which will allow users to write or load plugins extensions without relying on an external components. The plugins will only be invoked if loaded per a user's discretion. It is recommended to check security vulnerabilities before execution. In terms of backward compatibility, this design should stay compatible with most Velero installations that are upgrading. If plugins are not present, then the backup/restore process should proceed the same way it worked before their inclusion. The implementation dependencies are roughly in the order as they are described in the section." } ]
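To make the interfaces and the skip annotation above concrete, here is a minimal, illustrative PreBackupAction implementation in the style of the other snippets in this document. The plugin name, the logger field, and the quiescing placeholder are assumptions for this sketch — it is not the actual migration plugin described earlier:

```go
// examplePreBackupAction sketches a plugin that quiesces workloads before a
// backup, honouring the "<plugin-name>/prebackup=skip" annotation proposed above.
type examplePreBackupAction struct {
	log logrus.FieldLogger
}

const examplePluginName = "example.io/quiesce" // hypothetical registered plugin name

func (a *examplePreBackupAction) Execute(backup *api.Backup) error {
	if backup.Annotations[examplePluginName+"/prebackup"] == "skip" {
		a.log.Infof("%s: skip annotation set on backup %s, doing nothing", examplePluginName, backup.Name)
		return nil
	}

	// A real plugin would record the current .spec.replicas of workloads in the
	// backed-up namespaces and scale them down (for example via client-go).
	// Returning an error here causes the backup phase to be set to
	// FailedPreBackupActions, as described in this proposal.
	a.log.Infof("%s: quiescing workloads before backup %s", examplePluginName, backup.Name)
	return nil
}
```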
{ "category": "Runtime", "file_name": "new-prepost-backuprestore-plugin-hooks.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Directory Statistics sidebar_position: 5 From JuiceFS v1.1.0, directory statistics is enabled by default when formatting a new volume (existing ones will stay disabled, you'll have to enable it explicitly). Directory stats accelerates `quota`, `info` and the `summary` subcommands, but comes with a minor performance cost. :::tip The usage statistic relies on the mount process, please do not enable this feature until all writable mount processes are upgraded to v1.1.0. ::: Run `juicefs config $URL --dir-stats` to enable directory stats, after that, you can run `juicefs config $URL` to verify: ```shell $ juicefs config redis://localhost 2023/05/31 15:56:39.721188 juicefs[30626] <INFO>: Meta address: redis://localhost [interface.go:494] 2023/05/31 15:56:39.723284 juicefs[30626] <INFO>: Ping redis latency: 159.226s [redis.go:3566] { \"Name\": \"myjfs\", \"UUID\": \"82db28de-bf5f-43bf-bba3-eb3535a86c48\", \"Storage\": \"file\", \"Bucket\": \"/root/.juicefs/local/\", \"BlockSize\": 4096, \"Compression\": \"none\", \"EncryptAlgo\": \"aes256gcm-rsa\", \"TrashDays\": 1, \"MetaVersion\": 1, \"DirStats\": true } ``` Upon seeing `\"DirStats\": true`, directory stats is successfully enabled, if you'd like to disable it: ```shell $ juicefs config redis://localhost --dir-stats=false 2023/05/31 15:59:39.046134 juicefs[30752] <INFO>: Meta address: redis://localhost [interface.go:494] 2023/05/31 15:59:39.048301 juicefs[30752] <INFO>: Ping redis latency: 171.308s [redis.go:3566] dir-stats: true -> false ``` :::tip The functionality depends on directory stats, that's why setting a quota automatically enables directory stats. To disable directory stats for such volume, you'll need to remove all quotas. ::: Use `juicefs info $PATH` to check stats for a single directory: ```shell $ juicefs info /mnt/jfs/pjdfstest/ /mnt/jfs/pjdfstest/ : inode: 2 files: 10 dirs: 4 length: 43.74 KiB (44794 Bytes) size: 92.00 KiB (94208 Bytes) path: /pjdfstest ``` Run `juicefs info -r $PATH` to recursively sum up: ```shell /mnt/jfs/pjdfstest/: 278 921.0/s /mnt/jfs/pjdfstest/: 1.6 MiB (1642496 Bytes) 5.2 MiB/s /mnt/jfs/pjdfstest/ : inode: 2 files: 278 dirs: 37 length: 592.42 KiB (606638 Bytes) size: 1.57 MiB (1642496 Bytes) path: /pjdfstest ``` You can also use `juicefs summary $PATH` to list all directory stats: ```shell $ ./juicefs summary /mnt/jfs/pjdfstest/ /mnt/jfs/pjdfstest/: 315 1044.4/s /mnt/jfs/pjdfstest/: 1.6 MiB (1642496 Bytes) 5.2 MiB/s ++++-+ | PATH | SIZE | DIRS | FILES | ++++-+ | / | 1.6 MiB | 37 | 278 | | tests/ | 1.1 MiB | 18 | 240 | | tests/open/ | 112 KiB | 1 | 26 | | tests/... | 328 KiB | 7 | 71 | | .git/ | 432 KiB | 17 | 26 | |" }, { "data": "| 252 KiB | 3 | 2 | | ... | 12 KiB | 0 | 3 | ++++-+ ``` :::note Directory stats only stores single directory usage, to get a recursive sum, you'll need to use `juicefs info -r`, this could be a costly operation for large directories, if you need to frequently get the total stats for particular directories, consider on such directories, to achieve recursive stats this way. Different from Community Edition, JuiceFS Enterprise Edition already put a on directory stats, you can directly view the total usage by running `ls -lh`. ::: Directory stats is calculated asynchronously, and can potentially produce inaccurate results when clients run into problems, `juicefs info`, `juicefs summary` and `juicefs quota` all provide a `--strict` option to run in strict mode, which bypasses directory stats, as opposed to the default fast mode. 
When strict mode and fast mode produces different results, use `juicefs fsck` to fix things up: ```shell $ juicefs info -r /jfs/d /jfs/d: 1 3.3/s /jfs/d: 448.0 MiB (469766144 Bytes) 1.4 GiB/s /jfs/d : inode: 2 files: 1 dirs: 1 length: 448.00 MiB (469762048 Bytes) size: 448.00 MiB (469766144 Bytes) path: /d $ juicefs info -r --strict /jfs/d /jfs/d: 1 3.3/s /jfs/d: 1.0 GiB (1073745920 Bytes) 3.3 GiB/s /jfs/d : inode: 2 files: 1 dirs: 1 length: 1.00 GiB (1073741824 Bytes) size: 1.00 GiB (1073745920 Bytes) path: /d $ juicefs fsck sqlite3://test.db --path /d --sync-dir-stat 2023/05/31 17:14:34.700239 juicefs[32667] <INFO>: Meta address: sqlite3://test.db [interface.go:494] [xorm] [info] 2023/05/31 17:14:34.700291 PING DATABASE sqlite3 2023/05/31 17:14:34.701553 juicefs[32667] <WARNING>: usage stat of /d should be &{1073741824 1073741824 1}, but got &{469762048 469762048 1} [base.go:2010] 2023/05/31 17:14:34.701577 juicefs[32667] <WARNING>: Stat of path /d (inode 2) should be synced, please re-run with '--path /d --repair --sync-dir-stat' to fix it [base.go:2025] 2023/05/31 17:14:34.701615 juicefs[32667] <FATAL>: some errors occurred, please check the log of fsck [main.go:31] $ juicefs fsck -v sqlite3://test.db --path /d --sync-dir-stat --repair 2023/05/31 17:14:43.445153 juicefs[32721] <DEBUG>: maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined [maxprocs.go:47] 2023/05/31 17:14:43.445289 juicefs[32721] <INFO>: Meta address: sqlite3://test.db [interface.go:494] [xorm] [info] 2023/05/31 17:14:43.445350 PING DATABASE sqlite3 2023/05/31 17:14:43.462374 juicefs[32721] <DEBUG>: Stat of path /d (inode 2) is successfully synced [base.go:2018] $ juicefs info -r /jfs/d /jfs/d: 1 3.3/s /jfs/d: 1.0 GiB (1073745920 Bytes) 3.3 GiB/s /jfs/d : inode: 2 files: 1 dirs: 1 length: 1.00 GiB (1073741824 Bytes) size: 1.00 GiB (1073745920 Bytes) path: /d ```" } ]
{ "category": "Runtime", "file_name": "dir-stats.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. Docker 18.03 or above, refer here for . etcd uses as a primary container registry. ``` rm -rf /tmp/etcd-data.tmp && mkdir -p /tmp/etcd-data.tmp && \\ podman rmi gcr.io/etcd-development/etcd:v3.3.9 || true && \\ podman run \\ -p 2379:2379 \\ -p 2380:2380 \\ --mount type=bind,source=/tmp/etcd-data.tmp,destination=/etcd-data \\ --name etcd-gcr-v3.3.9 \\ gcr.io/etcd-development/etcd:v3.3.9 \\ /usr/local/bin/etcd \\ --name s1 \\ --data-dir /etcd-data \\ --listen-client-urls http://0.0.0.0:2379 \\ --advertise-client-urls http://0.0.0.0:2379 \\ --listen-peer-urls http://0.0.0.0:2380 \\ --initial-advertise-peer-urls http://0.0.0.0:2380 \\ --initial-cluster s1=http://0.0.0.0:2380 \\ --initial-cluster-token tkn \\ --initial-cluster-state new ``` You may also setup etcd with TLS following this documentation MinIO server expects environment variable for etcd as `MINIOETCDENDPOINTS`, this environment variable takes many comma separated entries. ``` export MINIOETCDENDPOINTS=http://localhost:2379 minio server /data ``` NOTE: If `etcd` is configured with `Client-to-server authentication with HTTPS client certificates` then you need to use additional envs such as `MINIOETCDCLIENTCERT` pointing to path to `etcd-client.crt` and `MINIOETCDCLIENTCERT_KEY` path to `etcd-client.key` . Once etcd is configured, any STS configuration will work including Client Grants, Web Identity or AD/LDAP. For example, you can configure STS with Client Grants (KeyCloak) using the guides at and . Once this is done, STS credentials can be generated: ``` go run client-grants.go -cid PoEgXP6uVO45IsENRngDXj5Au5Ya -csec eKsw6z8CtOJVBtrOWvhRWL4TUCga { \"accessKey\": \"IRBLVDGN5QGMDCMO1X8V\", \"secretKey\": \"KzS3UZKE7xqNdtRbKyfcWgxBS6P1G4kwZn4DXKuY\", \"expiration\": \"2018-08-21T15:49:38-07:00\", \"sessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJJUkJMVkRHTjVRR01EQ01PMVg4ViIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTM0ODkxNzc4LCJpYXQiOjE1MzQ4ODgxNzgsImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiMTg0NDMyOWMtZDY1YS00OGEzLTgyMjgtOWRmNzNmZTgzZDU2In0.4rKsZ8VkZnIS_ALzfTJ9UbEKPFlQVvIyuHw6AWTJcDFDVgQA2ooQHmH9wUDnhXBi1M7o8yWJ47DXP-TLPhwCgQ\" } ``` These credentials can now be used to perform MinIO API operations, these credentials automatically expire in 1hr. To understand more about credential expiry duration and client grants STS API read further ." } ]
{ "category": "Runtime", "file_name": "etcd.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "oep-number: CStor ClusterAutoscale REV1 title: CStor ClusterAutoscale authors: \"@vishnuitta\" owners: \"@kmova\" \"@amitkumardas\" \"@mynktl\" \"@pawanpraka1\" editor: \"@vishnuitta\" creation-date: 2019-10-14 last-updated: 2019-10-14 status: provisional * * * * * * * * * * * This proposal includes high level design for building blocks of cStor which can be triggered by an administrator or by an operator to add a pool to existing cStor pool cluster to remove a pool from cStor pool cluster with data consistency either when CA adds/wants-to-remove a node (or) admin adds/wants-to-remove a node. move pool across nodes User wants OpenEBS to work natively with K8S cluster autoscaler. This minimizes operational cost. More usescases are available at: https://docs.google.com/document/d/1zHdF7pBNYO1MJnfJqFAHBrXEbqCjistyRXTpLbnFg/edit?usp=sharing Adding a pool to existing cStor pool cluster Removing a pool from existing cStor pool cluster with data consistency Movement of pools on remote disks from one node to another Integrating with CA to automatically add/remove a pool from cluster Movement of pools on physical disks from one node to another As an OpenEBS user, I should be able to add a pool to existing cStor pool cluster. As an OpenEBS user, I should be able to remove pool from cluster by moving any replicas on old pool to pools on other nodes. As an OpenEBS user, I should be able to move pool across nodes in the cluster. Currently, CSPC is having concept of creating/maintaining cStor pools on the nodes. Design and other details of CSPC are available at: https://github.com/openebs/openebs/blob/master/contribute/design/1.x/cstor-operator/doc.md Sample CSPC yaml looks like: ``` apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-1 blockDeviceName: pool-1-bd-2 type: mirror name: group-2 isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-3 blockDeviceName: pool-1-bd-4 poolConfig: cacheFile: var/openebs/disk-img-0 defaultRaidGroupType: mirror overProvisioning: false compression: lz nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-2 raidGroups: type: mirror blockDevices: blockDeviceName: pool-2-bd-1 blockDeviceName: pool-2-bd-2 type: mirror name: group-2 blockDevices: blockDeviceName: pool-2-bd-3 blockDeviceName: pool-2-bd-4 poolConfig: cacheFile: var/openebs/disk-img-2 defaultRaidGroupType: mirror overProvisioning: false compression: off ``` cspc-operator reads the above yaml and creates CSPI CR, CSPI-MGMT deployments. It also creates BlockDeviceClaims for BlockDevices mentioned in yaml. cspc-operator looks for changes in above yaml and accordingly updates CSPI CR. If a node-selector entry is added to above yaml, it creates new CSPI CR, CSPI-MGMT deployment for newly added entry, BDCs etc. If any entry related to nodeSelector is removed, it triggers the deletion of deployment, CSPI CR and associated BDCs. More details of cspc-operator and above yaml are available at above mentioned" }, { "data": "Provisioning of cStor volumes are done on a CSPC by CVC controller. As part of volume provisioning, it creates target related CR i.e., CV, target deployment and replica related CRs i.e., CVRs. 
CVRs are created by CVC by selecting RF (replication factor) number of CSPIs of CSPC, and makes each CVR point to one CSPI. CA can scale down the node on which cStor pool pods are running. This causes data unavailability or data loss. So, nodes on which cStor pool pods are running should not be scaled down. This information can be passed to CA by setting `\"cluster-autoscaler.kubernetes.io/safe-to-evict\": \"false\"` as annotation on cStor pool related pods. Either admin (or) operator which recognizes the need to add new pool to existing CSPC adds a `node entry` to above yaml. cspc-operator then takes care of creating required resources to add new pool to existing pool cluster. This is required to stop provisioning new replicas on a pool that eventually gets deleted from pool cluster. A new field will be added to CSPI which can be read by CVC controller during the phase of volume provision. This being related to provision, field for this in CSPI is: `provision.status`. This can take `ONLINE` and `CORDONED` as possible values. `ONLINE` means CVC can pick the CSPI for volume provisioning. `CORDONED` means CVC SHOULD NOT pick the CSPI for any further volume provisioning. There is NO reconciliation happens for this, but, this acts as a config parameter. This is not added to `spec` of CSPI as `spec` is shared with CSPC controller also. Admin (or) operator drains a pool (DP) by moving replicas which are on DP to another pool in the same cluster. This can be in two modes. One is - by adding new replica and deleting old one Second is - by moving replica from old one to new replica Admin (or) operator need to identify the list of replicas i.e., CVRs on pool DP. For each identified CVR (CVR1), follow either the first mode or second. Add a new replica and remove old one way: Identify new pool (NP) other than DP which can take the replica Perform replica scaleup steps create a new CVR (CVR2) on NP Increase the desired replication factor by 1 in CV CR related to this volume. Once the CVR2 becomes healthy, perform replica scaledown steps remove CVR1 related details, and, reduce DRF on CV CR delete CVR1 Above steps to add new replica are given in" }, { "data": "Steps to perform replica scale down can change as its implementation starts. Move old replica to new one way: Identify new pool (NP) other than DP which can take the replica Delete CVR1 which is on old pool Create new CVR (CVR2) on new pool with status as 'Recreate' and `spec.ReplicaID` same as that of CVR1 Above steps to move replica are also available at https://github.com/openebs/openebs/blob/master/contribute/design/1.x/replica_scaleup/dataplane/20190910-replica-scaleup.md This scenario is about moving pools on remote disks from old node to another, instead of distributing replicas on pool of old node to other pools. Steps to perform are as follows: Identify the node N1 on which replicas of old pool doesn't exists Delete CSPI-mgmt deployment of old pool Patch CSPI CR with the new node information Detach disks from old node and attach them to new node Create CSPI-mgmt deployment on new node N1 Another approach for the same would be: Let cStor pool pod i.e., CSPI-mgmt deployment consume disks using PVC instead of BDC. 
Then, steps to follow would be: Identify the node N1 on which replicas of old pool doesn't exists Delete CSPI-mgmt deployment of old pool Create CSPI-mgmt deployment on new node N1 User will add new node entry to existing CSPC yaml Trigger operator to perform steps that cordon the node Trigger operator to perform steps to drain the node Trigger operator with old and new node information along with CSPC details In CSPI spec, `provision.status` field will be added to let CVC controller to know the provisioning status of CSPI. It excludes the CSPIs whose provision.status is NOT ONLINE for provisioning replicas. For cStor volume's replica scale down scenario, separate OEP need to be raised or existing OEP on replica scaleup need to be updated. OEP for operator that cordons and drains the node need to be raised ``` Operator > [CSPI CR] > CVC-CONTROllER > [CVR CRs] ``` In phase 1, first two user stories will be targeted. First user story is already covered in 1.2 release. For second user story, building block of doing replica scale up is available in 1.3 release. In phase 2, pool movement across nodes will be targeted. Owner acceptance of `Summary` and `Motivation` sections - YYYYMMDD Agreement on `Proposal` section - YYYYMMDD Date implementation started - YYYYMMDD First OpenEBS release where an initial version of this OEP was available - YYYYMMDD Version of OpenEBS where this OEP graduated to general availability - YYYYMMDD If this OEP was retired or superseded - YYYYMMDD NA NA NA" } ]
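For illustration only, the `provision.status` field proposed in this OEP could be modelled roughly as follows in Go. This is a hypothetical sketch of the API change discussed above, not the actual OpenEBS type definitions:

```go
// ProvisionStatus is a sketch of the config-style field proposed above; it is
// read by the CVC controller during volume provisioning and is not reconciled.
type ProvisionStatus string

const (
	// ProvisionStatusOnline means CVC may pick this CSPI for new replicas.
	ProvisionStatusOnline ProvisionStatus = "ONLINE"
	// ProvisionStatusCordoned means CVC must not pick this CSPI for any
	// further volume provisioning (e.g. while the pool is being drained).
	ProvisionStatusCordoned ProvisionStatus = "CORDONED"
)

// CStorPoolInstanceProvision is the assumed wrapper for the new field,
// i.e. `provision.status` on the CSPI object.
type CStorPoolInstanceProvision struct {
	Status ProvisionStatus `json:"status"`
}
```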
{ "category": "Runtime", "file_name": "20191014-cstor-autoscale.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "This directory holds the CSVs generated by the now removed benchmark-tools repository. The new functionally equivalent In the future, these will be automatically posted to a cloud storage bucket and loaded dynamically. At that point, this directory will be removed." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 11 sidebar_label: \"APIs\" HwameiStor defines some object classes to associate PV/PVC with local disks. | Name | Abbr | Kind | Function | ||-|--|-| | clusters | hmcluster | Cluster | HwameiStor cluster | | events | evt | Event | Audit information of HwameiStor cluster | | localdiskclaims | ldc | LocalDiskClaim | Filter and allocate local data disks | | localdisknodes | ldn | LocalDiskNode | Storage pool for disk volumes | | localdisks | ld | LocalDisk | Data disks on nodes and automatically find which disks are available | | localdiskvolumes | ldv | LocalDiskVolume | Disk volumes | | localstoragenodes | lsn | LocalStorageNode | Storage pool for lvm volumes | | localvolumeconverts | lvconvert | LocalVolumeConvert | Convert common LVM volume to highly available LVM volume | | localvolumeexpands | lvexpand | LocalVolumeExpand | Expand local volume storage capacity | | | localvolumegroups | lvg | LocalVolumeGroup | LVM volume groups | | | localvolumemigrates | lvmigrate | LocalVolumeMigrate | Migrate LVM volume | | localvolumereplicas | lvr | LocalVolumeReplica | Replicas of LVM volume | | localvolumereplicasnapshotrestores | lvrsrestore,lvrsnaprestore | LocalVolumeReplicaSnapshotRestore | Restore snapshots of LVM volume Replicas | | localvolumereplicasnapshots | lvrs | LocalVolumeReplicaSnapshot | Snapshots of LVM volume Replicas | | localvolumes | lv | LocalVolume | LVM local volumes | | localvolumesnapshotrestores | lvsrestore,lvsnaprestore | LocalVolumeSnapshotRestore | Restore snapshots of LVM volume | | localvolumesnapshots | lvs | LocalVolumeSnapshot | Snapshots of LVM volume | | | resizepolicies | | ResizePolicy | PVC automatic expansion policy | |" } ]
{ "category": "Runtime", "file_name": "apis.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: How Weave Net Implements Encryption menu_order: 60 search_type: Documentation This section describes some details of Weave Net's built-in : * * For every connection between peers, a fresh public/private key pair is created at both ends, using NaCl's `GenerateKey` function. The public key portion is sent to the other end as part of the initial handshake performed over TCP. Peers that were started with a password do not continue with connection establishment unless they receive a public key from the remote peer. Thus either all peers in a weave network must be supplied with a password, or none. When a peer has received a public key from the remote peer, it uses this to form the ephemeral session key for this connection. The public key from the remote peer is combined with the private key for the local peer in the usual , resulting in both peers arriving at the same shared key. To this is appended the supplied password, and the result is hashed through SHA256, to form the final ephemeral session key. The supplied password is never exchanged directly, and is thoroughly mixed into the shared secret. Furthermore, the rate at which TCP connections are accepted is limited by Weave to 10Hz, which thwarts online dictionary attacks on reasonably strong passwords. The shared key formed by Diffie-Hellman is 256 bits long. Appending the password to this obviously makes it longer by an unknown amount, and the use of SHA256 reduces this back to 256 bits, to form the final ephemeral session key. This late combination with the password eliminates any \"Man In The Middle\" attacks: sniffing the public key exchange between the two peers and faking their responses will not grant an attacker knowledge of the password, and therefore, an attacker would not be able to form valid ephemeral session keys. The same ephemeral session key is used for both TCP and UDP traffic between two peers. Generating fresh keys for every connection provides forward secrecy at the cost of placing a demand on the Linux CSPRNG (accessed by `GenerateKey` via `/dev/urandom`) proportional to the number of inbound connection attempts. Weave Net has accept throttling to mitigate against denial of service attacks that seek to deplete the CSPRNG entropy pool, however even at the lower bound of ten requests per second, there may not be enough entropy gathered on a headless system to keep pace. Under such conditions, the consequences will be limited to slowing down processes reading from the blocking `/dev/random` device as the kernel waits for enough new entropy to be harvested. It is important to note that contrary to intuition, this low entropy state does not compromise the ongoing use of `/dev/urandom`. [Expert" }, { "data": "asserts that as long as the CSPRNG is seeded with enough entropy (for example, 256 bits) before random number generation commences, then the output is entirely safe for use as key material. By way of comparison, this is exactly how OpenSSL works - it reads 256 bits of entropy at startup, and uses that to seed an internal CSPRNG, which is used to generate keys. While Weave Net could have taken the same approach and built a custom CSPRNG to work around the potential `/dev/random` blocking issue, the decision was made to rely on the Linux random number generator as [advised here](http://cr.yp.to/highspeed/coolnacl-20120725.pdf) (page 10, 'Centralizing randomness'). 
Note:The aforementioned notwithstanding, if Weave Net's demand on `/dev/urandom` is causing you problems with blocking `/dev/random` reads, please get in touch with us - we'd love to hear about your use case. TCP connection are only used to exchange topology information between peers, via a message-based protocol. Encryption of each message is carried out by NaCl's `secretbox.Seal` function using the ephemeral session key and a nonce. The nonce contains the message sequence number, which is incremented for every message sent, and a bit indicating the polarity of the connection at the sender ('1' for outbound). The latter is required by the in order to ensure that the two ends of the connection do not use the same nonces. Decryption of a message at the receiver is carried out by NaCl's `secretbox.Open` function using the ephemeral session key and a nonce. The receiver maintains its own message sequence number, which it increments for every message it decrypted successfully. The nonce is constructed from that sequence number and the connection polarity. As a result the receiver will only be able to decrypt a message if it has the expected sequence number. This prevents replay attacks. UDP connections carry captured traffic between peers. For a UDP packet sent between peers that are using crypto, the encapsulation looks as follows: +--+ | Name of sending peer | +--+ | Message Sequence No and flags | +--+ | NaCl SecretBox overheads | +--+ -+ | Frame 1: Name of capturing peer | | +--+ | This section is encrypted | Frame 1: Name of destination peer | | using the ephemeral session +--+ | key between the weave peers | Frame 1: Captured payload length | | sending and receiving this +--+ | packet. | Frame 1: Captured payload | | +--+ | | Frame 2: Name of capturing peer | | +--+ | | Frame 2: Name of destination peer | | +--+ | | Frame 2: Captured payload length | | +--+ | | Frame 2: Captured payload | | +--+ | |" }, { "data": "| | +--+ | | Frame N: Name of capturing peer | | +--+ | | Frame N: Name of destination peer | | +--+ | | Frame N: Captured payload length | | +--+ | | Frame N: Captured payload | | +--+ -+ This is very similar to the . All of the frames on a connection are encrypted with the same ephemeral session key, and a nonce constructed from a message sequence number, flags and the connection polarity. This is very similar to the TCP encryption scheme, and encryption is again done with the NaCl `secretbox.Seal` function. The main difference is that the message sequence number and flags are transmitted as part of the message, unencrypted. The receiver uses the name of the sending peer to determine which ephemeral session key and local cryptographic state to use for decryption. Frames which are to be forwarded on to some further peer will be re-encrypted with the relevant ephemeral session keys for the onward connections. Thus all traffic is fully decrypted on every peer it passes through. Decryption is once again carried out by NaCl's `secretbox.Open` function using the ephemeral session key and nonce. The latter is constructed from the message sequence number and flags that appeared in the unencrypted portion of the received message, and the connection polarity. To guard against replay attacks, the receiver maintains some state in which it remembers the highest message sequence number seen. It could simply reject messages with lower sequence numbers, but that could result in excessive message loss when messages are re-ordered. 
The receiver therefore additionally maintains a set of received message sequence numbers in a window below the highest number seen, and only rejects messages with a sequence number below that window, or contained in the set. The window spans at least 2^20 message sequence numbers, and hence any re-ordering between the most recent ~1 million messages is handled without dropping messages. Encryption in fastdp uses in the transport mode. Each VXLAN packet is encrypted with , with 32 byte key and 4 byte salt. This combo provides the following security properties: Data confidentiality. Data origin authentication. Integrity. Anti-replay. Limited traffic flow confidentiality as VXLAN packets are fully encrypted. For each connection direction, a different AES-GCM key and salt is used. The pairs are derived with to which we pass a randomly generated 32 byte salt transferred over the encrypted control plane channel between peers. To prevent from replay attacks, which are possible because of the size of sequence number field in ESP (4 bytes), we use extended sequence numbers implemented by . Authentication of ESP packet integrity and origin is ensured by 16 byte Integrity Check Value of AES-GCM. See Also *" } ]
{ "category": "Runtime", "file_name": "encryption-implementation.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "Of course, most applications won't notice the impact of a slow logger: they already take tens or hundreds of milliseconds for each operation, so an extra millisecond doesn't matter. On the other hand, why not make structured logging fast? The `SugaredLogger` isn't any harder to use than other logging packages, and the `Logger` makes structured logging possible in performance-sensitive contexts. Across a fleet of Go microservices, making each application even slightly more efficient adds up quickly. Unlike the familiar `io.Writer` and `http.Handler`, `Logger` and `SugaredLogger` interfaces would include many methods. As [Rob Pike points out][go-proverbs], \"The bigger the interface, the weaker the abstraction.\" Interfaces are also rigid &mdash; any change requires releasing a new major version, since it breaks all third-party implementations. Making the `Logger` and `SugaredLogger` concrete types doesn't sacrifice much abstraction, and it lets us add methods without introducing breaking changes. Your applications should define and depend upon an interface that includes just the methods you use. Logs are dropped intentionally by zap when sampling is enabled. The production configuration (as returned by `NewProductionConfig()` enables sampling which will cause repeated logs within a second to be sampled. See more details on why sampling is enabled in . Applications often experience runs of errors, either because of a bug or because of a misbehaving user. Logging errors is usually a good idea, but it can easily make this bad situation worse: not only is your application coping with a flood of errors, it's also spending extra CPU cycles and I/O logging those errors. Since writes are typically serialized, logging limits throughput when you need it most. Sampling fixes this problem by dropping repetitive log entries. Under normal conditions, your application writes out every entry. When similar entries are logged hundreds or thousands of times each second, though, zap begins dropping duplicates to preserve throughput. Subjectively, we find it helpful to accompany structured context with a brief description. This isn't critical during development, but it makes debugging and operating unfamiliar systems much easier. More concretely, zap's sampling algorithm uses the message to identify duplicate entries. In our experience, this is a practical middle ground between random sampling (which often drops the exact entry that you need while debugging) and hashing the complete entry (which is prohibitively expensive). Since so many other logging packages include a global logger, many applications aren't designed to accept loggers as explicit parameters. Changing function signatures is often a breaking change, so zap includes global loggers to simplify migration. Avoid them where" }, { "data": "In general, application code should handle errors gracefully instead of using `panic` or `os.Exit`. However, every rule has exceptions, and it's common to crash when an error is truly unrecoverable. To avoid losing any information &mdash; especially the reason for the crash &mdash; the logger must flush any buffered entries before the process exits. Zap makes this easy by offering `Panic` and `Fatal` logging methods that automatically flush before exiting. Of course, this doesn't guarantee that logs will never be lost, but it eliminates a common error. See the discussion in uber-go/zap#207 for more details. 
`DPanic` stands for \"panic in development.\" In development, it logs at `PanicLevel`; otherwise, it logs at `ErrorLevel`. `DPanic` makes it easier to catch errors that are theoretically possible, but shouldn't actually happen, without crashing in production. If you've ever written code like this, you need `DPanic`: ```go if err != nil { panic(fmt.Sprintf(\"shouldn't ever get here: %v\", err)) } ``` Either zap was installed incorrectly or you're referencing the wrong package name in your code. Zap's source code happens to be hosted on GitHub, but the [import path][import-path] is `go.uber.org/zap`. This gives us, the project maintainers, the freedom to move the source code if necessary. However, it means that you need to take a little care when installing and using the package. If you follow two simple rules, everything should work: install zap with `go get -u go.uber.org/zap`, and always import it in your code with `import \"go.uber.org/zap\"`. Your code shouldn't contain any references to `github.com/uber-go/zap`. Zap doesn't natively support rotating log files, since we prefer to leave this to an external program like `logrotate`. However, it's easy to integrate a log rotation package like as a `zapcore.WriteSyncer`. ```go // lumberjack.Logger is already safe for concurrent use, so we don't need to // lock it. w := zapcore.AddSync(&lumberjack.Logger{ Filename: \"/var/log/myapp/foo.log\", MaxSize: 500, // megabytes MaxBackups: 3, MaxAge: 28, // days }) core := zapcore.NewCore( zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()), w, zap.InfoLevel, ) logger := zap.New(core) ``` We'd love to support every logging need within zap itself, but we're only familiar with a handful of log ingestion systems, flag-parsing packages, and the like. Rather than merging code that we can't effectively debug and support, we'd rather grow an ecosystem of zap extensions. We're aware of the following extensions, but haven't used them ourselves: | Package | Integration | | | | | `github.com/tchap/zapext` | Sentry, syslog | | `github.com/fgrosse/zaptest` | Ginkgo | | `github.com/blendle/zapdriver` | Stackdriver | | `github.com/moul/zapgorm` | Gorm | | `github.com/moul/zapfilter` | Advanced filtering rules |" } ]
{ "category": "Runtime", "file_name": "FAQ.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "```bash cfs-cli config info ``` ```bash cfs-cli config set [flags] ``` ```bash Flags: --addr string Specify master address {HOST}:{PORT}[,{HOST}:{PORT}] -h, --help help for set --timeout string Specify timeout for requests [Unit: s] (default \"60\") ```" } ]
{ "category": "Runtime", "file_name": "config.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> ip-masq-agent CIDRs ``` -h, --help help for ipmasq ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List ip-masq-agent CIDRs" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_ipmasq.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "If a different version of rkt is required than what ships with CoreOS Container Linux, a oneshot systemd unit can be used to download and install an alternate version on boot. The following unit will use curl to download rkt, its signature, and the CoreOS app signing key. The downloaded rkt is then verified with its signature, and extracted to /opt/rkt. ``` [Unit] Description=rkt installer Requires=network.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/bin/mkdir -p /opt/rkt ExecStart=/usr/bin/curl --silent -L -o /opt/rkt.tar.gz <rkt-url> ExecStart=/usr/bin/curl --silent -L -o /opt/rkt.tar.gz.sig <rkt-sig-url> ExecStart=/usr/bin/curl --silent -L -o /opt/coreos-app-signing-key.gpg https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg ExecStart=/usr/bin/gpg --keyring /tmp/gpg-keyring --no-default-keyring --import /opt/coreos-app-signing-key.gpg ExecStart=/usr/bin/gpg --keyring /tmp/gpg-keyring --no-default-keyring --verify /opt/rkt.tar.gz.sig /opt/rkt.tar.gz ExecStart=/usr/bin/tar --strip-components=1 -xf /opt/rkt.tar.gz -C /opt/rkt ``` The URLs in this unit must be filled in before the unit is installed. Valid URLs can be found on . This unit should be installed with either or a . Other units being added can then contain a `After=rkt-install.service` (or whatever the service was named) to delay their running until rkt has been installed." } ]
{ "category": "Runtime", "file_name": "install-rkt-in-coreos.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark restore\" layout: docs Work with restores Work with restores ``` -h, --help help for restore ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Create a restore - Delete a restore - Describe restores - Get restores - Get restore logs" } ]
{ "category": "Runtime", "file_name": "ark_restore.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Use Cases\" layout: docs This doc provides sample Ark commands for the following common scenarios: Using Schedules and Restore-Only Mode If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Heptio Ark looks like the following: After you first run the Ark server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired): ``` ark schedule create <SCHEDULE NAME> --schedule \"0 7 *\" ``` This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`. A disaster happens and you need to recreate your resources. Update the , setting `restoreOnlyMode` to `true`. This prevents Backup objects from being created or deleted during your Restore process. Create a restore with your most recent Ark Backup: ``` ark restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP> ``` Using Backups and Restores Heptio Ark can help you port your resources from one cluster to another, as long as you point each Ark Config to the same cloud object storage. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. Note that Heptio Ark does not support the migration of persistent volumes across cloud providers. (Cluster 1) Assuming you haven't already been checkpointing your data with the Ark `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired): ``` ark backup create <BACKUP-NAME> ``` The default TTL is 30 days (720 hours); you can use the `--ttl` flag to change this as necessary. (Cluster 2) Make sure that the `persistentVolumeProvider` and `backupStorageProvider` fields in the Ark Config match the ones from Cluster 1, so that your new Ark server instance is pointing to the same bucket. (Cluster 2) Make sure that the Ark Backup object has been created. Ark resources are synced with the backup files available in cloud storage. (Cluster 2) Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with: ``` ark restore create --from-backup <BACKUP-NAME> ```" } ]
{ "category": "Runtime", "file_name": "use-cases.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "[TOC] gVisor adds a layer of security to your AI/ML applications or other CUDA workloads while adding negligible overhead. By running these applications in a sandboxed environment, you can isolate your host system from potential vulnerabilities in AI code. This is crucial for handling sensitive data or deploying untrusted AI workloads. gVisor supports running most CUDA applications on preselected versions of . To achieve this, gVisor implements a proxy driver inside the sandbox, henceforth referred to as `nvproxy`. `nvproxy` proxies the application's interactions with NVIDIA's driver on the host. It provides access to NVIDIA GPU-specific devices to the sandboxed application. The CUDA application can run unmodified inside the sandbox and interact transparently with these devices. The `runsc` flag `--nvproxy` must be specified to enable GPU support. gVisor supports GPUs in the following environments. The is packaged as part of the . This runtime is just a shim and delegates all commands to the configured low level runtime (which defaults to `runc`). To use gVisor, specify `runsc` as the low level runtime in `/etc/nvidia-container-runtime/config.toml` and then run CUDA containers with `nvidia-container-runtime`. NOTE: gVisor currently only supports . The alternative, , is not yet supported. The \"legacy\" mode of `nvidia-container-runtime` is directly compatible with the `--gpus` flag implemented by the docker CLI. So with Docker, `runsc` can be used directly (without having to go through `nvidia-container-runtime`). ``` $ docker run --runtime=runsc --gpus=all --rm -it nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubi8 [Vector addition of 50000 elements] Copy input data from the host memory to the CUDA device CUDA kernel launch with 196 blocks of 256 threads Copy output data from the CUDA device to the host memory Test PASSED Done ``` uses a different GPU container stack than NVIDIA's. GKE has (which is different from ). GKE's plugin modifies the container spec in a different way than the above-mentioned methods. NOTE: `nvproxy` does not have integration support for `k8s-device-plugin` yet. So k8s environments other than GKE might not be supported. gVisor supports a wide range of CUDA workloads, including PyTorch and various generative models like LLMs. Check out . gVisor undergoes continuous tests to ensure this functionality remains robust. of gVisor across different CUDA workloads helps discover and address potential compatibility or performance issues in `nvproxy`. `nvproxy` is a passthrough driver that forwards `ioctl(2)` calls made to NVIDIA devices by the containerized application directly to the host NVIDIA driver. This forwarding is straightforward: `ioctl` parameters are copied from the application's address space to the sentry's address space, and then a host `ioctl` syscall is made. `ioctl`s are passed through with minimal intervention; `nvproxy` does not emulate NVIDIA kernel-mode driver (KMD) logic. This design translates to minimal overhead for GPU operations, ensuring that GPU bound workloads experience negligible performance impact. However, the presence of pointers and file descriptors within some `ioctl` structs forces `nvproxy` to perform appropriate translations. This requires `nvproxy` to be aware of the KMD's ABI, specifically the layout of `ioctl` structs. 
The challenge is compounded by the lack of ABI stability guarantees in NVIDIA's KMD, meaning `ioctl` definitions can change arbitrarily between" }, { "data": "While the NVIDIA installer ensures matching KMD and user-mode driver (UMD) component versions, a single gVisor version might be used with multiple NVIDIA drivers. As a result, `nvproxy` must understand the ABI for each supported driver version, necessitating internal versioning logic for `ioctl`s. As a result, `nvproxy` has the following limitations: Supports selected GPU models. Supports selected NVIDIA driver versions. Supports selected NVIDIA device files. Supports selected `ioctl`s on each device file. gVisor currently supports NVIDIA GPUs: T4, L4, A100, A10G and H100. Please if you want support for another GPU model. The range of driver versions supported by `nvproxy` directly aligns with those available within GKE. As GKE incorporates newer drivers, `nvproxy` will extend support accordingly. Conversely, to manage versioning complexity, `nvproxy` will drop support for drivers removed from GKE. This strategy ensures a streamlined process and avoids unbounded growth in `nvproxy`'s versioning. To see what drivers a given `runsc` version supports, run: ``` $ runsc nvproxy list-supported-drivers ``` gVisor only exposes `/dev/nvidiactl`, `/dev/nvidia-uvm` and `/dev/nvidia#`. Some unsupported NVIDIA device files are: `/dev/nvidia-caps/*`: Controls `nvidia-capabilities`, which is mainly used by Multi-instance GPUs (MIGs). `/dev/nvidia-drm`: Plugs into Linux's Direct Rendering Manager (DRM) subsystem. `/dev/nvidia-modeset`: Enables `DRIVER_MODESET` capability in `nvidia-drm` devices. To minimize maintenance overhead across supported driver versions, the set of supported NVIDIA device `ioctl`s is intentionally limited. This set was generated by running a large number of CUDA workloads in gVisor. As `nvproxy` is adapted to more use cases, this set will continue to evolve. Currently, `nvproxy` focuses on supporting compute workloads (like CUDA). Graphics and video capabilities are not yet supported due to missing `ioctl`s. If your GPU compute workload fails with gVisor, please note that some `ioctl` commands might still be unimplemented. Please to describe about your use case. If a missing `ioctl` implementation is the problem, then the will contain warnings with prefix `nvproxy: unknown *`. While CUDA support enables important use cases for gVisor, it is important for users to understand the security model around the use of GPUs in sandboxes. In short, while gVisor will protect the host from the sandboxed application, NVIDIA driver updates must be part of any security plan with or without gVisor. First, a short discussion on . gVisor protects the host from sandboxed applications by providing several layers of defense. The layers most relevant to this discussion are the redirection of application syscalls to the gVisor sandbox and use of on gVisor sandboxes. gVisor uses a \"platform\" to tell the host kernel to reroute system calls to the sandbox process, known as the sentry. The sentry implements a syscall table, which services all application syscalls. The Sentry may make syscalls to the host kernel if it needs them to fulfill the application syscall, but it doesn't merely pass an application syscall to the host kernel. On sandbox boot, seccomp filters are applied to the sandbox. 
Seccomp filters applied to the sandbox constrain the set of syscalls that it can make to the host kernel, blocking access to most host kernel vulnerabilities even if the sandbox becomes" }, { "data": "For example, is mitigated because gVisor itself handles the syscalls required to use namespaces and capabilities, so the application is using gVisor's implementation, not the host kernel's. For a compromised sandbox, the syscalls required to exploit the vulnerability are blocked by seccomp filters. In addition, seccomp-bpf filters can filter by argument names allowing us to allowlist granularly by `ioctl(2)` arguments. `ioctl(2)` is a source of many bugs in any kernel due to the complexity of its implementation. As of writing, gVisor does by argument for things like terminal support. For example, is mitigated by gVisor because the application would use gVisor's implementation of `ioctl(2)`. For a compromised sentry, `ioctl(2)` calls with the needed arguments are not in the seccomp filter allowlist, blocking the attacker from making the call. gVisor also mitigates similar vulnerabilities that come with device drivers (). Recall that \"nvproxy\" allows applications to directly interact with supported ioctls defined in the NVIDIA driver. gVisor's seccomp filter rules are modified such that `ioctl(2)` calls can be made . The allowlisted rules aligned with each . This approach is similar to the allowlisted ioctls for terminal support described above. This allows gVisor to retain the vast majority of its protection for the host while allowing access to GPUs. All of the above CVEs remain mitigated even when \"nvproxy\" is used. However, gVisor is much less effective at mitigating vulnerabilities within the NVIDIA GPU drivers themselves, because gVisor passes through calls to be handled by the kernel module. If there is a vulnerability in a given driver for a given GPU `ioctl` (read feature) that gVisor passes through, then gVisor will also be vulnerable. If the vulnerability is in an unimplemented feature, gVisor will block the required calls with seccomp filters. In addition, gVisor doesn't introduce any additional hardware-level isolation beyond that which is configured by by the NVIDIA kernel-mode driver. There is no validation of things like DMA buffers. The only checks are done in seccomp-bpf rules to ensure `ioctl(2)` calls are made on supported and allowlisted `ioctl`s. Therefore, it is imperative that users update NVIDIA drivers in a timely manner with or without gVisor. To see the latest drivers gVisor supports, you can run the following with your runsc release: ``` $ runsc nvproxy list-supported-drivers ``` Alternatively you can view the or download it and run: ``` $ make run TARGETS=runsc:runsc ARGS=\"nvproxy list-supported-drivers\" ``` While gVisor doesn't protect against all NVIDIA driver vulnerabilities, it does protect against a large set of general vulnerabilities in Linux. Applications don't just use GPUs, they use them as a part of a larger application that may include third party libraries. For example, Tensorflow that every application does. Designing and implementing an application with security in mind is hard and in the emerging AI space, security is often overlooked in favor of getting to market fast. There are also many services that allow users to run external users' code on the vendor's infrastructure. gVisor is well suited as part of a larger security plan for these and other use cases." } ]
{ "category": "Runtime", "file_name": "gpu.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "There are several cases related to (auto) volume attach/detach, right now we leverage the volume attributes to achieve that, but it's better to introduce a new resource (longhorn volumeattachement) to have complete context for each scenario. https://github.com/longhorn/longhorn/issues/3715 Introduce a new resource (Longhorn volumeattachement) to cover the following scenarios for Longhorn volume's AD: Traditional CSI attachment (pod -> csi-attacher -> Longhorn API) Traditional UI attachment (Longhorn UI -> Longhorn API) Auto attach/detach volume for K8s CSI snapshot Auto attach/detach volume for recurring jobs Auto attach/detach volume for volume cloning Auto attach/detach volume for auto salvage feature Refactor RWX mechanism's volume attachment/detachment (in share manager lifecycle) Volume live migration Consider how to upgrade from previous Longhorn version which doesn't have VA resource yet NA This is where we get down to the nitty-gritty of what the proposal actually is. Before this feature there are race conditions between Longhorn auto-reattachment logic and CSI volume attachment that sometimes result in the volume CR in a weird state that volume controller can nerve resolve. Ref https://github.com/longhorn/longhorn/issues/2527#issuecomment-966597537 After this feature, the race condition should not exist Before this feature, the user cannot take a CSI snapshot for detached Longhorn volume. After this feature, user should be able to do so Ref: https://github.com/longhorn/longhorn/issues/3726 Make the attaching/detaching more resilient and transparent. User will see clearly who is requesting the volume to be attached in the AD ticket. Also, volume controller will be able to reconcile the volume in any combination value of (volume.Spec.NodeID, volume.Status.CurrentNodeID, and volume.Status.State). See the Create a new CRD called VolumeAttachment with the following structure: ```go type AttachmentTicket struct { // The unique ID of this attachment. Used to differentiate different attachments of the same volume. // +optional ID string `json:\"id\"` // +optional Type AttacherType `json:\"type\"` // The node that this attachment is requesting // +optional NodeID string `json:\"nodeID\"` // Optional additional parameter for this attachment // +optional Parameters map[string]string `json:\"parameters\"` // A sequence number representing a specific generation of the desired state. // Populated by the system. Read-only. // +optional Generation int64 `json:\"generation\"` } type AttachmentTicketStatus struct { // The unique ID of this attachment. Used to differentiate different attachments of the same volume. // +optional ID string `json:\"id\"` // Indicate whether this attachment ticket has been satisfied Satisfied bool `json:\"satisfied\"` // Record any error when trying to fulfill this attachment // +nullable Conditions []Condition `json:\"conditions\"` // A sequence number representing a specific generation of the desired state. // Populated by the system." 
}, { "data": "// +optional Generation int64 `json:\"generation\"` } type AttacherType string const ( AttacherTypeCSIAttacher = AttacherType(\"csi-attacher\") AttacherTypeLonghornAPI = AttacherType(\"longhorn-api\") AttacherTypeSnapshotController = AttacherType(\"snapshot-controller\") AttacherTypeBackupController = AttacherType(\"backup-controller\") AttacherTypeVolumeCloneController = AttacherType(\"volume-clone-controller\") AttacherTypeSalvageController = AttacherType(\"salvage-controller\") AttacherTypeShareManagerController = AttacherType(\"share-manager-controller\") AttacherTypeVolumeRestoreController = AttacherType(\"volume-restore-controller\") AttacherTypeVolumeEvictionController = AttacherType(\"volume-eviction-controller\") AttacherTypeVolumeExpansionController = AttacherType(\"volume-expansion-controller\") AttacherTypeBackingImageDataSourceController = AttacherType(\"bim-ds-controller\") AttacherTypeVolumeRebuildingController = AttacherType(\"volume-rebuilding-controller\") ) const ( AttacherPriorityLevelVolumeRestoreController = 2000 AttacherPriorityLevelVolumeExpansionController = 2000 AttacherPriorityLevelLonghornAPI = 1000 AttacherPriorityLevelCSIAttacher = 900 AttacherPriorityLevelSalvageController = 900 AttacherPriorityLevelShareManagerController = 900 AttacherPriorityLevelSnapshotController = 800 AttacherPriorityLevelBackupController = 800 AttacherPriorityLevelVolumeCloneController = 800 AttacherPriorityLevelVolumeEvictionController = 800 AttacherPriorityLevelBackingImageDataSourceController = 800 AttachedPriorityLevelVolumeRebuildingController = 800 ) const ( TrueValue = \"true\" FalseValue = \"false\" AnyValue = \"any\" AttachmentParameterDisableFrontend = \"disableFrontend\" AttachmentParameterLastAttachedBy = \"lastAttachedBy\" ) const ( AttachmentStatusConditionTypeSatisfied = \"Satisfied\" AttachmentStatusConditionReasonAttachedWithIncompatibleParameters = \"AttachedWithIncompatibleParameters\" ) // VolumeAttachmentSpec defines the desired state of Longhorn VolumeAttachment type VolumeAttachmentSpec struct { // +optional AttachmentTickets map[string]*AttachmentTicket `json:\"attachmentTickets\"` // The name of Longhorn volume of this VolumeAttachment Volume string `json:\"volume\"` } // VolumeAttachmentStatus defines the observed state of Longhorn VolumeAttachment type VolumeAttachmentStatus struct { // +optional AttachmentTicketStatuses map[string]*AttachmentTicketStatus `json:\"attachmentTicketStatuses\"` } // VolumeAttachment stores attachment information of a Longhorn volume type VolumeAttachment struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec VolumeAttachmentSpec `json:\"spec,omitempty\"` Status VolumeAttachmentStatus `json:\"status,omitempty\"` } ``` Modify volume controller Repurpose the field `volume.Status.CurrentNodeID` so that `volume.Status.CurrentNodeID` is only set once we are fully attached and is only unset once we are fully detached. See this state flow for full detail: Deprecate `volume.Status.PendingNodeID` and the auto-salvage logic. We will have a dedicated `salvage-controller` as describe in the below section. Create a controller, VolumeAttachment controller (AD controller). This controller watches the VolumeAttachment objects of a volume. When AD controller sees a newly created ticket in `VolumeAttachment.Spec.AttachmentTickets` object. If `volume.Spec.NodeID` is non-empty Do nothing. 
It will wait for the volume to be fully detached first before setting `volume.Spec.NodeID` If `volume.Spec.NodeID` is empty Wait for `volume.Status.State` to be `detached` Then select an attachment ticket from `va.Spec.AttachmentTickets` based on priority level of the tickets. If 2 ticket has same priority, select the ticket with shorter name. Set the `vol.Spec.NodeID = attachmentTicket.NodeID` to attach the volume When AD controller check the list of tickets in `VolumeAttachment.Spec.AttachmentTickets`. If no ticket is requesting the `volume.Spec.NodeID`, Ad controller set `volume.Spec.NodeID` to empty AD controller watch volume CR and set ticket status accordingly in the `va.Status.AttachmentTicketStatuses` If the VolumeAttachment object is pending deletion, There is no special resource need to be cleanup, directly remove the finalizer for the VolumeAttachment Note that the priority of ticket is determine in the order: volume data restoring > user workload > snapshot/backup operations Traditional CSI attachment (pod -> csi-attacher -> Longhorn API) csi-attacher send attaching request to longhorn-csi-plugin longhorn-csi-plugin sends attaching request to longhorn-manager with pod-id and attacherType `csi-attacher` longhorn manager create a VolumeAttachment object with this spec: ```yaml metadata: finalizers: longhorn.io labels: longhornvolume: <volume-name> nodeID: <node-name> spec: attachers: csi-attacher: <pod-id>: id: <pod-id> type: \"csi-attacher\" volume: <volume-name> nodeID: <node-name> status: attached: false ``` longhorn-csi-plugin watch the `volumeAttachment.Status.Attached` and `volumeAttachment.Status.AttachError` and return corresponding respond to csi-attacher Traditional UI attachment (Longhorn UI -> Longhorn API) Longhorn UI send attaching request to longhorn-manager with attacherType `longhorn-api` longhorn manager create a VolumeAttachment object with this spec: ```yaml metadata: finalizers: longhorn.io labels: longhornvolume: <volume-name> nodeID: <node-name> spec: attachers: longhorn-api: \"\": id: \"\" type: \"longhorn-api\" volume: <volume-name> nodeID: <node-name> status: attached: false ``` Longhorn UI watches the `volumeAttachment.Status.Attached` and `volumeAttachment.Status.AttachError` and display the correct message Auto attach/detach volume Longhorn snapshot Snapshot controller watches the new Longhorn snapshot" }, { "data": "If the snapshot CR request a new snapshot, snapshot controller create a new VolumeAttachment object with the content: ```yaml metadata: finalizers: longhorn.io labels: longhornvolume: <volume-name> nodeID: <node-name> spec: attachers: snapshot-controller: <snapshot-name>: id: <snapshot-name> type: \"snapshot-controller\" volume: <volume-name> nodeID: <node-name> status: attached: false ``` Snapshot controller wait for volume to be attached and take the snapshot Auto attach/detach volume Longhorn backup Backup controller watches the new Longhorn backup CR. 
If the backup CR request a new backup, backup controller create a new VolumeAttachment object with the content: ```yaml metadata: finalizers: longhorn.io labels: longhornvolume: <volume-name> nodeID: <node-name> spec: attachers: backup-controller: <backup-name>: id: <backup-name> type: \"backup-controller\" volume: <volume-name> nodeID: <node-name> status: attached: false ``` Backup controller wait for volume to be attached and take the backup Auto attach/detach volume for K8s CSI snapshot csi-snappshotter send a request to longhorn-csi-plugin to with snapshot name and volume name longhorn-csi-plugin send snapshot request to Longhorn manager Longhorn manager create a snapshot CR longhorn-csi-plugin watches different snapshot CR status and respond to csi-snapshotter Auto attach/detach volume for recurring jobs recurring job deploys backup/snapshot CR recurring job watch the status of backup/snapshot CR for the completion of the operation Auto attach/detach volume for volume cloning Create a new controller: cloning-controller This controller will watch new volume that need to be cloned For a volume that needs to be cloned, cloning controller deploy a VolumeAttachment for both target volume and old volume: ```yaml metadata: finalizers: longhorn.io labels: longhornvolume: <target-volume-name> nodeID: <node-name> spec: attachers: cloning-controller: <target-volume-name>: id: <target-volume-name> type: \"cloning-controller\" volume: <target-volume-name> nodeID: <node-name> status: attached: false metadata: finalizers: longhorn.io labels: longhornvolume: <source-volume-name> nodeID: <node-name> spec: attachers: cloning-controller: <target-volume-name>: id: <target-volume-name> type: \"cloning-controller\" volume: <source-volume-name> nodeID: <node-name> status: attached: false ``` Cloning controller watch for the cloning status and delete the VolumeAttachment upon completed Auto attach/detach volume for auto salvage feature We create a new controller to check if the volume is faulted and detach the volume for detach. After the volume is auto detach and replica is auto-salvaged by the volume controller, the AD controller will check the VolumeAttachment object and reattach the volume to the correct node With this design, we no longer need the volume.Status.PendingNodeID which was the source of some race conditions Refactor RWX mechanism's volume attachment/detachment (in share manager lifecycle) Each pod that use the RWX volume will directly request a CSI ticket. The share-manager controller watch the csi ticket and create share-manager ticket when there are one or more csi ticket exist. Then AD controller will attach the volume to the node that is being requested by share-manager ticket. AD controller ignore the CSI ticket for RWX volume. We are adding new APIs to operate on Snapshot CRD directly ```go \"snapshotCRCreate\": s.SnapshotCRCreate, \"snapshotCRList\": s.SnapshotCRList, \"snapshotCRGet\": s.SnapshotCRGet, \"snapshotCRDelete\": s.SnapshotCRDelete, ``` Refer to the test plan https://github.com/longhorn/longhorn/issues/3715#issuecomment-1563637861 In the upgrade path, list all volume and create VolumeAttachment for the them. For the volume that currently being used by CSI workload/Longhorn UI we create CSI ticket/Longhorn UI ticket to keep them being attached" } ]
{ "category": "Runtime", "file_name": "20221024-longhorn-volumeattachment.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "MinIO provides a custom STS API that allows authentication with client X.509 / TLS certificates. A major advantage of certificate-based authentication compared to other STS authentication methods, like OpenID Connect or LDAP/AD, is that client authentication works without any additional/external component that must be constantly available. Therefore, certificate-based authentication may provide better availability / lower operational complexity. The MinIO TLS STS API can be configured via MinIO's standard configuration API (i.e. using `mc admin config set/get`). Further, it can be configured via the following environment variables: ``` mc admin config set myminio identity_tls --env KEY: identity_tls enable X.509 TLS certificate SSO support ARGS: MINIOIDENTITYTLSSKIPVERIFY (on|off) trust client certificates without verification. Defaults to \"off\" (verify) ``` The MinIO TLS STS API is disabled by default. However, it can be enabled by setting environment variable: ``` export MINIOIDENTITYTLS_ENABLE=on ``` MinIO exposes a custom S3 STS API endpoint as `Action=AssumeRoleWithCertificate`. A client has to send an HTTP `POST` request to `https://<host>:<port>?Action=AssumeRoleWithCertificate&Version=2011-06-15`. Since the authentication and authorization happens via X.509 certificates the client has to send the request over TLS and has to provide a client certificate. The following curl example shows how to authenticate to a MinIO server with client certificate and obtain STS access credentials. ```curl curl -X POST --key private.key --cert public.crt \"https://minio:9000?Action=AssumeRoleWithCertificate&Version=2011-06-15&DurationSeconds=3600\" ``` ```xml <?xml version=\"1.0\" encoding=\"UTF-8\"?> <AssumeRoleWithCertificateResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\"> <AssumeRoleWithCertificateResult> <Credentials> <AccessKeyId>YC12ZBHUVW588BQAE5BM</AccessKeyId> <SecretAccessKey>Zgl9+zdE0pZ88+hLqtfh0ocLN+WQTJixHouCkZkW</SecretAccessKey> <Expiration>2021-07-19T20:10:45Z</Expiration <SessionToken>eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJZQzEyWkJIVVZXNTg4QlFBRTVCTSIsImV4cCI6MTYyNjcyNTQ0NX0.wvMUf3wx16qpVWgua8WxnV1Sgtv1jOnSu03vbrwOMzV3cI4q39WZD9LwlP-34DTsvbsg7gCBGh6YNriMMiQw</SessionToken> </Credentials> </AssumeRoleWithCertificateResult> <ResponseMetadata> <RequestId>169339CD8B3A6948</RequestId> </ResponseMetadata> </AssumeRoleWithCertificateResponse> ``` A client can request temp. S3 credentials via the STS API. It can authenticate via a client certificate and obtain a access/secret key pair as well as a session token. These credentials are associated to an S3 policy at the MinIO server. In case of certificate-based authentication, MinIO has to map the client-provided certificate to an S3 policy. MinIO does this via the subject common name field of the X.509" }, { "data": "So, MinIO will associate a certificate with a subject `CN = foobar` to a S3 policy named `foobar`. The following self-signed certificate is issued for `consoleAdmin`. So, MinIO would associate it with the pre-defined `consoleAdmin` policy. 
``` Certificate: Data: Version: 3 (0x2) Serial Number: 35:ac:60:46:ad:8d:de:18:dc:0b:f6:98:14:ee:89:e8 Signature Algorithm: ED25519 Issuer: CN = consoleAdmin Validity Not Before: Jul 19 15:08:44 2021 GMT Not After : Aug 18 15:08:44 2021 GMT Subject: CN = consoleAdmin Subject Public Key Info: Public Key Algorithm: ED25519 ED25519 Public-Key: pub: 5a:91:87:b8:77:fe:d4:af:d9:c7:c7:ce:55:ae:74: aa:f3:f1:fe:04:63:9b:cb:20:97:61:97:90:94:fa: 12:8b X509v3 extensions: X509v3 Key Usage: critical Digital Signature X509v3 Extended Key Usage: TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE Signature Algorithm: ED25519 7e:aa:be:ed:47:4d:b9:2f:fc:ed:7f:5a:fc:6b:c0:05:5b:f5: a0:31:fe:86:e3:8e:3f:49:af:6d:d5:ac:c7:c4:57:47:ce:97: 7d:ab:b8:e9:75:ec:b4:39:fb:c8:cf:53:16:5b:1f:15:b6:7f: 5a:d1:35:2d:fc:31:3a:10:e7:0c ``` Observe the `Subject: CN = consoleAdmin` field. Also, note that the certificate has to contain the `Extended Key Usage: TLS Web Client Authentication`. Otherwise, MinIO would not accept the certificate as client certificate. Now, the STS certificate-based authentication happens in 4 steps: Client sends HTTP `POST` request over a TLS connection hitting the MinIO TLS STS API. MinIO verifies that the client certificate is valid. MinIO tries to find a policy that matches the `CN` of the client certificate. MinIO returns temp. S3 credentials associated to the found policy. The returned credentials expiry after a certain period of time that can be configured via `&DurationSeconds=3600`. By default, the STS credentials are valid for 1 hour. The minimum expiration allowed is 15 minutes. Further, the temp. S3 credentials will never out-live the client certificate. For example, if the `MINIOIDENTITYTLSSTSEXPIRY` is 7 days but the certificate itself is only valid for the next 3 days, then MinIO will return S3 credentials that are valid for 3 days only. Applications that use direct S3 API will work fine, however interactive users uploading content using (when POSTing to the presigned URL an app generates) a popup becomes visible on browser to provide client certs, you would have to manually cancel and continue. This may be annoying to use but there is no workaround for now." } ]
{ "category": "Runtime", "file_name": "tls.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "- https://github.com/heptio/ark/tree/v0.6.0 Plugins* - We now support user-defined plugins that can extend Ark functionality to meet your custom backup/restore needs without needing to be compiled into the core binary. We support pluggable block and object stores as well as per-item backup and restore actions that can execute arbitrary logic, including modifying the items being backed up or restored. For more information see the , which includes a reference to a fully-functional sample plugin repository. (#174 #188 #206 #213 #215 #217 #223 #226) Describers* - The Ark CLI now includes `describe` commands for `backups`, `restores`, and `schedules` that provide human-friendly representations of the relevant API objects. The config object format has changed. In order to upgrade to v0.6.0, the config object will have to be updated to match the new format. See the and for more information. The restore object format has changed. The `warnings` and `errors` fields are now ints containing the counts, while full warnings and errors are now stored in the object store instead of etcd. Restore objects created prior to v.0.6.0 should be deleted, or a new bucket used, and the old restore objects deleted from Kubernetes (`kubectl -n heptio-ark delete restore --all`). Add `ark plugin add` and `ark plugin remove` commands #217, @skriss Add plugin support for block/object stores, backup/restore item actions #174 #188 #206 #213 #215 #223 #226, @skriss @ncdc Improve Azure deployment instructions #216, @ncdc Change default TTL for backups to 30 days #204, @nrb Improve logging for backups and restores #199, @ncdc Add `ark backup describe`, `ark schedule describe` #196, @ncdc Add `ark restore describe` and move restore warnings/errors to object storage #173 #201 #202, @ncdc Upgrade to client-go v5.0.1, kubernetes v1.8.2 #157, @ncdc Add Travis CI support #165 #166, @ncdc Fix log location hook prefix stripping #222, @ncdc When running `ark backup download`, remove file if there's an error #154, @ncdc Update documentation for AWS KMS Key alias support #163, @lli-hiya Remove clock from `volumesnapshotaction` #137, @athampy" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.6.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }