content | tag
---|---|
[
{
"data": "To refresh the gif image included in , follow these steps: install set `PS1=\"> \"` in your bash profile file (e.g. `.bashrc`, `zshrc`, ...) to simplify the prompt record the cast with the correct shell, e.g. `SHELL=zsh asciinema rec my.cast` convert the cast file to a gif file: `docker run --rm -v $PWD:/data -w /data asciinema/asciicast2gif -s 3 -w 120 -h 20 my.cast my.gif` upload the gif file to Github's CDN by following these update the link in by opening a PR"
}
] | {
"category": "Runtime",
"file_name": "getting-started-gif.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
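The recording workflow in the row above can be strung together as a single script. A minimal sketch, assuming asciinema and Docker are already installed and using the `my.cast`/`my.gif` placeholder file names from the snippet:

```bash
#!/usr/bin/env bash
# Sketch of the gif-refresh workflow described above (assumes asciinema + Docker are installed).
set -euo pipefail

# 1. Simplify the prompt beforehand by adding `PS1="> "` to your shell profile (.bashrc / .zshrc).
# 2. Record the terminal session with the shell you want to show.
SHELL=zsh asciinema rec my.cast

# 3. Convert the recording to a gif with the asciicast2gif container image.
docker run --rm -v "$PWD":/data -w /data asciinema/asciicast2gif \
  -s 3 -w 120 -h 20 my.cast my.gif

# 4. Upload my.gif (e.g. to GitHub's CDN) and update the link in the docs via a PR (manual steps).
```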
[
{
"data": "Kubernetes cluster with CSI_VERSION = 1.5.0 If running kubelet in docker containerit should mount host 's `/dev:/dev` directory. Linux Kernel 3.10.0-1160.11.1.el7.x86_64 Each node should have multiple raw disks. Carina will ignore nodes with no empty disks. Using `kubectl get pods -n kube-system | grep carina` to check installation status. ```shell $ cd deploy/kubernetes $ ./deploy.sh $ kubectl get pods -n kube-system |grep carina carina-scheduler-6cc9cddb4b-jdt68 0/1 ContainerCreating 0 3s csi-carina-node-6bzfn 0/2 ContainerCreating 0 6s csi-carina-node-flqtk 0/2 ContainerCreating 0 6s csi-carina-provisioner-7df5d47dff-7246v 0/4 ContainerCreating 0 12s ``` Uninstallation ```shell $ cd deploy/kubernetes $ ./deploy.sh uninstall ``` The uninstallation will leave the pod still using carina PV untouched. ``` helm repo add carina-csi-driver https://carina-io.github.io helm search repo -l carina-csi-driver helm install carina-csi-driver carina-csi-driver/carina-csi-driver --namespace kube-system --version v0.9.0 ``` ``` helm uninstall carina-csi-driver helm pull carina-csi-driver/carina-csi-driver --version v0.9.1 tar -zxvf carina-csi-driver-v0.9.1.tgz helm install carina-csi-driver carina-csi-driver/ ```"
}
] | {
"category": "Runtime",
"file_name": "install.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
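Before running `deploy.sh`, it can help to confirm the prerequisites called out in the row above on each node. A minimal sketch, assuming `lsblk` is available; "empty" here means a raw disk with no partitions or filesystem, since Carina skips nodes without such disks:

```bash
#!/usr/bin/env bash
# Quick prerequisite check on a node before installing Carina (sketch, not part of the project).
set -euo pipefail

echo "Kernel version (the docs above were validated on 3.10.0-1160.11.1.el7.x86_64):"
uname -r

echo "Block devices on this node (Carina only uses raw disks with no partitions/filesystem):"
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# After ./deploy.sh, watch the Carina pods come up as shown in the row above:
kubectl get pods -n kube-system | grep carina
```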
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-operator for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Run cilium-operator - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh"
}
] | {
"category": "Runtime",
"file_name": "cilium-operator_completion.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
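As with any Cobra-generated CLI, the completion script is written to stdout and then sourced or installed. A minimal sketch for bash, assuming `cilium-operator` is on the PATH; the system-wide install path below is just a common convention, not mandated by the tool:

```bash
# Load completion for the current bash session only.
source <(cilium-operator completion bash)

# Or install it system-wide (path varies by distro; shown here as a common default).
cilium-operator completion bash | sudo tee /etc/bash_completion.d/cilium-operator > /dev/null
```

The fish, powershell, and zsh sub-commands listed above follow the same generate-then-install pattern.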
[
{
"data": "| Name | Type | Description | Required | |:|:--|:|:-| | mountPoint | string | Mount point | Yes | | volName | string | Volume name | Yes | | owner | string | Owner | Yes | | masterAddr | string | Master node address | Yes | | logDir | string | Log directory | No | | logLevel | string | Log level: debug, info, warn, error | No | | profPort | string | Golang pprof debug port | No | | exporterPort | string | Prometheus monitoring data port | No | | consulAddr | string | Monitoring registration server address | No | | lookupValid | string | Kernel FUSE lookup validity period, in seconds | No | | attrValid | string | Kernel FUSE attribute validity period, in seconds | No | | icacheTimeout | string | Client inode cache validity period, in seconds | No | | enSyncWrite | string | Enable DirectIO synchronous write, i.e., force data node to write to disk with DirectIO | No | | autoInvalData | string | Use the AutoInvalData option for FUSE mount | No | | rdonly | bool | Mount in read-only mode, default is false | No | | writecache | bool | Use the write cache function of kernel FUSE module, requires kernel FUSE module support for write cache, default is false | No | | keepcache | bool | Keep kernel page cache. This function requires the writecache option to be enabled, default is false | No | | token | string | If enableToken is enabled when creating a volume, fill in the token corresponding to the permission | No | | readRate | int | Limit the number of reads per second, default is unlimited | No | | writeRate | int | Limit the number of writes per second, default is unlimited | No | | followerRead | bool | Read data from follower, default is false | No | | accessKey | string | Authentication key of the user to whom the volume belongs | No | | secretKey | string | Authentication key of the user to whom the volume belongs | No | | disableDcache | bool | Disable Dentry cache, default is false | No | | subdir | string | Set subdirectory mount | No | | fsyncOnClose | bool | Perform fsync operation after file is closed, default is true | No | | maxcpus | int | Maximum number of CPUs that can be used, can limit the CPU usage of the client process | No | | enableXattr | bool | Whether to use xattr, default is false | No | | enableBcache | bool | Whether to enable local level-1 cache, default is false | No | | enableAudit | bool | Whether to enable local audit logs, default is false | No | ``` json { \"mountPoint\": \"/cfs/mountpoint\", \"volName\": \"ltptest\", \"owner\": \"ltptest\", \"masterAddr\": \"10.196.59.198:17010,10.196.59.199:17010,10.196.59.200:17010\", \"logDir\": \"/cfs/client/log\", \"logLevel\": \"info\", \"profPort\": \"27510\" } ```"
}
] | {
"category": "Runtime",
"file_name": "client.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
} |
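To put the table above to use, the JSON at the end of the row is saved to a file and passed to the client at mount time. A minimal sketch; the binary name `cfs-client` and the `-c` flag are assumptions based on common CubeFS packaging and may differ in your build:

```bash
#!/usr/bin/env bash
# Mount a CubeFS volume using the sample configuration from the table above.
# Assumption: the client binary is called `cfs-client` and takes `-c <config>`.
set -euo pipefail

mkdir -p /cfs/mountpoint /cfs/client/log

cat <<'EOF' > /cfs/client/client.json
{
  "mountPoint": "/cfs/mountpoint",
  "volName": "ltptest",
  "owner": "ltptest",
  "masterAddr": "10.196.59.198:17010,10.196.59.199:17010,10.196.59.200:17010",
  "logDir": "/cfs/client/log",
  "logLevel": "info",
  "profPort": "27510"
}
EOF

./cfs-client -c /cfs/client/client.json
```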
[
{
"data": "If you're using Velero and want to add your organization to this list, ! <a href=\"https://www.pitsdatarecovery.net/\" border=\"0\" target=\"_blank\"><img alt=\"pitsdatarecovery.net\" src=\"site/static/img/adopters/PITSGlobalDataRecoveryServices.svg\" height=\"50\"></a> <a href=\"https://www.bitgo.com\" border=\"0\" target=\"_blank\"><img alt=\"bitgo.com\" src=\"site/static/img/adopters/BitGo.svg\" height=\"50\"></a> <a href=\"https://www.nirmata.com\" border=\"0\" target=\"_blank\"><img alt=\"nirmata.com\" src=\"site/static/img/adopters/nirmata.svg\" height=\"50\"></a> <a href=\"https://kyma-project.io/\" border=\"0\" target=\"_blank\"><img alt=\"kyma-project.io\" src=\"site/static/img/adopters/kyma.svg\" height=\"50\"></a> <a href=\"https://redhat.com/\" border=\"0\" target=\"_blank\"><img alt=\"redhat.com\" src=\"site/static/img/adopters/redhat.svg\" height=\"50\"></a> <a href=\"https://dellemc.com/\" border=\"0\" target=\"_blank\"><img alt=\"dellemc.com\" src=\"site/static/img/adopters/DellEMC.png\" height=\"50\"></a> <a href=\"https://bugsnag.com/\" border=\"0\" target=\"_blank\"><img alt=\"bugsnag.com\" src=\"site/static/img/adopters/bugsnag.svg\" height=\"50\"></a> <a href=\"https://okteto.com/\" border=\"0\" target=\"_blank\"><img alt=\"okteto.com\" src=\"site/static/img/adopters/okteto.svg\" height=\"50\"></a> <a href=\"https://banzaicloud.com/\" border=\"0\" target=\"_blank\"><img alt=\"banzaicloud.com\" src=\"site/static/img/adopters/banzaicloud.svg\" height=\"50\"></a> <a href=\"https://sighup.io/\" border=\"0\" target=\"_blank\"><img alt=\"sighup.io\" src=\"site/static/img/adopters/sighup.svg\" height=\"50\"></a> <a href=\"https://mayadata.io/\" border=\"0\" target=\"_blank\"><img alt=\"mayadata.io\" src=\"site/static/img/adopters/mayadata.svg\" height=\"50\"></a> <a href=\"https://www.replicated.com/\" border=\"0\" target=\"_blank\"><img alt=\"replicated.com\" src=\"site/static/img/adopters/replicated-logo-red.svg\" height=\"50\"></a> <a href=\"https://cloudcasa.io/\" border=\"0\" target=\"_blank\"><img alt=\"cloudcasa.io\" src=\"site/static/img/adopters/cloudcasa.svg\" height=\"50\"></a> <a href=\"https://azure.microsoft.com/\" border=\"0\" target=\"_blank\"><img alt=\"azure.com\" src=\"site/static/img/adopters/azure.svg\" height=\"50\"></a> Below is a list of adopters of Velero in production environments that have publicly shared the details of how they use it. BitGo uses Velero backup and restore capabilities to seamlessly provision and scale fullnode statefulsets on the fly as well as having it serve an integral piece for our Kubernetes disaster-recovery story. We use Velero for managing backups of an internal instance of our on-premise clustered solution. We also recommend our users of use Velero for . <!-- Velero.io word list : ignore --> is a Kubernetes-based microservices platform that integrates services needed for Day-1 and Day-2 operations along with first-class support both for on-prem and hybrid multi-cloud deployments. We use Velero to periodically . Below is a list of solutions where Velero is being used as a component. We have integrated our to provide our customers with out of box backup/DR. Kyma to effortlessly back up and restore Kyma clusters with all its resources. Velero capabilities allow Kyma users to define and run manual and scheduled backups in order to successfully handle a disaster-recovery scenario. 
Red Hat has developed 2 operators for the OpenShift platform: (Crane): This operator uses to drive the migration of applications between OpenShift clusters. : This operator sets up and installs Velero on the OpenShift platform, allowing users to backup and restore applications. For Kubernetes environments, leverages the Container Storage Interface (CSI) framework to take snapshots to back up the persistent data or the data that the application creates e.g."
},
{
"data": "to backup the namespace configuration files (also known as Namespace meta data) for enterprise grade data protection. SIGHUP integrates Velero in its providing predefined schedules and configurations to ensure an optimized disaster recovery experience. is ready to be deployed into any Kubernetes cluster running anywhere. MayaData is a large user of Velero as well as a contributor. MayaData offers a Data Agility platform called , that helps customers confidently and easily manage stateful workloads in Kubernetes. Velero is one of the core software building block of the OpenEBS Director's used to enable data protection strategies. Okteto integrates Velero in and to periodically backup and restore our clusters for disaster recovery. Velero is also a core software building block to provide namespace cloning capabilities, a feature that allows our users cloning staging environments into their personal development namespace for providing production-like development environments. <br> Replicated uses the Velero open source project to enable snapshots in to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, provides a detailed interface in the that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br> <br> integrates Velero with - A Smart Home in the Cloud for Backups. CloudCasa is a full-featured, scalable, cloud-native solution providing Kubernetes data protection, disaster recovery, and migration as a service. An option to manage existing Velero instances and an enterprise self-hosted option are also available.<br> <br> is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br> If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example . If you would like to add your logo to a future `Adopters of Velero` section on , follow the steps above to add your organization to the list of Velero Adopters. Our community will follow up and publish it to the website."
}
] | {
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Name | Type | Description | Notes | - | - | - SourceUrl | string | | Prefault | Pointer to bool | | [optional] `func NewRestoreConfig(sourceUrl string, ) *RestoreConfig` NewRestoreConfig instantiates a new RestoreConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewRestoreConfigWithDefaults() *RestoreConfig` NewRestoreConfigWithDefaults instantiates a new RestoreConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *RestoreConfig) GetSourceUrl() string` GetSourceUrl returns the SourceUrl field if non-nil, zero value otherwise. `func (o RestoreConfig) GetSourceUrlOk() (string, bool)` GetSourceUrlOk returns a tuple with the SourceUrl field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *RestoreConfig) SetSourceUrl(v string)` SetSourceUrl sets SourceUrl field to given value. `func (o *RestoreConfig) GetPrefault() bool` GetPrefault returns the Prefault field if non-nil, zero value otherwise. `func (o RestoreConfig) GetPrefaultOk() (bool, bool)` GetPrefaultOk returns a tuple with the Prefault field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *RestoreConfig) SetPrefault(v bool)` SetPrefault sets Prefault field to given value. `func (o *RestoreConfig) HasPrefault() bool` HasPrefault returns a boolean if a field has been set."
}
] | {
"category": "Runtime",
"file_name": "RestoreConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "Cobra will follow a steady release cadence. Non breaking changes will be released as minor versions quarterly. Patch bug releases are at the discretion of the maintainers. Users can expect security patch fixes to be released within relatively short order of a CVE becoming known. For more information on security patch fixes see the CVE section below. Releases will follow . Users tracking the Master branch should expect unpredictable breaking changes as the project continues to move forward. For stability, it is highly recommended to use a release. We will maintain two major releases in a moving window. The N-1 release will only receive bug fixes and security updates and will be dropped once N+1 is released. Deprecation of Go versions or dependent packages will only occur in major releases. To reduce the change of this taking users by surprise, any large deprecation will be preceded by an announcement in the and an Issue on Github. Maintainers will make every effort to release security patches in the case of a medium to high severity CVE directly impacting the library. The speed in which these patches reach a release is up to the discretion of the maintainers. A low severity CVE may be a lower priority than a high severity one. Cobra maintainers will use GitHub issues and the as the primary means of communication with the community. This is to foster open communication with all users and contributors. Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next release of Cobra. There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new version. Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These version have consumers who expect the APIs, behaviors, etc, to remain stable during the lifetime of the patch stream for the minor release. Examples of breaking changes include: Removing or renaming exported constant, variable, type, or function. Updating the version of critical libraries such as `spf13/pflag`, `spf13/viper` etc... Some version updates may be acceptable for picking up bug fixes, but maintainers must exercise caution when reviewing. There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project's maintainers, and must be carefully considered before merging. Maintainers will ensure the Cobra test suite utilizes the current supported versions of Golang. Changes to this document and the contents therein are at the discretion of the maintainers. None of the contents of this document are legally binding in any way to the maintainers or the users."
}
] | {
"category": "Runtime",
"file_name": "CONDUCT.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: POSIX Compatibility sidebar_position: 6 slug: /posix_compatibility JuiceFS ensures POSIX compatibility with the help of pjdfstest and LTP. is a test suite that helps to test POSIX system calls. JuiceFS passed all of its latest 8813 tests: ``` All tests successful. Test Summary Report /root/soft/pjdfstest/tests/chown/00.t (Wstat: 0 Tests: 1323 Failed: 0) TODO passed: 693, 697, 708-709, 714-715, 729, 733 Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr 0.38 sys + 2.57 cusr 3.93 csys = 9.65 CPU) Result: PASS ``` :::note When testing pjdfstest, the JuiceFS trash bin needs to be turned off because the delete behavior of the pjdfstest test is delete directly instead of entering the trash bin. And the JuiceFS trash bin is enabled by default. Turn off trash bin command: `juicefs config <meta-url> --trash-days 0` ::: Besides the features covered by pjdfstest, JuiceFS provides: Close-to-open consistency. Once a file is closed, it is guaranteed to view the written data in the following open and read. Within the same mount point, all the written data can be read immediately. Rename and all other metadata operations are atomic, which are guaranteed by transaction of metadata engines. Open files remain accessible after unlink from same mount point. Mmap (tested with FSx). Fallocate with punch hole support. Extended attributes (xattr). BSD locks (flock). POSIX traditional record locks (fcntl). :::note POSIX record locks are classified as traditional locks (\"process-associated\") and OFD locks (Open file description locks), and their locking operation commands are `FSETLK` and `FOFD_SETLK` respectively. Due to the implementation of the FUSE kernel module, JuiceFS currently only supports traditional record locks. More details can be found at: <https://man7.org/linux/man-pages/man2/fcntl.2.html>. ::: (Linux Test Project) is a joint project developed and maintained by IBM, Cisco, Fujitsu and others. The project goal is to deliver tests to the open source community that validate the reliability, robustness, and stability of Linux. The LTP testsuite contains a collection of tools for testing the Linux kernel and related features. Our goal is to improve the Linux kernel and system libraries by bringing test automation to the testing effort. JuiceFS passed most of the file system related tests. Host: Amazon EC2: c5d.xlarge (4C 8G) OS: Ubuntu 20.04.1 LTS (Kernel `5.4.0-1029-aws`) Object storage: Amazon S3 JuiceFS version: 0.17-dev (2021-09-16 292f2b65) Download LTP from GitHub Unarchive, compile and install: ```bash tar -jvxf ltp-full-20210524.tar.bz2 cd ltp-full-20210524 ./configure make all make install ``` Change directory to `/opt/ltp` since test tools are installed here: ```bash cd /opt/ltp ``` The test definition files are located under `runtest`. To speed up testing, we delete some pressure cases and unrelated cases in `fs` and `syscalls` (refer to , modified files are saved as `fs-jfs` and `syscalls-jfs`), then execute: ```bash ./runltp -d /mnt/jfs -f fsbind,fsperms_simple,fsx,io,smoketest,fs-jfs,syscalls-jfs ``` ```bash Testcase Result Exit Value -- - fcntl17 FAIL 7 fcntl17_64 FAIL 7 getxattr05 CONF 32 ioctl_loop05 FAIL 4 ioctl_ns07 FAIL 1 lseek11 CONF 32 open14 CONF 32 openat03 CONF 32 setxattr03 FAIL 6 -- Total Tests: 1270 Total Skipped Tests: 4 Total Failures: 5 Kernel Version: 5.4.0-1029-aws Machine Architecture: x86_64 ``` Here are causes of the skipped and failed tests: fcntl17, fcntl17_64: it requires file system to automatically detect deadlock when trying to add POSIX locks. 
JuiceFS doesn't support it yet. getxattr05: needs extended ACLs, which are not supported yet. ioctl_loop05, ioctl_ns07, setxattr03: need `ioctl`, which is not supported yet. lseek11: requires `lseek` to handle SEEK_DATA and SEEK_HOLE"
},
{
"data": "JuiceFS however uses kernel general function, which doesn't support these two flags. open14, openat03: need `open` to handle O_TMPFILE flag. JuiceFS can do nothing with it since it's not supported by"
},
{
"data": "Here are deleted cases in `fs` and `syscalls`: ```bash gf01 growfiles -W gf01 -b -e 1 -u -i 0 -L 20 -w -C 1 -l -I r -T 10 -f glseek20 -S 2 -d $TMPDIR gf02 growfiles -W gf02 -b -e 1 -L 10 -i 100 -I p -S 2 -u -f gf03_ -d $TMPDIR gf03 growfiles -W gf03 -b -e 1 -g 1 -i 1 -S 150 -u -f gf05_ -d $TMPDIR gf04 growfiles -W gf04 -b -e 1 -g 4090 -i 500 -t 39000 -u -f gf06_ -d $TMPDIR gf05 growfiles -W gf05 -b -e 1 -g 5000 -i 500 -t 49900 -T10 -c9 -I p -u -f gf07_ -d $TMPDIR gf06 growfiles -W gf06 -b -e 1 -u -r 1-5000 -R 0--1 -i 0 -L 30 -C 1 -f g_rand10 -S 2 -d $TMPDIR gf07 growfiles -W gf07 -b -e 1 -u -r 1-5000 -R 0--2 -i 0 -L 30 -C 1 -I p -f g_rand13 -S 2 -d $TMPDIR gf08 growfiles -W gf08 -b -e 1 -u -r 1-5000 -R 0--2 -i 0 -L 30 -C 1 -f g_rand11 -S 2 -d $TMPDIR gf09 growfiles -W gf09 -b -e 1 -u -r 1-5000 -R 0--1 -i 0 -L 30 -C 1 -I p -f g_rand12 -S 2 -d $TMPDIR gf10 growfiles -W gf10 -b -e 1 -u -r 1-5000 -i 0 -L 30 -C 1 -I l -f g_lio14 -S 2 -d $TMPDIR gf11 growfiles -W gf11 -b -e 1 -u -r 1-5000 -i 0 -L 30 -C 1 -I L -f g_lio15 -S 2 -d $TMPDIR gf12 mkfifo $TMPDIR/gffifo17; growfiles -b -W gf12 -e 1 -u -i 0 -L 30 $TMPDIR/gffifo17 gf13 mkfifo $TMPDIR/gffifo18; growfiles -b -W gf13 -e 1 -u -i 0 -L 30 -I r -r 1-4096 $TMPDIR/gffifo18 gf14 growfiles -W gf14 -b -e 1 -u -i 0 -L 20 -w -l -C 1 -T 10 -f glseek19 -S 2 -d $TMPDIR gf15 growfiles -W gf15 -b -e 1 -u -r 1-49600 -I r -u -i 0 -L 120 -f Lgfile1 -d $TMPDIR gf16 growfiles -W gf16 -b -e 1 -i 0 -L 120 -u -g 4090 -T 101 -t 408990 -l -C 10 -c 1000 -S 10 -f Lgf02_ -d $TMPDIR gf17 growfiles -W gf17 -b -e 1 -i 0 -L 120 -u -g 5000 -T 101 -t 499990 -l -C 10 -c 1000 -S 10 -f Lgf03_ -d $TMPDIR gf18 growfiles -W gf18 -b -e 1 -i 0 -L 120 -w -u -r 10-5000 -I r -l -S 2 -f Lgf04_ -d $TMPDIR gf19 growfiles -W gf19 -b -e 1 -g 5000 -i 500 -t 49900 -T10 -c9 -I p -o ORDWR,OCREAT,OTRUNC -u -f gf08i -d $TMPDIR gf20 growfiles -W gf20 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 1-256000:512 -R 512-256000 -T 4 -f gfbigio-$$ -d $TMPDIR gf21 growfiles -W gf21 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -T 10 -t 20480 -f gf-bld-$$ -d $TMPDIR gf22 growfiles -W gf22 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -T 10 -t 20480 -f gf-bldf-$$ -d $TMPDIR gf23 growfiles -W gf23 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 512-64000:1024 -R 1-384000 -T 4 -f gf-inf-$$ -d $TMPDIR gf24 growfiles -W gf24 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -f gf-jbld-$$ -d $TMPDIR gf25 growfiles -W gf25 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 1024000-2048000:2048 -R 4095-2048000 -T 1 -f gf-large-gs-$$ -d $TMPDIR gf26 growfiles"
}
] | {
"category": "Runtime",
"file_name": "posix_compatibility.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
} |
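A condensed sketch of the pjdfstest run described in the row above, assuming JuiceFS is already mounted at `/mnt/jfs` and pjdfstest has been cloned and built under `/root/soft/pjdfstest` (the path shown in the test report). The metadata engine URL is a hypothetical example; the trash-bin command comes from the note in the row, and `prove` is pjdfstest's usual test driver:

```bash
#!/usr/bin/env bash
# Run pjdfstest against a JuiceFS mount (sketch; META_URL and paths are examples).
set -euo pipefail

META_URL="redis://127.0.0.1:6379/1"   # hypothetical metadata engine URL

# pjdfstest deletes files directly, so turn the JuiceFS trash bin off first (see the note above).
juicefs config "$META_URL" --trash-days 0

# Run the suite from inside the mounted file system.
cd /mnt/jfs
prove -rv /root/soft/pjdfstest/tests
```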
[
{
"data": "<!-- toc --> - <!-- /toc --> Follow these to install minikube and set its development environment. ```bash curl -Lo https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml minikube start --cni=antrea.yml --network-plugin=cni ``` These instructions assume that you have built the Antrea Docker image locally (e.g. by running `make` from the root of the repository, or in case of arm64 architecture by running `./hack/build-antrea-linux-all.sh --platform linux/arm64`). ```bash minikube image load antrea/antrea-controller-ubuntu:latest minikube image load antrea/antrea-agent-ubuntu:latest kubectl apply -f antrea/build/yamls/antrea.yml ``` After a few seconds you should be able to observe the following when running `kubectl get pods -l app=antrea -n kube-system`: ```txt NAME READY STATUS RESTARTS AGE antrea-agent-9ftn9 2/2 Running 0 66m antrea-controller-56f97bbcff-zbfmv 1/1 Running 0 66m ```"
}
] | {
"category": "Runtime",
"file_name": "minikube.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
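After loading the images, it can be convenient to block until the Antrea Pods are actually Ready instead of repeatedly polling `kubectl get pods`. A small sketch using `kubectl wait`; the timeout value is arbitrary:

```bash
# Wait for all Antrea Pods (agent + controller) to become Ready.
kubectl wait --for=condition=Ready pod -l app=antrea -n kube-system --timeout=180s

# Then inspect them as in the expected output above.
kubectl get pods -l app=antrea -n kube-system
```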
[
{
"data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC"
}
] | {
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "Prometheus server can monitor various metrics and provide an observation of the Antrea Controller and Agent components. The doc provides general guidelines to the configuration of Prometheus server to operate with the Antrea components. is an open source monitoring and alerting server. Prometheus is capable of collecting metrics from various Kubernetes components, storing and providing alerts. Prometheus can provide visibility by integrating with other products such as . One of Prometheus capabilities is self-discovery of Kubernetes services which expose their metrics. So Prometheus can scrape the metrics of any additional components which are added to the cluster without further configuration changes. Enable Prometheus metrics listener by setting `enablePrometheusMetrics` parameter to true in the Controller and the Agent configurations. Prometheus integration with Antrea is validated as part of CI using Prometheus v2.46.0. Prometheus requires access to Kubernetes API resources for the service discovery capability. Reading metrics also requires access to the \"/metrics\" API endpoints. ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus rules: apiGroups: [\"\"] resources: nodes nodes/proxy services endpoints pods verbs: [\"get\", \"list\", \"watch\"] apiGroups: networking.k8s.io resources: ingresses verbs: [\"get\", \"list\", \"watch\"] nonResourceURLs: [\"/metrics\"] verbs: [\"get\"] ``` To scrape the metrics from Antrea Controller and Agent, Prometheus needs the following permissions ```yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-antrea rules: nonResourceURLs: /metrics verbs: get ``` Add the following jobs to Prometheus scraping configuration to enable metrics collection from Antrea components. Antrea Agent metrics endpoint is exposed through Antrea apiserver on `apiport` config parameter given in `antrea-agent.conf` (default value is 10350). Antrea Controller metrics endpoint is exposed through Antrea apiserver on `apiport` config parameter given in `antrea-controller.conf` (default value is 10349). ```yaml job_name: 'antrea-controllers' kubernetessdconfigs: role: endpoints scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecureskipverify: true bearertokenfile: /var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: sourcelabels: [metakubernetesnamespace, metakubernetespodcontainer_name] action: keep regex: kube-system;antrea-controller sourcelabels: [metakubernetespodnodename, metakubernetespodname] target_label: instance ``` ```yaml job_name: 'antrea-agents' kubernetessdconfigs: role: pod scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecureskipverify: true bearertokenfile: /var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: sourcelabels: [metakubernetesnamespace, metakubernetespodcontainer_name] action: keep regex: kube-system;antrea-agent sourcelabels: [metakubernetespodnodename, metakubernetespodname] target_label: instance ``` For further reference see the enclosed . The configuration file above can be used to deploy Prometheus Server with scraping configuration for Antrea services. To deploy this configuration use `kubectl apply -f build/yamls/antrea-prometheus.yml` Antrea Controller and Agents expose various metrics, some of which are provided by the Antrea components and others which are provided by 3rd party components used by the Antrea components. 
Below is a list of metrics, provided by the components and by 3rd parties. antrea_agent_conntrack_antrea_connection_count: Number of connections in the Antrea ZoneID of the conntrack table. This metric gets updated at an interval specified by flowPollInterval, a configuration parameter for the Agent. antrea_agent_conntrack_max_connection_count: Size of the conntrack table. This metric gets updated at an interval specified by flowPollInterval, a configuration parameter for the Agent. antrea_agent_conntrack_total_connection_count: Number of connections in the conntrack table. This metric gets updated at an interval specified by flowPollInterval, a configuration parameter for the Agent. antrea_agent_denied_connection_count: Number of denied connections detected by Flow Exporter deny connections tracking. This metric gets updated when a flow is rejected/dropped by network policy. antrea_agent_egress_networkpolicy_rule_count: Number of egress NetworkPolicy rules on local Node which are managed by the Antrea Agent. antrea_agent_flow_collector_reconnection_count: Number of re-connections between Flow Exporter and flow collector. This metric gets updated whenever the connection is re-established between the Flow Exporter and the flow collector (e.g. the Flow Aggregator). antrea_agent_ingress_networkpolicy_rule_count: Number of ingress NetworkPolicy rules on local Node which are managed by the Antrea"
},
{
"data": "antrea_agent_local_pod_count: Number of Pods on local Node which are managed by the Antrea Agent. antrea_agent_networkpolicy_count: Number of NetworkPolicies on local Node which are managed by the Antrea Agent. antrea_agent_ovs_flow_count: Flow count for each OVS flow table. The TableID and TableName are used as labels. antrea_agent_ovs_flow_ops_count: Number of OVS flow operations, partitioned by operation type (add, modify and delete). antrea_agent_ovs_flow_ops_error_count: Number of OVS flow operation errors, partitioned by operation type (add, modify and delete). antrea_agent_ovs_flow_ops_latency_milliseconds: The latency of OVS flow operations, partitioned by operation type (add, modify and delete). antrea_agent_ovs_meter_packet_dropped_count: Number of packets dropped by OVS meter. The value is greater than 0 when the packets exceed the rate-limit. antrea_agent_ovs_total_flow_count: Total flow count of all OVS flow tables. antrea_controller_acnp_status_updates: The total number of actual status updates performed for Antrea ClusterNetworkPolicy Custom Resources antrea_controller_address_group_processed: The total number of address-group processed antrea_controller_address_group_sync_duration_milliseconds: The duration of syncing address-group antrea_controller_annp_status_updates: The total number of actual status updates performed for Antrea NetworkPolicy Custom Resources antrea_controller_applied_to_group_processed: The total number of applied-to-group processed antrea_controller_applied_to_group_sync_duration_milliseconds: The duration of syncing applied-to-group antrea_controller_length_address_group_queue: The length of AddressGroupQueue antrea_controller_length_applied_to_group_queue: The length of AppliedToGroupQueue antrea_controller_length_network_policy_queue: The length of InternalNetworkPolicyQueue antrea_controller_network_policy_processed: The total number of internal-networkpolicy processed antrea_controller_network_policy_sync_duration_milliseconds: The duration of syncing internal-networkpolicy antrea_proxy_sync_proxy_rules_duration_seconds: SyncProxyRules duration of AntreaProxy in seconds antrea_proxy_total_endpoints_installed: The number of Endpoints installed by AntreaProxy antrea_proxy_total_endpoints_updates: The cumulative number of Endpoint updates received by AntreaProxy antrea_proxy_total_services_installed: The number of Services installed by AntreaProxy antrea_proxy_total_services_updates: The cumulative number of Service updates received by AntreaProxy aggregator_discovery_aggregation_count_total: Counter of number of times discovery was aggregated apiserver_audit_event_total: Counter of audit events generated and sent to the audit backend. apiserver_audit_requests_rejected_total: Counter of apiserver requests rejected due to an error in audit logging backend. apiserver_client_certificate_expiration_seconds: Distribution of the remaining lifetime on the certificate used to authenticate a request. apiserver_current_inflight_requests: Maximal number of currently used inflight request limit of this apiserver per request kind in last second. apiserver_delegated_authn_request_duration_seconds: Request latency in seconds. Broken down by status code. apiserver_delegated_authn_request_total: Number of HTTP requests partitioned by status code. apiserver_delegated_authz_request_duration_seconds: Request latency in seconds. Broken down by status code. apiserver_delegated_authz_request_total: Number of HTTP requests partitioned by status code. 
apiserver_envelope_encryption_dek_cache_fill_percent: Percent of the cache slots currently occupied by cached DEKs. apiserver_flowcontrol_read_vs_write_current_requests: EXPERIMENTAL: Observations, at the end of every nanosecond, of the number of requests (as a fraction of the relevant limit) waiting or in regular stage of execution apiserver_flowcontrol_seat_fair_frac: Fair fraction of server's concurrency to allocate to each priority level that can use it apiserver_longrunning_requests: Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way. apiserver_request_duration_seconds: Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. apiserver_request_filter_duration_seconds: Request filter latency distribution in seconds, for each filter type apiserver_request_sli_duration_seconds: Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. apiserver_request_slo_duration_seconds: Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. apiserver_request_total: Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, and HTTP response code. apiserver_response_sizes: Response size distribution in bytes for each group, version, verb, resource, subresource, scope and component. apiserver_storage_data_key_generation_duration_seconds: Latencies in seconds of data encryption key(DEK) generation operations. apiserver_storage_data_key_generation_failures_total: Total number of failed data encryption key(DEK) generation operations. apiserver_storage_envelope_transformation_cache_misses_total: Total number of cache misses while accessing key decryption"
},
{
"data": "apiserver_tls_handshake_errors_total: Number of requests dropped with 'TLS handshake error from' error apiserver_watch_events_sizes: Watch event size distribution in bytes apiserver_watch_events_total: Number of events sent in watch clients apiserver_webhooks_x509_insecure_sha1_total: Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either/or, based on the runtime environment) apiserver_webhooks_x509_missing_san_total: Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing (either/or, based on the runtime environment) authenticated_user_requests: Counter of authenticated requests broken out by username. authentication_attempts: Counter of authenticated attempts. authentication_duration_seconds: Authentication duration in seconds broken out by result. authentication_token_cache_active_fetch_count: authentication_token_cache_fetch_total: authentication_token_cache_request_duration_seconds: authentication_token_cache_request_total: authorization_attempts_total: Counter of authorization attempts broken down by result. It can be either 'allowed', 'denied', 'no-opinion' or 'error'. authorization_duration_seconds: Authorization duration in seconds broken out by result. cardinality_enforcement_unexpected_categorizations_total: The count of unexpected categorizations during cardinality enforcement. disabled_metrics_total: The count of disabled metrics. field_validation_request_duration_seconds: Response latency distribution in seconds for each field validation value go_cgo_go_to_c_calls_calls_total: Count of calls made from Go to C by the current process. go_cpu_classes_gc_mark_assist_cpu_seconds_total: Estimated total CPU time goroutines spent performing GC tasks to assist the GC and prevent it from falling behind the application. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_gc_mark_dedicated_cpu_seconds_total: Estimated total CPU time spent performing GC tasks on processors (as defined by GOMAXPROCS) dedicated to those tasks. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_gc_mark_idle_cpu_seconds_total: Estimated total CPU time spent performing GC tasks on spare CPU resources that the Go scheduler could not otherwise find a use for. This should be subtracted from the total GC CPU time to obtain a measure of compulsory GC CPU time. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_gc_pause_cpu_seconds_total: Estimated total CPU time spent with the application paused by the GC. Even if only one thread is running during the pause, this is computed as GOMAXPROCS times the pause latency because nothing else can be executing. This is the exact sum of samples in /gc/pause:seconds if each sample is multiplied by GOMAXPROCS at the time it is taken. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_gc_total_cpu_seconds_total: Estimated total CPU time spent performing GC tasks. 
This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. Sum of all metrics in /cpu/classes/gc. go_cpu_classes_idle_cpu_seconds_total: Estimated total available CPU time not spent executing any Go or Go runtime code. In other words, the part of /cpu/classes/total:cpu-seconds that was unused. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_scavenge_assist_cpu_seconds_total: Estimated total CPU time spent returning unused memory to the underlying platform in response eagerly in response to memory pressure. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_cpu_classes_scavenge_background_cpu_seconds_total: Estimated total CPU time spent performing background tasks to return unused memory to the underlying platform. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes"
},
{
"data": "go_cpu_classes_scavenge_total_cpu_seconds_total: Estimated total CPU time spent performing tasks that return unused memory to the underlying platform. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. Sum of all metrics in /cpu/classes/scavenge. go_cpu_classes_total_cpu_seconds_total: Estimated total available CPU time for user Go code or the Go runtime, as defined by GOMAXPROCS. In other words, GOMAXPROCS integrated over the wall-clock duration this process has been executing for. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. Sum of all metrics in /cpu/classes. go_cpu_classes_user_cpu_seconds_total: Estimated total CPU time spent running user Go code. This may also include some small amount of time spent in the Go runtime. This metric is an overestimate, and not directly comparable to system CPU time measurements. Compare only with other /cpu/classes metrics. go_gc_cycles_automatic_gc_cycles_total: Count of completed GC cycles generated by the Go runtime. go_gc_cycles_forced_gc_cycles_total: Count of completed GC cycles forced by the application. go_gc_cycles_total_gc_cycles_total: Count of all completed GC cycles. go_gc_duration_seconds: A summary of the pause duration of garbage collection cycles. go_gc_gogc_percent: Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. go_gc_gomemlimit_bytes: Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. go_gc_heap_allocs_by_size_bytes: Distribution of heap allocations by approximate size. Bucket counts increase monotonically. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks. go_gc_heap_allocs_bytes_total: Cumulative sum of memory allocated to the heap by the application. go_gc_heap_allocs_objects_total: Cumulative count of heap allocations triggered by the application. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks. go_gc_heap_frees_by_size_bytes: Distribution of freed heap allocations by approximate size. Bucket counts increase monotonically. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks. go_gc_heap_frees_bytes_total: Cumulative sum of heap memory freed by the garbage collector. go_gc_heap_frees_objects_total: Cumulative count of heap allocations whose storage was freed by the garbage collector. Note that this does not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny blocks. go_gc_heap_goal_bytes: Heap size target for the end of the GC cycle. go_gc_heap_live_bytes: Heap memory occupied by live objects that were marked by the previous GC. go_gc_heap_objects_objects: Number of objects, live or unswept, occupying heap memory. go_gc_heap_tiny_allocs_objects_total: Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. 
go_gc_limiter_last_enabled_gc_cycle: GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. go_gc_pauses_seconds: Distribution of individual GC-related stop-the-world pause latencies. Bucket counts increase monotonically. go_gc_scan_globals_bytes: The total amount of global variable space that is scannable. go_gc_scan_heap_bytes: The total amount of heap space that is scannable. go_gc_scan_stack_bytes: The number of bytes of stack that were scanned last GC cycle. go_gc_scan_total_bytes: The total amount space that is scannable. Sum of all metrics in"
},
{
"data": "go_gc_stack_starting_size_bytes: The stack size of new goroutines. go_godebug_non_default_behavior_execerrdot_events_total: The number of non-default behaviors executed by the os/exec package due to a non-default GODEBUG=execerrdot=... setting. go_godebug_non_default_behavior_gocachehash_events_total: The number of non-default behaviors executed by the cmd/go package due to a non-default GODEBUG=gocachehash=... setting. go_godebug_non_default_behavior_gocachetest_events_total: The number of non-default behaviors executed by the cmd/go package due to a non-default GODEBUG=gocachetest=... setting. go_godebug_non_default_behavior_gocacheverify_events_total: The number of non-default behaviors executed by the cmd/go package due to a non-default GODEBUG=gocacheverify=... setting. go_godebug_non_default_behavior_http2client_events_total: The number of non-default behaviors executed by the net/http package due to a non-default GODEBUG=http2client=... setting. go_godebug_non_default_behavior_http2server_events_total: The number of non-default behaviors executed by the net/http package due to a non-default GODEBUG=http2server=... setting. go_godebug_non_default_behavior_installgoroot_events_total: The number of non-default behaviors executed by the go/build package due to a non-default GODEBUG=installgoroot=... setting. go_godebug_non_default_behavior_jstmpllitinterp_events_total: The number of non-default behaviors executed by the html/template package due to a non-default GODEBUG=jstmpllitinterp=... setting. go_godebug_non_default_behavior_multipartmaxheaders_events_total: The number of non-default behaviors executed by the mime/multipart package due to a non-default GODEBUG=multipartmaxheaders=... setting. go_godebug_non_default_behavior_multipartmaxparts_events_total: The number of non-default behaviors executed by the mime/multipart package due to a non-default GODEBUG=multipartmaxparts=... setting. go_godebug_non_default_behavior_multipathtcp_events_total: The number of non-default behaviors executed by the net package due to a non-default GODEBUG=multipathtcp=... setting. go_godebug_non_default_behavior_panicnil_events_total: The number of non-default behaviors executed by the runtime package due to a non-default GODEBUG=panicnil=... setting. go_godebug_non_default_behavior_randautoseed_events_total: The number of non-default behaviors executed by the math/rand package due to a non-default GODEBUG=randautoseed=... setting. go_godebug_non_default_behavior_tarinsecurepath_events_total: The number of non-default behaviors executed by the archive/tar package due to a non-default GODEBUG=tarinsecurepath=... setting. go_godebug_non_default_behavior_tlsmaxrsasize_events_total: The number of non-default behaviors executed by the crypto/tls package due to a non-default GODEBUG=tlsmaxrsasize=... setting. go_godebug_non_default_behavior_x509sha1_events_total: The number of non-default behaviors executed by the crypto/x509 package due to a non-default GODEBUG=x509sha1=... setting. go_godebug_non_default_behavior_x509usefallbackroots_events_total: The number of non-default behaviors executed by the crypto/x509 package due to a non-default GODEBUG=x509usefallbackroots=... setting. go_godebug_non_default_behavior_zipinsecurepath_events_total: The number of non-default behaviors executed by the archive/zip package due to a non-default GODEBUG=zipinsecurepath=... setting. go_goroutines: Number of goroutines that currently exist. go_info: Information about the Go environment. 
go_memory_classes_heap_free_bytes: Memory that is completely free and eligible to be returned to the underlying system, but has not been. This metric is the runtime's estimate of free address space that is backed by physical memory. go_memory_classes_heap_objects_bytes: Memory occupied by live objects and dead objects that have not yet been marked free by the garbage collector. go_memory_classes_heap_released_bytes: Memory that is completely free and has been returned to the underlying system. This metric is the runtime's estimate of free address space that is still mapped into the process, but is not backed by physical memory. go_memory_classes_heap_stacks_bytes: Memory allocated from the heap that is reserved for stack space, whether or not it is currently in-use. Currently, this represents all stack memory for goroutines. It also includes all OS thread stacks in non-cgo programs. Note that stacks may be allocated differently in the future, and this may change. go_memory_classes_heap_unused_bytes: Memory that is reserved for heap objects but is not currently used to hold heap objects. go_memory_classes_metadata_mcache_free_bytes: Memory that is reserved for runtime mcache structures, but not in-use. go_memory_classes_metadata_mcache_inuse_bytes: Memory that is occupied by runtime mcache structures that are currently being used. go_memory_classes_metadata_mspan_free_bytes: Memory that is reserved for runtime mspan structures, but not in-use. go_memory_classes_metadata_mspan_inuse_bytes: Memory that is occupied by runtime mspan structures that are currently being used. go_memory_classes_metadata_other_bytes: Memory that is reserved for or used to hold runtime metadata. go_memory_classes_os_stacks_bytes: Stack memory allocated by the underlying operating system. In non-cgo programs this metric is currently zero. This may change in the"
},
{
"data": "cgo programs this metric includes OS thread stacks allocated directly from the OS. Currently, this only accounts for one stack in c-shared and c-archive build modes, and other sources of stacks from the OS are not measured. This too may change in the future. go_memory_classes_other_bytes: Memory used by execution trace buffers, structures for debugging the runtime, finalizer and profiler specials, and more. go_memory_classes_profiling_buckets_bytes: Memory that is used by the stack trace hash map used for profiling. go_memory_classes_total_bytes: All memory mapped by the Go runtime into the current process as read-write. Note that this does not include memory mapped by code called via cgo or via the syscall package. Sum of all metrics in /memory/classes. go_memstats_alloc_bytes: Number of bytes allocated and still in use. go_memstats_alloc_bytes_total: Total number of bytes allocated, even if freed. go_memstats_buck_hash_sys_bytes: Number of bytes used by the profiling bucket hash table. go_memstats_frees_total: Total number of frees. go_memstats_gc_sys_bytes: Number of bytes used for garbage collection system metadata. go_memstats_heap_alloc_bytes: Number of heap bytes allocated and still in use. go_memstats_heap_idle_bytes: Number of heap bytes waiting to be used. go_memstats_heap_inuse_bytes: Number of heap bytes that are in use. go_memstats_heap_objects: Number of allocated objects. go_memstats_heap_released_bytes: Number of heap bytes released to OS. go_memstats_heap_sys_bytes: Number of heap bytes obtained from system. go_memstats_last_gc_time_seconds: Number of seconds since 1970 of last garbage collection. go_memstats_lookups_total: Total number of pointer lookups. go_memstats_mallocs_total: Total number of mallocs. go_memstats_mcache_inuse_bytes: Number of bytes in use by mcache structures. go_memstats_mcache_sys_bytes: Number of bytes used for mcache structures obtained from system. go_memstats_mspan_inuse_bytes: Number of bytes in use by mspan structures. go_memstats_mspan_sys_bytes: Number of bytes used for mspan structures obtained from system. go_memstats_next_gc_bytes: Number of heap bytes when next garbage collection will take place. go_memstats_other_sys_bytes: Number of bytes used for other system allocations. go_memstats_stack_inuse_bytes: Number of bytes in use by the stack allocator. go_memstats_stack_sys_bytes: Number of bytes obtained from system for stack allocator. go_memstats_sys_bytes: Number of bytes obtained from system. go_sched_gomaxprocs_threads: The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. go_sched_goroutines_goroutines: Count of live goroutines. go_sched_latencies_seconds: Distribution of the time goroutines have spent in the scheduler in a runnable state before actually running. Bucket counts increase monotonically. go_sync_mutex_wait_total_seconds_total: Approximate cumulative time goroutines have spent blocked on a sync.Mutex or sync.RWMutex. This metric is useful for identifying global changes in lock contention. Collect a mutex or block profile using the runtime/pprof package for more detailed contention data. go_threads: Number of OS threads created. hidden_metrics_total: The count of hidden metrics. process_cpu_seconds_total: Total user and system CPU time spent in seconds. process_max_fds: Maximum number of open file descriptors. process_open_fds: Number of open file descriptors. process_resident_memory_bytes: Resident memory size in bytes. 
process_start_time_seconds: Start time of the process since unix epoch in seconds. process_virtual_memory_bytes: Virtual memory size in bytes. process_virtual_memory_max_bytes: Maximum amount of virtual memory available in bytes. registered_metrics_total: The count of registered metrics broken by stability level and deprecation version. workqueue_adds_total: Total number of adds handled by workqueue workqueue_depth: Current depth of workqueue workqueue_longest_running_processor_seconds: How many seconds has the longest running processor for workqueue been running. workqueue_queue_duration_seconds: How long in seconds an item stays in workqueue before being requested. workqueue_retries_total: Total number of retries handled by workqueue workqueue_unfinished_work_seconds: How many seconds of work has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases. workqueue_work_duration_seconds: How long in seconds processing an item from workqueue takes."
}
] | {
"category": "Runtime",
"file_name": "prometheus-integration.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
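The scrape jobs embedded in the row above lost their underscores during extraction (e.g. `kubernetessdconfigs`, `sourcelabels`, `metakubernetesnamespace`). A cleaned-up version of the controller job is sketched below using standard Prometheus `kubernetes_sd_configs`/`relabel_configs` field names; the agent job differs only in using `role: pod` and the `antrea-agent` container regex:

```bash
# Write a readable copy of the antrea-controllers scrape job (standard Prometheus syntax).
cat <<'EOF' > antrea-controllers-scrape.yml
- job_name: 'antrea-controllers'
  kubernetes_sd_configs:
    - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_container_name]
      action: keep
      regex: kube-system;antrea-controller
    - source_labels: [__meta_kubernetes_pod_node_name, __meta_kubernetes_pod_name]
      target_label: instance
EOF
```

The same restored underscores apply to `insecure_skip_verify`, `bearer_token_file`, and the `__meta_kubernetes_*` labels wherever they appear in the row above.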
[
{
"data": "Images can be run by either their name, their hash, an explicit transport address, or a Docker registry URL. rkt will automatically them if they're not present in the local store. ``` ``` ``` ``` ``` ``` ``` ``` Multiple applications can be run in a pod by passing multiple images to the run command: ``` ``` The flag `--pod-manifest` allows users to specify a to run as a pod. This means image manifests for apps in the pod will be overriden and any configuration specified in them will be ignored. For more details about generating a runtime manifest, check the . Be default, the image's name will be used as the app's name. It can be overridden by rkt using the `--name` flag. This comes handy when we want to run multiple apps using the same image: ``` ``` Application images include an `exec` field that specifies the executable to launch. This executable can be overridden by rkt using the `--exec` flag: ``` ``` Application images can include per-app isolators and some of them can be overridden by rkt. The units come from . In the following example, the CPU isolator is defined to 750 milli-cores and the memory isolator limits the memory usage to 128MB. ``` ``` Application images must specify the username/group or the UID/GID the app is to be run as as specified in the . The user/group can be overridden by rkt using the `--user` and `--group` flags: ``` ``` To pass additional arguments to images use the pattern of `image1 -- [image1 flags] image2 -- [image2 flags]`. For example: ``` ``` This can be combined with overridden executables: ``` ``` Additional annotations and labels can be added to the app by using `--user-annotation` and `--user-label` flag. The annotations and labels will appear in the app's `UserAnnotations` and `UserLabels` field. ``` ``` To inherit all environment variables from the parent, use the `--inherit-env` flag. To explicitly set environment variables for all apps, use the `--set-env` flag. To explicitly set environment variables for all apps from a file, use the `--set-env-file` flag. Variables are expected to be in the format `VAR_NAME=VALUE` separated by the new line character `\\n`. Lines starting with `#` or `;` and empty ones will be ignored. To explicitly set environment variables for each app individually, use the `--environment` flag. The precedence is as follows with the last item replacing previous environment entries: Parent environment App image environment Explicitly set environment variables for all apps from file (`--set-env-file`) Explicitly set environment variables for all apps on command line (`--set-env`) Explicitly set environment variables for each app on command line (`--environment`) ``` EXAMPLE_ENV=hello FOO=bar EXAMPLE_OVERRIDE=over EXAMPLE_ENV=hello FOO=bar EXAMPLE_OVERRIDE=ride ``` If desired, `--insecure-options=image` can be used to disable this security check: ``` rkt: searching for app image coreos.com/etcd:v2.0.0 rkt: fetching image from https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.0-linux-amd64.aci rkt: warning: signature verification has been disabled ... ``` Each ACI can define a that the app is expecting external data to be mounted into: ```json { \"acKind\": \"ImageManifest\", \"name\": \"example.com/app1\", ... \"app\": { ... \"mountPoints\": [ { \"name\": \"data\", \"path\": \"/var/data\", \"readOnly\": false, \"recursive\": true } ] } ... } ``` To fulfill these mount points, volumes are"
},
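Since several of the command examples above were stripped out, here is a hedged sketch of a single invocation that combines the documented options; the image names and values are placeholders, and per-app flags such as `--name` and `--exec` apply to the image they follow (see the flags table further down this page):
```bash
# Placeholders throughout; every flag used here is documented on this page.
rkt run \
  --set-env=EXAMPLE_ENV=hello \
  --set-env=FOO=bar \
  example.com/app1 --name app1-main --exec /bin/env

# Two apps from the same image, distinguished per-app with --name:
rkt run example.com/app1 --name instance-1 example.com/app1 --name instance-2
```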
{
"data": "A volume is assigned to a mount point if they both have the same name. There are today two kinds of volumes: `host` volumes that can expose a directory or a file from the host to the pod. `empty` volumes that initialize an empty storage to be accessed locally within the pod. When the pod is garbage collected, it will be removed. Each volume can be selectively mounted into each application at differing mount points. Note that any volumes that are specified but do not have a matching mount point (or ) will be silently ignored. If a mount point is specified in the image manifest but no matching volume is found, an implicit `empty` volume will be created automatically. Volumes are defined via the `--volume` flag, the volume is then mounted into each app running in the pod based on information defined in the ACI manifest. There are two kinds of volumes, `host` and `empty`. For `host` volumes, the `--volume` flag allows you to specify the volume name, the location on the host, whether the volume is read-only or not, and whether the volume is recursive or not. The volume name and location on the host are mandatory. The read-only parameter is false by default. The recursive parameter is true by default for the coreos and KVM stage1 flavors, and false by default for the fly stage1 flavor. Syntax: ``` --volume NAME,kind=host,source=SOURCE_PATH,readOnly=BOOL,recursive=BOOL ``` In the following example, we make the host's `/srv/data` accessible to app1 on `/var/data`: ``` ``` Here we set the recursive option to false to avoid making further mounts inside `/srv/data` available to the container: ``` ``` Using devices from the host inside the container in rkt works by just creating volumes with the source set to the particular device. You can find some examples in the . If you don't intend to persist the data and you just want to have a volume shared between all the apps in the pod, you can use an `empty` volume: ``` ``` For `empty` volumes, the `--volume` flag allows you to specify the volume name, and the mode, UID and GID of the generated volume. The volume name is mandatory. By default, `mode` is `0755`, UID is `0` and GID is `0`. Syntax: `--volume NAME,kind=empty,mode=MODE,uid=UID,gid=GID` In the following example, we create an empty volume for app1's `/var/data`: ``` ``` If the ACI doesn't have any mount points defined in its manifest, you can still mount volumes using the `--mount` flag. With `--mount` you define a mapping between volumes and a path in the app. This will supplement and override any mount points in the image manifest. In the following example, the `--mount` option is positioned after the app name; it defines the mount only in that app: ``` example.com/app1 --mount volume=logs,target=/var/log \\ example.com/app2 --mount volume=logs,target=/opt/log ``` In the following example, the `--mount` option is positioned before the app names. It defines mounts on all apps: both app1 and app2 will have `/srv/logs` accessible on `/var/log`. ``` --mount volume=data,target=/var/log \\ example.com/app1 example.com/app2 ``` Let's say we want to read data from the host directory `/opt/tenant1/work` to power a MapReduce-style worker. We'll call this app `example.com/reduce-worker`. We also want this data to be available to a backup application that runs alongside the worker (in the same pod). We'll call this app"
},
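As a concrete (hypothetical) illustration of the `--volume`/`--mount` syntax described above, combining one `host` volume and one `empty` volume:
```bash
# Names, paths, and the image are placeholders; the syntax is the one documented above.
rkt run \
  --volume data,kind=host,source=/srv/data,readOnly=false \
  --volume scratch,kind=empty,mode=0700,uid=1000,gid=1000 \
  example.com/app1 \
    --mount volume=data,target=/var/data \
    --mount volume=scratch,target=/tmp/scratch
```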
{
"data": "The backup application only needs read-only access to the data. Below we show the abbreviated manifests for the respective applications (recall that the manifest is bundled into the application's ACI): ```json { \"acKind\": \"ImageManifest\", \"name\": \"example.com/reduce-worker\", ... \"app\": { ... \"mountPoints\": [ { \"name\": \"work\", \"path\": \"/var/lib/work\", \"readOnly\": false } ], ... } ... } ``` ```json { \"acKind\": \"ImageManifest\", \"name\": \"example.com/worker-backup\", ... \"app\": { ... \"mountPoints\": [ { \"name\": \"work\", \"path\": \"/backup\", \"readOnly\": true } ], ... } ... } ``` In this case, both apps reference a volume they call \"work\", and expect it to be made available at `/var/lib/work` and `/backup` within their respective root filesystems. Since they reference the volume using an abstract name rather than a specific source path, the same image can be used on a variety of different hosts without being coupled to the host's filesystem layout. To tie it all together, we use the `rkt run` command-line to provide them with a volume by this name. Here's what it looks like: ``` example.com/reduce-worker \\ example.com/worker-backup ``` If the image didn't have any mount points, you can achieve a similar effect with the `--mount` flag (note that both would be read-write though): ``` example.com/reduce-worker --mount volume=work,target=/var/lib/work \\ example.com/worker-backup --mount volume=work,target=/backup ``` Now when the pod is running, the two apps will see the host's `/opt/tenant1/work` directory made available at their expected locations. By default, `rkt run` will not register the pod with the . You can enable registration with the `--mds-register` command line option. The `run` subcommand features the `--net` argument which takes options to configure the pod's network. When the argument is not given, `--net=default` is automatically assumed and the default contained network will be loaded. Simplified, with `--net=host` the apps within the pod will share the network stack and the interfaces with the host machine. ``` ``` Strictly seen, this is only true when `rkt run` is invoked on the host directly, because the network stack will be inherited from the process that is invoking the `rkt run` command. More details about rkt's networking options and examples can be found in the . rkt doesn't include any built-in support for running as a daemon. However, since it is a regular process, you can use your init system to achieve the same effect. For example, if you use systemd, you can . If you don't use systemd, you can use as an alternative. rkt is designed and intended to be modular, using a . You can use a custom stage1 by using the `--stage1-{url,path,name,hash,from-dir}` flags. ``` ``` rkt expects stage1 images to be signed except in the following cases: it is the default stage1 image and it's in the same directory as the rkt binary `--stage1-{name,hash}` is used and the image is already in the store `--stage1-{url,path,from-dir}` is used and the image is in the default directory configured at build time For more details see the . rkt uses overlayfs by default when running application containers. This provides immense benefits to performance and efficiency: start times for large containers are much faster, and multiple pods using the same images will consume less disk space and can share page cache"
},
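The command examples for the networking, stage1, and overlay options were omitted above; a hedged sketch of typical invocations (the image name and stage1 path are placeholders):
```bash
# Share the host's network stack with the pod:
rkt run --net=host example.com/app1

# Use a custom stage1 image and disable the overlay filesystem:
rkt run --no-overlay --stage1-path=./stage1-coreos.aci example.com/app1
```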
{
"data": "This feature will be disabled automatically if the underlying filesystem does not support overlay fs, see the subcommand for details. This feature can also be explicitly disabled with the `--no-overlay` option: ``` ``` | Flag | Default | Options | Description | | | | | | | `--caps-remove` | none | capability to remove (e.g. `--caps-remove=CAPSYSCHROOT,CAP_MKNOD`) | Capabilities to remove from the process's capabilities bounding set; all others from the default set will be included. | | `--caps-retain` | none | capability to retain (e.g. `--caps-retain=CAPSYSADMIN,CAPNETADMIN`) | Capabilities to retain in the process's capabilities bounding set; all others will be removed. | | `--cpu` | none | CPU units (e.g. `--cpu=500m`) | CPU limit for the preceding image in format. | | `--dns` | none | IP Addresses (separated by comma), `host`, or `none` | Name server to write in `/etc/resolv.conf`. It can be specified several times. Pass `host` only to use host's resolv.conf or `none` only to ignore CNI DNS config. | | `--dns-domain` | none | DNS domain (e.g., `--dns-domain=example.com`) | DNS domain to write in `/etc/resolv.conf`. | | `--dns-opt` | none | DNS option | DNS option from resolv.conf(5) to write in `/etc/resolv.conf`. It can be specified several times. | | `--dns-search` | none | Domain name | DNS search domain to write in `/etc/resolv.conf`. It can be specified several times. | | `--environment` | none | environment variables add to the app's environment variables | Set the app's environment variables (example: '--environment=foo=bar'). | | `--exec` | none | Path to executable | Override the exec command for the preceding image. | | `--group` | root | gid, groupname or file path (e.g. `--group=core`) | Group override for the preceding image. | | `--hosts-entry` | none | an /etc/hosts entry within the container (e.g., `--hosts-entry=10.2.1.42=db`) | Entries to add to the pod-wide /etc/hosts. Pass 'host' to use the host's /etc/hosts. | | `--hostname` | `rkt-$PODUUID` | A host name | Set pod's host name. | | `--inherit-env` | `false` | `true` or `false` | Inherit all environment variables not set by apps. | | `--interactive` | `false` | `true` or `false` | Run pod interactively. If true, only one image may be supplied. | | `--ipc` | `auto` | `auto`, `private` or `parent` | Whether to stay in the host IPC namespace. | | `--mds-register` | `false` | `true` or `false` | Register pod with metadata service. It needs network connectivity to the host (`--net` as `default`, `default-restricted`, or `host`). | | `--memory` | none | Memory units (e.g. `--memory=50M`) | Memory limit for the preceding image in format. | | `--mount` | none | Mount syntax (e.g. `--mount volume=NAME,target=PATH`) | Mount point binding a volume to a path within an app. See . | | `--name` | none | Name of the app | Set the name of the app (example: '--name=foo'). If not set, then the app name default to the image's name | | `--net` | `default` | A comma-separated list of networks. (e.g. `--net[=n[:args], ...]`) | Configure the pod's networking. Optionally, pass a list of user-configured networks to load and set arguments to pass to each network, respectively. | | `--no-overlay` | `false` | `true` or `false` | Disable the overlay filesystem. | | `--oom-score-adjust` | none | adjust /proc/$pid/oomscoreadj | oom-score-adj isolator"
},
{
"data": "| | `--pod-manifest` | none | A path | The path to the pod manifest. If it's non-empty, then only `--net`, `--no-overlay` and `--interactive` will have effect. | | `--port` | none | A port name and number pair | Container port name to expose through host port number. Requires . Syntax: `--port=NAME:HOSTPORT` The NAME is that given in the ACI. By convention, Docker containers' EXPOSEd ports are given a name formed from the port number, a hyphen, and the protocol, e.g., `80-tcp`, giving something like `--port=80-tcp:8080`. | | `--private-users` | `false` | `true` or `false` | Run within user namespaces. | | `--pull-policy` | `new` | `never`, `new`, or `update` | Sets the policy for when to fetch an image. See | | `--readonly-rootfs` | none | set root filesystem readonly (e.g., `--readonly-rootfs=true`) | if set, the app's rootfs will be mounted read-only | | `--seccomp` | none | filter override (e.g., `--seccomp mode=retain,errno=EPERM,chmod,chown`) | seccomp filter override | | `--set-env` | none | An environment variable (e.g. `--set-env=NAME=VALUE`) | An environment variable to set for apps. | | `--set-env-file` | none | Path of an environment variables file (e.g. `--set-env-file=/path/to/env/file`) | Environment variables to set for apps. | | `--signature` | none | A file path | Local signature file to use in validating the preceding image. | | `--stage1-from-dir` | none | Image name (e.g. `--stage1-name=coreos.com/rkt/stage1-coreos`) | A stage1 image file name to search for inside the default stage1 images directory. | | `--stage1-hash` | none | Image hash (e.g. `--stage1-hash=sha512-dedce9f5ea50`) | A hash of a stage1 image. The image must exist in the store. | | `--stage1-name` | none | Image name (e.g. `--stage1-name=coreos.com/rkt/stage1-coreos`) | A name of a stage1 image. Will perform a discovery if the image is not in the store. | | `--stage1-path` | none | Absolute or relative path | A path to a stage1 image. | | `--stage1-url` | none | URL with protocol | A URL to a stage1 image. HTTP/HTTPS/File/Docker URLs are supported. | | `--supplementary-gids` | none | supplementary group IDs (e.g., `--supplementary-gids=1024,2048`) | supplementary group IDs override for the preceding image | | `--user` | none | uid, username or file path (e.g. `--user=core`) | User override for the preceding image. | | `--user-annotation` | none | annotation add to the app's UserAnnotations field | Set the app's annotations (example: '--user-annotation=foo=bar'). | | `--user-label` | none | label add to the apps' UserLabels field | Set the app's labels (example: '--user-label=foo=bar'). | | `--uuid-file-save` | none | A file path | Write out the pod UUID to a file. | | `--volume` | none | Volume syntax (e.g. `--volume NAME,kind=KIND,source=PATH,readOnly=BOOL`) | Volumes to make available in the pod. See . | | `--working-dir` | none | working directory override (e.g. `--working-dir=/tmp/bar`) | Override the working directory in the preceding image. | | Flag | Default | Options | Description | | | | | | | `--stdin` | \"null\" | \"null\", \"tty\", \"stream\" | Mode for this application stdin. | | `--stdout` | \"log\" | \"null\", \"tty\", \"stream\", \"log\" | Mode for this application stdout. | | `--stderr` | \"log\" | \"null\", \"tty\", \"stream\", \"log\" | Mode for this application stderr. | See the table with ."
}
] | {
"category": "Runtime",
"file_name": "run.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} |
[
{
"data": "When starts up, carina-node will label each node with `topology.carina.storage.io/node=${nodename}`. For storageclass, user can set `allowedTopologies` to affect pod scheduling. Creating storageclass with `kubectl apply -f storageclass.yaml` ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-carina-sc provisioner: carina.storage.io parameters: csi.storage.k8s.io/fstype: xfs carina.storage.io/disk-group-name: hdd reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate mountOptions: allowedTopologies: matchLabelExpressions: key: beta.kubernetes.io/os values: linux key: kubernetes.io/hostname values: 10.20.9.153 10.20.9.154 ``` `allowedTopologies` policy only works with `volumeBindingMode: Immediate`. Example as follows: ```yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: carina-topo-stateful namespace: carina spec: serviceName: \"nginx\" replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: kubernetes.io/os operator: In values: linux podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: labelSelector: matchExpressions: key: app operator: In values: nginx topologyKey: topology.carina.storage.io/node containers: name: nginx image: nginx imagePullPolicy: \"IfNotPresent\" ports: containerPort: 80 name: web volumeMounts: name: www mountPath: /usr/share/nginx/html name: logs mountPath: /logs volumeClaimTemplates: metadata: name: www spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: csi-carina-sc resources: requests: storage: 10Gi metadata: name: logs spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: csi-carina-sc resources: requests: storage: 5Gi ```"
}
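To sanity-check the setup described above, one can confirm the node label that carina-node is expected to apply and then inspect the created StorageClass; the commands below assume a standard kubectl context and the StorageClass name used in the example:
```bash
# Node names in the output are environment-specific.
kubectl get nodes -L topology.carina.storage.io/node   # shows the carina topology label per node
kubectl apply -f storageclass.yaml
kubectl get sc csi-carina-sc -o jsonpath='{.allowedTopologies}'
```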
] | {
"category": "Runtime",
"file_name": "topology.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "<!-- toc --> - - - <!-- /toc --> NetworkPolicy was initially used to restrict network access at layer 3 (Network) and 4 (Transport) in the OSI model, based on IP address, transport protocol, and port. Securing applications at IP and port level provides limited security capabilities, as the service an application provides is either entirely exposed to a client or not accessible by that client at all. Starting with v1.10, Antrea introduces support for layer 7 NetworkPolicy, an application-aware policy which provides fine-grained control over the network traffic beyond IP, transport protocol, and port. It enables users to protect their applications by specifying how they are allowed to communicate with others, taking into account application context. For example, you can enforce policies to: Grant access of privileged URLs to specific clients while make other URLs publicly accessible. Prevent applications from accessing unauthorized domains. Block network traffic using an unauthorized application protocol regardless of port used. This guide demonstrates how to configure layer 7 NetworkPolicy. Layer 7 NetworkPolicy was introduced in v1.10 as an alpha feature and is disabled by default. A feature gate, `L7NetworkPolicy`, must be enabled in antrea-controller.conf and antrea-agent.conf in the `antrea-config` ConfigMap. Additionally, due to the constraint of the application detection engine, TX checksum offloading must be disabled via the `disableTXChecksumOffload` option in antrea-agent.conf for the feature to work. An example configuration is as below: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | disableTXChecksumOffload: true featureGates: L7NetworkPolicy: true antrea-controller.conf: | featureGates: L7NetworkPolicy: true ``` Alternatively, you can use the following helm installation command to configure the above options: ```bash helm install antrea antrea/antrea --namespace kube-system --set featureGates.L7NetworkPolicy=true,disableTXChecksumOffload=true ``` There isn't a separate resource type for layer 7 NetworkPolicy. It is one kind of Antrea-native policies, which has the `l7Protocols` field specified in the rules. Like layer 3 and layer 4 policies, the `l7Protocols` field can be specified for ingress and egress rules in Antrea ClusterNetworkPolicy and Antrea NetworkPolicy. It can be used with the `from` or `to` field to select the network peer, and the `ports` to select the transport protocol and/or port for which the layer 7 rule applies to. The `action` of a layer 7 rule can only be `Allow`. Note: Any traffic matching the layer 3/4 criteria (specified by `from`, `to`, and `port`) of a layer 7 rule will be forwarded to an application-aware engine for protocol detection and rule enforcement, and the traffic will be allowed if the layer 7 criteria is also matched, otherwise it will be dropped. Therefore, any rules after a layer 7 rule will not be enforced for the traffic that match the layer 7 rule's layer 3/4 criteria. As of now, the only supported layer 7 protocol is HTTP. Support for more protocols may be added in the future and we welcome feature requests for protocols that you are interested in. 
An example layer 7 NetworkPolicy for the HTTP protocol is shown below: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: ingress-allow-http-request-to-api-v2 spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: allow-http # Allow inbound HTTP GET requests to \"/api/v2\" from Pods with label \"app=client\". action:"
},
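One quick, hedged way to exercise a policy like the one above is to apply it and send requests from a selected client Pod; the Pod name, file name, and `WEB_POD_IP` below are placeholders, and curl is assumed to be available in the client Pod:
```bash
kubectl apply -f ingress-allow-http-request-to-api-v2.yaml

# A GET to /api/v2/* with the matching Host header from an "app=client" Pod should be allowed:
kubectl exec client-pod -- curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Host: foo.bar.com' http://WEB_POD_IP/api/v2/items

# Any other path should be dropped rather than answered:
kubectl exec client-pod -- curl -s --max-time 5 http://WEB_POD_IP/other || echo "request dropped as expected"
```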
{
"data": "# All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered. from: podSelector: matchLabels: app: client l7Protocols: http: path: \"/api/v2/*\" host: \"foo.bar.com\" method: \"GET\" name: drop-other # Drop all other inbound traffic (i.e., from Pods without label \"app=client\" or from external clients). action: Drop ``` path: The `path` field represents the URI path to match. Both exact matches and wildcards are supported, e.g. `/api/v2/`, `/v2/*`, `/index.html`. If not set, the rule matches all URI paths. host: The `host` field represents the hostname present in the URI or the HTTP Host header to match. It does not contain the port associated with the host. Both exact matches and wildcards are supported, e.g. `.foo.com`, `.foo.*`, `foo.bar.com`. If not set, the rule matches all hostnames. method: The `method` field represents the HTTP method to match. It could be GET, POST, PUT, HEAD, DELETE, TRACE, OPTIONS, CONNECT and PATCH. If not set, the rule matches all methods. The following NetworkPolicy grants access of privileged URLs to specific clients while making other URLs publicly accessible: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: allow-privileged-url-to-admin-role spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: app: web ingress: name: for-admin # Allow inbound HTTP GET requests to \"/admin\" and \"/public\" from Pods with label \"role=admin\". action: Allow from: podSelector: matchLabels: role: admin l7Protocols: http: path: \"/admin/*\" http: path: \"/public/*\" name: for-public # Allow inbound HTTP GET requests to \"/public\" from everyone. action: Allow # All other inbound traffic will be automatically dropped. l7Protocols: http: path: \"/public/*\" ``` The following NetworkPolicy prevents applications from accessing unauthorized domains: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: allow-web-access-to-internal-domain spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: egress-restriction: internal-domain-only egress: name: allow-dns # Allow outbound DNS requests. action: Allow ports: protocol: TCP port: 53 protocol: UDP port: 53 name: allow-http-only # Allow outbound HTTP requests towards \"*.bar.com\". action: Allow # As the rule's \"to\" and \"ports\" are empty, which means it selects traffic to any network l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will host: \"*.bar.com\" # not be considered. ``` The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: allow-http-only spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: http-only # Allow inbound HTTP requests only. action: Allow # As the rule's \"from\" and \"ports\" are empty, which means it selects traffic from any network l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-HTTP requests will be http: {} # automatically dropped, and subsequent rules will not be considered. 
``` An example layer 7 NetworkPolicy for the TLS protocol is shown below: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: ingress-allow-tls-handshake spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: allow-tls # Allow inbound TLS/SSL handshake packets to server name \"foo.bar.com\" from Pods with label \"app=client\". action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be"
},
{
"data": "from: podSelector: matchLabels: app: client l7Protocols: tls: sni: \"foo.bar.com\" name: drop-other # Drop all other inbound traffic (i.e., from Pods without label \"app=client\" or from external clients). action: Drop ``` sni: The `sni` field matches the TLS/SSL Server Name Indication (SNI) field in the TLS/SSL handshake process. Both exact matches and wildcards are supported, e.g. `.foo.com`, `.foo.*`, `foo.bar.com`. If not set, the rule matches all names. The following NetworkPolicy prevents applications from accessing unauthorized SSL/TLS server names: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: allow-tls-handshake-to-internal spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: egress-restriction: internal-tls-only egress: name: allow-dns # Allow outbound DNS requests. action: Allow ports: protocol: TCP port: 53 protocol: UDP port: 53 name: allow-tls-only # Allow outbound SSL/TLS handshake packets towards \"*.bar.com\". action: Allow # As the rule's \"to\" and \"ports\" are empty, which means it selects traffic to any network l7Protocols: # peer's any port of any transport protocol, all outbound SSL/TLS handshake packets towards tls: # other server names and non-SSL/non-TLS handshake packets will be automatically dropped, sni: \"*.bar.com\" # and subsequent rules will not be considered. ``` The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: allow-tls-only spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: tls-only # Allow inbound SSL/TLS handshake packets only. action: Allow # As the rule's \"from\" and \"ports\" are empty, which means it selects traffic from any network l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-SSL/non-TLS handshake tls: {} # packets will be automatically dropped, and subsequent rules will not be considered. ``` Layer 7 traffic that matches the NetworkPolicy will be logged in an event triggered log file (`/var/log/antrea/networkpolicy/l7engine/eve-YEAR-MONTH-DAY.json`). Logs are categorized by event_type. The event type for allowed traffic is `http`, for dropped traffic it is `alert`. If `enableLogging` is set for the rule, dropped packets that match the rule will also be logged in addition to the event with event type `packet`. Below are examples for allow, drop, packet scenarios. 
Allow ingress from client (10.10.1.8) to web (10.10.1.7/public/*) ```json { \"timestamp\": \"2024-02-22T21:26:07.074791+0000\", \"flow_id\": 757085628206447, \"in_iface\": \"antrea-l7-tap0\", \"event_type\": \"http\", \"vlan\": [1], \"src_ip\": \"10.10.1.8\", \"src_port\": 44132, \"dest_ip\": \"10.10.1.7\", \"dest_port\": 80, \"proto\": \"TCP\", \"tx_id\": 0, \"http\": { \"hostname\": \"10.10.1.7\", \"url\": \"/public/main.html\", \"httpuseragent\": \"Wget/1.21.1\", \"httpcontenttype\": \"text/html\", \"http_method\": \"GET\", \"protocol\": \"HTTP/1.1\", \"status\": 404, \"length\": 153 } } ``` Deny ingress from client (10.10.1.5) to web (10.10.1.4/admin) ```json { \"timestamp\": \"2023-03-09T20:00:28.210821+0000\", \"flow_id\": 627175734391745, \"in_iface\": \"antrea-l7-tap0\", \"event_type\": \"alert\", \"vlan\": [ 1 ], \"src_ip\": \"10.10.1.5\", \"src_port\": 43352, \"dest_ip\": \"10.10.1.4\", \"dest_port\": 80, \"proto\": \"TCP\", \"alert\": { \"action\": \"blocked\", \"gid\": 1, \"signature_id\": 1, \"rev\": 0, \"signature\": \"Reject by AntreaClusterNetworkPolicy:test-l7-ingress\", \"category\": \"\", \"severity\": 3, \"tenant_id\": 1 }, \"http\": { \"hostname\": \"10.10.1.4\", \"url\": \"/admin\", \"httpuseragent\": \"curl/7.74.0\", \"http_method\": \"GET\", \"protocol\": \"HTTP/1.1\", \"length\": 0 }, \"app_proto\": \"http\", \"flow\": { \"pkts_toserver\": 3, \"pkts_toclient\": 1, \"bytes_toserver\": 284, \"bytes_toclient\": 74, \"start\": \"2023-03-09T20:00:28.209857+0000\" } } ``` Additional packet log when `enableLogging` is set ```json { \"timestamp\": \"2023-03-09T20:00:28.225016+0000\", \"flow_id\": 627175734391745, \"in_iface\": \"antrea-l7-tap0\", \"event_type\": \"packet\", \"vlan\": [ 1 ], \"src_ip\": \"10.10.1.4\", \"src_port\": 80, \"dest_ip\": \"10.10.1.5\", \"dest_port\": 43352, \"proto\": \"TCP\", \"packet\": \"/lhtPRglzmQvxnJoCABFAAAoUGYAAEAGFE4KCgEECgoBBQBQqVhIGzbi/odenlAUAfsR7QAA\", \"packet_info\": { \"linktype\": 1 } } ``` This feature is currently only supported for Nodes running Linux."
}
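To follow these events live on a Node, something along the following lines can be used; it assumes `jq` is installed on the Node, and the log path is the one documented above:
```bash
# Run on the Kubernetes Node hosting the antrea-agent.
tail -F "/var/log/antrea/networkpolicy/l7engine/eve-$(date +%Y-%m-%d).json" \
  | jq 'select(.event_type == "alert" or .event_type == "http")'
```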
] | {
"category": "Runtime",
"file_name": "antrea-l7-network-policy.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "| Autho | | | -- | | | Date | 2021-02-19 | | Email | | In order to improve the function of iSulad, it is necessary to support the Native network. Usually, testing and development scenarios basically start containers through the client. Therefore, this article details iSulad's Native network design. The CRI interface implements the container network capability flexibly and extensibly by encapsulating the CNI. So, can the native network of isulad also be implemented based on the capabilities of CNI? certainly! This design idea has many advantages: High flexibility and scalability; Relying on open source network plug-ins, greatly reducing the workload; Minimal impact on current iSulad architecture; In line with current industry standards, it can better expand the ecology of iSulad; We divide the network design of iSulad into four modules: network api module: provides the API interface of the entire network component, provides network capabilities for the container (the ability to create, delete the network, join the container, exit the network, etc.), and determine the network type through the `type` parameter. Adaptor module: provides different network type implementations. Currently, it supports two network types: `CRI` and `native`, which correspond to the network implementation of the CRI interface and the local network capabilities of the client respectively. cni-operator module: encapsulates the libcni module, provides a more reasonable and friendly network management interface for the upper layer, and is responsible for the combined adaptation of user configuration and network configuration; libcni module: Based on the existing clibcni self-developed project, it is upgraded and adapted to the latest cni 0.4.0 version, and provides new mechanisms and functions such as check and cache; The overall structure is as follows: ```mermaid flowchart TD classDef impStyle fill:#c19,stroke:#686,stroke-width:2px,color:#fff; classDef apiStyle stroke:#216,stroke-width:2px,stroke-dasharray: 5 5; subgraph networkAPI B(attach network) C(detach network) D(config create) E(config remove) F(config ls) G(config inspect) H(init) end networkAPI:::apiStyle subgraph CRI-adaptor XA[detach network] XB[attach network] XC[......] end CRI-adaptor:::impStyle subgraph Native-adaptor YA[detach network] YB[attach network] YC[config apis...] 
end subgraph config-apis ADA[create] ADB[remove] ADC[ls] ADD[inspect] end config-apis:::apiStyle subgraph bridge-driver BDA[bridge create] BDB[bridge remove] BDC[bridge ls] BDD[bridge inspect] end subgraph other-drivers ODA[implement driver] end YC --> config-apis config-apis --> bridge-driver config-apis --> other-drivers Native-adaptor:::impStyle subgraph cni-operator J[attach network] K[detach network] end subgraph libcni M[AddNetworkList] N[CheckNetworkList] O[DelNetworkList] end networkAPI --> CRI-adaptor networkAPI --> Native-adaptor CRI-adaptor --> cni-operator Native-adaptor --> cni-operator cni-operator --> libcni OA[CRI pod lifecycle] --> |call| networkAPI OB[Container lifecycle] --> |call| networkAPI ``` The sequence diagram is as follows: ```mermaid sequenceDiagram participant adaptor participant configsDatabase participant cniManager adaptor ->> configsDatabase: cniManager ->> configsDatabase: adaptor ->> cniManager: API ``` ```bash src/daemon/modules/api/network_api.h src/daemon/modules/network/ CMakeLists.txt cni_operator CMakeLists.txt cni_operate.c cni_operate.h libcni CMakeLists.txt invoke CMakeLists.txt libcni_errno.c libcni_errno.h libcni_exec.c libcni_exec.h libcniresultparse.c libcniresultparse.h libcni_api.c libcni_api.h libcni_cached.c libcni_cached.h libcni_conf.c libcni_conf.h libcniresulttype.c libcniresulttype.h cri adaptor_cri.c adaptor_cri.h CMakeLists.txt native adaptor_native.c adaptor_native.h CMakeLists.txt network.c"
},
{
"data": "```` ````c // support network type struct attachnetconf { char *name; char *interface; }; typedef struct networkapiconf_t { char *name; char *ns; char *pod_id; char *netns_path; char *default_interface; // attach network panes config struct { struct attachnetconf extral_nets; sizet extralnets_len; }; // external args; jsonmapstring_string *args; // extention configs: map<string, string> map_t *annotations; } networkapiconf; struct networkapiresult { char *name; char *interface; char ips; sizet ipslen; char *mac; }; typedef struct networkapiresultlistt { struct networkapiresult items; size_t len; size_t cap; } networkapiresult_list; ```` Support up to 1024 CNI profiles; Supports two network types: `native` and `cri`; Interface input parameter type: `networkapiconf`; Network operation result types: `networkapiresultlist` and `networkapi_result`; ````c // 1. Network module initialization interface; bool networkmoduleinit(const char network_plugin, const char cachedir, const char *confdir, const char* bin_path); // 2. The container is connected to the network plane interface; int networkmoduleattach(const networkapiconf conf, const char type, networkapiresult_list result); // 3. Network check operation, which can be used to obtain the network configuration information of the container; int networkmodulecheck(const networkapiconf conf, const char type, networkapiresult_list result); // 4. The container exits the interface from the network plane; int networkmoduledetach(const networkapiconf conf, const char type); // 5. Network configuration generation interface; int networkmoduleconfcreate(const char *type, const networkcreate_request *request, networkcreateresponse response); // 6. Network configuration view interface; int networkmoduleconfinspect(const char *type, const char *name, char **networkjson); // 7. List interfaces for network profiles; int networkmoduleconflist(const char *type, const struct filtersargs filters, network_network_info networks, sizet *networkslen); // 8. Network configuration file delete interface; int networkmoduleconfrm(const char *type, const char *name, char **resname); // 9. Interface to check if the network module is ready; bool networkmoduleready(const char *type); // 10. Network module configuration update interface; int networkmoduleupdate(const char *type); // 11. Interface for resource cleanup when network module exits; void networkmoduleexit(); // 12. Set the portmapping settings of annotations; int networkmoduleinsertportmapping(const char *val, networkapi_conf *conf); // 13. Set the bandwidth setting of annotations; int networkmoduleinsertbandwidth(const char *val, networkapi_conf *conf); // 14. Set the iprange setting of annotations; int networkmoduleinsertiprange(const char *val, networkapi_conf *conf); // 15. Check whether the network module network exists or not; int networkmoduleexist(const char type, const char name); ```` Provides the upper layer with the basic capabilities of CNI, and completes functions such as building, deleting, and checking the CNI network according to the incoming CNI network configuration information. The current libcni module has provided the capability of the `v0.3.0` version, it needs to be upgraded to `v0.4.0` after iteration, and `v0.4.0` needs to support the `check` and `cache` mechanisms. The part marked by the red part in the figure below. 
```mermaid graph TD classDef unFinish fill:#c19,stroke:#216,stroke-width:2px,color:#fff,stroke-dasharray: 5 5; O(libcni) --> X O(libcni) --> Y subgraph test X[exec] X --> A(AddNetworkList) X --> B(CheckNetworkList) X --> C(DelNetworkList) X --> F(ValidateNetworkList) end subgraph cache Y[cache] Y --> D(GetNetworkListCachedResult) Y --> E(GetNetworkListCachedConfig) end A --> G[executor] B --> G[executor] C --> G[executor] F --> G[executor] G --> H(cni plugin) D --> P[cache API] E --> P[cache API] P --> R[addCache] P --> S[deleteCache] B:::unFinish F:::unFinish cache:::unFinish ``` You can see how the CRI adapter modules are designed in . You can see how the native network adapter modules are designed in . You can see how the cni operator modules are designed in ."
}
] | {
"category": "Runtime",
"file_name": "native_network_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
} |
[
{
"data": "imgadm help [<command>] help on commands imgadm sources [<options>] list and edit image sources imgadm avail [<filters>] list available images imgadm show <uuid|docker-repo-tag> show manifest of an available image imgadm import [-P <pool>] <uuid|docker repo:tag> import image from a source imgadm install [-P <pool>] -m <manifest> -f <file> import from local image data imgadm list [<filters>] list installed images imgadm get [-P <pool>] <uuid> info on an installed image imgadm update [<uuid>...] update installed images imgadm delete [-P <pool>] <uuid> remove an installed image imgadm ancestry [-P <pool>] <uuid> show ancestry of an installed image imgadm vacuum [-n] [-f] delete unused images imgadm create <vm-uuid> [<manifest-field>=<value> ...] ... create an image from a VM imgadm publish -m <manifest> -f <file> <imgapi-url> publish an image to an image repo The imgadm tool allows you to import and manage virtual machine images on a SmartOS system. Virtual machine images (also sometimes referred to as 'datasets') are snapshots of pre-installed virtual machines which are prepared for generic and repeated deployments. Virtual machine images are made up of two primary components: A compressed ZFS snapshot, and a manifest (metadata) which describes the contents of that file. A ZFS snapshot may be of either a ZFS filesystem (for OS-level virtual machines, a.k.a. zones), or a ZFS zvol (for KVM virtual machines). The manifest is a JSON serialized description. The identifier for an image is its UUID. Most commands operate on images by UUID. Image API servers that support channels can be configured as sources by specifying URLs with the 'channel=<channel name>' parameter. The 'import' command also allows a '-C' argument to override all sources and use the supplied channel. -h, --help Print tool (or subcommand) help and exit. --version Print the imgadm version and exit. -v, --verbose Verbose logging: trace-level logging, stack on error. See the IMGADM\\_LOG\\_LEVEL=<level> environment variable. -E On error, emit a structured JSON error object as the last line of stderr output. The following commands are supported: imgadm help [<command>] Print general tool help or help on a specific command. imgadm sources [<options>] List and edit image sources. An image source is a URL to a server implementing the IMGAPI. The default IMGAPI is https://images.smartos.org Image API server channels can be specified by including a '?channel=<channel name>' parameter as part of the supplied <url>. Usage: imgadm sources [--verbose|-v] [--json|-j] # list sources imgadm sources -a <url> [-t <type>] # add a source imgadm sources -d <url> # delete a source imgadm sources -e # edit sources imgadm sources -c # check current sources Options: -h, --help Show this help. -v, --verbose Verbose output. List source URL and TYPE. -j, --json List sources as JSON. -a <source> Add a source. It is appended to the list of sources. -d <source> Delete a source. -e Edit sources in an editor. -c, --check Ping check all sources. -t <type>, --type=<type> The source type for an added source. One of \"imgapi\" (the default), \"docker\" (deprecated), or \"dsapi\" (deprecated). -k, --insecure Allow insecure (no server certificate checking) access to the added HTTPS source URL. -f, --force Force no \"ping check\" on new source URLs. By default a ping check is done against new source URLs to attempt to ensure they are a running IMGAPI server. 
Examples: imgadm sources -a https://images.smartos.org imgadm avail [<filters>] List available images from all sources. This is not supported for Docker sources. Usage: imgadm avail [<options>...] Options: -h, --help Show this help. -j, --json JSON output. -H Do not print table header"
},
{
"data": "-o FIELD,... Specify fields (columns) to output. Default is \"uuid,name,version,os,published\". -s FIELD,... Sort on the given fields. Default is \"published_at,name\". Fields for \"-o\" and \"-s\": Any of the manifest fields (see `imgadm avail -j` output) plus the following computed fields for convenience. publisheddate just the date part of `publishedat` published `published_at` with the milliseconds removed source the source URL, if available clones the number of clones (dependent images and VMs) size the size, in bytes, of the image file In addition if this is a docker image, then the following: docker_id the full docker id string dockershortid the short 12 character docker id docker_repo the docker repo from which this image originates, if available docker_tags a JSON array of docker repo tags, if available imgadm show <uuid|docker-repo-tag> Show the manifest for an available image. This searches each imgadm source for an available image with this UUID and prints its manifest (in JSON format). imgadm import [-P <pool>] <uuid|docker repo:tag> Import an image from a source IMGAPI. This finds the image with the given UUID in the configured sources and imports it into the local system. Options: -h, --help Show this help. -q, --quiet Disable progress bar. -C <channel> Override the channel used for all sources when looking for images. -P <pool> Name of zpool in which to look for the image. Default is \"zones\". -S <url> Specify the URL from which to import the image. The URL may include a '?channel=' parameter, but note that the -C argument, if used, will take precedence. imgadm install [-P <pool>] -m <manifest> -f <file> Install an image from local manifest and image data files. Options: -h, --help Print this help and exit. -m <manifest> Required. Path to the image manifest file. -f <file> Required. Path to the image file to import. -P <pool> Name of zpool in which to import the image. Default is \"zones\". -q, --quiet Disable progress bar. imgadm list [<filters>] List locally installed images. Usage: imgadm list [<options>...] [<filters>] Options: -h, --help Show this help. -j, --json JSON output. -H Do not print table header row. -o FIELD,... Specify fields (columns) to output. Default is \"uuid,name,version,os,published\". -s FIELD,... Sort on the given fields. Default is \"published_at,name\". --docker Limit and format list similar to `docker images`. Filters: FIELD=VALUE exact string match FIELD=true|false boolean match FIELD=~SUBSTRING substring match Fields for filtering, \"-o\" and \"-s\": Any of the manifest fields (see `imgadm list -j` output) plus the following computed fields for convenience. publisheddate just the date part of `publishedat` published `published_at` with the milliseconds removed source the source URL, if available clones the number of clones (dependent images and VMs) size the size, in bytes, of the image file In addition if this is a docker image, then the following: docker_id the full docker id string dockershortid the short 12 character docker id docker_repo the docker repo from which this image originates, if available docker_tags a JSON array of docker repo tags, if available imgadm get [-P <pool>] <uuid> Get local information for an installed image (JSON format). Options: -r Recursively gather children (child snapshots and dependent clones). -P <pool> Name of zpool in which to look for the image. Default is \"zones\". imgadm update [<uuid>...] Update currently installed images, if necessary. 
This does not yet support images from a \"docker\" source. Images that are installed without \"imgadm\" (e.g. via \"zfs recv\") will not have cached image manifest information. Also, images installed prior to imgadm version"
},
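A typical import workflow built from the subcommands above might look like the following; the UUID is only a placeholder:
```bash
imgadm sources -c                                     # ping-check the configured sources
imgadm avail -o uuid,name,version,os,published        # browse what the sources offer
imgadm import 01234567-89ab-cdef-0123-456789abcdef    # fetch one image into the local zpool
imgadm list -o uuid,name,version,clones               # confirm it is installed locally
```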
{
"data": "will not have a \"@final\" snapshot (preferred for provisioning and require for incremental image creation, via \"imgadm create -i ...\"). This command will attempt to retrieve manifest information and to ensure images have the correct \"@final\" snapshot, using info from current image sources. If no \"<uuid>\" is given, then update is run for all installed images. Options: -h, --help Print this help and exit. -n, --dry-run Do a dry-run (do not actually make changes). imgadm ancestry [-P <pool>] <uuid> List the ancestry (the \"origin\" chain) for the given incremental image. Usage: imgadm ancestry [<options>...] <uuid> Options: -h, --help Show this help. -j, --json JSON output. -H Do not print table header row. -o FIELD,... Specify fields (columns) to output. Default is \"uuid,name,version,published\". -P <pool> Name of zpool in which to look for the image. Default is \"zones\". Fields for \"-o\": Any of the manifest fields (see `imgadm list -j` output) plus the following computed fields for convenience. publisheddate just the date part of `publishedat` published `published_at` with the milliseconds removed source the source URL, if available clones the number of clones (dependent images and VMs) size the size, in bytes, of the image file In addition if this is a docker image, then the following: docker_id the full docker id string dockershortid the short 12 character docker id docker_repo the docker repo from which this image originates, if available docker_tags a JSON array of docker repo tags, if available imgadm delete [-P <pool>] <uuid> Delete an image from the local zpool. The removal can only succeed if the image is not actively in use by a VM -- i.e. has no dependent ZFS children. \"imgadm get -r <uuid>\" can be used to show dependent children. Options: -h, --help Print this help and exit. -P <pool> Name of zpool from which to delete the image. Default is \"zones\". imgadm vacuum [-n] [-f] delete unused images Remove unused images -- i.e. not used for any VMs or child images. Usage: imgadm vacuum [<options>] Options: -h, --help Show this help. -n, --dry-run Do a dry-run (do not actually make changes). -f, --force Force deletion without prompting for confirmation. imgadm create [<options>] <vm-uuid> [<manifest-field>=<value> ...] Create an image from the given VM and manifest data. There are two basic calling modes: (1) a prepare-image script is provided (via \"-s\") to have imgadm automatically run the script inside the VM before image creation; or (2) the given VM is already \"prepared\" and shutdown. The former involves snapshotting the VM, running the prepare-image script (via the SmartOS mdata operator-script facility), creating the image, rolling back to the pre-prepared state. This is preferred because it is (a) easier (fewer steps to follow for imaging) and (b) safe (gating with snapshot/rollback ensures the VM is unchanged by imaging -- the preparation script is typically destructive. With the latter, one first creates a VM from an existing image, customizes it, runs \"sm-prepare-image\" (or equivalent for KVM guest OSes), shuts it down, runs this \"imgadm create\" to create the image file and manifest, and finally destroys the \"proto\" VM. With either calling mode, the image can optionally be published directly to a given image repository (IMGAPI) via \"-p URL\". This can also be done separately via \"imgadm publish\". 
Note: When creating an image from a VM with brand 'bhyve', 'lx', or 'kvm', the resulting manifest will have requirements.brand set to match the brand of the source VM. If this is undesirable, the requirements.brand"
},
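Before removing images, the dependency-related subcommands above can be combined as in this sketch; the UUID is a placeholder:
```bash
imgadm get -r 01234567-89ab-cdef-0123-456789abcdef    # show dependent ZFS children, if any
imgadm ancestry 01234567-89ab-cdef-0123-456789abcdef  # origin chain of an incremental image
imgadm delete 01234567-89ab-cdef-0123-456789abcdef
imgadm vacuum -n                                      # dry-run removal of unused images
```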
{
"data": "can be set (optionally empty if the resulting image should not have this value set) in the manifest passed with the '-m' option. Options: -h, --help Print this help and exit. -m <manifest> Path to image manifest data (as JSON) to include in the created manifest. Specify \"-\" to read manifest JSON from stdin. -o <path>, --output-template <path> Path prefix to which to save the created manifest and image file. By default \"NAME-VER.imgmanifest and \"NAME-VER.zfs[.EXT]\" are saved to the current dir. If \"PATH\" is a dir, then the files are saved to it. If the basename of \"PATH\" is not a dir, then \"PATH.imgmanifest\" and \"PATH.zfs[.EXT]\" are created. -c <comp>, --compression=<comp> One of \"none\", \"gzip\", \"bzip2\" or \"xz\" for the compression to use on the image file, if any. Default is \"none\". -i Build an incremental image (based on the \"@final\" snapshot of the source image for the VM). --max-origin-depth <max-origin-depth> Maximum origin depth to allow when creating incremental images. E.g. a value of 3 means that the image will only be created if there are no more than 3 parent images in the origin chain. -s <prepare-image-path> Path to a script that is run inside the VM to prepare it for imaging. Specifying this triggers the full snapshot/prepare-image/create-image/rollback automatic image creation process (see notes above). There is a contract with \"imgadm\" that a prepare-image script must follow. See the \"PREPARE IMAGE SCRIPT\" section in \"man imgadm\". -p <url>, --publish <url> Publish directly to the given image source (an IMGAPI server). You may not specify both \"-p\" and \"-o\". -q, --quiet Disable progress bar in upload. Arguments: <uuid> The UUID of the prepared and shutdown VM from which to create the image. <manifest-field>=<value> Zero or more manifest fields to include in in the created manifest. The \"<value>\" is first interpreted as JSON, else as a string. E.g. 'disabled=true' will be a boolean true and both 'name=foo' and 'name=\"true\"' will be strings. Examples: echo '{\"name\": \"foo\", \"version\": \"1.0.0\"}' \\ | imgadm create -m - -s /path/to/prepare-image \\ 5f7a53e9-fc4d-d94b-9205-9ff110742aaf imgadm create -s prep-image 5f7a53e9-fc4d-d94b-9205-9ff110742aaf \\ name=foo version=1.0.0 imgadm create -s prep-image 5f7a53e9-fc4d-d94b-9205-9ff110742aaf \\ name=foo version=1.0.0 -o /var/tmp imgadm create -s prep-image 5f7a53e9-fc4d-d94b-9205-9ff110742aaf \\ name=foo version=1.0.0 --publish https://images.example.com echo '{\"name\": \"foo\", \"version\": \"1.0.0\"}' \\ | imgadm create -m - 5f7a53e9-fc4d-d94b-9205-9ff110742aaf imgadm publish [<options>] -m <manifest> -f <file> <imgapi-url> Publish an image (local manifest and data) to a remote IMGAPI repo. Typically the local manifest and image file are created with \"imgadm create ...\". Note that \"imgadm create\" supports a \"-p/--publish\" option to publish directly in one step. Limitation: This does not yet support authentication that some IMGAPI image repositories require. Options: -h, --help Print this help and exit. -m <manifest> Required. Path to the image manifest to import. -f <file> Required. Path to the image file to import. -q, --quiet Disable progress bar. Image creation basically involves a `zfs send` of a customized and prepared VM to a file for use in creating new VMs (along with a manifest file that captures metadata about the image). \"Customized\" means software in the VM is installed and setup as desired. \"Prepared\" means that the VM is cleaned up (e.g. 
host keys removed, log files removed or truncated, hardcoded IP information removed) and tools required for VM creation (e.g. zoneinit in SmartOS VMs, guest tools for Linux and Windows OSes) are laid"
},
{
"data": "As described above \"imgadm create\" has two modes: one where a prepare-image script is given for \"imgadm create\" to run (gated by VM snapshotting and rollback for safety); and another where one manually prepares and stops a VM before calling \"imgadm create\". This section describes prepare-image and guest requirements for the former. The given prepare-image script is run via the SmartOS mdata \"sdc:operator-script\" facility. This requires the guest tools in the VM to support \"sdc:operator-script\" (SmartOS zones running on SDC 7.0 platforms with OS-2515, from 24 Sep 2013, support this.) For orderly VM preparation, a prepare-image script must implement the following contract: The script starts out by setting: mdata-put prepare-image:state running On successful completion it sets: mdata-put prepare-image:state success On error it sets: mdata-put prepare-image:state error mdata-put prepare-image:error '... some error details ...' These are not required as, obviously, `imgadm create` needs to reliably handle a prepare-image script crash. However setting these enables `imgadm create` to fail fast. Shutdown the VM when done. Preparing a VM for imaging is meant to be a quick activity. By default there is a 5 minute timeout on state transitions: (VM booted) -> running -> success or error -> (VM stopped). Since version 3.0.0 imgadm has support for importing Docker images in importing images of `type=docker` from an IMGAPI source. Use the following to mimic `docker images`: imgadm list --docker and list all Docker images (including intermediate layers) with: imgadm list type=docker A subset of the full Docker \"image json\" metadata is stored as \"docker:*\" tags on the image. E.g. for the current \"busybox:latest\": ... \"tags\": { \"docker:id\": \"4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125\", \"docker:architecture\": \"amd64\", \"docker:repo\": \"docker.io/library/busybox\", \"docker:tag:buildroot-2014.02\": true, \"docker:tag:latest\": true, \"docker:config\": { \"Cmd\": [ \"/bin/sh\" ], \"Entrypoint\": null, \"Env\": [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ], \"WorkingDir\": \"\" } ... Version 3 of \"imgadm\" added Docker image support. This involved a significant refactoring of import and source handling leading to a few compatibility differences with previous versions. These are: \"imgadm sources -j\" is an object for each source; previously each listed sources we just a string (the URL). \"imgadm sources -e\" includes type in edited lines. The imgadm tool was re-written for version 2. There are a few minor compatibility differences with earlier imgadm. These are: \"imgadm show <uuid>\" no longer includes the non-standard \"_url\" field. The equivalent data is now in the \"source\" field of \"imgadm info <uuid>\". \"imgadm update\" used to be required to fetch current available image manifests from the image source(s). That is no longer required. However, \"imgadm update\" remains (for backwards compat) and now fetches image manifests for locally install images that were not installed by imgadm. If there are none, then \"imgadm update\" is a no-op. \"imgadm list\" default output columns have changed from \"UUID, OS, PUBLISHED, URN\" to \"UUID, NAME, VERSION, OS, PUBLISHED\". The image \"urn\" field is now deprecated, hence the change. The old output can be achieved via: \"imgadm list -o uuid,os,published,urn\" The internal database dir has changed from \"/var/db/imgadm\" to \"/var/imgadm\". 
One side-effect of this is that one can no longer edit image sources by editing \"/var/db/imgadm/sources.list\". The \"imgadm sources\" command should now be used for this. \"imgadm info <uuid>\" output no longer includes the (previously undocumented) \"volume\" key. IMGADM_INSECURE Set to 1 to allow an imgadm source URL that uses HTTPS to a server without a valid SSL certificate. IMGADM_LOG_LEVEL Set the level at which imgadm will log to stderr. Supported levels are \"trace\", \"debug\", \"info\", \"warn\" (default), \"error\", \"fatal\". REQ_ID If provided, this value is used for imgadm's logging. The following exit values are returned: 0 Successful completion. 1 An error occurred. 2 Usage error. 3 \"ImageNotInstalled\" error. Returned"
}
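A minimal skeleton of a prepare-image script that follows the contract described above is sketched below; the cleanup steps and the exact shutdown invocation are guest-specific assumptions, and only the mdata-put calls come from the contract itself:
```bash
#!/bin/bash
# Sketch only: follows the prepare-image contract described above.
set -o errexit
mdata-put prepare-image:state running
trap 'mdata-put prepare-image:state error; mdata-put prepare-image:error "preparation failed"' ERR

# ... guest-specific cleanup goes here (truncate logs, remove host keys, etc.) ...

trap - ERR
mdata-put prepare-image:state success
poweroff   # guest-specific; the contract only requires that the VM shuts down when done
```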
] | {
"category": "Runtime",
"file_name": "imgadm.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
} |
[
{
"data": "This enhancement adds revision counter disable support globally and individually per each volume with a logic based on latest modified time and total block counts of volume-head image file to select the most updated replica rather than the biggest revision counter for salvage recovering. https://github.com/longhorn/longhorn/issues/508 By default 'DisableRevisionCounter' is 'false', but Longhorn provides an optional for user to disable it. Once user set 'DisableRevisionCounter' to 'true' globally or individually, this will improve Longhorn data path performance. And for 'DisableRevisionCounter' is 'true', Longhorn will keep the ability to find the most suitable replica to recover the volume when the engine is faulted(all the replicas are in 'ERR' state). Also during Longhorn Engine starting, with head file information it's unlikely to find out out of synced replicas. So will skip the check. Currently Longhorn Replica will update revision counter to the 'revision.counter' file on every write operation, this impacts the performance a lot. And the main purpose of this revision counter is to pick the most updated replica to recover the volume. Also every time when Longhorn Engine starts, it will check all the replicas' revision counter to make sure the consistency. Having an option to disable the revision counter with a new logic which selects the most updated replica based on the last modified time and the head file size of the replica, and removing the revision counter mechanism. These can improve the Longhorn performance and keep the salvage feature as the same. Longhorn Engine will not check the synchronization of replicas during the starting time. Some user has concern about Longhorn's performance. And with this 'DisableRevisionCounter' option, Longhorn's performance will be improved a lot. In Longhorn UI 'Setting' 'General', by default 'DisableRevisionCounter' is 'false', user can set 'DisableRevisionCounter' to 'true'. This only impacts the UI setting. Or from StorageClass yaml file, user can set 'parameters' 'revisionCounterDisabled' to true. And all the volume created based on this storageclass will have the same 'revisionCounterDisabled' setting. User can also set 'DisableRevisionCounter' for each individual volumes created by Longhorn UI this individual setting will over write the global setting. Once the volume has 'DisableRevisionCounter' to 'true', there won't be revision counter file. And the 'Automatic salvage' is 'true', when the engine is faulted, the engine will pick the most suitable replica as 'Source of Truth' to recover the volume. Add 'DisableRevisionCounter' global setting in Longhorn UI 'Setting' 'General'. Add 'DisableRevisionCounter' individual setting in Longhorn UI 'Volume' 'Create Volume' for each volume. For CSI driver, need to update volume creation API. Add new parameter to 'ReplicaProcessCreate' to disable revision counter. And new parameter to 'EngineProcessCreate' to indicate the salvage requested mode for recovering. Update Longhorn Engine Replica proto 'message Replica' struct with two new fields 'lastmodifiedtime' and 'headfilesize' of the head file. This is for Longhorn Engine Control to get these information from Longhorn Engine Replica. This enhancement has two phases, the first phase is to enable the setting to disable revision counter. The second phase is the implement the new gRPC APIs with new logic for salvage. And for the API compatibility issues, always check the 'EngineImage.Statue.cliAPIVersion' before making the call. 
Add 'Volume.Spec.RevisionCounterDisabled', 'Replica.Spec.RevisionCounterDisabled', and 'Engine.Spec.RevisionCounterDisabled' to the volume, replica, and engine objects. Once 'RevisionCounterDisabled' is 'true', the volume controller will set 'Volume.Spec.RevisionCounterDisabled' to true, and 'Replica.Spec.RevisionCounterDisabled' and 'Engine.Spec.RevisionCounterDisabled'"
},
{
"data": "will set to true. And during 'ReplicaProcessCreate' and 'EngineProcessCreate' , this will be passed to engine replica process and engine controller process to start a replica and controller without revision counter. During 'ReplicaProcessCreate' and 'EngineProcessCreate', if 'Replica.Spec.RevisionCounterDisabled' or 'Engine.Spec.RevisionCounterDisabled' is true, it will pass extra parameter to engine replica to start replica without revision counter or to engine controller to start controller without revision counter support, otherwise keep it the same as current and engine replica will use the default value 'false' for this extra parameter. This is the same as the engine controller to set the 'salvageRequested' flag. Add 'RevisionCounterDisabled' in 'ReplicaInfo', when engine controller start, it will get all replica information. For engine controller starting cases: If revision counter is not disabled, stay with the current logic. If revision counter is disabled, engine will not check the synchronization of the replicas. If unexpected case (engine controller has revision counter disabled but any of the replica doesn't, or engine controller has revision counter enabled, but any of the replica doesn't), engine controller will log this as error and mark unmatched replicas to 'ERR'. Once the revision counter has been disabled. Add 'SalvageRequested' in 'InstanceSpec' and 'SalvageExecuted' in 'InstanceStatus' to indicate salvage recovering status. If all replicas are failed, 'Volume Controller' will set 'Spec.SalvageRequested' to 'true'. In 'Engine Controller' will pass 'Spec.SalvageRequested' to 'EngineProcessCreate' to trigger engine controller to start with 'salvageRequested' is 'true' for the salvage logic. The salvage logic gets details of replicas to get the most suitable replica for salvage. Based on 'volume-head-xxx.img' last modified time, to get the latest one and any one within 5 second can be put in the candidate replicas for now. Compare the head file size for all the candidate replicas, pick the one with the most block numbers as the 'Source of Truth'. Only mark one candidate replica to 'RW' mode, the rest of replicas would be marked as 'ERR' mode. Once this is done, set 'SalvageExecuted' to 'true' to indicate the salvage is done and change 'SalvageRequested' back to false. Disable revision counter option should return error if only Longhorn Manager got upgraded, not Longhorn Engine. This is when user trying to disable revision counter with new Longhorn Manager but the Longhorn Engine is still the over version which doesn't have this feature support. In this case, UI will shows error message to user. Revision counter can be disabled globally via UI by set 'Setting' 'General' 'DisableRevisionCounter' to 'true'. It can be set locally per volume via UI by set 'Volume' 'Create Volume''DisableRevisionCounter' to true. It can be set via 'StorageClass', and every PV created by this 'StorageClass' will inherited the same setting: ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: best-effort-longhorn provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"1\" disableRevisionCounter: \"true\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" ``` Disable the revision counter. Create a volume with 3 replicas. Attach the volume to a node, and start to write data to the volume. Kill the engine process during the data writing. Verify the volume still works fine. Repeat the above test multiple times. 
Deploy Longhorn with image v1.0.2 and upgrade only the Longhorn Manager; the salvage function should still work. Then upgrade the Longhorn Engine; the revision counter disable feature should become available."
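As an illustrative aside (not part of the original proposal), the candidate-selection rule described above can be sketched in Go as follows; the `replicaInfo` type and field names are assumptions for illustration, not Longhorn's actual types:

```go
package salvage

import "time"

// replicaInfo is an illustrative stand-in for the metadata the engine
// controller collects from each replica: the modification time and size
// of its volume-head-xxx.img file.
type replicaInfo struct {
	Name         string
	LastModified time.Time // mtime of volume-head-xxx.img
	HeadFileSize int64     // size of volume-head-xxx.img
}

// pickSourceOfTruth returns the replica to salvage from: the newest head
// file defines the reference time, every replica modified within 5 seconds
// of it is a candidate, and the candidate with the largest head file
// (most blocks) wins.
func pickSourceOfTruth(replicas []replicaInfo) (best replicaInfo) {
	var latest time.Time
	for _, r := range replicas {
		if r.LastModified.After(latest) {
			latest = r.LastModified
		}
	}
	for _, r := range replicas {
		if latest.Sub(r.LastModified) <= 5*time.Second && r.HeadFileSize > best.HeadFileSize {
			best = r
		}
	}
	return best
}
```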
}
] | {
"category": "Runtime",
"file_name": "20200821-add-revision-counter-disable-support.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "CNI v1.0 has the following changes: non-List configurations are removed the `version` field in the `interfaces` array was redundant and is removed `/pkg/types/current` no longer exists This means that runtimes need to explicitly select a version they support. This reduces code breakage when revendoring cni into other projects and returns the decision on which CNI Spec versions a plugin supports to the plugin's authors. For example, your Go imports might look like ```go import ( cniv1 \"github.com/containernetworking/cni/pkg/types/100\" ) ``` CNI v0.4 has the following important changes: A new verb, \"CHECK\", was added. Runtimes can now ask plugins to verify the status of a container's attachment A new configuration flag, `disableCheck`, which indicates to the runtime that configuration should not be CHECK'ed No changes were made to the result type. The 0.3.0 specification contained a small error. The Result structure's `ip` field should have been renamed to `ips` to be consistent with the IPAM result structure definition; this rename was missed when updating the Result to accommodate multiple IP addresses and interfaces. All first-party CNI plugins (bridge, host-local, etc) were updated to use `ips` (and thus be inconsistent with the 0.3.0 specification) and most other plugins have not been updated to the 0.3.0 specification yet, so few (if any) users should be impacted by this change. The 0.3.1 specification corrects the `Result` structure to use the `ips` field name as originally intended. This is the only change between 0.3.0 and 0.3.1. Version 0.3.0 of the provides rich information about container network configuration, including details of network interfaces and support for multiple IP addresses. To support this new data, the specification changed in a couple significant ways that will impact CNI users, plugin authors, and runtime authors. This document provides guidance for how to upgrade: Note: the CNI Spec is versioned independently from the GitHub releases for this repo. For example, Release v0.4.0 supports Spec version v0.2.0, and Release v0.5.0 supports Spec v0.3.0. If you maintain CNI configuration files for a container runtime that uses CNI, ensure that the configuration files specify a `cniVersion` field and that the version there is supported by your container runtime and CNI plugins. Configuration files without a version field should be given version 0.2.0. The CNI spec includes example configuration files for and for . Consult the documentation for your runtime and plugins to determine what CNI spec versions they support. Test any plugin upgrades before deploying to production. You may find useful. Specifically, your configuration version should be the lowest common version supported by your plugins. This section provides guidance for upgrading plugins to CNI Spec Version 0.3.0. To provide the smoothest upgrade path, existing plugins should support multiple versions of the CNI spec. In particular, plugins with existing installed bases should add support for CNI spec version 1.0.0 while maintaining compatibility with older versions. To do this, two changes are required. First, a plugin should advertise which CNI spec versions it supports. It does this by responding to the `VERSION` command with the following JSON data: ```json { \"cniVersion\": \"1.0.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\" ] } ``` Second, for the `ADD` command, a plugin must respect the `cniVersion` field provided in the"
},
{
"data": "That field is a request for the plugin to return results of a particular format: If the `cniVersion` field is not present, then spec v0.2.0 should be assumed and v0.2.0 format result JSON returned. If the plugin doesn't support the version, the plugin must error. Otherwise, the plugin must return a in the format requested. Result formats for older CNI spec versions are available in the . For example, suppose a plugin, via its `VERSION` response, advertises CNI specification support for v0.2.0 and v0.3.0. When it receives `cniVersion` key of `0.2.0`, the plugin must return result JSON conforming to CNI spec version 0.2.0. Plugins written in Go may leverage the Go language packages in this repository to ease the process of upgrading and supporting multiple versions. CNI includes important changes to the Golang APIs. Plugins using these APIs will require some changes now, but should more-easily handle spec changes and new features going forward. For plugin authors, the biggest change is that `types.Result` is now an interface implemented by concrete struct types in the `types/100`, `types/040`, and `types/020` subpackages. Internally, plugins should use the latest spec version (eg `types/100`) structs, and convert to or from specific versions when required. A typical plugin will only need to do a single conversion when it is about to complete and needs to print the result JSON in the requested `cniVersion` format to stdout. The library function `types.PrintResult()` simplifies this by converting and printing in a single call. Additionally, the plugin should advertise which CNI Spec versions it supports via the 3rd argument to `skel.PluginMain()`. Here is some example code ```go import ( \"github.com/containernetworking/cni/pkg/skel\" \"github.com/containernetworking/cni/pkg/types\" current \"github.com/containernetworking/cni/pkg/types/100\" \"github.com/containernetworking/cni/pkg/version\" ) func cmdAdd(args *skel.CmdArgs) error { // determine spec version to use var netConf struct { types.NetConf // other plugin-specific configuration goes here } err := json.Unmarshal(args.StdinData, &netConf) cniVersion := netConf.CNIVersion // plugin does its work... // set up interfaces // assign addresses, etc // construct the result result := ¤t.Result{ Interfaces: []*current.Interface{ ... }, IPs: []*current.IPs{ ... }, ... } // print result to stdout, in the format defined by the requested cniVersion return types.PrintResult(result, cniVersion) } func main() { skel.PluginMain(cmdAdd, cmdDel, version.All) } ``` Alternately, to use the result from a delegated IPAM plugin, the `result` value might be formed like this: ```go ipamResult, err := ipam.ExecAdd(netConf.IPAM.Type, args.StdinData) result, err := current.NewResultFromResult(ipamResult) ``` Other examples of spec v0.3.0-compatible plugins are the This section provides guidance for upgrading container runtimes to support CNI Spec Version 0.3.0 and later. To provide the smoothest upgrade path and support the broadest range of CNI plugins, container runtimes should support multiple versions of the CNI spec. In particular, runtimes with existing installed bases should add support for CNI spec version 0.3.0 and later while maintaining compatibility with older versions. To support multiple versions of the CNI spec, runtimes should be able to call both new and legacy plugins, and handle the results from either. 
When calling a plugin, the runtime must request that the plugin respond in a particular format by specifying the `cniVersion` field in the JSON"
},
{
"data": "The plugin will then respond with a in the format defined by that CNI spec version, and the runtime must parse and handle this result. Plugins may respond with error indicating that they don't support the requested CNI version (see ), e.g. ```json { \"cniVersion\": \"0.2.0\", \"code\": 1, \"msg\": \"CNI version not supported\" } ``` In that case, the runtime may retry with a lower CNI spec version, or take some other action. Runtimes may discover which CNI spec versions are supported by a plugin, by calling the plugin with the `VERSION` command. The `VERSION` command was added in CNI spec v0.2.0, so older plugins may not respect it. In the absence of a successful response to `VERSION`, assume that the plugin only supports CNI spec v0.1.0. The Result for the `ADD` command in CNI spec version 0.3.0 and later includes a new field `interfaces`. An IP address in the `ip` field may describe which interface it is assigned to, by placing a numeric index in the `interface` subfield. However, some plugins which are v0.3.0 and later compatible may nonetheless omit the `interfaces` field and/or set the `interface` index value to `-1`. Runtimes should gracefully handle this situation, unless they have good reason to rely on the existence of the interface data. In that case, provide the user an error message that helps diagnose the issue. Container runtimes written in Go may leverage the Go language packages in this repository to ease the process of upgrading and supporting multiple versions. CNI includes important changes to the Golang APIs. Runtimes using these APIs will require some changes now, but should more-easily handle spec changes and new features going forward. For runtimes, the biggest changes to the Go libraries are in the `types` package. It has been refactored to make working with versioned results simpler. The top-level `types.Result` is now an opaque interface instead of a struct, and APIs exposed by other packages, such as the high-level `libcni` package, have been updated to use this interface. Concrete types are now per-version subpackages. The `types/current` subpackage contains the latest (spec v0.3.0) types. When up-converting older result types to spec v0.3.0 and later, fields new in spec v0.3.0 and later (like `interfaces`) may be empty. Conversely, when down-converting v0.3.0 and later results to an older version, any data in those fields will be lost. | From | 0.1 | 0.2 | 0.3 | 0.4 | 1.0 | |--|--|--|--|--|--| | To 0.1 | | | x | x | x | | To 0.2 | | | x | x | x | | To 0.3 | | | | | | | To 0.4 | | | | | | | To 1.0 | | | | | | Key: : lossless conversion <br> : higher-version output may have empty fields <br> x : lower-version output is missing some data <br> A container runtime should use `current.NewResultFromResult()` to convert the opaque `types.Result` to a concrete `current.Result` struct. It may then work with the fields exposed by that struct: ```go // runtime invokes the plugin to get the opaque types.Result // this may conform to any CNI spec version resultInterface, err := libcni.AddNetwork(ctx, netConf, runtimeConf) // upconvert result to the current 0.3.0 spec result, err := current.NewResultFromResult(resultInterface) // use the result fields .... for _, ip := range result.IPs { ... } ```"
}
] | {
"category": "Runtime",
"file_name": "spec-upgrades.md",
"project_name": "Container Network Interface (CNI)",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "| Author | | | | - | | Date | 2020-05-28 | | Email | | The registry module is located as follows: In addition to accepting calls from the Manager module, the Registry module also calls the store module to store the downloaded images and layers. In the process of pulling the image, the libcurl library is used to realize the interaction with the registry. The processing of some certificates and TLS algorithms that need to be used in the interaction process has been implemented in libcurl, and the path can be configured and passed in when using. Authentication in the process of interacting with the registry only needs to support basic authentication. The Bear during the interaction with the registry is generated by the registry, and the client only needs to save it and carry it in subsequent operations. It is necessary to implement the pull part of the protocol Docker Registry HTTP API V2, and the pull part of the OCI distribution spec. This article mainly describes the download of docker images. For the download of OCI images, please refer to the protocol. There are two formats for downloading the manifest of a docker image: Image Manifest Version 2, Schema 2 Image Manifest Version 2, Schema 1 The internal structure of the Registry is as follows: Registry apiv2 module: Implement the interaction protocol with the registry, including Schema1 and Schema2. Mainly to implement the specific protocols for downloading manifest, downloading config files, and downloading layers files, including the ping operation that needs to be performed when downloading. Registry module: Call the registry apiv2 module to download the mirror-related files, and after decompression/validity check, the interface of the store is registered as the mirror, and the interface of the Manager module is provided. Auth/certs module: Manage local username/password, certificate, shared private key and other data. http/https request module: use the libcurl library to encapsulate the http/https interaction process that interacts with the registry, including calling the auth/certs module to obtain/set related password certificates and other operations, as well as auth interacting with the registry Implementation of protocols such as authentication login operations. ````c typedef struct { / Username to use when logging in to the repository / char *username; / Password used to log in to the repository / char *password; }registry_auth; typedef struct { char *image_name; char *destimagename; registry_auth auth; bool skiptlsverify; bool insecure_registry; } registrypulloptions; typedef struct { / Mirror repository address / char *host; registry_auth auth; bool skiptlsverify; bool insecure_registry; } registryloginoptions; int registry_init(); int registrypull(registrypull_options *options); int registrylogin(registrylogin_options *options); int registrylogout(char *authfile_path, char *host); void freeregistrypulloptions(registrypull_options *options); void freeregistryloginoptions(registrylogin_options *options); ```` The Registry module calls the registry apiv2 module to download the mirror-related files, decompresses/checks the validity, and registers the interface of the store as the mirror, and provides a calling interface to the Manager module. Login operation: directly call the interface implementation provided by the registry apiv2 module. Logout operation; call the interface implementation provided by the auth/certs"
},
{
"data": "The following describes the process of pulling images. The protocol implementation in the interaction process is implemented by the registry apiv2 module: According to the incoming image name, assemble the address to obtain the manifest, and request the manifest from the mirror repository. The returned manifests format (assuming schema1 is returned): ````c 200 OK Docker-Content-Digest: <digest> Content-Type: <media type of manifest> { \"name\": <name>, \"tag\": <tag>, \"fsLayers\": [ { \"blobSum\": \"<digest>\" }, ... ] ], \"history\": <v1 images>,c \"signature\": <JWS> } ```` For the detailed meaning of the format, please refer to the link: https://docs.docker.com/registry/spec/manifest-v2-1/ If it is schema2, the returned json format sample is as follows: ````c { \"schemaVersion\": 2, \"mediaType\": \"application/vnd.docker.distribution.manifest.list.v2+json\", \"manifests\": [ { \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"size\": 7143, \"digest\": \"sha256:e692418e4cbaf90ca69d05a66403747baa33ee08806650b51fab815ad7fc331f\", \"platform\": { \"architecture\": \"ppc64le\", \"os\": \"linux\", } }, { \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"size\": 7682, \"digest\": \"sha256:5b0bcabd1ed22e9fb1310cf6c2dec7cdef19f0ad69efa1f392e94a4333501270\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\", \"features\": [ \"sse4\" ] } } ] } ```` For the detailed meaning of the format, please refer to the link: https://docs.docker.com/registry/spec/manifest-v2-2/ We only support the following MediaTypes of Manifests: application/vnd.docker.distribution.manifest.v2+json application/vnd.docker.distribution.manifest.v1+prettyjws. it needs to be able to download and use, signature will not be parsed for now. application/vnd.docker.distribution.manifest.v1+json application/vnd.docker.distribution.manifest.list.v2+json application/vnd.oci.image.manifest.v1+json supports OCI image After obtaining the manifest, parse the manifest to obtain the configuration of the image and the digest information of all layers. According to the digest of the mirror's configuration and the digest information of all layers, splicing out the url addresses for downloading all these data and downloading them (this can be downloaded concurrently). After the download is complete, you need to decompress the image layer data, decompress it into tar format data, and calculate the sha256 value. Then, it is necessary to parse the image configuration information, obtain the DiffID of the layer saved in the configuration, and compare it with the downloaded layer data for sha256 to verify its correctness. When verifying, take the rootfs.diff_ids[$i] value in the configuration (that is, the sha256 value of the $i-th layer), and take the downloaded data of the $i-th layer decompressed into tar format as the sha256 value. The two values need to be completely consistent. 
The values in the configuration are as follows: ````c \"RootFS\": { \"Type\": \"layers\", \"Layers\": [ \"sha256:e7ebc6e16708285bee3917ae12bf8d172ee0d7684a7830751ab9a1c070e7a125\", \"sha256:f934e33a54a60630267df295a5c232ceb15b2938ebb0476364192b1537449093\", \"sha256:bf6751561805be7d07d66f6acb2a33e99cf0cc0a20f5fd5d94a3c7f8ae55c2a1\", \"sha256:943edb549a8300092a714190dfe633341c0ffb483784c4fdfe884b9019f6a0b4\", \"sha256:c1bd37d01c89de343d68867518b1155cb297d8e03942066ecb44ae8f46b608a3\", \"sha256:cf612f747e0fbcc1674f88712b7bc1cd8b91cf0be8f9e9771235169f139d507c\", \"sha256:14dd68f4c7e23d6a2363c2320747ab88986dfd43ba0489d139eeac3ac75323b2\" ] } ```` Call the store module to register the downloaded layer data and configuration to generate an image, and add the corresponding name to the image. This module implements the protocol for interacting with the image repository (Docker Registry HTTP API V2) and the part of the protocol related to pulling images. The manifest forms include Image Manifest Version 2, Schema 1 and Schema 2. Here is a brief description of the protocol. For details, please refer to the detailed description in the link below: https://docs.docker.com/registry/spec/api/ https://docs.docker.com/registry/spec/manifest-v2-1/ https://docs.docker.com/registry/spec/manifest-v2-2/ And the OCI distribution spec based on the above protocol:"
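As an illustrative aside (not part of the original design text), the DiffID check described above can be reproduced by hand. This assumes the layer blob is gzip-compressed, which is the common case:

```shell
# Digest of the blob as downloaded; it must match the digest in the manifest.
sha256sum layer.tar.gz

# DiffID: sha256 of the *decompressed* tar stream; it must match
# rootfs.diff_ids[i] in the image configuration.
gunzip -c layer.tar.gz | sha256sum
```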
},
{
"data": "The interaction process with the mirror repository will first try to use the https protocol, and if it fails, it will continue to try to use the http protocol to interact (this behavior can be configured). There are some general configuration/interaction procedures for the interaction process, as follows: Ping the registry. In the process of login/download, you only need to ping the registry once. The main purpose of Ping is to obtain relevant information returned by the registry. ping request format: ````http GET /v2/ Host: <registry host> Authorization: <scheme> <token> ```` success: `200 OK` V2 protocol is not supported: `404 Not Found` not certified: ````http 401 Unauthorized WWW-Authenticate: <scheme> realm=\"<realm>\", ...\" Content-Length: <length> Content-Type: application/json; charset=utf-8 { \"errors:\" [ { \"code\": <error code>, \"message\": \"<error message>\", \"detail\": ... }, ... ] } ```` The header information returned by Ping must contain the fields: `Docker-Distribution-API-Version:registry/2.0` If it returns 401 Unauthenticated, the field WWW-Authenticate must be included to indicate where we should go for authentication. Perform authentication. First obtain the relevant authentication method information from the returned 401 unauthenticated http header field WWW-Authenticate. Authentication methods only support Basic and Bearer methods. Note that multiple WWW-Authenticate fields may be carried, indicating that multiple authentication methods are supported at the same time, and each field needs to be parsed and processed. In Basic mode, all subsequent requests need to carry the following header information: `Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l` Where QWxhZGRpbjpPcGVuU2VzYW1l is the base64 encoded string of the username and password in the format username:passord. Bearer token method, all subsequent requests need to carry the following header information: `Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiIsImtpZCI6IkJWM0Q6MkFWWjpVQjVaOktJQVA6SU5QTDo1RU42Ok40SjQ6Nk1XTzpEUktFOkJWUUs6M0ZKTDpQT1RMIn0.eyJpc3MiOiJhdXRoLmRvY2tlci5jb20iLCJzdWIiOiJCQ0NZOk9VNlo6UUVKNTpXTjJDOjJBVkM6WTdZRDpBM0xZOjQ1VVc6NE9HRDpLQUxMOkNOSjU6NUlVTCIsImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5jb20iLCJleHAiOjE0MTUzODczMTUsIm5iZiI6MTQxNTM4NzAxNSwiaWF0IjoxNDE1Mzg3MDE1LCJqdGkiOiJ0WUpDTzFjNmNueXk3a0FuMGM3cktQZ2JWMUgxYkZ3cyIsInNjb3BlIjoiamxoYXduOnJlcG9zaXRvcnk6c2FtYWxiYS9teS1hcHA6cHVzaCxwdWxsIGpsaGF3bjpuYW1lc3BhY2U6c2FtYWxiYTpwdWxsIn0.Y3zZSwaZPqy4y9oRBVRImZyv3mS9XDHF1tWwN7mL52CIiA73SJkWVNsvNqpJIn5h7A2F8biv_S2ppQ1lgkbw` One of the long list of tokens is obtained from the authentication server, For the detailed implementation of the protocol, see the link: https://docs.docker.com/registry/spec/auth/token/ The login verification process is as shown above. The meaning of each step is as follows: The client tries to pull the image from the registry, that is, sends a request to pull the image, or other operations. If the registry needs to log in, it will return 401 Unauthorized not authenticated, and the returned message carries the WWW-Authenticate field, as shown below: ````txt www-Authenticate: Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\",scope=\"repository:samalba/my-app:pull,push\" ```` The meaning of each field is as follows: Bearer realm: Authentication server address. service: mirror repository address. scope: The scope of the operation, that is, which permissions are required. 
According to the previously returned information, the client assembles a URL request to request a bear token from the authentication server for subsequent interaction. The assembled URL looks like this: ````url https://auth.docker.io/token?service=registry.docker.io&scope=repository:samalba/my-app:pull,push ```` The authentication server returns information such as token and expiration time ````http HTTP/1.1 200 OK Content-Type: application/json {\"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiIsImtpZCI6IlBZWU86VEVXVTpWN0pIOjI2SlY6QVFUWjpMSkMzOlNYVko6WEdIQTozNEYyOjJMQVE6WlJNSzpaN1E2In0.eyJpc3MiOiJhdXRoLmRvY2tlci5jb20iLCJzdWIiOiJqbGhhd24iLCJhdWQiOiJyZWdpc3RyeS5kb2NrZXIuY29tIiwiZXhwIjoxNDE1Mzg3MzE1LCJuYmYiOjE0MTUzODcwMTUsImlhdCI6MTQxNTM4NzAxNSwianRpIjoidFlKQ08xYzZjbnl5N2tBbjBjN3JLUGdiVjFIMWJGd3MiLCJhY2Nlc3MiOlt7InR5cGUiOiJyZXBvc2l0b3J5IiwibmFtZSI6InNhbWFsYmEvbXktYXBwIiwiYWN0aW9ucyI6WyJwdXNoIl19XX0.QhflHPfbd6eVF4lM9bwYpFZIV0PfikbyXuLx959ykRTBpe3CYnzs6YBK8FToVb5R47920PVLrh8zuLzdCr9t3w\", \"expiresin\": 3600,\"issuedat\": \"2009-11-10T23:00:00Z\"} ```` Re-request the pull operation, this time carrying the Bearer token in the Authorization field as the identification of successful authentication: ````txt Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiIsImtpZCI6IkJWM0Q6MkFWWjpVQjVaOktJQVA6SU5QTDo1RU42Ok40SjQ6Nk1XTzpEUktFOkJWUUs6M0ZKTDpQT1RMIn0.eyJpc3MiOiJhdXRoLmRvY2tlci5jb20iLCJzdWIiOiJCQ0NZOk9VNlo6UUVKNTpXTjJDOjJBVkM6WTdZRDpBM0xZOjQ1VVc6NE9HRDpLQUxMOkNOSjU6NUlVTCIsImF1ZCI6InJlZ2lzdHJ5LmRvY2tlci5jb20iLCJleHAiOjE0MTUzODczMTUsIm5iZiI6MTQxNTM4NzAxNSwiaWF0IjoxNDE1Mzg3MDE1LCJqdGkiOiJ0WUpDTzFjNmNueXk3a0FuMGM3cktQZ2JWMUgxYkZ3cyIsInNjb3BlIjoiamxoYXduOnJlcG9zaXRvcnk6c2FtYWxiYS9teS1hcHA6cHVzaCxwdWxsIGpsaGF3bjpuYW1lc3BhY2U6c2FtYWxiYTpwdWxsIn0.Y3zZSwaZPqy4y9oRBVRImZyv3mS9XDHF1tWwN7mL52CIiA73SJkWVNsvNqpJIn5h7A2F8biv_S2ppQ1lgkbw ```` The server verifies the Bearer token and allows the pull operation. The following describes the process of downloading manifest/config/layers data: According to the incoming image name, assemble the address to obtain the manifest, and request the manifest from the mirror"
},
{
"data": "Request manifests: ````http GET /v2/<name>/manifests/<reference> Host: <registry host> Authorization: <scheme> <token> ```` The name here is the image name, excluding the tag, and the reference refers to the tag. For example, to pull the image docker.io/library/node:latest, the above format is: GET /v2/library/node/manifests/latest The Content-Type field in the returned header information will carry the specific manifest type (see the description in Section 5.1.2 above). The body content is the corresponding json string. The configuration and layer of the image are blobs for the registry. As long as the digest value is parsed in the manifest, the blob data can be obtained. The value of digest is the sha256 value of the configuration/layer data (before decompression), and is also part of the url to download these blob data: Get the request for layer/digest (Requests can be concurrent): ````http GET /v2/<name>/blobs/<digest> Host: <registry host> Authorization: <scheme> <token> ```` Obtain success (for the format of failure return, please refer to the protocol): ````http 200 OK Content-Length: <length> Docker-Content-Digest: <digest> Content-Type: application/octet-stream <blob binary data> ```` This module is divided into two parts, auth is responsible for managing the username and password of login, and provides an interface for reading and setting. certs is responsible for managing the certificates and private keys used in https requests when interacting with the registry, and provides a read interface. 1) The certificate used to log in to the registry is stored in the /root/.isulad/auths.json file, as follows: ```shell /root/.isulad aeskey auths.json auths.json.lock { \"auths\": { \"dockerhub.test.com\": { \"auth\": \"nS6GX1wnf4WGe6+O+nS6Py6CVzPPIQJBOksaFSfFAy9LRUijubMIgZhtfrA=\" } } } ```` The auths in auths.json save registry information and the corresponding username and password. The user name and password encryption rule is to combine the user name and password into a $USERNAME:$PASSWORD, then use AES encryption. After that, use base64 to encode the encrypted data and store it as the auth field of the json file. 2) The certificate used for HTTPS requests is placed in the /etc/isulad/certs.d/$registry directory: ```shell /etc/isulad/certs.d/dockerhub.test.com ca.crt tls.cert tls.key ```` Interacting with image repositories requires calling the libcurl library to implement the client protocol of registry API V2. The processing of the protocol has been described above, and the encapsulation of http/https requests is mainly described here. libcurl provides atomic commands to implement requests. This module needs to encapsulate the http_request interface based on the atomic interface provided by libcurl. Multiple interfaces need to be encapsulated so that various requests can be handled easily. It mainly encapsulates three functions: Interact with the registry by returning data in memory, which is used for operations with a small amount of data such as ping. Interact with the authentication server by returning data through the memory to obtain the token. This function is used inside the module and does not provide an external interface. Return data through files. It is used for fetching relatively large amount of data requests, such as fetching blob data and fetching manifests data. 
In addition to the URL, the following parameters need to be supported: authentication information such as username and password; whether to return the message header, the body, or both; TLS-related information; and custom message header fields."
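For illustration (not part of the original design document), the token and manifest requests described above map to plain HTTP calls; the registry host, repository name, and scope below are placeholders:

```shell
# 1. Obtain a Bearer token from the auth server advertised in WWW-Authenticate.
curl "https://auth.example.com/token?service=registry.example.com&scope=repository:library/node:pull"

# 2. Request the manifest, presenting the token and the accepted media types.
curl -H "Authorization: Bearer <token>" \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     "https://registry.example.com/v2/library/node/manifests/latest"

# 3. Fetch a blob (config or layer) by the digest parsed from the manifest.
curl -L -H "Authorization: Bearer <token>" \
     -o layer.tar.gz \
     "https://registry.example.com/v2/library/node/blobs/sha256:<digest>"
```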
}
] | {
"category": "Runtime",
"file_name": "registry_degisn.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
} |
[
{
"data": "Seccompiler-bin is a tool that compiles seccomp filters expressed as JSON files into serialized, binary BPF code that is directly consumed by Firecracker, at build or launch time. Seccompiler-bin uses a custom , detailed further below, that the filters must adhere to. Besides the seccompiler-bin executable, seccompiler also exports a library interface, with helper functions for deserializing and installing the binary filters. To view the seccompiler-bin command line arguments, pass the `--help` parameter to the executable. Example usage: ```bash ./seccompiler-bin --target-arch \"x86_64\" # The CPU arch where the BPF program will run. --input-file \"x8664musl.json\" # File path of the JSON input. --output-file \"bpfx8664_musl\" # Optional path of the output file. --basic # Optional, creates basic filters, discarding any parameter checks. ``` To view the library documentation, navigate to the seccompiler source code, in `firecracker/src/seccompiler/src` and run `cargo doc --lib --open`. Seccompiler is implemented as another package in the Firecracker cargo workspace. The code is located at `firecracker/src/seccompiler/src`. Seccompiler-bin is supported on the . Seccompiler-bin follows Firecracker's and version (it's released at the same time, with the same version number and adheres to the same support window). A JSON file expresses the seccomp policy for the entire Firecracker process. It contains multiple filters, one per each thread category and is specific to just one target platform. This means that Firecracker has a JSON file for each supported target (currently determined by the arch-libc combinations). You can view them in `resources/seccomp`. At the top level, the file requires an object that maps thread categories (vmm, api and vcpu) to seccomp filters: ``` { \"vmm\": { \"default_action\": { \"errno\" : -1 }, \"filter_action\": \"allow\", \"filter\": [...] }, \"api\": {...}, \"vcpu\": {...}, } ``` The associated filter is a JSON object containing the `default_action`, `filter_action` and `filter`. The `default_action` represents the action we have to execute if none of the rules in `filter` matches, and `filter_action` is what gets executed if a rule in the filter matches (e.g: `\"Allow\"` in the case of implementing an allowlist). An action is the JSON representation of the following enum: ```rust pub enum SeccompAction { Allow, // Allows syscall. Errno(u32), // Returns from syscall with specified error number. Kill, // Kills calling process. Log, // Same as allow but logs"
},
{
"data": "Trace(u32), // Notifies tracing process of the caller with respective number. Trap, // Sends `SIGSYS` to the calling process. } ``` The `filter` property specifies the set of rules that would trigger a match. This is an array containing multiple or-bound SyscallRule objects (if one of them matches, the corresponding action gets triggered). The SyscallRule object is used for adding a rule to a syscall. It has an optional `args` property that is used to specify a vector of and-bound conditions that the syscall arguments must satisfy in order for the rule to match. In the absence of the `args` property, the corresponding action will get triggered by any call that matches that name, irrespective of the argument values. Here is the structure of the object: ``` { \"syscall\": \"accept4\", // mandatory, the syscall name \"comment\": \"Used by vsock & api thread\", // optional, for adding meaningful comments \"args\": [...] // optional, vector of and-bound conditions for the parameters } ``` Note that the file format expects syscall names, not arch-specific numbers, for increased usability. This is not true, however for the syscall arguments, which are expected as base-10 integers. In order to allow a syscall with multiple alternatives for the same parameters, you can write multiple syscall rule objects at the filter-level, each with its own rules. Note that, when passing the deprecated `--basic` flag to seccompiler-bin, all `args` fields of the `SeccompRule`s are ignored. A condition object is made up of the following mandatory properties: `index` (0-based index of the syscall argument we want to check) `type` (`dword` or `qword`, which specifies the argument size - 4 or 8 bytes respectively) `op`, which is one of `eq, ge, gt, ge, lt, masked_eq, ne` (the operator used for comparing the parameter to `val`) `val` is the integer value being checked against As mentioned eariler, we dont support any named parameters, but only numeric constants in the JSON file. You may however add an optional `comment` property to each condition object. This way, you can provide meaning to each numeric value, much like when using named parameters, like so: ``` { \"syscall\": \"accept4\", \"args\": [ { \"index\": 3, \"type\": \"dword\", \"op\": \"eq\", \"val\": 1, \"comment\": \"libc::AF_UNIX\" } ] } ``` To see example filters, look over Firecracker's JSON filters in `resources/seccomp`."
}
] | {
"category": "Runtime",
"file_name": "seccompiler.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
[
{
"data": "While containerd is a daemon that provides API to manage multiple containers, the containers themselves are not tied to the lifecycle of containerd. Each container has a shim that acts as the direct parent for the container's processes as well as reporting the exit status and holding onto the STDIO of the container. This also allows containerd to crash and restore all functionality to containers. The daemon provides an API to manage multiple containers. It can handle locking in process where needed to coordinate tasks between subsystems. While the daemon does fork off the needed processes to run containers, the shim and runc, these are re-parented to the system's init. Each container has its own shim that acts as the direct parent of the container's processes. The shim is responsible for keeping the IO and/or pty master of the container open, writing the container's exit status for containerd, and reaping the container's processes when they exit. Since the shim owns the container's pty master, it provides an API for resizing. Overall, a container's lifecycle is not tied to the containerd daemon. The daemon is a management API for multiple container whose lifecycle is tied to one shim per container."
}
] | {
"category": "Runtime",
"file_name": "lifecycle.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "This page describes CLI usage of spiderpoolctl for debug. Trigger the GC request to spiderpool-controller. ``` --address string [optional] address for spider-controller (default to service address) ``` Show a pod that is taking this IP. ``` --ip string [required] ip ``` Try to release an IP. ``` --ip string [optional] ip --force [optional] force release ip ``` Set IP to be taken by a pod. This will update ippool and workload endpoint resource. ``` --ip string [required] ip --pod string [required] pod name --namespace string [required] pod namespace --containerid string [required] pod container id --node string [required] the node name who the pod locates --interface string [required] pod interface who taking effect the ip ```"
}
] | {
"category": "Runtime",
"file_name": "spiderpoolctl.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Targeted for v1.0-v1.1 Starting with Ceph Mimic, Ceph is able to store the vast majority of config options for all daemons in the Ceph mons' key-value store. This ability to centrally manage 99% of its configuration is Ceph's preferred way of managing configuration and option settings. This allows Ceph to fit more cleanly into the containerized application space than before. However, for backwards compatibility, Ceph options set in the config file or via command line arguments override the centrally-configured settings. To make the most of this functionality within Ceph, it is necessary to limit the configuration options specified by Rook in either config files or on the command line to a minimum. The end goal of this work will be to allow Ceph to centrally manage its own configuration as much as possible. Or in other terms, Rook will specify the barest minimum of configuration options in the config file or on the command line with a priority on clearing the config file. All Ceph options in the config file can be set via the command line, so it is possible to remove the need for having a `ceph.conf` in containers at all. This is preferred over a config file, as it is possible to inspect the entire config a daemon is started with by looking at the pod description. Some parts of the config file will have to be kept as long as Rook supports Ceph Luminous. See the section below for more details. The minimal set of command line options is pared down to the settings which inform the daemon where to find and/or store data. Required flags: `--fsid` is not required but is set to ensure daemons will not connect to the wrong cluster `--mon-host` is required to find mons, and when a pod (re)starts it must have the latest information. This can be achieved by storing the most up-to-date mon host members in a Kubernetes ConfigMap and setting this value a container environment variable mapped to the ConfigMap value. `--{public,cluster}-addr` are not required for most daemons but ... `--public-addr` and `--public-bind-addr` are necessary for mons `--public-addr` and `--cluster-addr` may be needed for osds `--keyring` may be necessary to inform daemons where to find the keyring which must be mounted to a nonstandard directory by virtue of being sourced via a Kubernetes secret. The keyring could copied from the secret mount location to Ceph's default location with an init container, and for some daemons, this may be necessary. This should not be done for the mons, however, as the mon keyrings also include the admin keyring, and persisting the admin key to disk should be avoided at all costs for security. Notable non-required flags: `--{mon,osd,mgr,mds,rgw}-data-dir` settings exist for all daemons, but it is more desirable to use the `/var/lib/ceph/<daemontype>/ceph-<daemonid>` directory for daemon data within containers. If possible, mapping the `dataDirHostPath/<rookdaemondata_dir>` path on hosts to this default location in the container is"
},
{
"data": "Note that currently, `dataDirHostPath` is mapped directly to containers, meaning that each daemon container has access to other daemon containers' host-persisted data. Modifying Rook's behavior to only mount the individual daemon's data dir into the container as proposed here will be a small security improvement on the existing behavior. `--run-dir` exists for all daemons, but it is likewise more desirable to use the `/var/run/ceph` path in containers. Additionally, this directory stores only unix domain sockets, and it does not need to be persisted to the host. We propose to simply use the `/var/run/ceph` location in containers for runtime storage of the data. Additional configuration which Rook sets up initially should be done by setting values in Ceph's centrally-stored config. A large group of additional configurations can be configured at once via the Ceph command `ceph config assimilage-conf`. Care should be taken to make sure that Rook does not modify preexisting user-specified values. In the initial version of this implementation, Rook will set these values on every operator restart. This may result in user configs being overwritten but will ensure the user is not able to render Rook accidentally unusable. In the future, means of determining whether a user has specified a value or whether Rook has specified it is desired which may mean a feature addition to Ceph. Existing global configs configured here: `mon allow pool delete = true` `fatal signal handlers = false` is configured here, but this could be a vestigial config from Rook's old days that can be removed (some more research needed) `log stderr prefix = \"debug \"` should be set for all daemons to differentiate logging from auditing `debug ...` configs Removed configs: `monmaxpgperosd = 1000` is a dangerous setting and should be removed regardless of whether this proposal is accepted `log file = /dev/stderr` is set by default to keep with container standards and kept here if the user needs to change this for debugging/testing `mon cluster log file = /dev/stderr` for `log file` reasons above `mon keyvaluedb = rocksdb` is not needed for Luminous+ clusters `filestoreomapbackend = rocksdb` is not needed for Luminous+ `osd pg bits = 11` set (if needed) using config override for testing or play clusters `osd pgp bits = 11` set (if needed) using config override for testing or play clusters `osd pool default size = 1` set (if needed) using config override for testing or play clusters `osd pool default min size = 1` set (if needed) using config override for testing or play clusters `osd pool default pg num = 100` set (if needed) using config override for testing or play clusters `osd pool default pgp num = 100` set (if needed) using config override for testing or play clusters `rbddefaultfeatures = 3` kubernetes should support Ceph's default RBD features after k8s v1.8 Rook currently offers the option of a config override in a ConfigMap which users may modify after the Ceph operator has"
},
{
"data": "We propose to keep the \"spirit\" of this functionality but change the method of implementation, as the ConfigMap modification approach will be hard to integrate with the final goal of eliminating the config file altogether. Instead, we propose to update the Ceph cluster CRD to support setting and/or overriding values at the time of cluster creation. The proposed format is below. ```yaml apiVersion: ceph.rook.io/v2alpha1 kind: CephCluster spec: config: global: osdpooldefault_size: 1 mon: monclusterlog_file: \"/dev/stderr\" osd.0: debug_osd: 10 ``` !!! Note All values under config are reported to Ceph as strings, but the yaml should support integer values as well if at all possible As stated in the example yaml, above, the 'config' section adds or overrides values in the Ceph config whenever the Ceph operator starts and whenever the user updates the cluster CRD. Ceph Luminous does not have a centralized config, so the overrides from this section will have to be set on the command line. For Ceph Mimic and above, the mons have a centralized config which will be used to set/override configs. Therefore, for Mimic+ clusters, the user may temporarily override values set here, and those values will be reset to the `spec:config` values whenever the Ceph operator is restarted or the cluster CRD is updated. Test (especially integration tests) may need to specify `osd pool default size = 1` and `osd pool default min size = 1` to support running clusters with only one osd. Test environments would have a means of doing this fairly easily using the config override capability. These values should not be set to these low values for production clusters, as they may allow admins to create their own pools which are not fault tolerant accidentally. There is an option to set these values automatically for clusters which run with only one osd or to set this value for clusters with a number of osds less than the default programmatically within Rook's operator; however, this adds an additional amount of code flow complexity which is unnecessary except in the integration test environments or in minimal demo environments. A middle ground proposed herein is to add `osd pool default {,min} size = 1` overrides to the example cluster CRD so that users \"just trying out Rook\" still get today's easy experience but where they can be easily removed for production clusters that should not run with potentially dangerous settings. The current method of starting mons where `mon-a` has an initial member of `a`, `mon-b` initial members `a b`, `mon-c` initial members `a b c`, etc. Has worked so far but could result in a race condition. Mon cluster stability is important to Ceph, and it is critical for this PR that the mons' centrally-stored config is stable, so we here note that this behavior should be fixed such that the mon initial members are known before the first mon is bootstrapped to consider this proposal's work completed. Practically speaking, this will merely require the mon services to be started and have IP addresses before the mons are"
},
{
"data": "Additionally, generating the monmap during mon daemon initialization is unnecessary if `--mon-host` is set for the `ceph-mon --mkfs` command. Creating `/var/lib/ceph/mon-<ID>/data/kv_backend` is no longer necessary in Luminous and can be removed. This proposal herein makes the suggestion that the changes be done with a new PR for each daemon starting with the mons, as the mons are most affected. After the mons are done, the remaining 4 daemons can be done in parallel. Once all 5 daemons are complete, there will likely be a need to refactor the codebase to remove any vestigial remnants of the old config design which have been left. It will also be a good time to look for any additional opportunities to reduce code duplication by teasing repeated patterns out into shared modules. Another option is to modify all 5 daemons such that support is focused on Luminous, and the final clean-up stage could be a good time to introduce support for Mimic and its new centralized mon KV all at once. Luminous does not have the mon's centralized kv store for Ceph configs, so any config set in the mon kv store should be set in the config file for Luminous, and users may override these values via Rook's config override feature. The implementation of this work will naturally remove most of the need for Rook to modify Ceph daemon configurations via its `config-init` code paths, so it will also be a good opportunity to move all daemon logic into the operator process where possible. ``` NEW LOCATION REMOVED [global] FLAG fsid = bd4e8c5b-80b8-47d5-9e39-460eccc09e62 REMOVED run dir = /var/lib/rook/mon-c FLAG AS NEEDED mon initial members = b c a FLAG mon host = 172.24.191.50:6790,172.24.97.67:6790,172.24.123.44:6790 MON KV log file = /dev/stderr MON KV mon cluster log file = /dev/stderr FLAG AS NEEDED public addr = 172.24.97.67 FLAG AS NEEDED cluster addr = 172.16.2.122 FLAG AS NEEDED public network = not currently used FLAG AS NEEDED cluster network = not currently used REMOVED mon keyvaluedb = rocksdb MON KV monallowpool_delete = true REMOVED monmaxpgperosd = 1000 MON KV debug default = 0 MON KV debug rados = 0 MON KV debug mon = 0 MON KV debug osd = 0 MON KV debug bluestore = 0 MON KV debug filestore = 0 MON KV debug journal = 0 MON KV debug leveldb = 0 OVERRIDE filestoreomapbackend = rocksdb OVERRIDE osd pg bits = 11 OVERRIDE osd pgp bits = 11 OVERRIDE osd pool default size = 1 OVERRIDE osd pool default min size = 1 OVERRIDE osd pool default pg num = 100 OVERRIDE osd pool default pgp num = 100 REMOVED rbddefaultfeatures = 3 MON KV / REMOVED? fatal signal handlers = false REMOVED [daemon.id] FLAG AS NEEDED keyring = /var/lib/rook/mon-c/data/keyring ``` New location key: ``` REMOVED - removed entirely from the config FLAG - flag always set FLAG AS NEEDED - set as a command line flag if/when it is needed MON KV - store in the mon's central config (except for Luminous) OVERRIDE - removed but will need to be added in override for some scenarios (test/play) ```"
}
] | {
"category": "Runtime",
"file_name": "ceph-config-updates.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Fix export/import of Services with named ports when using the Antrea Multi-cluster feature. (, [@luolanzone]) Fix handling of the \"reject\" packets generated by the Antrea Agent to avoid infinite looping. (, [@GraysonWu]) Fix DNS resolution error of Antrea Agent on AKS by using `ClusterFirst` dnsPolicy. (, [@tnqn]) Fix tolerations for Pods running on control-plane for Kubernetes >= 1.24. (, [@xliuxu]) Reduce permissions of Antrea Agent ServiceAccount. (, [@xliuxu]) Fix NetworkPolicy may not be enforced correctly after restarting a Node. (, [@tnqn]) Fix antrea-agent crash caused by interface detection in AKS/EKS with NetworkPolicyOnly mode. (, [@wenyingd]) Fix locally generated packets from Node net namespace might be SNATed mistakenly when Egress is enabled. (, [@tnqn]) Use iptables-wrapper in Antrea container. Now antrea-agent can work with distros that lack the iptables kernel module of \"legacy\" mode (ip_tables). (, [@antoninbas]) Reduce permissions of Antrea ServiceAccount for updating annotations. (, [@tnqn]) Fix NodePort/LoadBalancer Service cannot be accessed when externalTrafficPolicy changed from Cluster to Local with proxyAll enabled. (, [@hongliangl]) Fix initial egress connections from Pods may go out with node IP rather than Egress IP. (, [@tnqn]) Fix NodePort Service access when an Egress selects the same Pod as the NodePort Service. (, [@hongliangl]) Fix ipBlock referenced in nested ClusterGroup not processed correctly. (, [@Dyanngg]) Add Antrea Multi-cluster feature which allows users to export and import Services and Endpoints across multiple clusters within a ClusterSet, and enables inter-cluster Service communication in the ClusterSet. (, [@luolanzone] [@aravindakidambi] [@bangqipropel] [@hjiajing] [@Dyanngg] [@suwang48404] [@abhiraut]) [Alpha] Refer to [Antrea Multi-cluster Installation] to get started Refer to [Antrea Multi-cluster Architecture] for more information regarding the implementation Add support for multicast that allows forwarding multicast traffic within the cluster network (i.e., between Pods) and between the external network and the cluster network. ( , [@wenyingd] [@ceclinux] [@XinShuYang]) [Alpha - Feature Gate: `Multicast`] In this release the feature is only supported on Linux Nodes for IPv4 traffic in `noEncap` mode Add support for IPPool and IP annotations on Pod and PodTemplate of Deployment and StatefulSet in AntreaIPAM mode. ( , [@gran-vmv] [@annakhm]) IPPool annotation on Pod has a higher priority than the IPPool annotation on Namespace A StatefulSet Pod's IP will be kept after Pod restarts when the IP is allocated from IPPool Refer to [Antrea IPAM Capabilities] for more information Add support for SR-IOV secondary network. Antrea can now create secondary network interfaces for Pods using SR-IOV VFs on bare metal Nodes. (, [@arunvelayutham]) [Alpha - Feature Gate: `SecondaryNetwork`] Add support for allocating external IPs for Services of type LoadBalancer from an ExternalIPPool. ( [@Shengkai2000]) [Alpha - Feature Gate: `ServiceExternalIP`] Add support for antctl in the flow aggregator"
},
{
"data": "(, [@yanjunz97]) Support `antctl log-level` for changing log verbosity level Support `antctl get flowrecords [-o json]` for dumping flow records Support `antctl get recordmetrics` for dumping flow records metrics Add support for the \"Pass\" action in Antrea-native policies to skip evaluation of further Antrea-native policy rules and delegate evaluation to Kubernetes NetworkPolicy. (, [@Dyanngg]) Add user documentation for using Project Antrea with , [@qiyueyao]) Add user documentation for , [@jianjuns]) Improve , [@antoninbas]) Document how to run Antrea e2e tests on an existing K8s cluster (, [@xiaoxiaobaba]) Make LoadBalancer IP proxying configurable for AntreaProxy to support scenarios in which it is desirable to send Pod-to-ExternalIP traffic to the external LoadBalancer. (, [@antoninbas]) Add `startTime` to the Traceflow Status to avoid issues caused by clock skew. (, [@antoninbas]) Add `reason` field in antctl traceflow command output. (, [@Jexf]) Validate serviceCIDR configuration only if AntreaProxy is disabled. (, [@wenyingd]) Improve configuration parameter validation for NodeIPAM. (, [@tnqn]) More comprehensive validation for Antrea-native policies. ( , [@GraysonWu] [@tnqn]) Update Antrea Octant plugin to support Octant 0.24 and to use the Dashboard client to perform CRUD operations on Antrea CRDs. (, [@antoninbas]) Omit hostNetwork Pods when computing members of ClusterGroup and AddressGroup. (, [@Dyanngg]) Support for using an env parameter `ALLOWNOENCAPWITHOUTANTREA_PROXY` to allow running Antrea in noEncap mode without AntreaProxy. (, [@Jexf] [@WenzelZ]) Move throughput calculation for network flow visibility from logstash to flow-aggregator. (, [@heanlan]) Add Go version information to full version string for Antrea binaries. (, [@antoninbas]) Improve kind-setup.sh script and Kind documentation. (, [@antoninbas]) Enable Go benchmark tests in CI. (, [@wenqiq]) Upgrade Windows OVS version to 2.15.2 to pick up some recent patches. (, [@lzhecheng]) [Windows] Remove HNSEndpoint only if infra container fails to create. (, [@lzhecheng]) [Windows] Use OVS Port externalIDs instead of HNSEndpoint to cache the externalIDS when using containerd as the runtime on Windows. (, [@wenyingd]) [Windows] Reduce network downtime when starting antrea-agent on Windows Node by using Windows management virtual network adapter as OVS internal port. (, [@wenyingd]) [Windows] Fix error handling of the \"Reject\" action of Antrea-native policies when determining if the packet belongs to Service traffic. (, [@GraysonWu]) Make the \"Reject\" action of Antrea-native policies work in AntreaIPAM mode. (, [@GraysonWu]) Set ClusterGroup with child groups to `groupMembersComputed` after all its child groups are created and processed. (, [@Dyanngg]) Fix status report of Antrea-native policies with multiple rules that have different AppliedTo. (, [@tnqn]) Fix typos and improve the example YAML in antrea-network-policy doc. (, , [@antoninbas] [@Jexf] [@tnqn]) Fix duplicated attempts to delete unreferenced AddressGroups when deleting Antrea-native policies. (, [@Jexf]) Add retry to update NetworkPolicy status to avoid error logs. (, [@Jexf]) Fix NetworkPolicy resources dump for Agent's supportbundle. (, [@antoninbas]) Use go 1.17 to build release assets. (, [@antoninbas]) Restore the gateway route automatically configured by kernel when configuring IP address if it is missing. 
(, [@antoninbas]) Fix incorrect parameter used to check if a container is the infra container, which caused errors when reattaching HNS Endpoint. (, [@XinShuYang]) [Windows] Fix gateway interface MTU configuration error on Windows. (, [@lzhecheng]) [Windows] Fix initialization error of antrea-agent on Windows by specifying hostname explicitly in VMSwitch commands. (, [@XinShuYang]) [Windows]"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG-1.5.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "(incus-server)= ```{toctree} :maxdepth: 1 Migrating from LXD </howto/servermigratelxd> Configure the server <howto/server_configure> /server_config System settings <reference/server_settings> Backups <backup> Performance tuning <explanation/performance_tuning> Benchmarking <howto/benchmark_performance> Monitor metrics <metrics> Recover instances <howto/disaster_recovery> Database </database> /architectures ```"
}
] | {
"category": "Runtime",
"file_name": "server.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "layout: global title: Install Alluxio on Kubernetes This documentation shows how to install Alluxio (Dora) on Kubernetes via , a kubernetes package manager, and , a kubernetes extension for managing applications. We recommend using the operator to deploy Alluxio on Kubernetes. However, if some required permissions are missing, consider using helm chart instead. A Kubernetes cluster with version at least 1.19, with feature gate enabled. Cluster access to an Alluxio Docker image . If using a private Docker registry, refer to the Kubernetes private image registry . Ensure the cluster's allows for connectivity between applications (Alluxio clients) and the Alluxio Pods on the defined ports. The control plane of the Kubernetes cluster has with version at least 3.6.0 installed. You will need certain RBAC permission in the Kubernetes cluster to make Operator to work. Permission to create CRD (Custom Resource Definition); Permission to create ServiceAccount, ClusterRole, and ClusterRoleBinding for the operator pods; Permission to create namespace that the operator will be in. We use the Helm Chart for Alluxio K8s Operator for deploying. Following the steps below to deploy Alluxio Operator: Download the Alluxio Kubernetes Operator and enter the root directory of the project. Install the operator by running: ```shell $ helm install operator ./deploy/charts/alluxio-operator ``` Operator will automatically create namespace `alluxio-operator` and install all the components there. Make sure the operator is running as expected: ```shell $ kubectl get pods -n alluxio-operator ``` Create a dataset configuration `dataset.yaml`. Its `apiVersion` must be `k8s-operator.alluxio.com/v1alpha1` and `kind` must be `Dataset`. Here is an example: ```yaml apiVersion: k8s-operator.alluxio.com/v1alpha1 kind: Dataset metadata: name: my-dataset spec: dataset: path: <path of your dataset> credentials: <property 1 for accessing your dataset> <property 2 for accessing your dataset> ... ``` Deploy your dataset by running ```shell $ kubectl create -f dataset.yaml ``` Check the status of the dataset by running ```shell $ kubectl get dataset <dataset-name> ``` Configure for: (Optional) Embedded journal. HostPath is also supported for embedded journal. (Optional) Worker page store. HostPath is also supported for worker storage. (Optional) Worker metastore. Only required if you use RocksDB for storing metadata on workers. Here is an example of a persistent volume of type hostPath for Alluxio embedded journal: ```yaml kind: PersistentVolume apiVersion: v1 metadata: name: alluxio-journal-0 labels: type: local spec: storageClassName: standard capacity: storage: 1Gi accessModes: ReadWriteOnce hostPath: path: /tmp/alluxio-journal-0 ``` Note: If using hostPath as volume for embedded journal, Alluxio will run an init container as root to grant RWX permission of the path for itself. Each journal volume should have capacity at least requested by its corresponding persistentVolumeClaim, configurable through the configuration file which will be talked in step 2. If using local hostPath persistent volume, make sure user alluxio has RWX permission. Alluxio containers run as user `alluxio` of group `alluxio` with UID 1000 and GID 1000 by default. Prepare a resource configuration file `alluxio-config.yaml`. Its `apiVersion` must be"
},
{
"data": "and `kind` must be `AlluxioCluster`. Here is an example: ```yaml apiVersion: k8s-operator.alluxio.com/v1alpha1 kind: AlluxioCluster metadata: name: my-alluxio-cluster spec: <configurations> ``` `Dataset` in which the name of your dataset is required in the `spec` section. All other configurable properties in the `spec` section can be found in `deploy/charts/alluxio/values.yaml`. Deploy Alluxio cluster by running: ```shell $ kubectl create -f alluxio-config.yaml ``` Check the status of Alluxio cluster by running: ```shell $ kubectl get alluxiocluster <alluxio-cluster-name> ``` Run the following command to uninstall Dataset and Alluxio cluster: ```shell $ kubectl delete dataset <dataset-name> $ kubectl delete alluxiocluster <alluxio-cluster-name> ``` To load your data into Alluxio cluster, so that your application can read the data faster, create a resource file `load.yaml`. Here is an example: ```yaml apiVersion: k8s-operator.alluxio.com/v1alpha1 kind: Load metadata: name: my-load spec: dataset: <dataset-name> path: / ``` Then run the following command to start the load: ```shell $ kubectl create -f load.yaml ``` To check the status of the load: ```shell $ kubectl get load ``` ```yaml apiVersion: k8s-operator.alluxio.com/v1alpha1 kind: AlluxioCluster metadata: name: my-alluxio-cluster spec: worker: count: 4 pagestore: type: hostPath quota: 512Gi hostPath: /mnt/alluxio csi: enabled: true ``` Following the steps below to deploy Dora on Kubernetes: Download the Helm chart and enter the helm chart directory. Configure for: (Optional) Embedded journal. HostPath is also supported for journal storage. (Optional) Worker page store. HostPath is also supported for worker storage. (Optional) Worker metastore. Only required if you use RocksDB for storing metadata on workers. Here is an example of a persistent volume of type hostPath for Alluxio embedded journal: ```yaml kind: PersistentVolume apiVersion: v1 metadata: name: alluxio-journal-0 labels: type: local spec: storageClassName: standard capacity: storage: 1Gi accessModes: ReadWriteOnce hostPath: path: /tmp/alluxio-journal-0 ``` Note: If using hostPath as volume for embedded journal, Alluxio will run an init container as root to grant RWX permission of the path for itself. Each journal volume requires at least the storage of its corresponding persistentVolumeClaim, configurable through the configuration file which will be talked in step 3. If using local hostPath persistent volume, make sure the user of UID 1000 and GID 1000 has RWX permission. Alluxio containers run as user `alluxio` of group `alluxio` with UID 1000 and GID 1000 by default. Prepare a configuration file `config.yaml`. All configurable properties can be found in file `values.yaml` from the code downloaded in step 1. You MUST specify your dataset configurations to enable Dora in your `config.yaml`. More specifically, the following section: ```yaml dataset: path: credentials: ``` Install Dora cluster by running ```shell $ helm install dora -f config.yaml . ``` Wait until the cluster is ready. You can check pod status and container readiness by running ```shell $ kubectl get po ``` Uninstall Dora cluster as follows: ```shell $ helm delete dora ``` See for information on how to configure and get metrics of different metrics sinks from Alluxio deployed on Kubernetes."
}
] | {
"category": "Runtime",
"file_name": "Install-Alluxio-On-Kubernetes.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Longhorn-manager communicates with longhorn-engine's gRPC ControllerService, ReplicaService, and SyncAgentService by sending requests to TCP/IP addresses kept up-to-date by its various controllers. Additionally, the longhorn-engine controller server sends requests to the longhorn-engine replica server's ReplicaService and SyncAgentService using TCP/IP addresses it keeps in memory. These addresses are relatively stable in normal operation. However, during periods of high process turnover (e.g. a node reboot or network event), it is possible for one longhorn-engine component to stop and another longhorn-engine component to start in its place using the same ports. If this happens quickly enough, other components with stale address lists attempting to execute requests against the old component may errantly execute requests against the new component. One harmful effect of this behavior that has been observed is the [expansion of an unintended longhorn-engine replica](https://github.com/longhorn/longhorn/issues/5709). This proposal intends to ensure all gRPC requests to longhorn-engine components are actually served by the intended component. https://github.com/longhorn/longhorn/issues/5709 Eliminate the potential for negative effects caused by a Longhorn component communicating with an incorrect longhorn-engine component. Provide effective logging when incorrect communication occurs to aide in fixing TCP/IP address related race conditions. Fix race conditions within the Longhorn control plane that lead to attempts to communicate with an incorrect longhorn-engine component. Refactor the in-memory data structures the longhorn-engine controller server uses to keep track of and initiate communication with replicas. Today, longhorn-manager knows the volume name and instance name of the process it is trying to communicate with, but it only uses the TCP/IP information of each process to initiate communication. Additionally, longhorn-engine components are mostly unaware of the volume name (in the case of longhorn-engine's replica server) and instance name (for both longhorn-engine controller and replica servers) they are associated with. If we provide this information to longhorn-engine processes when we start them and then have longhorn-manager provide it on every communication attempt, we can ensure no accidental communication occurs. Add additional flags to the longhorn-engine CLI that inform controller and replica servers of their associated volume and/or instance name. Use to automatically inject (i.e. headers) containing volume and/or instance name information every time a gRPC request is made by a longhorn-engine client to a longhorn-engine server. Use to automatically validate the volume and/or instance name information in [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) (i.e. headers) every time a gRPC request made by a longhorn-engine client is received by a longhorn-engine server. Reject any request (with an appropriate error code) if the provided information does not match the information a controller or replica server was launched with. Log the rejection at the client and the server, making it easy to identify situations in which incorrect communication occurs. Modify instance-manager's `ProxyEngineService` (both server and client) so that longhorn-manager can provide the necessary information for gRPC metadata injection. 
Modify longhorn-manager so that it makes proper use of the new `ProxyEngineService` client and launches longhorn-engine controller and replica servers with additional flags. Before this proposal: As an administrator, after an intentional or unintentional node reboot, I notice one or more of my volumes is degraded and new or existing replicas aren't coming online. In some situations, the UI reports confusing information or one or more of my volumes might be unable to attach at"
},
{
"data": "Digging through logs, I see errors related to mismatched sizes, and at least one replica does appear to have a larger size reported in `volume.meta` than others. I don't know how to proceed. After this proposal: As an administrator, after an intentional or unintentional node reboot, my volumes work as expected. If I choose to dig through logs, I may see some messages about refused requests to incorrect components, but this doesn't seem to negatively affect anything. Before this proposal: As a developer, I am aware that it is possible for one Longhorn component to communicate with another, incorrect component, and that this communication can lead to unexpected replica expansion. I want to work to fix this behavior. However, when I look at a support bundle, it is very hard to catch this communication occurring. I have to trace TCP/IP addresses through logs, and if no negative effects are caused, I may never notice it. After this proposal: Any time one Longhorn component attempts to communicate with another, incorrect component, it is clearly represented in the logs. See the user stories above. This enhancement is intended to be largely transparent to the user. It should eliminate rare failures so that users can't run into them. Increment the longhorn-engine CLIAPIVersion by one. Do not increment the longhorn-engine CLIAPIMinVersion. The changes in this LEP are backwards compatible. All gRPC metadata validation is by demand of the client. If a less sophisticated (not upgraded) client does not inject any metadata, the server performs no validation. If a less sophisticated (not upgraded) client only injects some metadata (e.g. `volume-name` but not `instance-name`), the server only validates the metadata provided. Add a global `volume-name` flag and a global `engine-instance-name` flag to the engine CLI (e.g. `longhorn -volume-name <volume-name> -engine-instance-name <engine-instance-name> <command> <args>`). Virtually all CLI commands create a controller client and these flags allow appropriate gRPC metadata to be injected into every client request. Requests that reach the wrong longhorn-engine controller server are rejected. Use the global `engine-instance-name` flag and the pre-existing `volume-name` positional argument to allow the longhorn-engine controller server to remember its volume and instance name (e.g. `longhorn -engine-instance-name <instance-name> controller <volume-name>`). Ignore the global `volume-name` flag, as it is redundant. Use the global `volume-name` flag or the pre-existing local `volume-name` flag and a new `replica-instance-name` flag to allow the longhorn-engine replica server to remember its volume and instance name (e.g. `longhorn -volume-name <volume-name> replica <directory> -replica-instance-name <replica-instance-name>`). Use the global `volume-name` flag and a new `replica-instance-name` flag to allow the longhorn-engine sync-agent server to remember its volume and instance name (e.g. `longhorn -volume-name <volume-name> sync-agent -replica-instance-name <replica-instance-name>`). Add an additional `replica-instance-name` flag to CLI commands that launch asynchronous tasks that communicate directly with the longhorn-engine replica server (e.g. `longhorn -volume-name <volume-name> add-replica <address> -size <size> -current-size <current-size> -replica-instance-name <replica-instance-name>`). All such commands create a replica client and these flags allow appropriate gRPC metadata to be injected into every client request. 
Requests that reach the wrong longhorn-engine replica server are rejected. Return 9 FAILED_PRECONDITION with an appropriate message when metadata validation fails. This code is chosen in accordance with the , which instructs developers to use FAILED_PRECONDITION if the client should not retry until the system has been explicitly fixed. Increment the longhorn-instance-manager InstanceManagerProxyAPIVersion by"
},
{
"data": "Do not increment the longhorn-instance-manager InstanceManagerProxyAPIMinVersion. The changes in this LEP are backwards compatible. No added fields are required and their omission is ignored. If a less sophisticated (not upgraded) client does not include them, no metadata is injected into engine or replica requests and no validation occurs (the behavior is the same as before the implementation of this LEP). Add `volumename` and `instancename` fields to the `ProxyEngineRequest` protocol buffer message. This message, which currently only contains an `address` field, is included in all `ProxyEngineService` RPCs. Updated clients can pass information about the engine process they expect to be communicating with in these fields. When instance-manager creates an asynchronous task to carry out the requested operation, the resulting controller client includes the gRPC interceptor described above. Add `replicainstancename` fields to any `ProxyEngineService` RPC associated with an asynchronous task that communicates directly with a longhorn-engine replica server. When instance-manager creates the task, the resulting replica client includes the gRPC interceptor described above. Return 5 NOT FOUND with an appropriate message when metadata validation fails at a lower layer. (The particular return code is definitely open to discussion.) Add a gRPC server interceptor to all `grpc.NewServer` calls. ```golang server := grpc.NewServer(withIdentityValidationInterceptor(volumeName, instanceName)) ``` Implement the interceptor so that it validates metadata with best effort. ```golang func withIdentityValidationInterceptor(volumeName, instanceName string) grpc.ServerOption { return grpc.UnaryInterceptor(identityValidationInterceptor(volumeName, instanceName)) } func identityValidationInterceptor(volumeName, instanceName string) grpc.UnaryServerInterceptor { // Use a closure to remember the correct volumeName and/or instanceName. return func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) { md, ok := metadata.FromIncomingContext(ctx) if ok { incomingVolumeName, ok := md[\"volume-name\"] // Only refuse to serve if both client and server provide validation information. if ok && volumeName != \"\" && incomingVolumeName[0] != volumeName { return nil, status.Errorf(codes.InvalidArgument, \"Incorrect volume name; check controller address\") } } if ok { incomingInstanceName, ok := md[\"instance-name\"] // Only refuse to serve if both client and server provide validation information. if ok && instanceName != \"\" && incomingInstanceName[0] != instanceName { return nil, status.Errorf(codes.InvalidArgument, \"Incorrect instance name; check controller address\") } } // Call the RPC's actual handler. return handler(ctx, req) } } ``` Add a gRPC client interceptor to all `grpc.Dial` calls. ```golang connection, err := grpc.Dial(serviceUrl, grpc.WithInsecure(), withIdentityValidationInterceptor(volumeName, instanceName)) ``` Implement the interceptor so that it injects metadata with best effort. ```golang func withIdentityValidationInterceptor(volumeName, instanceName string) grpc.DialOption { return grpc.WithUnaryInterceptor(identityValidationInterceptor(volumeName, instanceName)) } func identityValidationInterceptor(volumeName, instanceName string) grpc.UnaryClientInterceptor { // Use a closure to remember the correct volumeName and/or instanceName. 
return func(ctx context.Context, method string, req any, reply any, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { if volumeName != \"\" { ctx = metadata.AppendToOutgoingContext(ctx, \"volume-name\", volumeName) } if instanceName != \"\" { ctx = metadata.AppendToOutgoingContext(ctx, \"instance-name\", instanceName) } return invoker(ctx, method, req, reply, cc, opts...) } } ``` Modify all client constructors to include this additional information. Wherever these client packages are consumed (e.g. the replica client is consumed by the controller, both the replica and the controller clients are consumed by longhorn-manager), callers can inject this additional information into the constructor and get validation for free. ```golang func NewControllerClient(address, volumeName, instanceName string) (*ControllerClient, error) { // Implementation. } ``` Add additional flags to all longhorn-engine CLI commands depending on their function."
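As a purely illustrative fragment of how a consumer might use such a constructor (the `VolumeGet()` call stands in for whatever RPC the caller actually needs, and the error handling shown is only a sketch, not part of the proposal):

```golang
// Sketch only: the names passed here must match the -volume-name and
// -engine-instance-name flags the controller process was launched with.
c, err := NewControllerClient("tcp://10.42.0.5:10000", "vol-1", "vol-1-e-0")
if err != nil {
	return err
}

// Every request made through this client now carries identity metadata.
// If the process listening at that address belongs to a different volume or
// instance, the server-side interceptor rejects the request with the error
// code discussed above instead of acting on the wrong engine.
if _, err := c.VolumeGet(); err != nil {
	if s, ok := status.FromError(err); ok && s.Code() == codes.FailedPrecondition {
		return fmt.Errorf("identity validation failed, controller address is likely stale: %w", err)
	}
	return err
}
```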
},
{
"data": "command that launches a server: ```golang func ReplicaCmd() cli.Command { return cli.Command{ Name: \"replica\", UsageText: \"longhorn controller DIRECTORY SIZE\", Flags: []cli.Flag{ // Other flags. cli.StringFlag{ Name: \"volume-name\", Value: \"\", Usage: \"Name of the volume (for validation purposes)\", }, cli.StringFlag{ Name: \"replica-instance-name\", Value: \"\", Usage: \"Name of the instance (for validation purposes)\", }, }, // Rest of implementation. } } ``` E.g. command that directly communicates with both a controller and replica server. ```golang func AddReplicaCmd() cli.Command { return cli.Command{ Name: \"add-replica\", ShortName: \"add\", Flags: []cli.Flag{ // Other flags. cli.StringFlag{ Name: \"volume-name\", Required: false, Usage: \"Name of the volume (for validation purposes)\", }, cli.StringFlag{ Name: \"engine-instance-name\", Required: false, Usage: \"Name of the controller instance (for validation purposes)\", }, cli.StringFlag{ Name: \"replica-instance-name\", Required: false, Usage: \"Name of the replica instance (for validation purposes)\", }, }, // Rest of implementation. } } ``` Modify the ProxyEngineService server functions so that they can make correct use of the changes in longhorn-engine. Funnel information from the additional fields in the ProxyEngineRequest message and in appropriate ProxyEngineService RPCs into the longhorn-engine task and controller client constructors so it can be used for validation. ```protobuf message ProxyEngineRequest{ string address = 1; string volume_name = 2; string instance_name = 3; } ``` Modify the ProxyEngineService client functions so that consumers can provide the information required to enable validation. Ensure the engine and replica controllers launch engine and replica processes with `-volume-name` and `-engine-instance-name` or `-replica-instance-name` flags so that these processes can validate identifying gRPC metadata coming from requests. Ensure the engine controller supplies correct information to the ProxyEngineService client functions so that identity validation can occur in the lower layers. This issue/LEP was inspired by . In the situation described in this issue: An engine controller with out-of-date information (including a replica address the associated volume does not own) [issues a ReplicaAdd command](https://github.com/longhorn/longhorn-manager/blob/a7dd20cdbdb1a3cea4eb7490f14d94d2b0ef273a/controller/engine_controller.go#L1819) to instance-manager's EngineProxyService. Instance-manager creates a longhorn-engine task and [calls its AddReplica method](https://github.com/longhorn/longhorn-instance-manager/blob/0e0ec6dcff9c0a56a67d51e5691a1d4a4f397f4b/pkg/proxy/replica.go#L35). The task makes appropriate calls to a longhorn-engine controller and replica. The ReplicaService's [ExpandReplica command](https://github.com/longhorn/longhorn-engine/blob/1f57dd9a235c6022d82c5631782020e84da22643/pkg/sync/sync.go#L509) is used to expand the replica before a followup failure to actually add the replica to the controller's backend. After this improvement, the above scenario will be impossible: Both the engine and replica controllers will launch engine and replica processes with the `-volume-name` and `-engine-instance-name` or `replica-instance-name` flags. When the engine controller issues a ReplicaAdd command, it will do so using the expanded embedded `ProxyEngineRequest` message (with `volumename` and `instancename` fields) and an additional `replicainstancename` field. 
Instance-manager will create a longhorn-engine task that automatically injects `volume-name` and `instance-name` gRPC metadata into each controller request. When the task issues an ExpandReplica command, it will do so using a client that automatically injects `volume-name` and `instance-name` gRPC metadata into it. If either the controller or the replica does not agree with the information provided, gRPC requests will fail immediately and there will be no change in any longhorn-engine component. In my test environment, I have experimented with: Running new versions of all components, making gRPC calls to the longhorn-engine controller and replica processes with wrong gRPC metadata, and verifying that these calls fail. Running new versions of all components, making gRPC calls to instance-manager with an incorrect volume-name or instance name, and verifying that these calls"
},
{
"data": "Running new versions of all components, adding additional logging to longhorn-engine and verifying that metadata validation is occurring during the normal volume lifecycle. This is really a better fit for a negative testing scenario (do something that would otherwise result in improper communication, then verify that communication fails), but we have already eliminated the only known recreate for . Rework test fixtures so that: All controller and replica processes are created with the information needed for identity validation. It is convenient to create controller and replica clients with the information needed for identity validation. gRPC metadata is automatically injected into controller and replica client requests when clients have the necessary information. Do not modify the behavior of existing tests. Since these tests were using clients with identity validation information, no identity validation is performed. Modify functions/fixtures that create engine/replica processes to allow the new flags to be passed, but do not pass them by default. Modify engine/replica clients used by tests to allow for metadata injection, but do not enable it by default. Create new tests that: Ensure validation fails when a directly created client attempts to communicate with a controller or replica server using the wrong identity validation information. Ensure validation fails when an indirectly created client (by the engine) tries to communicate with a replica server using the wrong identity validation information. Ensure validation fails when an indirectly created client (by a CLI command) tries to communicate with a controller or replica server using the wrong identity validation information. The user will get benefit from this behavior automatically, but only after they have upgraded all associated components to a supporting version (longhorn-manager, longhorn-engine, and CRITICALLY instance-manager). We will only provide volume name and instance name information to longhorn-engine controller and replica processes on a supported version (as governed by the `CLIAPIVersion`). Even if other components are upgraded, when they send gRPC metadata to non-upgraded processes, it will be ignored. We will only populate extra ProxyEngineService fields when longhorn-manager is running with an update ProxyEngineService client. RPCs from an old client to a new ProxyEngineService server will succeed, but without the extra fields, instance-manager will have no useful gRPC metadata to inject into its longhorn-engine requests. RPCs from a new client to an old ProxyEngineService will succeed, but instance-manager will ignore the new fields and not inject useful gRPC metadata into its longhorn-engine request. We initially looked at adding volume name and/or instance name fields to all longhorn-engine ReplicaService and ControllerService calls. However, this would be awkward with some of the existing RPCs. In addition, it doesn't make much intuitive sense. Why should we provide the name of an entity we are communicating with to that entity as part of its API? It makes more sense to think of this identity validation in terms of sessions or authorization/authentication. In HTTP, information of this nature is handled through the use of headers, and metadata is the gRPC equivalent. 
We want to ensure the same behavior in every longhorn-engine ControllerService and ReplicaService call so that it is not up to an individual developer writing a new RPC to remember to validate gRPC metadata (and to relearn how it should be done). Interceptors work mostly transparently to ensure identity validation always occurs."
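To illustrate how the interceptor-based approach could be exercised in isolation, a unit-test sketch along the following lines (assuming the server-side `identityValidationInterceptor` shown earlier) invokes the interceptor directly with mismatched metadata and a no-op handler, and expects the request to be rejected:

```golang
func TestIdentityValidationInterceptorRejectsMismatch(t *testing.T) {
	interceptor := identityValidationInterceptor("vol-1", "vol-1-e-0")

	// Simulate an incoming request that claims to target a different volume.
	ctx := metadata.NewIncomingContext(context.Background(),
		metadata.Pairs("volume-name", "vol-2", "instance-name", "vol-1-e-0"))

	noopHandler := func(ctx context.Context, req any) (any, error) { return nil, nil }

	if _, err := interceptor(ctx, nil, &grpc.UnaryServerInfo{}, noopHandler); err == nil {
		t.Fatal("expected the request to be rejected when the volume name does not match")
	}
}
```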
}
] | {
"category": "Runtime",
"file_name": "20230420-engine-identity-validation.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "```text --config string The path to the configuration file --v Level number for the log level verbosity ``` Use `antrea-agent -h` to see complete options. The `antrea-agent` configuration file specifies the agent configuration parameters. For all the agent configuration parameters of a Linux Node, refer to this . For all the configuration parameters of a Windows Node, refer to this [base configuration file](../build/yamls/windows/base/conf/antrea-agent.conf) ```text --config string The path to the configuration file --v Level number for the log level verbosity ``` Use `antrea-controller -h` to see complete options. The `antrea-controller` configuration file specifies the controller configuration parameters. For all the controller configuration parameters, refer to this . A typical CNI configuration looks like this: ```json { \"cniVersion\":\"0.3.0\", \"name\": \"antrea\", \"plugins\": [ { \"type\": \"antrea\", \"ipam\": { \"type\": \"host-local\" } }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } }, { \"type\": \"bandwidth\", \"capabilities\": { \"bandwidth\": true } } ] } ``` You can also set the MTU (for the Pod's network interface) in the CNI configuration using `\"mtu\": <MTU_SIZE>`. When using an `antrea.yml` manifest, the MTU should be set with the `antrea-agent` `defaultMTU` configuration parameter, which will apply to all Pods and the host gateway interface on every Node. It is strongly discouraged to set the `\"mtu\"` field in the CNI configuration to a value that does not match the `defaultMTU` parameter, as it may lead to performance degradation or packet drops. Antrea enables portmap and bandwidth CNI plugins by default to support `hostPort` and traffic shaping functionalities for Pods respectively. In order to disable them, remove the corresponding section from `antrea-cni.conflist` in the Antrea manifest. For example, removing the following section disables portmap plugin: ```json { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } } ```"
}
] | {
"category": "Runtime",
"file_name": "configuration.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "According to Firecrackers , all vCPUs are considered to be running potentially malicious code from the moment they are started. This means Firecracker can make no assumptions about well-formedness of data passed to it by the guest, and have to operate safely no matter what input it is faced with. Traditional testing methods alone cannot guarantee about the general absence of safety issues, as for this we would need to write and run every possible unit test, exercising every possible code path - a prohibitively large task. To partially address these limitations, Firecracker is additionally using formal verification to go further in verifying that safety issues such as buffer overruns, panics, use-after-frees or integer overflows cannot occur in critical components. We employ , a formal verification tool written specifically for Rust, which allows us to express functional properties (such as any user-specified assertion) in familiar Rust-style by replacing concrete values in unit tests with `kani::any()`. For more details on how Kani works, and what properties it can verify, check out its official or try out this . We aim to have Kani harnesses for components that directly interact with data from the guest, such as the TCP/IP stack powering our microVM Metadata Service (MMDS) integration, or which are difficult to test traditionally, such as our I/O Rate Limiter. Our Kani harnesses live in `verification` modules that are tagged with `#[cfg(kani)]`, similar to how unit tests in Rust are usually structured. Note that for some harnesses, Kani uses a bounded approach, where the inputs are restricted based on some assumptions (e.g. the size of an Ethernet frame being 1514 bytes). Harnesses are only as strong as the assumptions they make, so all guarantees from the harness are only valid based on the set of assumptions we have in our Kani harnesses. Generally, they should strive to over-approximate, meaning it is preferred they cover some impossible situations instead of making too strong assumptions that cause them to exclude realistic scenarios. To ensure that no incoming code changes cause regressions on formally verified properties, all Kani harnesses are run on every pull request in our CI. To check whether the harnesses all work for your pull request, check out the Kani step. To run our harnesses locally, you can either enter our CI docker container via `./tools/devtool shell -p`, or by locally. Note that the first invocation of Kani post-installation might take a while, due to it setting up some"
},
{
"data": "Individual harnesses can then be executed using `cargo kani` similarly to how `cargo test` can run individual unit tests, the only difference being that the harness needs to be specified via `--harness`. Note, however, that many harnesses require significant memory, and might result in OOM conditions. The following is adapted from our Rate Limiter harness suite. It aims to verify that creation of a rate-limiting policy upholds all (which can roughly be summarized as everything that leads to a panic in a debug build), as well as results in a valid policy. A first attempt might look something like this: ``` fn verifytokenbucket_new() { let token_budget = kani::any(); let completerefilltime_ms = kani::any(); // Checks if the `TokenBucket` is created with invalid inputs, the result // is always `None`. match TokenBucket::new(tokenbudget, 0, completerefilltimems) { None => assert!(size == 0 || completerefilltime_ms == 0), Some(bucket) => assert!(bucket.is_valid()), } } ``` The `#[kani::proof]` attribute tells us that the function is a harness to be picked up by the Kani compiler. It is the Kani equivalent of `#[test]`. Lines 3-5 indicate that we want to verify that policy creation works for arbitrarily sized token buckets and arbitrary refill times. This is the key difference to a unit test, where we would be using concrete values instead (e.g. `let token_budget = 10;`). Note that Kani will not produce an executable, but instead statically verifies that code does not violate invariants. We do not actually execute the creation code for all possible inputs. The final match statement tells us the property we want to verify, which is bucket creation only fails if size of refill time are zero. In all other cases, we assert `new` to give us a valid bucket. We mapped these properties with assertions. If the verification fails, then that is because one of our properties do not hold. Now that we understand the code in the harness, let's try to verify `TokenBucket::new` with the Kani! If we run `cargo kani --harness verifytokenbucket_new` we will be greeted by ``` SUMMARY: 1 of 147 failed Failed Checks: attempt to multiply with overflow File: \"src/rate_limiter/src/lib.rs\", line 136, in TokenBucket::new VERIFICATION:- FAILED Verification Time: 0.21081695s ``` In this particular case, Kani has found a safety issue related to an integer overflow! Due to `completerefilltime_ms` getting converted from milliseconds to nanoseconds in the constructor, we have to take into consideration that the nanosecond value might not fit into a `u64`"
},
{
"data": "Here, the finding is benign, as no one would reasonably configure a `ratelimiter` with a replenish time of 599730287.457 years. A in the constructor fixes it. However, we will also have to adjust our harness! Rerunning the harness from above now yields: ``` SUMMARY: 1 of 149 failed Failed Checks: assertion failed: size == 0 || completerefilltime_ms == 0 File: \"src/ratelimiter/src/lib.rs\", line 734, in verification::verifytokenbucketnew VERIFICATION:- FAILED Verification Time: 0.21587047s ``` This makes sense: There are now more scenarios in which we explicitly fail construction. Changing our failure property from `size == 0 || completerefilltime_ms == 0` to `size == 0 || completerefilltimems == 0 || completerefilltime >= u64::MAX / 1000_000` in the harness will account for this change, and rerunning the harness will now tell us that no more issues are found: ``` SUMMARY: 0 of 150 failed VERIFICATION:- SUCCESSFUL Verification Time: 0.19135727s ``` Q: What is the Kani verifier?\\ A: The is a bit-precise model checker for Rust. Kani is particularly useful for verifying unsafe code blocks in Rust, where the \" are unchecked by the compiler. Q: What safety properties does Kani verify?\\ A: Kani verifies memory safety properties (e.g., invalid-pointer dereferences, out-of-bounds array access), user-specified assertions (i.e., `assert!(...)`), the absence of `panic!()`s (e.g., `unwrap()` on `None` values), and the absence of some types of unexpected behavior (e.g., arithmetic overflows). For a full overview, see the . Q: Do we expect all contributors to write harnesses for newly introduced code?\\ A: No. Kani is complementary to unit testing, and we do not have target for proof coverage. We employ formal verification in especially critical code areas. Generally we do not expect someone who might not be familiar with formal tools to contribute harnesses. We do expect all contributed code to pass verification though, just like we expect it to pass unit test! Q: How should I report issues related to any Firecracker harnesses?\\ A: Our Kani harnesses verify safety critical invariants. If you discover a flaw in a harness, please report it using the . Q: How do I know which properties I should prove in the Kani harness?\\ A: Generally, these are given by some sort of specification. This can either be the function contract described in its document (e.g. what relation between input and output do callers expect?), or even something formal such as the TCP/IP standard. Don't forget to mention the specification in your proof harness! Q: Where do I debug a broken proof?\\ A: Check out the Kani book section on ."
}
] | {
"category": "Runtime",
"file_name": "formal-verification.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
[
{
"data": "We want to support CSI volume cloning so users can create a new PVC that has identical data as a source PVC. https://github.com/longhorn/longhorn/issues/1815 Support exporting the snapshot data of a volume Allow user to create a PVC with identical data as the source PVC There are multiple parts in implementing this feature: Implement a function that fetches data from a readable object then sends it to a remote server via HTTP. Implementing `VolumeExport()` gRPC in replica SyncAgentServer. When called, `VolumeExport()` exports volume data at the input snapshot to the receiver on the remote host. Implementing `SnapshotCloneCmd()` and `SnapshotCloneStatusCmd()` CLIs. Longhorn manager can trigger the volume cloning process by calling `SnapshotCloneCmd()` on the replica of new volume. Longhorn manager can fetch the cloning status by calling `SnapshotCloneStatusCmd()` on the replica of the new volume. When the volume controller detects that a volume clone is needed, it will attach the target volume. Start 1 replica for the target volume. Auto-attach the source volume if needed. Take a snapshot of the source volume. Copy the snapshot from a replica of the source volume to the new replica by calling `SnapshotCloneCmd()`. After the snapshot was copied over to the replica of the new volume, the volume controller marks volume as completed cloning. Once the cloning is done, the volume controller detaches the source volume if it was auto attached. Detach the target volume to allow the workload to start using it. Later on, when the target volume is attached by workload pod, Longhorn will start rebuilding other replicas. Advertise that Longhorn CSI driver has ability to clone a volume, `csi.ControllerServiceCapabilityRPCCLONE_VOLUME` When receiving a volume creat request, inspect `req.GetVolumeContentSource()` to see if it is from another volume. If so, create a new Longhorn volume with appropriate `DataSource` set so Longhorn volume controller can start cloning later on. Before this feature, to create a new PVC with the same data as another PVC, the users would have to use one of the following methods: Create a backup of the source volume. Restore the backup to a new volume. Create PV/PVC for the new volume. This method requires a backup target. Data has to move through an extra layer (the backup target) which might cost money. Create a new PVC (that leads to creating a new Longhorn volume). Mount both new PVC and source PVC to the same pod then copy the data over. See more . This copying method only applied for PVC with `Filesystem` volumeMode. Also, it requires manual steps. After this cloning feature, users can clone a volume by specifying `dataSource` in the new PVC pointing to an existing PVC. Users can create a new PVC that uses `longhorn` storageclass from an existing PVC which also uses `longhorn` storageclass by specifying `dataSource` in new PVC pointing to the existing PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: clone-pvc namespace: myns spec: accessModes: ReadWriteOnce storageClassName: longhorn resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: source-pvc ``` `VolumeCreate` API will check/validate data a new field, `DataSource` which is a new field in `v.Spec` that specifies the source of the Longhorn volume. Implement a generalized function, `SyncContent()`, which syncs the content of a `ReaderWriterAt` object to a file on remote host. 
The `ReaderWriterAt` is an interface that has `ReadAt()`, `WriteAt()` and `GetDataLayout()` methods: ```go type ReaderWriterAt interface { io.ReaderAt io.WriterAt GetDataLayout (ctx context.Context) (<-chan FileInterval, <-chan error, error) } ``` Using those methods, the Sparse-tools know where each data/hole interval is and can transfer it to a file on the remote"
},
{
"data": "Implementing `VolumeExport()` gRPC in replica SyncAgentServer. When called, `VolumeExport()` will: Create and open a read-only replica from the input snapshot Pre-load `r.volume.location` (which is the map of data sector to snapshot file) by: If the volume has backing file layer and users want to export the backing file layer, we initialize all elements of `r.volume.location` to 1 (the index of the backing file layer). Otherwise, initialize all elements of `r.volume.location` to 0 (0 means we don't know the location for this sector yet) Looping over `r.volume.files` and populates `r.volume.location` (which is the map of data sector to snapshot file) with correct values. The replica is able to know which region is data/hole region. This logic is implemented inside the replica's method `GetDataLayout()`. The method checks `r.volume.location`. The sector at offset `i` is in data region if `r.volume.location[i] >= 1`. Otherwise, the sector is inside a hole region. Call and pass the read-only replica into `SyncContent()` function in the Sparse-tools module to copy the snapshot to a file on the remote host. Implementing `SnapshotCloneCmd()` and `SnapshotCloneStatusCmd()` CLIs. Longhorn manager can trigger the volume cloning process by calling `SnapshotCloneCmd()` on the replica of the new volume. The command finds a healthy replica of the source volume by listing replicas of the source controller and selecting a `RW` replica. The command then calls `CloneSnapshot()` method on replicas of the target volumes. This method in turn does: Call `SnapshotClone()` on the sync agent of the target replica. This will launch a receiver server on the target replica. Call `VolumeExport()` on the sync agent of the source replica to export the snapshot data to the target replica. Once the snapshot data is copied over, revert the target replica to the newly copied snapshot. Longhorn manager can fetch cloning status by calling `SnapshotCloneStatusCmd()` on the target replica. Add a new field to volume spec, `DataSource`. The `DataSource` is of type `VolumeDataSource`. Currently, there are 2 types of data sources: `volume` type and `snapshot` type. `volume` data source type has the format `vol://<VOLUME-NAME>`. `snapshot` data source type has the format `snap://<VOLUME-NAME>/<SNAPSHOT-NAME>`. In the future, we might want to refactor `fromBackup` field into a new type of data source with format `bk://<VOLUME-NAME>/<BACKUP-NAME>`. Add a new field into volume status, `CloneStatus` of type `VolumeCloneStatus`: ```go type VolumeCloneStatus struct { SourceVolume string `json:\"sourceVolume\"` Snapshot string `json:\"snapshot\"` State VolumeCloneState `json:\"state\"` } type VolumeCloneState string const ( VolumeCloneStateEmpty = VolumeCloneState(\"\") VolumeCloneStateInitiated = VolumeCloneState(\"initiated\") VolumeCloneStateCompleted = VolumeCloneState(\"completed\") VolumeCloneStateFailed = VolumeCloneState(\"failed\") ) ``` Add a new field into engine spec, `RequestedDataSource` of type `VolumeDataSource` Add a new field into engine status, `CloneStatus`. `CloneStatus` is a map of `SnapshotCloneStatus` inside each replica: ```go type SnapshotCloneStatus struct { IsCloning bool `json:\"isCloning\"` Error string `json:\"error\"` Progress int `json:\"progress\"` State string `json:\"state\"` FromReplicaAddress string `json:\"fromReplicaAddress\"` SnapshotName string `json:\"snapshotName\"` } ``` This will keep track of status of snapshot cloning inside the target replica. 
When the volume controller detect that a volume clone is needed (`v.Spec.DataSource` is `volume` or `snapshot` type and `v.Status.CloneStatus.State == VolumeCloneStateEmpty`), it will auto attach the source volume if needed. Take a snapshot of the source volume if needed. Fill the `v.Status.CloneStatus` with correct value for `SourceVolume`, `Snapshot`, and `State`(`initiated`). Auto attach the target volume. Start 1 replica for the target volume. Set `e.Spec.RequestedDataSource` to the correct value, `snap://<SOURCE-VOL-NAME/<SNAPSHOT-NAME>`. Engine controller monitoring loop will start the snapshot clone by calling `SnapshotCloneCmd()`. After the snapshot is copied over to the replica of the new volume, volume controller marks `v.Status.CloneStatus.State = VolumeCloneStateCompleted` and clear the"
},
{
"data": "Once the cloning is done, the volume controller detaches the source volume if it was auto attached. Detach the target volume to allow the workload to start using it. When workload attach volume, Longhorn starts rebuilding other replicas of the volume. Advertise that Longhorn CSI driver has ability to clone a volume, `csi.ControllerServiceCapabilityRPCCLONE_VOLUME` When receiving a volume creat request, inspect `req.GetVolumeContentSource()` to see if it is from another volume. If so, create a new Longhorn volume with appropriate `DataSource` set so Longhorn volume controller can start cloning later on. Integration test plan. Create a PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: source-pvc spec: storageClassName: longhorn accessModes: ReadWriteOnce resources: requests: storage: 10Gi ``` Specify the `source-pvc` in a pod yaml and start the pod Wait for the pod to be running, write some data to the mount path of the volume Clone a volume by creating the PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloned-pvc spec: storageClassName: longhorn dataSource: name: source-pvc kind: PersistentVolumeClaim accessModes: ReadWriteOnce resources: requests: storage: 10Gi ``` Specify the `cloned-pvc` in a cloned pod yaml and deploy the cloned pod Wait for the `CloneStatus.State` in `cloned-pvc` to be `completed` In 3-min retry loop, wait for the cloned pod to be running Verify the data in `cloned-pvc` is the same as in `source-pvc` In 2-min retry loop, verify the volume of the `clone-pvc` eventually becomes healthy Cleanup the cloned pod, `cloned-pvc`. Wait for the cleaning to finish Scale down the source pod so the `source-pvc` is detached. Wait for the `source-pvc` to be in detached state Clone a volume by creating the PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloned-pvc spec: storageClassName: longhorn dataSource: name: source-pvc kind: PersistentVolumeClaim accessModes: ReadWriteOnce resources: requests: storage: 10Gi ``` Specify the `cloned-pvc` in a cloned pod yaml and deploy the cloned pod Wait for `source-pvc` to be attached Wait for a new snapshot created in `source-pvc` volume created Wait for the `CloneStatus.State` in `cloned-pvc` to be `completed` Wait for `source-pvc` to be detached In 3-min retry loop, wait for the cloned pod to be running Verify the data in `cloned-pvc` is the same as in `source-pvc` In 2-min retry loop, verify the volume of the `clone-pvc` eventually becomes healthy Cleanup the test Deploy a storage class that has backing image parameter ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: longhorn-bi-parrot provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"3\" staleReplicaTimeout: \"2880\" # 48 hours in minutes backingImage: \"bi-parrot\" backingImageURL: \"https://longhorn-backing-image.s3-us-west-1.amazonaws.com/parrot.qcow2\" ``` Repeat the `Clone volume that doesn't have backing image` test with `source-pvc` and `cloned-pvc` use `longhorn-bi-parrot` instead of `longhorn` storageclass Create a PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: source-pvc spec: storageClassName: longhorn accessModes: ReadWriteOnce resources: requests: storage: 10Gi ``` Specify the `source-pvc` in a pod yaml and start the pod Wait for the pod to be running, write 1GB of data to the mount path of the volume Clone a volume by creating the PVC: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: 
cloned-pvc spec: storageClassName: longhorn dataSource: name: source-pvc kind: PersistentVolumeClaim accessModes: ReadWriteOnce resources: requests: storage: 10Gi ``` Specify the `cloned-pvc` in a cloned pod yaml and deploy the cloned pod Wait for the `CloneStatus.State` in `cloned-pvc` to be `initiated` Kill all replicas process of the `source-pvc` Wait for the `CloneStatus.State` in `cloned-pvc` to be `failed` In 2-min retry loop, verify cloned pod cannot start Clean up cloned pod and `clone-pvc` Redeploy `cloned-pvc` and clone pod In 3-min retry loop, verify cloned pod become running `cloned-pvc` has the same data as `source-pvc` Cleanup the test No upgrade strategy needed"
}
] | {
"category": "Runtime",
"file_name": "20210810-volume-clone.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Kuasar follows the . Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at kuasar.io.dev@gmail.com."
}
] | {
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Kuasar",
"subcategory": "Container Runtime"
} |
[
{
"data": "Host os You can run StratoVirt on both x86_64 and aarch64 platforms. And on top of that, the StratoVirt is based on KVM, so please make sure you have KVM module on your platform. Authority You should have read/write permissions to `/dev/kvm`. If not, you can get your permissions by: ```shell $ sudo setfacl -m u:${USER}:rw /dev/kvm ``` StratoVirt is offerred by openEuler 20.09 or later. You can install by yum directly. ```shell $ sudo yum install stratovirt ``` Now you can find StratoVirt binary with path: `/usr/bin/stratovirt`. If you'd like to build StratoVirt yourself, you should check out the . With StratoVirt binary (either installed with yum, or built from source), we can boot a guest linux machine. Now StratoVirt provides two kinds of machine, which are microvm and standardvm(\"q35\" on x8664 platform and \"virt\" on aarch64 platform). As a quick start, we show how to start a VM with microvm. First, you will need an PE format Linux kernel binary, and an ext4 file system image (as rootfs). `x86_64` boot source: and . `aarch64` boot source: and . Or get the kernel and rootfs with shell: ```shell arch=`uname -m` dest_kernel=\"vmlinux.bin\" dest_rootfs=\"rootfs.ext4\" imagebucketurl=\"https://repo.openeuler.org/openEuler-22.03-LTS/stratovirt_img\" if [ ${arch} = \"x86_64\" ] || [ ${arch} = \"aarch64\" ]; then kernel=\"${imagebucketurl}/${arch}/vmlinux.bin\" rootfs=\"${imagebucketurl}/${arch}/openEuler-22.03-LTS-stratovirt-${arch}.img.xz\" else echo \"Cannot run StratoVirt on ${arch} architecture!\" exit 1 fi echo \"Downloading $kernel...\" wget ${kernel} -O ${dest_kernel} --no-check-certificate echo \"Downloading $rootfs...\" wget ${rootfs} -O ${dest_rootfs}.xz --no-check-certificate xz -d ${dest_rootfs}.xz echo \"kernel file: ${destkernel} and rootfs image: ${destrootfs} download over.\" ``` Start guest linux machine with StratoVirt: ```shell socket_path=`pwd`\"/stratovirt.sock\" kernel_path=`pwd`\"/vmlinux.bin\" rootfs_path=`pwd`\"/rootfs.ext4\" rm -f ${socket_path} /usr/bin/stratovirt \\ -machine microvm \\ -kernel ${kernel_path} \\ -smp 1 \\ -m 1024 \\ -append \"console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=${rootfs_path},id=rootfs,readonly=off,direct=off \\ -device virtio-blk-device,drive=rootfs,id=rootfs \\ -qmp unix:${socket_path},server,nowait \\ -serial stdio ``` You should now see a serial in stdio prompting you to log into the guest machine. If you used our `openEuler-22.03-LTS-stratovirt.img` image, you can login as `root`, using the password `openEuler12#$`. If you want to quit the guest machine, using a `reboot` command inside the guest will actually shutdown StratoVirt. This is due to that StratoVirt didn't implement guest power management in microvm type. If you want to know more information on running StratoVirt, go to the ."
}
] | {
"category": "Runtime",
"file_name": "quickstart.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
} |
[
{
"data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website."
}
] | {
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: Name resolution via `/etc/hosts` menu_order: 50 search_type: Documentation When starting Weave Net enabled containers, the proxy automatically replaces the container's `/etc/hosts` file, and disables Docker's control over it. The new file contains an entry for the container's hostname and Weave Net IP address, as well as additional entries that have been specified using the `--add-host` parameters. This ensures that: name resolution of the container's hostname, for example, via `hostname -i`, returns the Weave Net IP address. This is required for many cluster-aware applications to work. unqualified names get resolved via DNS, for example typically via weaveDNS to Weave Net IP addresses. This is required so that in a typical setup one can simply \"ping `<container-name>`\", i.e. without having to specify a `.weave.local` suffix. If you prefer to keep `/etc/hosts` under Docker's control (for example, because you need the hostname to resolve to the Docker-assigned IP instead of the Weave IP, or you require name resolution for Docker-managed networks), the proxy must be launched using the `--no-rewrite-hosts` flag. host1$ weave launch --no-rewrite-hosts See Also"
}
] | {
"category": "Runtime",
"file_name": "name-resolution-proxy.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "The Kata Containers runtime MUST fulfill all of the following requirements: The Kata Containers runtime MUST implement the and support all the OCI runtime operations. In theory, being OCI compatible should be enough. In practice, the Kata Containers runtime should comply with the latest stable `runc` CLI. In particular, it MUST implement the following `runc` commands: `create` `delete` `exec` `kill` `list` `pause` `ps` `start` `state` `version` The Kata Containers runtime MUST implement the following command line options: `--console-socket` `--pid-file` The Kata Containers project MUST provide two interfaces for CRI shims to manage hardware virtualization based Kubernetes pods and containers: An OCI and `runc` compatible command line interface, as described in the previous section. This interface is used by implementations such as and , for example. A hardware virtualization runtime library API for CRI shims to consume and provide a more CRI native implementation. The CRI shim is an example of such a consumer. The Kata Containers runtime MUST NOT be architecture-specific. It should be able to support multiple hardware architectures and provide a modular and flexible design for adding support for additional ones. The Kata Containers runtime MUST NOT be tied to any specific hardware virtualization technology, hypervisor, or virtual machine monitor implementation. It should support multiple hypervisors and provide a pluggable and flexible design to add support for additional ones. The Kata Containers runtime MUST support nested virtualization environments. The Kata Containers runtime MUST* support CNI plugin. The Kata Containers runtime MUST* support both legacy and IPv6 networks. In order for containers to directly consume host hardware resources, the Kata Containers runtime MUST provide containers with secure pass through for generic devices such as GPUs, SRIOV, RDMA, QAT, by leveraging I/O virtualization technologies (IOMMU, interrupt remapping). The Kata Containers runtime MUST support accelerated and user-space-based I/O operations for networking (e.g. DPDK) as well as storage through `vhost-user` sockets. The Kata Containers runtime MUST support scalable I/O through the SRIOV technology. A compelling aspect of containers is their minimal overhead compared to bare metal applications. A container runtime should keep the overhead to a minimum in order to provide the expected user experience. The Kata Containers runtime implementation SHOULD be optimized for: Minimal workload boot and shutdown times Minimal workload memory footprint Maximal networking throughput Minimal networking latency Each Kata Containers runtime pull request MUST pass at least the following set of container-related tests: Unit tests: runtime unit tests coverage >75% Functional tests: the entire runtime CLI and APIs Integration tests: Docker and Kubernetes The Kata Containers runtime implementation MUST use structured logging in order to namespace log messages to facilitate debugging."
}
] | {
"category": "Runtime",
"file_name": "kata-design-requirements.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
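As a rough sketch only (not the Kata Containers code base), the `runc`-compatible command surface listed above can be pictured as a small dispatcher. The handler stubs and the program structure here are invented; only the command names and the note about `--console-socket`/`--pid-file` come from the requirements above.

```go
package main

import (
	"fmt"
	"os"
)

// Required runc-compatible commands from the requirements above.
var commands = map[string]func(args []string) error{
	"create":  func(args []string) error { return nil }, // must also accept --console-socket and --pid-file
	"delete":  func(args []string) error { return nil },
	"exec":    func(args []string) error { return nil },
	"kill":    func(args []string) error { return nil },
	"list":    func(args []string) error { return nil },
	"pause":   func(args []string) error { return nil },
	"ps":      func(args []string) error { return nil },
	"start":   func(args []string) error { return nil },
	"state":   func(args []string) error { return nil },
	"version": func(args []string) error { return nil },
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: runtime <command> [args...]")
		os.Exit(1)
	}
	cmd, ok := commands[os.Args[1]]
	if !ok {
		fmt.Fprintf(os.Stderr, "unknown command %q\n", os.Args[1])
		os.Exit(1)
	}
	if err := cmd(os.Args[2:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```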
[
{
"data": "Support backup and full restore for volumes with v2 data engine. Leverage the existing backup and restore frameworks Back up volumes with v2 data engine Fully restore volumes with v2 data engine Support fully restoring v2 volume backup to v1/v2 volume Incremental restore volumes with v2 data engine Backup and restore are essential for data recovery. Longhorn system can back up volumes with v1 data engine. However, it does not support snapshot backup for v2 data engine yet. For better user experience, backup and restore operations should be consistent across both data engines. Additionally, a v2 volume can support restoring to an either v1 or v2 volume. Backup and restore operations should be consistent across both data engines. engine-proxy Add `BackupRestoreFinish` for wrapping up the volume restore spdk Add `bdevlvolget_fragmap` API Get a fragmap for a specific segment of a logical volume (lvol) using the provided offset and size. A fragmap is a bitmap that records the allocation status of clusters. A value of \"1\" indicates that a cluster is allocated, whereas \"0\" signifies that a cluster is unallocated. Add `creation_time` xattr in lvols The `creation_time` is represented as a snapshot creation time. Add `bdevlvolgetxattr` and `bdevlvolsetxattr` spdk-go-helper Add `BdevLvolGetFragmap` for `bdevlvolget_fragmap` Add `BdevGetXattr` for `bdevlvolget_xattr` Add `BdevSetXattr` for `bdevlvolset_xattr` In `UI > Backup > Volume Operations > Restore` page, add `Data Engine` option. Longhorn can incrementally back up volumes that use the v1 data engine. Incremental backups are based on fiemap, which are file extent mappings of the disk files. This means that only the changes to the volume since the last backup are backed up, which can save time and space. However, snapshots of volumes that use the v2 data engine are SPDK lvols, which do not support fiemap. Instead, we can use the concept of fiemap to leverage the fragmentation map (fragmap) instead. A fragmap is a mapping of data and hole segments in a volume. This means that we can use the fragmap to identify the changed segments of a v2 volume and back them up, even though fiemap is not supported. Here are some additional details about fiemap and fragmaps: fiemap is a Linux kernel API that allows applications to query the file system for the extents of a file. This can be used to efficiently back up files, as only the changed extents need to be backed up. fragmap is a data structure that stores the fragmentation information for a volume. This information includes the size and location of each data segment in the volume, as well as the size and location of any holes in the"
},
{
"data": "spdk_tgt Add `bdevlvolget_fragmap` JSON-RPC method: retrieve the fragmap for a specific portion of the specified lvol bdev Parameters lvs_name: The name of the lvstore name lvol_name: The name of the lvol. offset: offset the a specific portion of the lvol. Unit: bytes. size: size the a specific portion of the lvol. Unit: bytes. The fragmap is internally constructed by incorporating with `spdkbdevseekhole` and `spdkbdevseekdata` The size of each data segment or hole segment is the same as the cluster size of a lvstore An example of a fragmap of a lvol Backup Flow The backup flow is explained by the an example. A snapshot chain of a volume is ``` volume-head -> snap003 -> snap002 -> snap001 (backed up) ``` `snap001` is already backed up, and we are going to back up `snap003`. The steps are longhorn-manager's backup controller is aware of the backup resource for the snapshot `snapshot3` and then issues a backup request to the instance-manager by the proxy function `SnapshotBackup()` with the specified volume and snapshot names. After the instance-manager proxy service receives the request, the proxy function transfers the request to the SPDK service by `EngineBackupCreate()`. The SPDK service randomly picks one of the replicas and executes the backup by `ReplicaBackupCreate()`. Expose snapshot (`snap003`) to be a block device locally via NVMe over TCP. The data in the block device is the accumulated data of `snap003` and its descendants. Get the fragmaps of the snapshot to be backed up and its descendants prior to the last backup. In the example, we need to get the fragmaps of `snap003` and `snap002`. Retrieve the fragmap of each lvol using `bdevlvolget_fragmap` JSON-RPC method. Overlay all the fragmaps, iterate each bit, and if it's set to 1, read data from the corresponding cluster on the block device and perform a backup. Full Restore Flow The engine controller within Longhorn Manager is informed of a volume restore request. Subsequently, it initiates a restore operation to the instance manager using the BackupRestore() proxy function. This operation includes essential details such as the volume name, backup name, credentials, and more. Once the instance manager's proxy service receives the request, the `EngineRestoreCreate()` proxy function is employed to transfer the request to the SPDK service. Within `EngineRestoreCreate()`, the raid bdev is removed, followed by issuing a replica restore command to the associated replicas through the `ReplicaRestoreCreate()` function. In the `ReplicaRestoreCreate()`, the lvol of the replica is locally exposed as a block device using NVMe over TCP. The subsequent steps involve downloading data blocks and writing them to the block device according to the block mappings. Upon the successful completion of the restoration process, Longhorn Manager sends a request to finalize the restoration to the instance manager via the `EngineRestoreFinish()` function. This function facilitates the recreation of the raid bdev. Concurrent volume backup and full restore"
}
] | {
"category": "Runtime",
"file_name": "20230809-support-backup-and-restore-for-volumes-with-v2-data-engine.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
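To make the fragmap mechanics above concrete, here is a purely illustrative Go sketch with hypothetical helper functions and no SPDK or NVMe-oF calls: the per-snapshot fragmaps are OR-ed together, and only clusters whose bit is set are read from the exposed block device and shipped to the backupstore.

```go
package backupsketch

import "fmt"

// A fragmap is a bitmap over clusters: bit i set to 1 means cluster i of
// the exposed snapshot block device is allocated and must be backed up.
type fragmap []byte

// overlay ORs several fragmaps together (assumed equal length, since each
// covers the same lvol), so a cluster is included if any snapshot since
// the last backup touched it.
func overlay(maps ...fragmap) fragmap {
	out := make(fragmap, len(maps[0]))
	for _, m := range maps {
		for i := 0; i < len(out) && i < len(m); i++ {
			out[i] |= m[i]
		}
	}
	return out
}

// backupClusters walks the combined fragmap and, for every set bit, reads
// one cluster from the block device and uploads it. readCluster and
// uploadBlock are stand-ins, not real Longhorn functions.
func backupClusters(fm fragmap, clusterSize int64,
	readCluster func(offset, size int64) ([]byte, error),
	uploadBlock func(offset int64, data []byte) error) error {
	for byteIdx, b := range fm {
		for bit := uint(0); bit < 8; bit++ {
			if b&(1<<bit) == 0 {
				continue
			}
			offset := (int64(byteIdx)*8 + int64(bit)) * clusterSize
			data, err := readCluster(offset, clusterSize)
			if err != nil {
				return fmt.Errorf("read cluster at offset %d: %w", offset, err)
			}
			if err := uploadBlock(offset, data); err != nil {
				return err
			}
		}
	}
	return nil
}
```

For the example chain above, the inputs to `overlay` would be the fragmaps of `snap003` and `snap002`, yielding exactly the clusters changed since the backup of `snap001`.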
[
{
"data": "(network-external)= <!-- Include start external intro --> External networks use network interfaces that already exist. Therefore, Incus has limited possibility to control them, and Incus features like network ACLs, network forwards and network zones are not supported. The main purpose for using external networks is to provide an uplink network through a parent interface. This external network specifies the presets to use when connecting instances or other networks to a parent interface. Incus supports the following external network types: <!-- Include end external intro --> ```{toctree} :maxdepth: 1 /reference/network_macvlan /reference/network_sriov /reference/network_physical ```"
}
] | {
"category": "Runtime",
"file_name": "network_external.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] | {
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: \"Restore Hooks\" layout: docs Velero supports Restore Hooks, custom actions that can be executed during or after the restore process. There are two kinds of Restore Hooks: InitContainer Restore Hooks: These will add init containers into restored pods to perform any necessary setup before the application containers of the restored pod can start. Exec Restore Hooks: These can be used to execute custom commands or scripts in containers of a restored Kubernetes pod. Use an `InitContainer` hook to add init containers into a pod before it's restored. You can use these init containers to run any setup needed for the pod to resume running from its backed-up state. The InitContainer added by the restore hook will be the first init container in the `podSpec` of the restored pod. In the case where the pod had volumes backed up using restic, then, the restore hook InitContainer will be added after the `restic-wait` InitContainer. NOTE: This ordering can be altered by any mutating webhooks that may be installed in the cluster. There are two ways to specify `InitContainer` restore hooks: Specifying restore hooks in annotations Specifying restore hooks in the restore spec Below are the annotations that can be added to a pod to specify restore hooks: `init.hook.restore.velero.io/container-image` The container image for the init container to be added. `init.hook.restore.velero.io/container-name` The name for the init container that is being added. `init.hook.restore.velero.io/command` This is the `ENTRYPOINT` for the init container being added. This command is not executed within a shell and the container image's `ENTRYPOINT` is used if this is not provided. Use the below commands to add annotations to the pods before taking a backup. ```bash $ kubectl annotate pod -n <PODNAMESPACE> <PODNAME> \\ init.hook.restore.velero.io/container-name=restore-hook \\ init.hook.restore.velero.io/container-image=alpine:latest \\ init.hook.restore.velero.io/command='[\"/bin/ash\", \"-c\", \"date\"]' ``` With the annotation above, Velero will add the following init container to the pod when it's restored. ```json { \"command\": [ \"/bin/ash\", \"-c\", \"date\" ], \"image\": \"alpine:latest\", \"imagePullPolicy\": \"Always\", \"name\": \"restore-hook\" ... } ``` Init container restore hooks can also be specified using the `RestoreSpec`. Please refer to the documentation on the for how to specify hooks in the Restore spec. Below is an example of specifying restore hooks in `RestoreSpec` ```yaml apiVersion: velero.io/v1 kind: Restore metadata: name: r2 namespace: velero spec: backupName: b2 excludedResources: ... 
includedNamespaces: '*' hooks: resources: name: restore-hook-1 includedNamespaces: app postHooks: init: initContainers: name: restore-hook-init1 image: alpine:latest volumeMounts: mountPath: /restores/pvc1-vm name: pvc1-vm command: /bin/ash -c echo -n \"FOOBARBAZ\" >> /restores/pvc1-vm/foobarbaz name: restore-hook-init2 image: alpine:latest volumeMounts: mountPath: /restores/pvc2-vm name: pvc2-vm command: /bin/ash -c echo -n \"DEADFEED\" >> /restores/pvc2-vm/deadfeed ``` The `hooks` in the above `RestoreSpec`, when restored, will add two init containers to every pod in the `app` namespace ```json { \"command\": [ \"/bin/ash\", \"-c\", \"echo -n \\\"FOOBARBAZ\\\" >> /restores/pvc1-vm/foobarbaz\" ], \"image\": \"alpine:latest\", \"imagePullPolicy\": \"Always\", \"name\": \"restore-hook-init1\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [ { \"mountPath\": \"/restores/pvc1-vm\", \"name\": \"pvc1-vm\" } ] ... } ``` and ```json { \"command\": [ \"/bin/ash\", \"-c\", \"echo -n \\\"DEADFEED\\\" >> /restores/pvc2-vm/deadfeed\" ], \"image\": \"alpine:latest\", \"imagePullPolicy\": \"Always\", \"name\": \"restore-hook-init2\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [ { \"mountPath\": \"/restores/pvc2-vm\", \"name\": \"pvc2-vm\" } ] ... } ``` Use an Exec Restore hook to execute commands in a restored pod's containers after they"
},
{
"data": "There are two ways to specify `Exec` restore hooks: Specifying exec restore hooks in annotations Specifying exec restore hooks in the restore spec If a pod has the annotation `post.hook.restore.velero.io/command` then that is the only hook that will be executed in the pod. No hooks from the restore spec will be executed in that pod. Below are the annotations that can be added to a pod to specify exec restore hooks: `post.hook.restore.velero.io/container` The container name where the hook will be executed. Defaults to the first container. Optional. `post.hook.restore.velero.io/command` The command that will be executed in the container. Required. `post.hook.restore.velero.io/on-error` How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode, no more restore hooks will be executed in any container in any pod and the status of the Restore will be `PartiallyFailed`. Optional. `post.hook.restore.velero.io/exec-timeout` How long to wait once execution begins. Defaults to 30 seconds. Optional. `post.hook.restore.velero.io/wait-timeout` How long to wait for a container to become ready. This should be long enough for the container to start plus any preceding hooks in the same container to complete. The wait timeout begins when the container is restored and may require time for the image to pull and volumes to mount. If not set the restore will wait indefinitely. Optional. Use the below commands to add annotations to the pods before taking a backup. ```bash $ kubectl annotate pod -n <PODNAMESPACE> <PODNAME> \\ post.hook.restore.velero.io/container=postgres \\ post.hook.restore.velero.io/command='[\"/bin/bash\", \"-c\", \"psql < /backup/backup.sql\"]' \\ post.hook.restore.velero.io/wait-timeout=5m \\ post.hook.restore.velero.io/exec-timeout=45s \\ post.hook.restore.velero.io/on-error=Continue ``` Exec restore hooks can also be specified using the `RestoreSpec`. Please refer to the documentation on the for how to specify hooks in the Restore spec. Below is an example of specifying restore hooks in a `RestoreSpec`. When using the restore spec it is possible to specify multiple hooks for a single pod, as this example demonstrates. All hooks applicable to a single container will be executed sequentially in that container once it starts. The ordering of hooks executed in a single container follows the order of the restore spec. In this example, the `pgisready` hook is guaranteed to run before the `psql` hook because they both apply to the same container and the `pgisready` hook is defined first. If a pod has multiple containers with applicable hooks, all hooks for a single container will be executed before executing hooks in another container. In this example, if the postgres container starts before the sidecar container, both postgres hooks will run before the hook in the sidecar. This means the sidecar container may be running for several minutes before its hook is executed. Velero guarantees that no two hooks for a single pod are executed in parallel, but hooks executing in different pods may run in parallel. ```yaml apiVersion: velero.io/v1 kind: Restore metadata: name: r2 namespace: velero spec: backupName: b2 excludedResources: ... includedNamespaces: '*' hooks: resources: name: restore-hook-1 includedNamespaces: app postHooks: exec: execTimeout: 1m waitTimeout: 5m onError: Fail container: postgres command: /bin/bash '-c' 'while ! 
pg_isready; do sleep 1; done' exec: container: postgres waitTimeout: 6m execTimeout: 1m command: /bin/bash '-c' 'psql < /backup/backup.sql' exec: container: sidecar command: /bin/bash '-c' 'date > /start' ```"
}
] | {
"category": "Runtime",
"file_name": "restore-hooks.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
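As an illustration of how the annotation-based init-container hook above maps onto a pod spec, here is a hedged Go sketch. The helper name and the defaulting of the container name are assumptions, not Velero's implementation; the annotation keys and the JSON-array command format come from the examples above.

```go
package hooksketch

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// initContainerFromAnnotations maps the init.hook.restore.velero.io/*
// annotations onto a container that would be prepended to the restored
// pod's init containers (after any restic-wait init container).
func initContainerFromAnnotations(ann map[string]string) (*corev1.Container, error) {
	image := ann["init.hook.restore.velero.io/container-image"]
	if image == "" {
		return nil, fmt.Errorf("no init restore hook image annotation")
	}
	name := ann["init.hook.restore.velero.io/container-name"]
	if name == "" {
		// Defaulting the name is an assumption made for this sketch.
		name = "restore-hook"
	}
	var command []string
	if raw := ann["init.hook.restore.velero.io/command"]; raw != "" {
		// The command annotation is a JSON array, e.g. ["/bin/ash", "-c", "date"].
		if err := json.Unmarshal([]byte(raw), &command); err != nil {
			return nil, fmt.Errorf("parse command annotation: %w", err)
		}
	}
	return &corev1.Container{
		Name:    name,
		Image:   image,
		Command: command,
	}, nil
}
```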
[
{
"data": "% runc \"8\" runc - Open Container Initiative runtime runc [global-option ...] command [command-option ...] [argument ...] runc is a command line client for running applications packaged according to the Open Container Initiative (OCI) format and is a compliant implementation of the Open Container Initiative specification. runc integrates well with existing process supervisors to provide a production container runtime environment for applications. It can be used with your existing process monitoring tools and the container will be spawned as a direct child of the process supervisor. Containers are configured using bundles. A bundle for a container is a directory that includes a specification file named config.json and a root filesystem. The root filesystem contains the contents of the container. To run a new instance of a container: Where container-id is your name for the instance of the container that you are starting. The name you provide for the container instance must be unique on your host. Providing the bundle directory using -b is optional. The default value for bundle is the current directory. checkpoint : Checkpoint a running container. See runc-checkpoint(8). create : Create a container. See runc-create(8). delete : Delete any resources held by the container; often used with detached containers. See runc-delete(8). events : Display container events, such as OOM notifications, CPU, memory, I/O and network statistics. See runc-events(8). exec : Execute a new process inside the container. See runc-exec(8). kill : Send a specified signal to the container's init process. See runc-kill(8). list : List containers started by runc with the given --root. See runc-list(8). pause : Suspend all processes inside the container. See runc-pause(8). ps : Show processes running inside the container. See runc-ps(8). restore : Restore a container from a previous checkpoint. See runc-restore(8). resume : Resume all processes that have been previously paused. See runc-resume(8). run : Create and start a container. See runc-run(8). spec : Create a new specification file (config.json). See runc-spec(8). start : Start a container previously created by runc create. See runc-start(8). state : Show the container state. See runc-state(8). update : Update container resource constraints. See runc-update(8). help, h : Show a list of commands or help for a particular command. These options can be used with any command, and must precede the command. --debug : Enable debug logging. --log path : Set the log destination to path. The default is to log to stderr. --log-format text|json : Set the log format (default is text). --root path : Set the root directory to store containers' state. The path should be located on tmpfs. Default is /run/runc, or $XDG_RUNTIME_DIR/runc for rootless containers. --systemd-cgroup : Enable systemd cgroup support. If this is set, the container spec (config.json) is expected to have cgroupsPath value in the slice:prefix:name form (e.g. system.slice:runc:434234). --rootless true|false|auto : Enable or disable rootless mode. Default is auto, meaning to auto-detect whether rootless should be enabled. --help|-h : Show help. --version|-v : Show version. runc-checkpoint(8), runc-create(8), runc-delete(8), runc-events(8), runc-exec(8), runc-kill(8), runc-list(8), runc-pause(8), runc-ps(8), runc-restore(8), runc-resume(8), runc-run(8), runc-spec(8), runc-start(8), runc-state(8), runc-update(8)."
}
] | {
"category": "Runtime",
"file_name": "runc.8.md",
"project_name": "runc",
"subcategory": "Container Runtime"
} |
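For a quick idea of how a process supervisor might drive the CLI described above, here is a hedged Go sketch using `os/exec`. The bundle path, container ID, and helper names are placeholders; only commands and flags named in the man page (`create`, `start`, `state`, `-b`, `--root`) are used.

```go
package runcsketch

import (
	"fmt"
	"os/exec"
)

// run invokes the runc binary with a private state root, as a process
// supervisor might.
func run(root string, args ...string) ([]byte, error) {
	cmd := exec.Command("runc", append([]string{"--root", root}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("runc %v: %w: %s", args, err, out)
	}
	return out, nil
}

// lifecycle creates a container from a bundle, starts it, then queries
// its state.
func lifecycle(root, bundle, id string) error {
	if _, err := run(root, "create", "-b", bundle, id); err != nil {
		return err
	}
	if _, err := run(root, "start", id); err != nil {
		return err
	}
	state, err := run(root, "state", id)
	if err != nil {
		return err
	}
	fmt.Printf("state: %s\n", state)
	return nil
}
```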
[
{
"data": "This proposal outlines an approach to support versioning of Velero's plugin APIs to enable changes to those APIs. It will allow for backwards compatible changes to be made, such as the addition of new plugin methods, but also backwards incompatible changes such as method removal or method signature changes. When changes are made to Veleros plugin APIs, there is no mechanism for the Velero server to communicate the version of the API that is supported, or for plugins to communicate what version they implement. This means that any modification to a plugin API is a backwards incompatible change as it requires all plugins which implement the API to update and implement the new method. There are several components involved to use plugins within Velero. From the perspective of the core Velero codebase, all plugin kinds (e.g. `ObjectStore`, `BackupItemAction`) are defined by a single API interface and all interactions with plugins are managed by a plugin manager which provides an implementation of the plugin API interface for Velero to use. Velero communicates with plugins via gRPC. The core Velero project provides a framework (using the ) for plugin authors to use to implement their plugins which manages the creation of gRPC servers and clients. Velero plugins import the Velero plugin library in order to use this framework. When a change is made to a plugin API, it needs to be made to the Go interface used by the Velero codebase, and also to the rpc service definition which is compiled to form part of the framework. As each plugin kind is defined by a single interface, when a plugin imports the latest version of the Velero framework, it will need to implement the new APIs in order to build and run successfully. If a plugin does not use the latest version of the framework, and is used with a newer version of Velero that expects the plugin to implement those methods, this will result in a runtime error as the plugin is incompatible. With this proposal, we aim to break this coupling and introduce plugin API versions. The following describes interactions between Velero and its plugins that will be supported with the implementation of this proposal. For the purposes of this list, we will refer to existing Velero and plugin versions as `v1` and all following versions as version"
},
{
"data": "Velero client communicating with plugins or plugin client calling other plugins: Version `n` client will be able to communicate with Version `n` plugin Version `n` client will be able to communicate with all previous versions of the plugin (Version `n-1` back to `v1`) Velero plugins importing Velero framework: `v1` plugin built against Version `n` Velero framework A plugin may choose to only implement a `v1` API, but it must be able to be built using Version `n` of the Velero framework Allow plugin APIs to change without requiring all plugins to implement the latest changes (even if they upgrade the version of Velero that is imported) Allow plugins to choose which plugin versions they support and enable them to support multiple versions Support breaking changes in the plugin APIs such as method removal or method signature changes Establish a design process for modifying plugin APIs such as method addition and removal and signature changes Establish a process for newer Velero clients to use older versions of a plugin API through adaptation Change how plugins are managed or added Allow older plugin clients to communicate with new versions of plugins With each change to a plugin API, a new version of the plugin interface and the proto service definition will be created which describes the new plugin API. The plugin framework will be adapted to allow these new plugin versions to be registered. Plugins can opt to implement any or all versions of an API, however Velero will always attempt to use the latest version, and the plugin management will be modified to adapt earlier versions of a plugin to be compatible with the latest API where possible. Under the existing plugin framework, any new plugin version will be treated as a new plugin with a new kind. The plugin manager (which provides implementations of a plugin to Velero) will include an adapter layer which will manage the different versions and provide the adaptation for versions which do not implement the latest version of the plugin API. Providing an adaptation layer enables Velero and other plugin clients to use an older version of a plugin if it can be safely adapted. As the plugins will be able to introduce backwards incompatible changes, it will not be possible for older version of Velero to use plugins which only support the latest versions of the plugin APIs. Although adding new rpc methods to a service is considered a backwards compatible change within gRPC, due to the way the proto definitions are compiled and included in the framework used by plugins, this will require every plugin to implement the new methods. Instead, we are opting to treat the addition of a method to an API as one requiring versioning. The addition of optional fields to existing structs which are used as parameters to or return values of API methods will not be considered as a change requiring versioning. These kinds of changes do not modify method signatures and have been safely made in the past with no impact on existing plugins. The following areas will need to be adapted to support plugin versioning. To provide versioned plugins, any change to a plugin interface (method addition, removal, or signature change) will require a new versioned interface to be created. Currently, all plugin interface definitions reside in `pkg/plugin/velero` in a file corresponding to their plugin kind. These files will be rearranged to be grouped by kind and then versioned:"
},
{
"data": "The following are examples of how each change may be treated: If the entire `ObjectStore` interface is being changed such that no previous methods are being included, a file would be added to `pkg/plugin/velero/objectstore/v2/` and would contain the new interface definition: ``` type ObjectStore interface { // Only include new methods that the new API version will support NewMethod() // ... } ``` If a method is being added to the `ObjectStore` API, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows: ``` import \"github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1\" type ObjectStore interface { // Import all the methods from the previous version of the API if they are to be included as is v1.ObjectStore // Provide definitions of any new methods NewMethod() ``` If a method is being removed from the `ObjectStore` API, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows: ``` type ObjectStore interface { // Methods which are required from the previous API version must be included, for example Init(config) PutObject(bucket, key, body) // ... // Methods which are to be removed are not included ``` If a method signature in the `ObjectStore` API is being modified, a file would be added to `pkg/plugin/velero/objectstore/v2/` and may contain a new API definition as follows: ``` type ObjectStore interface { // Methods which are required from the previous API version must be included, for example Init(config) PutObject(bucket, key, body) // ... // Provide new definitions for methods which are being modified List(bucket, prefix, newParameter) } ``` The proto service definitions of the plugins will also be versioned and arranged by their plugin kind. Currently, all the proto definitions reside under `pkg/plugin/proto` in a file corresponding to their plugin kind. These files will be rearranged to be grouped by kind and then versioned: `pkg/plugin/proto/<plugin_kind>/<version>`, except for the current v1 plugins. Those will remain in their current package/location for backwards compatibility. This will allow plugin images built with earlier versions of velero to work with the latest velero (for v1 plugins only). The go_package option will be added to all proto service definitions to allow the proto compilation script to place the generated go code for each plugin api version in the proper go package directory. It is not possible to import an existing proto service into a new one, so any methods will need to be duplicated across versions if they are required by the new version. The message definitions can be shared however, so these could be extracted from the service definition files and placed in a file that can be shared across all versions of the service. To allow plugins to register which versions of the API they implement, the plugin framework will need to be adapted to accept new versions. Currently, the plugin manager stores a , where the string key is the binary name for the plugin process (e.g. \"velero-plugin-for-aws\"). Each `RestartableProcess` contains a which represents each of the unique plugin implementations provided by that binary. is a struct which combines the plugin kind (`ObjectStore`, `VolumeSnapshotter`) and the plugin name (\"velero.io/aws\", \"velero.io/azure\"). Each plugin version registration must be unique (to allow for multiple versions to be implemented within the same plugin"
},
{
"data": "This will be achieved by adding a specific registration method for each version to the Server interface in the plugin framework. For example, if adding a V2 `RestoreItemAction` plugin, the Server interface would be modified to add the `RegisterRestoreItemActionV2` method. This would require to represent the new plugin version, e.g. `PluginKindRestoreItemActionV2`. It also requires the creation of a new implementation of the go-plugin interface () to support that version and use the generated gRPC code from the proto definition (including a client and server implementation). The Server will also need to be adapted to recognize this new plugin Kind and to serve the new implementation. Existing plugin Kind consts and registration methods will be left unchanged and will correspond to the current version of the plugin APIs (assumed to be v1). The plugin manager is responsible for managing the lifecycle of plugins. It provides an interface which is used by Velero to retrieve an instance of a plugin kind with a specific name (e.g. `ObjectStore` with the name \"velero.io/aws\"). The manager contains a registry of all available plugins which is populated during the main Velero server startup. When the plugin manager is requested to provide a particular plugin, it checks the registry for that plugin kind and name. If it is available in the registry, the manager retrieves a `RestartableProcess` for the plugin binary, creating it if it does not already exist. That `RestartableProcess` is then used by individual restartable implementations of a plugin kind (e.g. `restartableObjectStore`, `restartableVolumeSnapshotter`). As new plugin versions are added, the plugin manager will be modified to always retrieve the latest version of a plugin kind. This is to allow the remainder of the Velero codebase to assume that it will always interact with the latest version of a plugin. If the latest version of a plugin is not available, it will attempt to fall back to previous versions and use an implementation adapted to the latest version if available. It will be up to the author of new plugin versions to determine whether a previous version of a plugin can be adapted to work with the interface of the new version. For each plugin kind, a new `Restartable<PluginKind>` struct will be introduced which will contain the plugin Kind and a function, `Get`, which will instantiate a restartable instance of that plugin kind and perform any adaptation required to make it compatible with the latest version. For example, `RestartableObjectStore` or `RestartableVolumeSnapshotter`. For each restartable plugin kind, a new function will be introduced which will return a slice of `Restartable<PluginKind>` objects, sorted by version in descending order. The manager will iterate through the list of `Restartable<PluginKind>`s and will check the registry for the given plugin kind and name. If the requested version is not found, it will skip and continue to iterate, attempting to fetch previous versions of the plugin kind. Once the requested version is found, the `Get` function will be called, returning the restartable implementation of the latest version of that plugin Kind. ``` type RestartableObjectStore struct { kind"
},
{
"data": "// Get returns a restartable ObjectStore for the given name and process, wrapping if necessary Get func(name string, restartableProcess RestartableProcess) v2.ObjectStore } func (m *manager) restartableObjectStores() []RestartableObjectStore { return []RestartableObjectStore{ { kind: framework.PluginKindObjectStoreV2, Get: newRestartableObjectStoreV2, }, { kind: framework.PluginKindObjectStore, Get: func(name string, restartableProcess RestartableProcess) v2.ObjectStore { // Adapt the existing restartable v1 plugin to be compatible with the v2 interface return newAdaptedV1ObjectStore(newRestartableObjectStore(name, restartableProcess)) }, }, } } // GetObjectStore returns a restartableObjectStore for name. func (m *manager) GetObjectStore(name string) (v2.ObjectStore, error) { name = sanitizeName(name) for _, restartableObjStore := range m.restartableObjectStores() { restartableProcess, err := m.getRestartableProcess(restartableObjStore.kind, name) if err != nil { // Check if plugin was not found if errors.Is(err, &pluginNotFoundError{}) { continue } return nil, err } return restartableObjStore.Get(name, restartableProcess), nil } return nil, fmt.Errorf(\"unable to get valid ObjectStore for %q\", name) } ``` If the previous version is not available, or can not be adapted to the latest version, it should not be included in the `restartableObjectStores` slice. This will result in an error being returned as is currently the case when a plugin implementation for a particular kind and provider can not be found. There are situations where it may be beneficial to check at the point where a plugin API call is made whether it implements a specific version of the API. This is something that can be addressed with future amendments to this design, however it does not seem to be necessary at this time. When a new plugin API version is being proposed, it will be up to the author and the maintainer team to determine whether older versions of an API can be safely adapted to the latest version. An adaptation will implement the latest version of the plugin API interface but will use the methods from the version that is being adapted. In cases where the methods signatures remain the same, the adaptation layer will call through to the same method in the version being adapted. Examples where an adaptation may be safe: A method signature is being changed to add a new parameter but the parameter could be optional (for example, adding a context parameter). The adaptation could call through to the method provided in the previous version but omit the parameter. A method signature is being changed to remove a parameter, but it is safe to pass a default value to the previous version. The adaptation could call through to the method provided in the previous version but use a default value for the parameter. A new method is being added but does not impact any existing behaviour of Velero (for example, a new method which will allow Velero to ). The adaptation would return a value which allows the existing behaviour to be performed. A method is being deleted as it is no longer used. The adaptation would call through to any methods which are still included but would omit the deleted method in the adaptation. Examples where an adaptation may not be safe: A new method is added which is used to provide new critical functionality in"
},
{
"data": "If this functionality can not be replicated using existing plugin methods in previous API versions, this should not be adapted and instead the plugin manager should return an error indicating that the plugin implementation can not be found. As new versions of plugins are added, new restartable implementations of plugins will also need to be created. These are currently located within \"pkg/plugin/clientmgmt\" but will be rearranged to be grouped by kind and version like other plugin files. It should be noted that if changes are being made to a plugin's API, it will only be necessary to bump the API version once within a release cycle, regardless of how many changes are made within that cycle. This is because the changes will only be available to consumers when they upgrade to the next minor version of the Velero library. New plugin API versions will not be introduced or backported to patch releases. Once a new minor or major version of Velero has been released however, any further changes will need to follow the process above and use a new API version. One approach for adapting the plugin APIs would have been to rely on the fact that adding methods to gRPC services is a backwards compatible change. This approach would allow older clients to communicate with newer plugins as the existing interface would still be provided. This was considered but ruled out as our current framework would require any plugin that recompiles using the latest version of the framework to adapt to the new version. Also, without specific versioned interfaces, it would require checking plugin implementations at runtime for the specific methods that are supported. This design doc aims to allow plugin API changes to be made in a manner that may provide some backwards compatibility. Older versions of Velero will not be able to make use of new plugin versions however may continue to use previous versions of a plugin API if supported by the plugin. All compatibility concerns are addressed earlier in the document. This design document primarily outlines an approach to allow future plugin API changes to be made. However, there are changes to the existing code base that will be made to allow plugin authors to more easily propose and introduce changes to these APIs. Plugin interface definitions (currently in `pkg/plugin/velero`) will be rearranged to be grouped by kind and then versioned: `pkg/plugin/velero/<plugin_kind>/<version>/`. Proto definitions (currently in `pkg/plugin/proto`) will be rearranged to be grouped by kind and then versioned: `pkg/plugin/proto/<plugin_kind>/<version>`. This will also require changes to the `make update` build task to correctly find the new proto location and output to the versioned directories. It is anticipated that changes to the plugin APIs will be made as part of the 1.9 release cycle. To assist with this work, an additional follow-up task to the ones listed above would be to prepare a V2 version of each of the existing plugins. These new versions will not yet provide any new API methods but will provide a layout for new additions to be made"
}
] | {
"category": "Runtime",
"file_name": "plugin-versioning.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
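The design above leans on adapters so the plugin manager can always hand callers the newest interface. As a concrete but invented example (these are not the real Velero plugin interfaces), a v1-style object store could be wrapped to satisfy a hypothetical v2 interface that adds a context parameter and one new method — the "safe adaptation" cases discussed above.

```go
package adaptsketch

import "context"

// Hypothetical v1 interface: no context parameter.
type ObjectStoreV1 interface {
	Init(config map[string]string) error
	PutObject(bucket, key string, body []byte) error
}

// Hypothetical v2 interface: adds a context parameter and a new method.
type ObjectStoreV2 interface {
	Init(ctx context.Context, config map[string]string) error
	PutObject(ctx context.Context, bucket, key string, body []byte) error
	// NewMethod is additive; an adapted v1 store can return a default
	// that preserves existing behaviour.
	NewMethod(ctx context.Context) (bool, error)
}

// adaptedV1ObjectStore lets the manager expose the latest API even when
// only a v1 plugin is registered, mirroring the fallback in
// restartableObjectStores().
type adaptedV1ObjectStore struct {
	v1 ObjectStoreV1
}

func newAdaptedV1ObjectStore(v1 ObjectStoreV1) ObjectStoreV2 {
	return &adaptedV1ObjectStore{v1: v1}
}

// The context is dropped because v1 cannot honour it — an example of a
// change where the new parameter is effectively optional.
func (a *adaptedV1ObjectStore) Init(_ context.Context, config map[string]string) error {
	return a.v1.Init(config)
}

func (a *adaptedV1ObjectStore) PutObject(_ context.Context, bucket, key string, body []byte) error {
	return a.v1.PutObject(bucket, key, body)
}

// NewMethod returns a default that keeps pre-v2 behaviour unchanged.
func (a *adaptedV1ObjectStore) NewMethod(_ context.Context) (bool, error) {
	return false, nil
}
```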
[
{
"data": "| Author | haozi007 | | | - | | Date | 2022-04-15 | | Email | liuhao27@huawei.com | In the `iSulad` architecture, the exit status of the container process is obtained by the parent process (different runtimes have different implementations, taking shim v1 as an example is `isulad-shim`), while the parent process of the container process and the `iSulad` process There is no parent-child relationship, resulting in imperceptible state changes. Therefore, a mechanism is needed to monitor changes in the state of the container process. The `supervisor` module is designed to solve this problem. The exit phase of the container life cycle requires cleanup of container resources (for example, cgroup directories, mount points, temporary files, etc.). When the container process is abnormal (for example, the D state reaches only cgroup resources that cannot be cleaned up, etc.), a guarantee mechanism is required. The `gc` module can provide a guarantee. When the container exits abnormally, you can continue to clean up resources; and after `iSulad` exits, the container in the `gc` state can be re-managed after restarting. ````mermaid sequenceDiagram participant container_module participant supervisor participant gc containermodule -->> gc: newgchandler gc -->> gc: init gc list gc -->> gc: restore gc containers containermodule -->> supervisor: newsupervisor supervisor -->> supervisor: start supervisor thread loop epoll supervisor --> supervisor: wait new events end containermodule -->> containermodule: containers_restore containermodule -->> gc: startgchandler loop do_gc gc --> gc: dogccontainer end par supervisor new container container_module -->> supervisor: add new container and container into gc supervisor -->> gc: add container to gc end ```` Used to describe the process of changing the state of the container in multiple components. ````mermaid stateDiagram state if_state <<choice>> [*] --> Started Started --> Stopping Stopping --> if_state if_state --> GC: if something is wrong GC --> GC: gc failed GC --> Stopped: gc success if_state --> Stopped: all things are ok ```` ````c // supervisor module initialization int new_supervisor(); // Constructor that senses the full path of the fifo that the container exits char exit_fifo_name(const char contstatepath); // Add the container to the supervisor module for exit status monitoring int containersupervisoraddexitmonitor(int fd, const pidppidinfot *pidinfo, const char name, const char runtime); // Open the fifo file that senses container exit int containerexitfifoopen(const char *contexit_fifo); // Create a fifo file that senses container exit char container_exit_fifo_create(const char contstatepath); ```` ````c // gc module initialization int new_gchandler(); // Add the container to the gc module for resource cleanup int gcaddcontainer(const char id, const char runtime, const pidppidinfot *pidinfo); // When isulad restarts, reload the container state that has entered gc before int gc_restore(); // start the gc processing thread int start_gchandler(); // Determine if the container is in gc state bool gcisgc_progress(const char *id); ````"
}
] | {
"category": "Runtime",
"file_name": "gc_and_supervisor_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
} |
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Output shell completion code ``` cilium-dbg completion [shell] [flags] ``` ``` source <(cilium-dbg completion bash) cilium-dbg completion bash > ~/.cilium/completion.bash.inc printf \" source '$HOME/.cilium/completion.bash.inc' \" >> $HOME/.bash_profile source $HOME/.bash_profile source <(cilium-dbg completion zsh) cilium-dbg completion zsh > ~/.cilium/completion.zsh.inc printf \" source '$HOME/.cilium/completion.zsh.inc' \" >> $HOME/.zshrc source $HOME/.zshrc cilium-dbg completion fish > ~/.config/fish/completions/cilium.fish ``` ``` -h, --help help for completion ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI"
}
] | {
"category": "Runtime",
"file_name": "cilium-dbg_completion.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "<details><summary>Commit messages</summary> ``` %{all_commits} ``` </details> Do all methods have proper Javadoc*? Are (flux-) exceptions* properly documented? Are special return values* properly documented? (i.e. -1) Are parameters* properly explained? Does the Javadoc (still) properly describe the functionality*? Are all parameters annotated with `@javax.annotation.{Nullable,Nonnull}`*? Do tests (unit/E2E)* exist (if needed)? Does this MR require updates in UG*? Does this MR require updates in linstor-client*? Does this MR require updates in linstor-api-py*? CHANGELOG.md*? If `restv1openapi.yaml` was modified: Was the changelog section* of the file also updated? Version-bump* (if first change of new release)? Are new JSON structures (response AND requests) extendable*? (i.e. allows for new fields for future features)"
}
] | {
"category": "Runtime",
"file_name": "Default.md",
"project_name": "LINSTOR",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Longhorn can set a backing image of a Longhorn volume, which is designed for VM usage. https://github.com/longhorn/longhorn/issues/2006 https://github.com/longhorn/longhorn/issues/2295 https://github.com/longhorn/longhorn/issues/2530 https://github.com/longhorn/longhorn/issues/2404 A qcow2 or raw image file can be used as the backing image of a volume. The backing image works fine with backup or restore. Multiple replicas in the same disk can share one backing image. The source of a backing image file can be remote downloading, upload, Longhorn volume, etc. Once the first backing image file is ready, Longhorn can deliver it to other nodes. Checksum verification for backing image. HA backing image. This feature is not responsible for fixing issue mentioned in . Each backing image is stored as a object of a new CRD named `BackingImage`. The spec/status records if a disk requires/contains the related backing image file. To meet backing image HA requirement, some ready disks are randomly picked. Besides, whether a disk requires the backing image file is determined by if there is replicas using it in the disk. A file in a disk cannot be removed as long as there is a replica using it. Longhorn needs to prepare the 1st backing image file based on the source type: Typically, the 1st file preparation takes a short amount of time comparing with the whole lifecycle of BackingImage. Longhorn can use a temporary pod to handle it. Once the file is ready, Longhorn can stop the pod. We can use a CRD named `BackingImageDataSource` to abstract the file preparation. The source type and the parameters will be in the spec. To allow Longhorn manager query the progress of the file preparation, we should launch a service for this pod. Considering that multiple kinds of source like download or uploading involve HTTP, we can launch a HTTP server for the pod. Then there should be a component responsible for monitoring and syncing backing image files with other nodes after the 1st file ready. Similar to `BackingImageDataSource`, we will use a new CRD named `BackingImageManager` to abstract this component. The BackingImageManager pod is design to take over the ownership when the 1st file prepared by the BackingImageDataSource pod is ready. deliver the file to others if necessary. monitor all backing image files for a specific disk: Considering the disk migration and failed replica reusage features, there will be an actual backing image file for each disk rather than each node. BackingImageManager should support reuse existing backing image files. Since we will consider those files are immutable/read-only once ready. If there is an expected checksum for a BackingImage, the pod will compare the checksum before reusing. Live upgrade is possible: Different from instance managers, BackingImageManagers manage files only. We can directly shut down the old BackingImageManager pods, then let new BackingImageManager pods rely on the reuse mechanism to take over the existing files. If the disk is not available/gets replaced, the BackingImageManager cannot do anything or simply report all BackingImages failed. Once there is a modification for an image, managers will notify callers via the gRPC streaming. `longhorn-manager` will launch & update controllers for these new CRDs: BackingImageController is responsible for: Generate a UUID for each new BackingImage. Sync with the corresponding BackingImageDataSource: Handle BackingImageManager life cycle. 
Sync download status/info with BackingImageDataSource status or BackingImageManager status. Set timestamp if there is no replica using the backing image file in a disk. BackingImageDataSourceController is responsible for: Sync with the corresponding BackingImage. Handle BackingImageManager life cycle. Sync download status/info with BackingImageDataSource status or BackingImageManager"
},
{
"data": "Set timestamp if there is no replica using the BackingImage in a disk BackingImageManagerController is responsible for: Create pods to handle backing image files. Handle files based on the spec & BackingImageDataSource status: Delete unused BackingImages. Fetch the 1st file based on BackingImageDataSource. Otherwise, sync the files from other managers or directly reuse the existing files. For `longhorn-engine`: Most of the backing image related logic is already in there. The raw image support will be introduced. Make sure the backing file path will be updated each time when the replica starts. The lifecycle of the components: ``` |Created by HTTP API. |Set deletion timestamp, will delete BackingImageDataSource first. BackingImage: |========|============================================================================|======================================================| |Create BackingImageManagers |Deleted after cleanup. | base on HA or replica requirements. |Created by HTTP API after |Set deletion timestamp when | BackingImage creation. | BackingImage is being deleted. BackingImageDataSource: |===============|=========================================|=============|=================|=========================================| |Start a pod then |File ready. |Stop the pod when |Deleted. | file preparation immediately | BackingImageManager takes over the file. |Take over the 1st file from BackingImageDataSource. |Do cleanup if required |Created by BackingImageController | Or sync/receive files from peers. | then get deleted. BackingImageManager: |===========|==============|==================================|===============================================|============| |Start a pod. |Keep file monitoring |Set deletion timestamp since | after pod running. | no BackingImage is in the disk. ``` BackingImage CRs and BackingImageDataSource CRs are one-to-one correspondence. One backingImageDataSource CR is always created after the related BackingImage CR, but deleted before the BackingImage CR cleanup. The lifecycle of one BackingImageManager CR is not controlled by one single BackingImage. For a disk, the related BackingImageManager CR will be created as long as there is one BackingImage required. However, it will be removed only if there is no BackingImage in the disk. Before the enhancement, users need to manually copy the backing image data to the volume in advance. After the enhancement, users can directly specify the BackingImage during volume creation/restore with a click. And one backing image file can be shared among all replicas in the same disk. Users can modify the backing image file cleanup timeout setting so that all non-used files will be cleaned up automatically from disks. Create a volume with a backing image 2.1. via Longhorn UI Users add a backing image, which is similar to add an engine image or set up the backup target in the system. Users create/restore a volume with the backing image specified from the backing image list. 2.2. via CSI (StorageClass) By specifying `backingImageName` in a StorageClass, all volumes created by this StorageClass will utilize this backing image. If the optional fields `backingImageDataSourceType` and `backingImageDataSourceParameters` are set and valid, Longhorn will automatically create a volume as well as the backing image if the backing image does not exists. Users attach the volume to a node (via GUI or Kubernetes). Longhorn will automatically prepare the related backing image to the disks the volume replica are using. 
In brief, users don't need to do anything more for the backing image. When users backup a volume with a backing image, the backing image info will be recorded in the backup but the actual backing image data won't be uploaded to the backupstore. Instead, the backing image will be re-downloaded from the original image once it's required. A bunch of RESTful APIs is required for the new CRD `BackingImage`: `Create`, `Delete`, `List`, and `BackingImageCleanup`. Now the volume creation API receives parameter `BackingImage`. In settings: Add a setting `Backing Image Cleanup Wait Interval`. Add a read-only setting `Default Backing Image Manager Image`. Add a new CRD"
},
{
"data": "```goregexp type BackingImageSpec struct { Disks map[string]struct{} `json:\"disks\"` Checksum string `json:\"checksum\"` } type BackingImageStatus struct { OwnerID string `json:\"ownerID\"` UUID string `json:\"uuid\"` Size int64 `json:\"size\"` Checksum string `json:\"checksum\"` DiskFileStatusMap map[string]*BackingImageDiskFileStatus `json:\"diskFileStatusMap\"` DiskLastRefAtMap map[string]string `json:\"diskLastRefAtMap\"` } type BackingImageDiskFileStatus struct { State BackingImageState `json:\"state\"` Progress int `json:\"progress\"` Message string `json:\"message\"` } ``` ```goregexp const ( BackingImageStatePending = BackingImageState(\"pending\") BackingImageStateStarting = BackingImageState(\"starting\") BackingImageStateReady = BackingImageState(\"ready\") BackingImageStateInProgress = BackingImageState(\"in_progress\") BackingImageStateFailed = BackingImageState(\"failed\") BackingImageStateUnknown = BackingImageState(\"unknown\") ) ``` Field `Spec.Disks` records the disks that requires this backing image. Field `Status.DiskFileStatusMap` reflect the current file status for the disks. If there is anything wrong with the file, the error message can be recorded inside the status. Field `Status.UUID` should be generated and stored in ETCD before other operations. Considering users may create a new BackingImage with the same name but different parameters after deleting an old one, to avoid the possible leftover of the old BackingImage disturbing the new one, the manager can use a UUID to generate the work directory. Add a new CRD `backingimagedatasources.longhorn.io`. ```goregexp type BackingImageDataSourceSpec struct { NodeID string `json:\"nodeID\"` DiskUUID string `json:\"diskUUID\"` DiskPath string `json:\"diskPath\"` Checksum string `json:\"checksum\"` SourceType BackingImageDataSourceType `json:\"sourceType\"` Parameters map[string]string `json:\"parameters\"` Started bool `json:\"started\"` } type BackingImageDataSourceStatus struct { OwnerID string `json:\"ownerID\"` CurrentState BackingImageState `json:\"currentState\"` Size int64 `json:\"size\"` Progress int `json:\"progress\"` Checksum string `json:\"checksum\"` } type BackingImageDataSourceType string const ( BackingImageDataSourceTypeDownload = BackingImageDataSourceType(\"download\") BackingImageDataSourceTypeUpload = BackingImageDataSourceType(\"upload\") ) const ( DataSourceTypeDownloadParameterURL = \"url\" ) ``` Field `Started` indicates if the BackingImageManager already takes over the file. Once this is set, Longhorn can stop the corresponding pod as well as updating the object itself. Add a new CRD `backingimagemanagers.longhorn.io`. 
```goregexp type BackingImageManagerSpec struct { Image string `json:\"image\"` NodeID string `json:\"nodeID\"` DiskUUID string `json:\"diskUUID\"` DiskPath string `json:\"diskPath\"` BackingImages map[string]string `json:\"backingImages\"` } type BackingImageManagerStatus struct { OwnerID string `json:\"ownerID\"` CurrentState BackingImageManagerState `json:\"currentState\"` BackingImageFileMap map[string]BackingImageFileInfo `json:\"backingImageFileMap\"` IP string `json:\"ip\"` APIMinVersion int `json:\"apiMinVersion\"` APIVersion int `json:\"apiVersion\"` } ``` ```goregexp type BackingImageFileInfo struct { Name string `json:\"name\"` UUID string `json:\"uuid\"` Size int64 `json:\"size\"` State BackingImageState `json:\"state\"` CurrentChecksum string `json:\"currentChecksum\"` Message string `json:\"message\"` SendingReference int `json:\"sendingReference\"` SenderManagerAddress string `json:\"senderManagerAddress\"` Progress int `json:\"progress\"` } ``` ```goregexp const ( BackingImageManagerStateError = BackingImageManagerState(\"error\") BackingImageManagerStateRunning = BackingImageManagerState(\"running\") BackingImageManagerStateStopped = BackingImageManagerState(\"stopped\") BackingImageManagerStateStarting = BackingImageManagerState(\"starting\") BackingImageManagerStateUnknown = BackingImageManagerState(\"unknown\") ) ``` Field `Spec.BackingImages` records which BackingImages should be monitored by the manager. the key is BackingImage name, the value is BackingImage UUID. Field `Status.BackingImageFileMap` will be updated according to the actual file status reported by the related manager pod. Struct `BackingImageFileInfo` is used to load the info from BackingImageManager pods. Add a new controller `BackingImageDataSourceController`. Important notices: Once a BackingImageManager takes over the file ownership, the controller doesn't need to update the related BackingImageDataSource CR except for cleanup. The state is designed to reflect the file state rather than the pod phase. Of course, the file state will be considered as failed if the pod somehow doesn't work correctly. e.g., the pod suddenly becomes failed or being removed. Workflow: Check and update the ownership. Do cleanup if the deletion timestamp is set. Cleanup means stopping monitoring and kill the pod. Sync with the BackingImage: For in-progress BackingImageDataSource, Make sure the disk used by this BackingImageDataSource is recorded in the BackingImage spec as well. [TODO] Guarantee the HA by adding more disks to the BackingImage spec once BackingImageDataSource is started. Skip updating \"started\" BackingImageDataSource. Handle pod: Check the pod status. Update the state based on the previous state and the current pod phase: If the pod is ready for service, do"
},
{
"data": "If the pod is not ready, but the file processing already start. It means there is something wrong with the flow. This BackingImageDataSource will be considered as `error`. If the pod is failed, the BackingImageDataSource should be `error` as well. When the pod reaches an unexpected phase or becomes failed, need to record the error message or error log in the pod. Start or stop monitoring based on pod phase. Delete the errored pod. Create or recreate the pod, then update the backoff entry. Whether the pod can be recreated is determined by the backoff window and the source type. For the source types like upload, recreating pod doesn't make sense. Users need to directly do cleanup then recreate a new backing image instead. For the monitor goroutine, it's similar to that in InstanceManagerController. It will `Get` the file info via HTTP every 3 seconds. If there are 10 continuous HTTP failures, the monitor goroutine will stop itself. Then the controller will restart it. If the backing image is ready, clean up the entry in the backoff. Add a new controller `BackingImageManagerController`. Important notices: Need to consider 2 kinds of managers: default manager, old manager(this includes all incompatible managers). All old managers will be removed immediately once there is the default image is updated. And old managers shouldn't operate any backing image files. When an old manager is removed, the files inside in won't be gone. These files will be taken by the new one. By disabling old managers operating the files, the conflicts with the default manager won't happen. The controller can directly delete old BackingImageManagers without affecting existing BackingImages. This simplifies the cleanup flow. Ideally there should be a cleanup mechanism that is responsible for removing all failed backing image files as well as the images no longer required by the new BackingImageManagers. But due to lacking of time, it will be implemented in the future. In most cases, the controller and the BackingImageManager will avoid deleting backing images files.: For example, if the pod is crashed or one image file becomes failed, the controller will directly restart the pod or re-download the image, rather than cleaning up the files only. The controller will delete image files for only 2 cases: A BackingImage is no longer valid; A default BackingImageManager is deleted. By following this strategy, we may risk at leaving some unused backing image files in some corner cases. However, the gain is that, there is lower probability of crashing a replica caused by the backing image file deletion. Besides, the existing files can be reused after recovery. And after introducing the cleanup mechanism, we should worry about the leftover anymore. With passive file cleanup strategy, default managers can directly pick up all existing files via `Fetch` requests when the old manager pods are killed. This is the essential of live upgrade. The pod not running doesn't mean all files handled by the pod become invalid. All files can be reused/re-monitored after the pod restarting. Workflow: If the deletion timestamp is set, the controller will clean up files for running default BackingImageManagers only. Then it will blindly delete the related pods. When the disk is not ready, the current manager will be marked as `unknown`. Then all not-failed file records are considered as `unknown` as well. Actually there are multiple subcases here: node down, node reboot, node disconnection, disk detachment, longhorn manager pod missing etc. 
It's complicated to distinguish all these subcases and handle each one specially. Hence, I choose to simply mark the state to"
},
{
"data": "Create BackingImageManager pods for. If the old status is `running` but the pod is not ready now, there must be something wrong with the manager pod. Hence the controller need to update the state to `error`. When the pod is ready, considering the case that the pod creation may succeed but the CR status update will fail due to conflicts, the controller won't check the previous state. Instead, it will directly update state to `running`. Start a monitor goroutine for each running pods. If the manager is state `error`, the controller will do cleanup then recreate the pod. Handle files based on the spec: Delete invalid files: The BackingImages is no longer in `BackingImageManager.Spec.BackingImages`. The BackingImage UUID doesn't match. Make files ready for the disk: When BackingImageDataSource is not \"started\", it means BackingImageManager hasn't taken over the 1st file. Once BackingImageDataSource reports file ready, BackingImageManager can get the 1st file via API `Fetch`. Then if BackingImageDataSource is \"started\" but there is no ready record for a BackingImage among all managers, it means the pod someshow restarted (may due to upgrade). In this case, BackingImageManager can try to reuse the files via API `Fetch` as well. Otherwise, the current manager will try to sync the file with other managers: If the 1st file is not ready, do nothing. Each manager can send a ready file to 3 other managers simultaneously at max. When there is no available sender, do nothing. Before reusing or syncing files, the controller need to check the backoff entry for the corresponding BackingImageManager. And after the API call, the backoff entry will be updated. For the monitor goroutine, it's similar to that in InstanceManagerController. It will `List` all backing image files once it receives the notification from the streaming. If there are 10 continuous errors returned by the streaming receive function, the monitor goroutine will stop itself. Then the controller will restart it. Besides, if a backing image is ready, the monitor should clean up the entry from the backoff of the BackingImageManager. Add a new controller `BackingImageController`. Important notices: One main responsibility of this controller is creating, deleting, and update BackingImageManagers. It is not responsible for communicating with BackingImageManager pods or BackingImageDataSource pods. This controller can reset \"started\" BackingImageDataSource if all its backing image files are errored in the cluster and the source type is satisfied. The immutable UUID should be generated and stored in ETCD before any other update. This UUID can can be used to distinguish a new BackingImage from an old BackingImage using the same name. Beside recording the immutable UUID, the BackingImage status is used to record the file info in the managers status and present to users. Always try to create default BackingImageManagers if not exist. Aggressively delete non-default BackingImageManagers. Workflow: If the deletion timestamp is set, the controller need to do cleanup for all related BackingImageManagers as well as BackingImageDataSource. Generate a UUID for each new BackingImage. Make sure the UUID is stored in ETCD before doing anything others. Init fields in the BackingImage status. Sync with BackingImageDataSource: Mark BackingImageDataSource as started if the default BackingImageManager already takes over the file ownership. When all files failed, mark the BackingImageDataSource when the source type is downloaded. 
Then it can re-download the file and recover this BackingImage. Guarantee the disk info in BackingImageDataSources spec is correct if it's not started. (This can be done in Node Controller as well.) Handle BackingImageManager life cycle: Remove records in"
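The file-sync fan-out rule described above (a ready file can be sent to at most 3 receivers per sender at a time) could be expressed roughly as follows; the types and the limit constant are illustrative assumptions, not Longhorn's actual code.

```go
package main

import "fmt"

// fileRecord is a simplified view of BackingImageFileInfo for this sketch.
type fileRecord struct {
	managerAddress   string
	state            string
	sendingReference int // how many receivers this manager is currently serving
}

const maxConcurrentReceiversPerSender = 3 // assumed limit from the design text

// pickSender returns the address of a manager that owns a ready copy of the
// file and still has sending capacity, or "" if no sender is available.
func pickSender(records []fileRecord) string {
	for _, r := range records {
		if r.state == "ready" && r.sendingReference < maxConcurrentReceiversPerSender {
			return r.managerAddress
		}
	}
	return "" // caller should do nothing and retry later
}

func main() {
	records := []fileRecord{
		{managerAddress: "10.0.0.1:8000", state: "ready", sendingReference: 3},
		{managerAddress: "10.0.0.2:8000", state: "ready", sendingReference: 1},
	}
	fmt.Println(pickSender(records)) // 10.0.0.2:8000
}
```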
},
{
"data": "or directly delete the manager CR Add records to `Spec.BackingImages` for the current BackingImage. Create BackingImageManagers with default image if not exist. Sync download status/info with BackingImageManager status: If BackingImageDataSource is not started, update BackingImage status based on BackingImageDataSource status. Otherwise, sync status with BackingImageManagers. Set `Status.Size` if it's 0. If somehow the size is not same among all BackingImageManagers, this means there is an unknown bug. Similar logic applied to `Status.CurrentChecksum`. Set timestamp in `Status.DiskLastRefAtMap` if there is no replica using the BackingImage in a disk. Later NodeController will do cleanup for `Spec.DiskDownloadMap` based on the timestamp. Notice that this clean up should not break the backing image HA. Try to set timestamps for disks in which there is no replica/BackingImageDataSource using this BackingImage first. If there is no enough ready files after marking, remove timestamps for some disks that contain ready files. If HA requirement is not satisfied when all ready files are retained, remove timestamps for some disks that contain in-progress/pending files. If HA requirement is not unsatisfied, remove timestamps for some disks that contain failed files. Later Longhorn can try to do recovery for the disks contains these failed files. In Replica Controller: Request preparing the backing image file in a disk if a BackingImage used by a replica doesn't exist. Check and wait for BackingImage disk map in the status before sending requests to replica instance managers. In Node Controller: Determine if the disk needs to be cleaned up if checking BackingImage `Status.DiskLastRefAtMap` and the wait interval `BackingImageCleanupWaitInterval`. Update the spec for BackingImageManagers when there is a disk migration. For the HTTP APIs Volume creation: Longhorn needs to verify the BackingImage if it's specified. For restore/DR volumes, the BackingImage name stored in the backup volume will be used automatically if users do not specify the BackingImage name. Verify the checksum before using the BackingImage. Snapshot backup: BackingImage name and checksum will be record into BackupVolume now. BackingImage creation: Need to create both BackingImage CR and the BackingImageDataSource CR. Besides, a random ready disk will be picked up so that Longhorn can prepare the 1st file for the BackingImage immediately. BackingImage get/list: Be careful about the BackingImageDataSource not found error. There are 2 cases that would lead to this error: BackingImageDataSource has not been created. Add retry would solve this case. BackingImageDataSource is gone but BackingImage has not been cleaned up. Longhorn can ignore BackingImageDataSource when BackingImage deletion timestamp is set. BackingImage disk cleanup: This cannot break the HA besides attaching replicas. The main idea is similar to the cleanup in BackingImage Controller. In CSI: Check the backing image during the volume creation. The missing BackingImage will be created when both BackingImage name and data source info are provided. Verify the existing implementation and the related integration tests. Add raw backing file support. Update the backing file info for replicas when a replica is created/opened. A HTTP server will be launched to prepare the 1st BackingImage file based on the source type. The server will download the file immediately once the type is `download` and the server is up. A cancelled context will be put the HTTP download request. 
When the server is stopped or fails while the download is still in progress, the context helps stop the download. The service will wait at most 30s for the download to start. If this time is exceeded, the download is considered failed. The downloaded file is in `<Disk path in container>/tmp/<BackingImage name>-<BackingImage UUID>`. Each time the image downloads a chunk of data, the progress will be"
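A bare-bones sketch of the cancellable download with progress updates described above; the destination path, chunk size, and progress callback are placeholders, not the real data source server.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithProgress streams url into dst, reporting bytes written so far.
// Cancelling ctx aborts the in-flight HTTP request and stops the copy loop.
func downloadWithProgress(ctx context.Context, url, dst string, onProgress func(written int64)) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, 1<<20) // 1 MiB chunks
	var written int64
	for {
		n, readErr := resp.Body.Read(buf)
		if n > 0 {
			if _, writeErr := f.Write(buf[:n]); writeErr != nil {
				return writeErr
			}
			written += int64(n)
			onProgress(written) // the first call marks the transition starting -> in_progress
		}
		if readErr == io.EOF {
			return nil
		}
		if readErr != nil {
			return readErr // includes context cancellation propagated through the body
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	err := downloadWithProgress(ctx, "https://example.com/backing.qcow2", "/tmp/backing.tmp",
		func(written int64) { fmt.Printf("downloaded %d bytes\n", written) })
	fmt.Println("result:", err)
}
```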
},
{
"data": "For the first time updating the progress, it means the downloading starts and the state will be updated from `starting` to `in-progress`. The server is ready for handling the uploaded data once the type is `upload` and the server is up. The query `size` is required for the API `upload`. The API `upload` receives a multi-part form request. And the body request is the file data streaming. Similar to the download, the progress will be updated as long as the API receives and stores a chunk of data. For the first time updating the progress, it means the uploading starts and the state will be updated from `starting` to `in-progress`. A gRPC service will be launched to monitor and sync BackingImages: API `Fetch`: Register the image then move the file prepared by BackingImageDataSource server to the image work directory. The file is typically in a tmp directory If the file name is not specified in the request, it means reusing the existing file only. For a failed BackingImage, the manager will re-register then re-fetch it. Before fetching the file, the BackingImage will check if there are existing files in the current work directory. It the files exist and the checksum matches, the file will be directly reused and the config file is updated. Otherwise, the work directory will be cleaned up and recreated. Then the file in the tmp directory will be moved to the work directory. API `Sync`: Register the image, start a receiving server, and ask another manager to send the file via API `Send`. For a failed BackingImage, the manager will re-register then re-sync it. This should be similar to replica rebuilding. Similar to `Fetch`, the image will try to reuse existing files. The manager is responsible for managing all port. The image will use the functions provided by the manager to get then release ports. API `Send`: Send a backing image file to a receiver. This should be similar to replica rebuilding. API `Delete`: Unregister the image then delete the image work directory. Make sure syncing or pulling will be cancelled if exists. API `Get`/`List`: Collect the status of one backing image file/all backing image files. API `Watch`: establish a streaming connection to report BackingImage file info. As I mentioned above, we will use BackingImage UUID to generate work directories for each BackingImage. The work directory is like: ``` <Disk path in container>/backing-images/ <Disk path in container>/backing-images/<Syncing BackingImage name>-<Syncing BackingImage UUID>/backing.tmp <Disk path in container>/backing-images/<Ready BackingImage name>-<Ready BackingImage UUID>/backing <Disk path in container>/backing-images/<Ready BackingImage name>-<Ready BackingImage UUID>/backing.cfg ``` There is a goroutine periodically check the file existence based on the image file current state. It will verify the disk UUID in the disk config file. If there is a mismatching, it will stop checking existing files. And the calls, longhorn manager pods, won't send requests since this BackingImageManager is marked as `unknown`. The manager will provide one channel for all BackingImages. If there is an update in a BackingImage, the image will send a signal to the channel. Then there is another goroutine receive the channel and notify the longhorn manager via the streaming created by API `Watch`. Launch a new page to present and operate BackingImages. Add button `Create Backing Image` on the top right of the page: Field `name` is required and should be unique. Field `sourceType` is required and accept an enum value. 
This indicates how Longhorn can get the backing image file. Right now there are 2 options: `download`,"
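For the upload path described earlier in this section, the handler might look roughly like the following sketch: a `size` query parameter plus a multipart form whose part carries the file stream. The route, destination path, and progress handling are assumptions for illustration only.

```go
package main

import (
	"io"
	"net/http"
	"os"
	"strconv"
)

// handleUpload is a sketch of an upload endpoint: it validates the required
// "size" query parameter and streams the single multipart file part to disk.
func handleUpload(w http.ResponseWriter, r *http.Request) {
	size, err := strconv.ParseInt(r.URL.Query().Get("size"), 10, 64)
	if err != nil || size <= 0 {
		http.Error(w, "query parameter 'size' is required", http.StatusBadRequest)
		return
	}

	reader, err := r.MultipartReader()
	if err != nil {
		http.Error(w, "expect a multipart form request", http.StatusBadRequest)
		return
	}
	part, err := reader.NextPart() // the file data stream
	if err != nil {
		http.Error(w, "missing file part", http.StatusBadRequest)
		return
	}

	f, err := os.Create("/tmp/backing.tmp") // placeholder path
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()

	buf := make([]byte, 1<<20)
	var written int64
	for {
		n, readErr := part.Read(buf)
		if n > 0 {
			if _, writeErr := f.Write(buf[:n]); writeErr != nil {
				http.Error(w, writeErr.Error(), http.StatusInternalServerError)
				return
			}
			written += int64(n)
			_ = written * 100 / size // progress percentage; report it to the monitor here
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			http.Error(w, readErr.Error(), http.StatusInternalServerError)
			return
		}
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/v1/file", handleUpload) // hypothetical route
	_ = http.ListenAndServe(":8000", nil)
}
```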
},
{
"data": "In the future, it can be value `longhorn-volume`. Field `parameters` is a string map and is determined by `sourceType`. If the source type is `download`, the map should contain key `url`, whose value is the actual download address. If the source type is `upload`, the map is empty. Field `expectedChecksum` is optional. The user can specify the SHA512 checksum of the backing image. When the backing image fetched by Longhorn doesn't match the non-empty expected value, the backing image won't be `ready`. If the source type of the creation API is `upload`, UI should send a `upload` request with the actual file data when the upload server is ready for receiving. The upload server ready is represented by first disk file state becoming `starting`. UI can check the state and wait for up to 30 seconds before sending the request. Support batch deletion: Allow selecting multiple BackingImages; Add button `Deletion` on the top left. The columns on BackingImage list page should be: Name, Size, Created From (field `sourceType`), Operation. Show more info for each BackingImage after clicking the name: Present `Created From` (field `sourceType`) and the corresponding parameters `Parameters During Creation` (field `parameters`). If `sourceType` is `download`, present `DOWNLOAD FROM URL` instead. Show fields `expectedChecksum` and `currentChecksum` as `Expected SHA512 Checksum` and `Current SHA512 Checksum`. If `expectedChecksum` is empty, there is no need to show `Expected SHA512 Checksum`. Use a table to present the file status for each disk based on fields `diskFileStatusMap`: `diskFileStatusMap[diskUUID].progress` will be shown only when the state is `in-progress`. Add a tooltip to present `diskFileStatusMap[diskUUID].message` if it's not empty. Add the following operations under button `Operation`: `Delete`: No field is required. It should be disabled when there is one replica using the BackingImage. `Clean Up`: A disk file table will be presented. Users can choose the entries of this table as the input `disks` of API `CleanupDiskImages`. This API is dedicated for manually cleaning up the images in some disks in advance. When a BackingImage is being deleted (field `deletionTimestamp` is not empty), show an icon behind the name which indicates the deletion state. If the state of all disk records are `failed`, use an icon behind the name to indicates the BackingImage unavailable. Allow choosing a BackingImage for volume creation. Modify Backup page for BackingImage: Allow choosing/re-specifying a new BackingImage for restore/DR volume creation: If there is BackingImage info in the backup volume, an option `Use previous backing image` will be shown and checked by default. If the option is unchecked by users, UI will show the BackingImage list so that users can pick up it. Add a button `Backing Image Info` in the operation list: If the backing image name of a BackupVolume is empty, gray out the button. Otherwise, present the backing image name and the backing image checksum. 
| HTTP Endpoint | Operation | | -- | -- | | GET `/v1/backingimages` | Click button `Backing Image` | | POST `/v1/backingimages/` | Click button `Create Backing Image` | | DELETE `/v1/backingimages/{name}` | Click button `Delete` | | GET `/v1/backingimages/{name}` | Click the `name` of a backing image | | POST `/v1/backingimages/{name}?action=backingImageCleanup` | Click button `Clean Up` | | POST `/v1/backingimages/{name}?action=upload` | Longhorn UI should call it automatically when the upload server is ready | Backing image basic operation Backing image auto cleanup Backing image with disk migration The backing image on a down node The backing image works fine with system upgrade & backing image manager upgrade The incompatible backing image manager handling The error presentation of a failed backing image N/A"
}
] | {
"category": "Runtime",
"file_name": "20210701-backing-image.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "All notable changes to this project will be documented in this file. This project adheres to . Removed csi.proto upgrade CSI_VERSION=1.5 Remove device registration and use the CRD resource NodeStorageResource instead Added controllers that maintain NodeStorageResource The scheduler supports fetching resources from NodeStorageResource Upgrade go.mod to depend on K8s1.23 Upgrade the Webhook certificate using job Raw disk support under development add helm chart to deploy when node is notready, migrate pods to ready node if pod annotations contains \"carina.io/rebuild-node-notready: true\" (<https://github.com/carina-io/carina/issues/14>)) multiple VGS are supported for the same type of storage csi configmap change new version to support mutil vgroup (https://github.com/carina-io/carina/issues/10) Fixes configmap spradoutspreadout(<https://github.com/carina-io/carina/issues/12>) provisioning raw disk doc/manual_zh/velero.md replace carina.storage.io/backend-disk-type: hdd ==> carina.storage.io/backend-disk-group-name: hdd replace carina.storage.io/cache-disk-type: ssd ==> carina.storage.io/cache-disk-group-name: ssd replace carina.storage.io/disk-type: \"hdd\" ==> carina.storage.io/disk-group-name: \"hdd\" replace kubernetes.customized/blkio.throttle.readbpsdevice: \"10485760\" ==> carina.storage.io/blkio.throttle.readbpsdevice: \"10485760\" replace kubernetes.customized/blkio.throttle.readiopsdevice: \"10000\" ==> carina.storage.io/blkio.throttle.readiopsdevice: \"10000\" replace kubernetes.customized/blkio.throttle.writebpsdevice: \"10485760\" ==> carina.storage.io/blkio.throttle.writebpsdevice: \"10485760\" replace kubernetes.customized/blkio.throttle.writeiopsdevice: \"100000\" ==> carina.storage.io/blkio.throttle.writeiopsdevice: \"100000\" replace carina.io/rebuild-node-notready: true ==> carina.storage.io/allow-pod-migration-if-node-notready: true Support the cgroup v1 and v2 Adjustment of project structure The HTTP server is deleted Logicvolume changed from Namespace to Cluster, Fixed the problem that message notification is not timely Fix the metric server panic problem #91 Mirrored warehouse has personal space migrated to Carina exclusive space To improve LVM volume performance, do not create a thin-pool when creating an LVM volume #96 Add parameter `carina.storage.io/allow-pod-migration-if-notready` to storageclass. Webhook will automatically add this annotation for POD when SC has this parameter #95 Nodestorageresource structuring and issue fixing #87 Remove ConfigMap synchronization control #75 The Carina E2E test is being refined Promote carina into cncf sandbox project and roadmap Update outdated documents Optimize the container scheduling algorithm to make it more concise and understandable Repair The pv is lost due to node restart Added the upgrade upgrade script Helm chat deployment adds psp resources It is clear that the current version of carina supports 1.18-1.24 Planning discussion carina supports the Kubernetes 1.25 solution Added e2e unit test scripts"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "title: Overview description: The basics of Virtual Kubelet weight: 1 Virtual Kubelet is an implementation of the Kubernetes that masquerades as a kubelet for the purpose of connecting a Kubernetes cluster to other APIs. This allows Kubernetes to be backed by other services, such as serverless container platforms. Virtual Kubelet supports a variety of providers: {{< providers >}} You can also ."
}
] | {
"category": "Runtime",
"file_name": "_index.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
} |
[
{
"data": "- - - - - - - - - - - - - - - - - - - - - This is CNI spec version 1.1.0. Note that this is independent from the version of the CNI library and plugins in this repository (e.g. the versions of ). Released versions of the spec are available as Git tags. | tag | spec permalink | major changes | | | - | | | | | Removed non-list configurations; removed `version` field of `interfaces` array | | | | Introduce the CHECK command and passing prevResult on DEL | | | | none (typo fix only) | | | | rich result type, plugin chaining | | | | VERSION command | | | | initial version | Do not rely on these tags being stable. In the future, we may change our mind about which particular commit is the right marker for a given historical spec version. This document proposes a generic plugin-based networking solution for application containers on Linux, the Container Networking Interface, or CNI. For the purposes of this proposal, we define three terms very specifically: container is a network isolation domain, though the actual isolation technology is not defined by the specification. This could be a or a virtual machine, for example. network refers to a group of endpoints that are uniquely addressable that can communicate amongst each other. This could be either an individual container (as specified above), a machine, or some other network device (e.g. a router). Containers can be conceptually added to or removed from one or more networks. runtime is the program responsible for executing CNI plugins. plugin is a program that applies a specified network configuration. This document aims to specify the interface between \"runtimes\" and \"plugins\". The key words \"must\", \"must not\", \"required\", \"shall\", \"shall not\", \"should\", \"should not\", \"recommended\", \"may\" and \"optional\" are used as specified in . The CNI specification defines: A format for administrators to define network configuration. A protocol for container runtimes to make requests to network plugins. A procedure for executing plugins based on a supplied configuration. A procedure for plugins to delegate functionality to other plugins. Data types for plugins to return their results to the runtime. CNI defines a network configuration format for administrators. It contains directives for both the container runtime as well as the plugins to consume. At plugin execution time, this configuration format is interpreted by the runtime and transformed in to a form to be passed to the plugins. In general, the network configuration is intended to be static. It can conceptually be thought of as being \"on disk\", though the CNI specification does not actually require this. A network configuration consists of a JSON object with the following keys: `cniVersion` (string): of CNI specification to which this configuration list and all the individual configurations conform. Currently \"1.1.0\" `cniVersions` (string list): List of all CNI versions which this configuration supports. See below. `name` (string): Network name. This should be unique across all network configurations on a host (or other administrative domain). Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore, dot (.) or hyphen (-). `disableCheck` (boolean): Either `true` or `false`. If `disableCheck` is `true`, runtimes must not call `CHECK` for this network configuration list. This allows an administrator to prevent `CHECK`ing where a combination of plugins is known to return spurious errors. 
`plugins` (list): A list of CNI plugins and their configuration, which is a list of plugin configuration"
},
{
"data": "Plugin configuration objects may contain additional fields than the ones defined here. The runtime MUST pass through these fields, unchanged, to the plugin, as defined in section 3. Required keys: `type` (string): Matches the name of the CNI plugin binary on disk. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\). Optional keys, used by the protocol: `capabilities` (dictionary): Defined in Reserved keys, used by the protocol: These keys are generated by the runtime at execution time, and thus should not be used in configuration. `runtimeConfig` `args` Any keys starting with `cni.dev/` Optional keys, well-known: These keys are not used by the protocol, but have a standard meaning to plugins. Plugins that consume any of these configuration keys should respect their intended semantics. `ipMasq` (boolean): If supported by the plugin, sets up an IP masquerade on the host for this network. This is necessary if the host will act as a gateway to subnets that are not able to route to the IP assigned to the container. `ipam` (dictionary): Dictionary with IPAM (IP Address Management) specific values: `type` (string): Refers to the filename of the IPAM plugin executable. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\). `dns` (dictionary, optional): Dictionary with DNS specific values: `nameservers` (list of strings, optional): list of a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address. `domain` (string, optional): the local domain used for short hostname lookups. `search` (list of strings, optional): list of priority ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers. `options` (list of strings, optional): list of options that can be passed to the resolver Other keys: Plugins may define additional fields that they accept and may generate an error if called with unknown fields. Runtimes must preserve unknown fields in plugin configuration objects when transforming for execution. ```jsonc { \"cniVersion\": \"1.1.0\", \"cniVersions\": [\"0.3.1\", \"0.4.0\", \"1.0.0\", \"1.1.0\"], \"name\": \"dbnet\", \"plugins\": [ { \"type\": \"bridge\", // plugin specific parameters \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", // ipam specific \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\", \"routes\": [ {\"dst\": \"0.0.0.0/0\"} ] }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } }, { \"type\": \"tuning\", \"capabilities\": { \"mac\": true }, \"sysctl\": { \"net.core.somaxconn\": \"500\" } }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true} } ] } ``` CNI runtimes, plugins, and network configurations may support multiple CNI specification versions independently. Plugins indicate their set of supported versions through the VERSION command, while network configurations indicate their set of supported versions through the `cniVersion` and `cniVersions` fields. CNI runtimes MUST select the highest supported version from the set of network configuration versions given by the `cniVersion` and `cniVersions` fields. Runtimes MAY consider the set of supported plugin versions as reported by the VERSION command when determining available versions. 
The CNI protocol follows Semantic Versioning principles, so the configuration format MUST remain backwards and forwards compatible within major versions. The CNI protocol is based on execution of binaries invoked by the container runtime. CNI defines the protocol between the plugin binary and the runtime. A CNI plugin is responsible for configuring a container's network interface in some manner. Plugins fall into two broad categories: \"Interface\" plugins, which create a network interface inside the container and ensure it has"
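A small sketch of the version negotiation described above: pick the highest configuration version that the plugin also reports as supported. The ordering helper is simplified and assumes well-formed semantic versions; it is not libcni's actual implementation.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "1.1.0" into comparable integer parts (no error handling for brevity).
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(v, ".", 3) {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

func less(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

// selectVersion returns the highest version present in both the configuration's
// version set (cniVersion + cniVersions) and the plugin's supported versions.
func selectVersion(configVersions, pluginSupported []string) (string, bool) {
	supported := map[string]bool{}
	for _, v := range pluginSupported {
		supported[v] = true
	}
	best, found := "", false
	for _, v := range configVersions {
		if supported[v] && (!found || less(best, v)) {
			best, found = v, true
		}
	}
	return best, found
}

func main() {
	v, ok := selectVersion(
		[]string{"0.4.0", "1.0.0", "1.1.0"},
		[]string{"0.3.1", "0.4.0", "1.0.0"},
	)
	fmt.Println(v, ok) // 1.0.0 true
}
```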
},
{
"data": "\"Chained\" plugins, which adjust the configuration of an already-created interface (but may need to create more interfaces to do so). The runtime passes parameters to the plugin via environment variables and configuration. It supplies configuration via stdin. The plugin returns a on stdout on success, or an error on stderr if the operation fails. Configuration and results are encoded in JSON. Parameters define invocation-specific settings, whereas configuration is, with some exceptions, the same for any given network. The runtime must execute the plugin in the runtime's networking domain. (For most cases, this means the root network namespace / `dom0`). Protocol parameters are passed to the plugins via OS environment variables. `CNI_COMMAND`: indicates the desired operation; `ADD`, `DEL`, `CHECK`, `GC`, or `VERSION`. `CNI_CONTAINERID`: Container ID. A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (), dot (.) or hyphen (-). `CNI_NETNS`: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`) `CNI_IFNAME`: Name of the interface to create inside the container; if the plugin is unable to use this interface name it must return an error. `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\" `CNI_PATH`: List of paths to search for CNI plugin executables. Paths are separated by an OS-specific list separator; for example ':' on Linux and ';' on Windows A plugin must exit with a return code of 0 on success, and non-zero on failure. If the plugin encounters an error, it should output an (see below). CNI defines 5 operations: `ADD`, `DEL`, `CHECK`, `GC`, and `VERSION`. These are passed to the plugin via the `CNI_COMMAND` environment variable. A CNI plugin, upon receiving an `ADD` command, should either create the interface defined by `CNIIFNAME` inside the container at `CNINETNS`, or adjust the configuration of the interface defined by `CNIIFNAME` inside the container at `CNINETNS`. If the CNI plugin is successful, it must output a (see below) on standard out. If the plugin was supplied a `prevResult` as part of its input configuration, it MUST handle `prevResult` by either passing it through, or modifying it appropriately. If an interface of the requested name already exists in the container, the CNI plugin MUST return with an error. A runtime should not call `ADD` twice (without an intervening DEL) for the same `(CNICONTAINERID, CNIIFNAME)` tuple. This implies that a given container ID may be added to a specific network more than once only if each addition is done with a different interface name. Input: The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: `CNI_COMMAND` `CNI_CONTAINERID` `CNI_NETNS` `CNI_IFNAME` Optional environment parameters: `CNI_ARGS` `CNI_PATH` A CNI plugin, upon receiving a `DEL` command, should either delete the interface defined by `CNIIFNAME` inside the container at `CNINETNS`, or undo any modifications applied in the plugin's `ADD` functionality Plugins should generally complete a `DEL` action without error even if some resources are missing. 
For example, an IPAM plugin should generally release an IP allocation and return success even if the container network namespace no longer exists, unless that network namespace is critical for IPAM management. While DHCP may usually send a 'release' message on the container network interface, since DHCP leases have a lifetime this release action would not be considered critical and no error should be returned if this action"
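The ADD exchange above boils down to executing a binary with the CNI_* environment variables set, the network configuration on stdin, and the result on stdout. A rough runtime-side sketch with hypothetical paths and values, and with error handling trimmed; real runtimes would normally use libcni rather than shelling out like this:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// execPluginAdd runs a CNI plugin binary for the ADD operation and returns its
// stdout (the JSON result) on success.
func execPluginAdd(pluginPath, containerID, netns, ifname string, stdinConfig []byte) ([]byte, error) {
	cmd := exec.Command(pluginPath)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS="+netns,
		"CNI_IFNAME="+ifname,
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(stdinConfig)

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr // unstructured plugin logs end up here

	if err := cmd.Run(); err != nil {
		// A non-zero exit code signals failure; stdout should carry the error object.
		return nil, fmt.Errorf("plugin failed: %v, stdout: %s", err, stdout.String())
	}
	return stdout.Bytes(), nil
}

func main() {
	conf := []byte(`{"cniVersion":"1.1.0","name":"dbnet","type":"bridge"}`)
	out, err := execPluginAdd("/opt/cni/bin/bridge", "container-123", "/run/netns/blue", "eth0", conf)
	fmt.Println(string(out), err)
}
```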
},
{
"data": "For another example, the `bridge` plugin should delegate the DEL action to the IPAM plugin and clean up its own resources even if the container network namespace and/or container network interface no longer exist. Plugins MUST accept multiple `DEL` calls for the same (`CNICONTAINERID`, `CNIIFNAME`) pair, and return success if the interface in question, or any modifications added, are missing. Input: The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: `CNI_COMMAND` `CNI_CONTAINERID` `CNI_IFNAME` Optional environment parameters: `CNI_NETNS` `CNI_ARGS` `CNI_PATH` `CHECK` is a way for a runtime to probe the status of an existing container. Plugin considerations: The plugin must consult the `prevResult` to determine the expected interfaces and addresses. The plugin must allow for a later chained plugin to have modified networking resources, e.g. routes, on `ADD`. The plugin should return an error if a resource included in the CNI Result type (interface, address or route) was created by the plugin, and is listed in `prevResult`, but is missing or in an invalid state. The plugin should return an error if other resources not tracked in the Result type such as the following are missing or are in an invalid state: Firewall rules Traffic shaping controls IP reservations External dependencies such as a daemon required for connectivity etc. The plugin should return an error if it is aware of a condition where the container is generally unreachable. The plugin must handle `CHECK` being called immediately after an `ADD`, and therefore should allow a reasonable convergence delay for any asynchronous resources. The plugin should call `CHECK` on any delegated (e.g. IPAM) plugins and pass any errors on to its caller. Runtime considerations: A runtime must not call `CHECK` for a container that has not been `ADD`ed, or has been `DEL`eted after its last `ADD`. A runtime must not call `CHECK` if `disableCheck` is set to `true` in the . A runtime must include a `prevResult` field in the network configuration containing the `Result` of the immediately preceding `ADD` for the container. The runtime may wish to use libcni's support for caching `Result`s. A runtime may choose to stop executing `CHECK` for a chain when a plugin returns an error. A runtime may execute `CHECK` from immediately after a successful `ADD`, up until the container is `DEL`eted from the network. A runtime may assume that a failed `CHECK` means the container is permanently in a misconfigured state. Input: The runtime will provide a json-serialized plugin configuration object (defined below) on standard in. Required environment parameters: `CNI_COMMAND` `CNI_CONTAINERID` `CNI_NETNS` `CNI_IFNAME` Optional environment parameters: `CNI_ARGS` `CNI_PATH` All parameters, with the exception of `CNI_PATH`, must be the same as the corresponding `ADD` for this container. `STATUS` is a way for a runtime to determine the readiness of a network plugin. A plugin must exit with a zero (success) return code if the plugin is ready to service ADD requests. If the plugin knows that it is not able to service ADD requests, it must exit with a non-zero return code and output an error on standard out (see below). For example, if a plugin relies on an external service or daemon, it should return an error to `STATUS` if that service is unavailable. Likewise, if a plugin has a limited number of resources (e.g. 
IP addresses, hardware queues), it should return an error if those resources are exhausted and no new `ADD` requests can be serviced. The following error codes are defined in the context of `STATUS`: 50: The plugin is not available (i.e. cannot service `ADD` requests) 51: The plugin is not available, and existing containers in the network may have limited"
},
{
"data": "Plugin considerations: Status is purely informational. A plugin MUST NOT rely on `STATUS` being called. Plugins should always expect other CNI operations (like `ADD`, `DEL`, etc) even if `STATUS` returns an error. `STATUS` does not prevent other runtime requests. If a plugin relies on a delegated plugin (e.g. IPAM) to service `ADD` requests, it must also execute a `STATUS` request to that plugin when it receives a `STATUS` request for itself. If the delegated plugin return an error result, the executing plugin should return an error result. Input: The runtime will provide a json-serialized plugin configuration object (defined below) on standard in. Optional environment parameters: `CNI_PATH` The plugin should output via standard-out a json-serialized version result object (see below). Input: A json-serialized object, with the following key: `cniVersion`: The version of the protocol in use. Required environment parameters: `CNI_COMMAND` The GC command provides a way for runtimes to specify the expected set of attachments to a network. The network plugin may then remove any resources related to attachments that do not exist in this set. Resources may, for example, include: IPAM reservations Firewall rules A plugin SHOULD remove as many stale resources as possible. For example, a plugin should remove any IPAM reservations associated with attachments not in the provided list. The plugin MAY assume that the isolation domain (e.g. network namespace) has been deleted, and thus any resources (e.g. network interfaces) therein have been removed. Plugins should generally complete a `GC` action without error. If an error is encountered, a plugin should continue; removing as many resources as possible, and report the errors back to the runtime. Plugins MUST, additionally, forward any GC calls to delegated plugins they are configured to use (see section 4). The runtime MUST NOT use GC as a substitute for DEL. Plugins may be unable to clean up some resources from GC that they would have been able to clean up from DEL. Input: The runtime must provide a JSON-serialized plugin configuration object (defined below) on standard in. It contains an additional key; `cni.dev/attachments` (array of objects): The list of still valid attachments to this network: `containerID` (string): the value of CNI_CONTAINERID as provided during the CNI ADD operation `ifname` (string): the value of CNI_IFNAME as provided during the CNI ADD operation Required environment parameters: `CNI_COMMAND` `CNI_PATH` Output: No output on success, on error. This section describes how a container runtime interprets a network configuration (as defined in section 1) and executes plugins accordingly. A runtime may wish to add, delete, or check a network configuration in a container. This results in a series of plugin `ADD`, `DELETE`, or `CHECK` executions, correspondingly. This section also defines how a network configuration is transformed and provided to the plugin. The operation of a network configuration on a container is called an attachment. An attachment may be uniquely identified by the `(CNICONTAINERID, CNIIFNAME)` tuple. The container runtime must create a new network namespace for the container before invoking any plugins. The container runtime must not invoke parallel operations for the same container, but is allowed to invoke parallel operations for different containers. This includes across multiple attachments. Exception: The runtime must exclusively execute either gc or add and delete. 
The runtime must ensure that no add or delete operations are in progress before executing gc, and must wait for gc to complete before issuing new add or delete commands. Plugins must handle being executed concurrently across different containers. If necessary, they must implement locking on shared resources (e.g. IPAM"
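To illustrate how a plugin might consume the GC input described above, here is a sketch that parses the `cni.dev/attachments` list and decides which locally tracked reservations are stale. The reservation store is a stand-in for whatever state a real plugin keeps; it is not part of the specification.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// gcConfig models only the fields this sketch needs from the GC stdin payload.
type gcConfig struct {
	CNIVersion  string       `json:"cniVersion"`
	Name        string       `json:"name"`
	Attachments []attachment `json:"cni.dev/attachments"`
}

type attachment struct {
	ContainerID string `json:"containerID"`
	IfName      string `json:"ifname"`
}

// staleReservations returns the keys of tracked reservations whose
// (containerID, ifname) pair is no longer in the valid attachment list.
func staleReservations(tracked map[string]attachment, valid []attachment) []string {
	validSet := map[attachment]bool{}
	for _, a := range valid {
		validSet[a] = true
	}
	var stale []string
	for key, a := range tracked {
		if !validSet[a] {
			stale = append(stale, key)
		}
	}
	return stale
}

func main() {
	stdin := []byte(`{"cniVersion":"1.1.0","name":"dbnet",
	  "cni.dev/attachments":[{"containerID":"c1","ifname":"eth0"}]}`)
	var conf gcConfig
	_ = json.Unmarshal(stdin, &conf)

	tracked := map[string]attachment{
		"10.1.0.5": {ContainerID: "c1", IfName: "eth0"},
		"10.1.0.6": {ContainerID: "c2", IfName: "eth0"}, // container already gone
	}
	fmt.Println(staleReservations(tracked, conf.Attachments)) // [10.1.0.6]
}
```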
},
{
"data": "The container runtime must ensure that add is eventually followed by a corresponding delete. The only exception is in the event of catastrophic failure, such as node loss. A delete must still be executed even if the add fails. delete may be followed by additional deletes. The network configuration should not change between add and delete. The network configuration should not change between attachments. The container runtime is responsible for cleanup of the container's network namespace. While a network configuration should not change between attachments, there are certain parameters supplied by the container runtime that are per-attachment. They are: Container ID: A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (), dot (.) or hyphen (-). During execution, always set as the `CNI_CONTAINERID` parameter. Namespace: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`). During execution, always set as the `CNI_NETNS` parameter. Container interface name: Name of the interface to create inside the container. During execution, always set as the `CNI_IFNAME` parameter. Generic Arguments: Extra arguments, in the form of key-value string pairs, that are relevant to a specific attachment. During execution, always set as the `CNI_ARGS` parameter. Capability Arguments: These are also key-value pairs. The key is a string, whereas the value is any JSON-serializable type. The keys and values are defined by . Furthermore, the runtime must be provided a list of paths to search for CNI plugins. This must also be provided to plugins during execution via the `CNI_PATH` environment variable. For every configuration defined in the `plugins` key of the network configuration, Look up the executable specified in the `type` field. If this does not exist, then this is an error. Derive request configuration from the plugin configuration, with the following parameters: If this is the first plugin in the list, no previous result is provided, For all additional plugins, the previous result is the result of the previous plugins. Execute the plugin binary, with `CNI_COMMAND=ADD`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. If the plugin returns an error, halt execution and return the error to the caller. The runtime must store the result returned by the final plugin persistently, as it is required for check and delete operations. Deleting a network attachment is much the same as adding, with a few key differences: The list of plugins is executed in reverse order The previous result provided is always the final result of the add operation. For every plugin defined in the `plugins` key of the network configuration, in reverse order, Look up the executable specified in the `type` field. If this does not exist, then this is an error. Derive request configuration from the plugin configuration, with the previous result from the initial add operation. Execute the plugin binary, with `CNI_COMMAND=DEL`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. If the plugin returns an error, halt execution and return the error to the caller. If all plugins return success, return success to the caller. 
The runtime may also ask every plugin to confirm that a given attachment is still functional. The runtime must use the same attachment parameters as it did for the add operation. Checking is similar to add with two exceptions: the previous result provided is always the final result of the add"
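The forward/reverse ordering and prevResult threading can be summarized in a few lines. This is a schematic of the control flow only; `execPlugin` stands in for the real invocation shown earlier and the result is reduced to a string for brevity.

```go
package main

import "fmt"

type pluginConf struct{ Type string }

// execPlugin is a placeholder for running one plugin binary with the given
// command and the previous result, returning the new result.
func execPlugin(command string, p pluginConf, prevResult string) (string, error) {
	fmt.Printf("%s %s (prevResult=%q)\n", command, p.Type, prevResult)
	return "result-from-" + p.Type, nil
}

// addChain runs the plugins in order, feeding each one the previous result.
func addChain(plugins []pluginConf) (string, error) {
	prev := "" // the first plugin gets no prevResult
	for _, p := range plugins {
		res, err := execPlugin("ADD", p, prev)
		if err != nil {
			return "", err // halt and return the error to the caller
		}
		prev = res
	}
	return prev, nil // the runtime must persist this final result
}

// delChain runs the plugins in reverse order; prevResult is always the final
// result stored from the original ADD.
func delChain(plugins []pluginConf, finalAddResult string) error {
	for i := len(plugins) - 1; i >= 0; i-- {
		if _, err := execPlugin("DEL", plugins[i], finalAddResult); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	chain := []pluginConf{{"bridge"}, {"tuning"}, {"portmap"}}
	final, _ := addChain(chain)
	_ = delChain(chain, final)
}
```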
},
{
"data": "If the network configuration defines `disableCheck`, then always return success to the caller. For every plugin defined in the `plugins` key of the network configuration, Look up the executable specified in the `type` field. If this does not exist, then this is an error. Derive request configuration from the plugin configuration, with the previous result from the initial add operation. Execute the plugin binary, with `CNI_COMMAND=CHECK`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. If the plugin returns an error, halt execution and return the error to the caller. If all plugins return success, return success to the caller. The runtime may also ask every plugin in a network configuration to clean up any stale resources via the GC command. When garbage-collecting a configuration, there are no . For every plugin defined in the `plugins` key of the network configuration, Look up the executable specified in the `type` field. If this does not exist, then this is an error. Derive request configuration from the plugin configuration. Execute the plugin binary, with `CNI_COMMAND=GC`. Supply the derived configuration via standard in. If the plugin returns an error, continue with execution, returning all errors to the caller. If all plugins return success, return success to the caller. The network configuration format (which is a list of plugin configurations to execute) must be transformed to a format understood by the plugin (which is a single plugin configuration). This section describes that transformation. The request configuration for a single plugin invocation is also JSON. It consists of the plugin configuration, primarily unchanged except for the specified additions and removals. The following fields are always to be inserted into the request configuration by the runtime: `cniVersion`: the protocol version selected by the runtime - the string \"1.1.0\" `name`: taken from the `name` field of the network configuration For attachment-specific operations (ADD, DEL, CHECK), additional field requirements apply: `runtimeConfig`: the runtime must insert an object consisting of the union of capabilities provided by the plugin and requested by the runtime (more details below). `prevResult`: the runtime must insert consisting of the result type returned by the \"previous\" plugin. The meaning of \"previous\" is defined by the specific operation (add, delete, or check). This field must not be set for the first add in a chain. `capabilities`: must not be set For GC operations: `cni.dev/attachments`: as specified in section 2. All other fields not prefixed with `cni.dev/` should be passed through unaltered. Whereas CNIARGS are provided to all plugins, with no indication if they are going to be consumed, Capability arguments need to be declared explicitly in configuration. The runtime, thus, can determine if a given network configuration supports a specific capability_. Capabilities are not defined by the specification - rather, they are documented . As defined in section 1, the plugin configuration includes an optional key, `capabilities`. This example shows a plugin that supports the `portMapping` capability: ```json { \"type\": \"myPlugin\", \"capabilities\": { \"portMappings\": true } } ``` The `runtimeConfig` parameter is derived from the `capabilities` in the network configuration and the capability arguments generated by the runtime. 
Specifically, any capability supported by the plugin configuration and provided by the runtime should be inserted in the `runtimeConfig`. Thus, the above example could result in the following being passed to the plugin as part of the execution configuration: ```json { \"type\": \"myPlugin\", \"runtimeConfig\": { \"portMappings\": [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] } ... } ``` There are some operations that, for whatever reason, cannot reasonably be implemented as a discrete chained"
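A compact sketch of the capability filtering just described: only capabilities both declared by the plugin and supplied by the runtime end up in `runtimeConfig`. The data shapes here are simplified for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildRuntimeConfig keeps only the capability arguments that the plugin has
// declared support for in its "capabilities" map.
func buildRuntimeConfig(declared map[string]bool, runtimeArgs map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for name, enabled := range declared {
		if !enabled {
			continue
		}
		if v, ok := runtimeArgs[name]; ok {
			out[name] = v
		}
	}
	return out
}

func main() {
	declared := map[string]bool{"portMappings": true} // from the plugin configuration
	runtimeArgs := map[string]interface{}{            // produced by the runtime
		"portMappings": []map[string]interface{}{
			{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"},
		},
		"mac": "00:11:22:33:44:66", // ignored: not declared by this plugin
	}
	rc := buildRuntimeConfig(declared, runtimeArgs)
	b, _ := json.Marshal(map[string]interface{}{"type": "myPlugin", "runtimeConfig": rc})
	fmt.Println(string(b))
}
```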
},
{
"data": "Rather, a CNI plugin may wish to delegate some functionality to another plugin. One common example of this is IP address management. As part of its operation, a CNI plugin is expected to assign (and maintain) an IP address to the interface and install any necessary routes relevant for that interface. This gives the CNI plugin great flexibility but also places a large burden on it. Many CNI plugins would need to have the same code to support several IP management schemes that users may desire (e.g. dhcp, host-local). A CNI plugin may choose to delegate IP management to another plugin. To lessen the burden and make IP management strategy be orthogonal to the type of CNI plugin, we define a third type of plugin -- IP Address Management Plugin (IPAM plugin), as well as a protocol for plugins to delegate functionality to other plugins. It is however the responsibility of the CNI plugin, rather than the runtime, to invoke the IPAM plugin at the proper moment in its execution. The IPAM plugin must determine the interface IP/subnet, Gateway and Routes and return this information to the \"main\" plugin to apply. The IPAM plugin may obtain the information via a protocol (e.g. dhcp), data stored on a local filesystem, the \"ipam\" section of the Network Configuration file, etc. Like CNI plugins, delegated plugins are invoked by running an executable. The executable is searched for in a predefined list of paths, indicated to the CNI plugin via `CNI_PATH`. The delegated plugin must receive all the same environment variables that were passed in to the CNI plugin. Just like the CNI plugin, delegated plugins receive the network configuration via stdin and output results via stdout. Delegated plugins are provided the complete network configuration passed to the \"upper\" plugin. In other words, in the IPAM case, not just the `ipam` section of the configuration. Success is indicated by a zero return code and a Success result type output to stdout. When a plugin executes a delegated plugin, it should: Look up the plugin binary by searching the directories provided in `CNI_PATH` environment variable. Execute that plugin with the same environment and configuration that it received. Ensure that the delegated plugin's stderr is output to the calling plugin's stderr. If a plugin is executed with `CNI_COMMAND=CHECK`, `DEL`, or `GC`, it must also execute any delegated plugins. If any of the delegated plugins return error, error should be returned by the upper plugin. If, on `ADD`, a delegated plugin fails, the \"upper\" plugin should execute again with `DEL` before returning failure. For certain operations, plugins must output result information. The output should be serialized as JSON on standard out. Plugins must output a JSON object with the following keys upon a successful `ADD` operation: `cniVersion`: The same version supplied on input - the string \"1.1.0\" `interfaces`: An array of all interfaces created by the attachment, including any host-level interfaces: `name` (string): The name of the interface. `mac` (string): The hardware address of the interface (if applicable). `mtu`: (uint) The MTU of the interface (if applicable). `sandbox` (string): The isolation domain reference (e.g. path to network namespace) for the interface, or empty if on the host. For interfaces created inside the container, this should be the value passed via `CNI_NETNS`. `socketPath` (string, optional): An absolute path to a socket file corresponding to this interface, if applicable. 
`pciID` (string, optional): The platform-specific identifier of the PCI device corresponding to this interface, if applicable. `ips`: IPs assigned by this attachment. Plugins may include IPs assigned external to the"
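Plugin-side delegation, as described above, is mostly "find the binary on CNI_PATH, re-exec with the same environment and stdin, and roll back with DEL if ADD fails". A hedged sketch under those assumptions, not the bridge plugin's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// findInCNIPath looks for the delegated plugin binary in the CNI_PATH list.
func findInCNIPath(name string) (string, error) {
	for _, dir := range strings.Split(os.Getenv("CNI_PATH"), string(os.PathListSeparator)) {
		candidate := filepath.Join(dir, name)
		if info, err := os.Stat(candidate); err == nil && !info.IsDir() {
			return candidate, nil
		}
	}
	return "", fmt.Errorf("failed to find plugin %q in CNI_PATH", name)
}

// delegate executes the delegated plugin with the same environment this plugin
// received, passing the full network configuration on stdin.
func delegate(ipamType string, netconf []byte) ([]byte, error) {
	bin, err := findInCNIPath(ipamType)
	if err != nil {
		return nil, err
	}
	cmd := exec.Command(bin)
	cmd.Env = os.Environ() // same CNI_* variables as the calling plugin
	cmd.Stdin = bytes.NewReader(netconf)
	cmd.Stderr = os.Stderr // delegated stderr is passed through
	return cmd.Output()    // stdout carries the delegated result
}

func main() {
	netconf := []byte(`{"cniVersion":"1.1.0","name":"dbnet","type":"bridge",
	  "ipam":{"type":"host-local","subnet":"10.1.0.0/16"}}`)
	out, err := delegate("host-local", netconf)
	if err != nil && os.Getenv("CNI_COMMAND") == "ADD" {
		// On a failed ADD, the calling plugin should re-run the delegate with DEL
		// before returning the failure.
		fmt.Fprintln(os.Stderr, "ADD failed; would invoke DEL on the delegate here")
	}
	fmt.Println(string(out), err)
}
```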
},
{
"data": "`address` (string): an IP address in CIDR notation (eg \"192.168.1.3/24\"). `gateway` (string): the default gateway for this subnet, if one exists. `interface` (uint): the index into the `interfaces` list for a indicating which interface this IP configuration should be applied to. `routes`: Routes created by this attachment: `dst`: The destination of the route, in CIDR notation `gw`: The next hop address. If unset, a value in `gateway` in the `ips` array may be used. `mtu` (uint): The MTU (Maximum transmission unit) along the path to the destination. `advmss` (uint): The MSS (Maximal Segment Size) to advertise to these destinations when establishing TCP connections. `priority` (uint): The priority of route, lower is higher. `table` (uint): The table to add the route to. `scope` (uint): The scope of the destinations covered by the route prefix (global (0), link (253), host (254)). `dns`: a dictionary consisting of DNS configuration information `nameservers` (list of strings): list of a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address. `domain` (string): the local domain used for short hostname lookups. `search` (list of strings): list of priority ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers. `options` (list of strings): list of options that can be passed to the resolver. Plugins provided a `prevResult` key as part of their request configuration must output it as their result, with any possible modifications made by that plugin included. If a plugin makes no changes that would be reflected in the Success result type, then it must output a result equivalent to the provided `prevResult`. Delegated plugins may omit irrelevant sections. Delegated IPAM plugins must return an abbreviated Success object. Specifically, it is missing the `interfaces` array, as well as the `interface` entry in `ips`. Plugins must output a JSON object with the following keys upon a `VERSION` operation: `cniVersion`: The value of `cniVersion` specified on input `supportedVersions`: A list of supported specification versions Example: ```json { \"cniVersion\": \"1.0.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\" ] } ``` Plugins should output a JSON object with the following keys if they encounter an error: `cniVersion`: The protocol version in use - \"1.1.0\" `code`: A numeric error code, see below for reserved codes. `msg`: A short message characterizing the error. `details`: A longer message describing the error. Example: ```json { \"cniVersion\": \"1.1.0\", \"code\": 7, \"msg\": \"Invalid Configuration\", \"details\": \"Network 192.168.0.0/31 too small to allocate from.\" } ``` Error codes 0-99 are reserved for well-known errors. Values of 100+ can be freely used for plugin specific errors. Error Code|Error Description | `1`|Incompatible CNI version `2`|Unsupported field in network configuration. The error message must contain the key and value of the unsupported field. `3`|Container unknown or does not exist. This error implies the runtime does not need to perform any container network cleanup (for example, calling the `DEL` action on the container). `4`|Invalid necessary environment variables, like CNICOMMAND, CNICONTAINERID, etc. The error message must contain the names of invalid variables. `5`|I/O failure. For example, failed to read network config bytes from stdin. `6`|Failed to decode content. 
For example, failed to unmarshal network config from bytes or failed to decode version info from string. `7`|Invalid network config. If some validations on network configs do not pass, this error will be raised. `11`|Try again later. If the plugin detects some transient condition that should clear up, it can use this code to notify the runtime it should re-try the operation later."
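As an illustration only (this example is not part of the specification's text), a plugin hitting such a transient condition could report it using the error object format defined above with reserved code `11`; the message and details shown here are hypothetical:

```json
{
  "cniVersion": "1.1.0",
  "code": 11,
  "msg": "Try again later",
  "details": "Address allocation store is temporarily locked; retry the operation shortly."
}
```

The runtime would then be expected to re-invoke the same operation after a short delay.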
},
{
"data": "In addition, stderr can be used for unstructured output such as logs. Plugins must output a JSON object with the following keys upon a `VERSION` operation: `cniVersion`: The value of `cniVersion` specified on input `supportedVersions`: A list of supported specification versions Example: ```json { \"cniVersion\": \"1.1.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\", \"1.1.0\" ] } ``` We assume the network configuration in section 1. For this attachment, the runtime produces `portmap` and `mac` capability args, along with the generic argument \"argA=foo\". The examples uses `CNI_IFNAME=eth0`. The container runtime would perform the following steps for the `add` operation. 1) Call the `bridge` plugin with the following JSON, `CNI_COMMAND=ADD`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin, as it delegates IPAM to the `host-local` plugin, would execute the `host-local` binary with the exact same input, `CNI_COMMAND=ADD`. The `host-local` plugin returns the following result: ```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\" } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin returns the following result, configuring the interface according to the delegated IPAM configuration: ```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` 2) Next, call the `tuning` plugin, with `CNI_COMMAND=ADD`. Note that `prevResult` is supplied, along with the `mac` capability argument. The request configuration passed is: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The plugin returns the following result. Note that the mac has changed. ```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` 3) Finally, call the `portmap` plugin, with `CNI_COMMAND=ADD`. 
Note that `prevResult` matches that returned by `tuning`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The `portmap` plugin outputs the exact same result as that returned by `bridge`, as the plugin has not modified anything that would change the result (i.e. it only created iptables rules). Given the previous Add, the container runtime would perform the following steps for the Check action: 1) First call the `bridge` plugin with the following request configuration, including the `prevResult` field containing the final JSON response from the Add operation, including the changed"
},
{
"data": "`CNI_COMMAND=CHECK` ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The `bridge` plugin, as it delegates IPAM, calls `host-local`, `CNI_COMMAND=CHECK`. It returns no error. Assuming the `bridge` plugin is satisfied, it produces no output on standard out and exits with a 0 return code. 2) Next call the `tuning` plugin with the following request configuration: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Likewise, the `tuning` plugin exits indicating success. 3) Finally, call `portmap` with the following request configuration: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Given the same network configuration JSON list, the container runtime would perform the following steps for the Delete action. Note that plugins are executed in reverse order from the Add and Check actions. 
1) First, call `portmap` with the following request configuration, `CNI_COMMAND=DEL`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` 2) Next, call the `tuning` plugin with the following request configuration, `CNI_COMMAND=DEL`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` 3) Finally, call `bridge`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The bridge plugin executes the `host-local` delegated plugin with `CNI_COMMAND=DEL` before returning."
}
] | {
"category": "Runtime",
"file_name": "SPEC.md",
"project_name": "Container Network Interface (CNI)",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "title: \"FAQ\" layout: docs Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more sophisticated management of your Kubernetes cluster backups and restores, we feel that Velero is generally a better approach. It gives you the ability to throw away an unstable cluster and restore your Kubernetes resources and data into a new cluster, which you can't do easily just by backing up and restoring etcd. Examples of cases where Velero is useful: you don't have access to etcd (e.g. you're running on GKE) backing up both Kubernetes resources and persistent volume state cluster migrations backing up a subset of your Kubernetes resources backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you run a custom apiserver) Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the pod so that it can be scheduled onto a new node. You can see some more examples of the differences in We strongly recommend that each Velero instance use a distinct bucket/prefix combination to store backups. Having multiple Velero instances write backups to the same bucket/prefix combination can lead to numerous problems - failed backups, overwritten backups, inadvertently deleted backups, etc., all of which can be avoided by using a separate bucket + prefix per Velero instance. It's fine to have multiple Velero instances back up to the same bucket if each instance uses its own prefix within the bucket. This can be configured in your `BackupStorageLocation`, by setting the `spec.objectStorage.prefix` field. It's also fine to use a distinct bucket for each Velero instance, and not to use prefixes at all. Related to this, if you need to restore a backup that was created in cluster A into cluster B, you may configure cluster B with a backup storage location that points to cluster A's bucket/prefix. If you do this, you should configure the storage location pointing to cluster A's bucket/prefix in `ReadOnly` mode via the `--access-mode=ReadOnly` flag on the `velero backup-location create` command. This will ensure no new backups are created from Cluster B in Cluster A's bucket/prefix, and no existing backups are deleted or overwritten."
}
] | {
"category": "Runtime",
"file_name": "faq.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Who is using Cilium? ==================== Sharing experiences and learning from other users is essential. We are frequently asked who is using a particular feature of Cilium so people can get in contact with other users to share experiences and best practices. People also often want to know if product/platform X has integrated Cilium. While the allows users to get in touch, it can be challenging to find this information quickly. The following is a directory of adopters to help identify users of individual features. The users themselves directly maintain the list. Adding yourself as a user If you are using Cilium or it is integrated into your product, service, or platform, please consider adding yourself as a user with a quick description of your use case by opening a pull request to this file and adding a section describing your usage of Cilium. If you are open to others contacting you about your use of Cilium on Slack, add your Slack nickname as well. N: Name of user (company) D: Description U: Usage of features L: Link with further information (optional) Q: Contacts available for questions (optional) Example entry: N: Cilium Example User Inc. D: Cilium Example User Inc. is using Cilium for scientific purposes U: ENI networking, DNS policies, ClusterMesh Q: @slacknick1, @slacknick2 Requirements to be listed You must represent the user listed. Do NOT* add entries on behalf of other users. There is no minimum deployment size but we request to list permanent production deployments only, i.e., no demo or trial deployments. Commercial use is not required. A well-done home lab setup can be equally interesting as a large-scale commercial deployment. Users (Alphabetically) * N: AccuKnox D: AccuKnox uses Cilium for network visibility and network policy enforcement. U: L3/L4/L7 policy enforcement using Identity, External/VM Workloads, Network Visibility using Hubble L: https://www.accuknox.com/spifee-identity-for-cilium-presentation-at-kubecon-2021, https://www.accuknox.com/cilium Q: @nyrahul N: Acoss D: Acoss is using cilium as their main CNI plugin (self hosted k8s, On-premises) U: CiliumNetworkPolicy, Hubble, BPF NodePort, Direct routing L: @JrCs N: Adobe, Inc. D: Adobe's Project Ethos uses Cilium for multi-tenant, multi-cloud clusters U: L3/L4/L7 policies L: https://youtu.be/39FLsSc2P-Y N: AirQo D: AirQo uses Cilium as the CNI plugin U: CNI, Networking, NetworkPolicy, Cluster Mesh, Hubble, Kubernetes services L: @airqo-platform N: Alibaba Cloud D: Alibaba Cloud is using Cilium together with Terway CNI as the high-performance ENI dataplane U: Networking, NetworkPolicy, Services, IPVLAN L: https://www.alibabacloud.com/blog/how-does-alibaba-cloud-build-high-performance-cloud-native-pod-networks-in-production-environments_596590 N: Amazon Web Services (AWS) D: AWS uses Cilium as the default CNI for EKS Anywhere U: Networking, NetworkPolicy, Services L: https://isovalent.com/blog/post/2021-09-aws-eks-anywhere-chooses-cilium N: APPUiO by VSHN D: VSHN uses Cilium for multi-tenant networking on APPUiO Cloud and as an add-on to APPUiO Managed, both on Red Hat OpenShift and Cloud Kubernetes. 
U: CNI, Networking, NetworkPolicy, Hubble, IPAM, Kubernetes services L: https://products.docs.vshn.ch/products/appuio/managed/addon_cilium.html and https://www.appuio.cloud N: ArangoDB Oasis D: ArangoDB Oasis is using Cilium in to separate database deployments in our multi-tenant cloud environment U: Networking, CiliumNetworkPolicy(cluster & local), Hubble, IPAM L: https://cloud.arangodb.com Q: @ewoutp @Robert-Stam N: Ascend.io D: Ascend.io is using Cilium as a consistent CNI for our Data Automation Platform on GKE, EKS, and AKS. U: Transparent Encryption, Overlay Networking, Cluster Mesh, Egress Gateway, Network Policy, Hubble L: https://www.ascend.io/ Q: @Joe Stevens N: Ayedo D: Ayedo builds and operates cloud-native container platforms based on Kubernetes U: Hubble for Visibility, Cilium as Mesh between Services L: https://www.ayedo.de/ N: Back Market D: Back Market is using Cilium as CNI in all their clusters and environments (kOps + EKS in AWS) U: CNI, Network Policies, Transparent Encryption (WG), Hubble Q: @nitrikx L:"
},
{
"data": "N: Berops D: Cilium is used as a CNI plug-in in our open-source multi-cloud and hybrid-cloud Kubernetes platform - Claudie U: CNI, Network Policies, Hubble Q: @Bernard Halas L: https://github.com/berops/claudie N: ByteDance D: ByteDance is using Cilium as CNI plug-in for self-hosted Kubernetes. U: CNI, Networking L: @Jiang Wang N: Canonical D: Canonical's Kubernetes distribution microk8s uses Cilium as CNI plugin U: Networking, NetworkPolicy, and Kubernetes services L: https://microk8s.io/ N: Capital One D: Capital One uses Cilium as its standard CNI for all Kubernetes environments U: CNI, CiliumClusterWideNetworkpolicy, CiliumNetworkPolicy, Hubble, network visibility L: https://www.youtube.com/watch?v=hwOpCKBaJ-w N: CENGN - Centre of Excellence in Next Generation Networks D: CENGN is using Cilium in multiple clusters including production and development clusters (self-hosted k8s, On-premises) U: L3/L4/L7 network policies, Monitoring via Prometheus metrics & Hubble L: https://www.youtube.com/watch?v=yXm7yZE2rk4 Q: @rmaika @mohahmed13 N: Cistec D: Cistec is a clinical information system provider and uses Cilium as the CNI plugin. U: Networking and network policy L: https://www.cistec.com/ N: Civo D: Civo is offering Cilium as the CNI option for Civo users to choose it for their Civo Kubernetes clusters. U: Networking and network policy L: https://www.civo.com/kubernetes N: ClickHouse D: ClickHouse uses Cilium as CNI for AWS Kubernetes environments U: CiliumNetworkPolicy, Hubble, ClusterMesh L: https://clickhouse.com N: Cognite D: Cognite is an industrial DataOps provider and uses Cilium as the CNI plugin Q: @Robert Collins N: CONNY D: CONNY is legaltech platform to improve access to justice for individuals U: Networking, NetworkPolicy, Services Q: @ant31 L: https://conny.de N: Cosmonic D: Cilium is the CNI for Cosmonic's Nomad based PaaS U: Networking, NetworkPolicy, Transparent Encryption L: https://cilium.io/blog/2023/01/18/cosmonic-user-story/ N: Crane D: Crane uses Cilium as the default CNI U: Networking, NetworkPolicy, Services L: https://github.com/slzcc/crane Q: @slzcc N: Cybozu D: Cybozu deploys Cilium to on-prem Kubernetes Cluster and uses it with Coil by CNI chaining. U: CNI Chaining, L4 LoadBalancer, NetworkPolicy, Hubble L: https://cybozu-global.com/ N: Daimler Truck AG D: The CSG RuntimeDepartment of DaimlerTruck is maintaining an AKS k8s cluster as a shared resource for DevOps crews and is using Cilium as the default CNI (BYOCNI). U: Networking, NetworkPolicy and Monitoring L: https://daimlertruck.com Q: @brandshaide N: DaoCloud - spiderpool & merbridge D: spiderpool is using Cilium as their main CNI plugin for overlay and merbridge is using Cilium eBPF library to speed up your Service Mesh U: CNI, Service load-balancing, cluster mesh L: https://github.com/spidernet-io/spiderpool, https://github.com/merbridge/merbridge Q: @weizhoublue, @kebe7jun N: Datadog D: Datadog is using Cilium in AWS (self-hosted k8s) U: ENI Networking, Service load-balancing, Encryption, Network Policies, Hubble Q: @lbernail, @roboll, @mvisonneau N: Dcode.tech D: We specialize in AWS and Kubernetes, and actively implement Cilium at our clients. U: CNI, CiliumNetworkPolicy, Hubble UI L: https://dcode.tech/ Q: @eliranw, @maordavidov N: Deckhouse D: Deckhouse Kubernetes Platform is using Cilium as a one of the supported CNIs. 
U: Networking, Security, Hubble UI for network visibility L: https://github.com/deckhouse/deckhouse N: Deezer D: Deezer is using Cilium as CNI for all our on-prem clusters for its performance and security. We plan to leverage BGP features as well soon U: CNI, Hubble, kube-proxy replacement, eBPF L: https://github.com/deezer N: DigitalOcean D: DigitalOcean is using Cilium as the CNI for Digital Ocean's managed Kubernetes Services (DOKS) U: Networking and network policy L: https://github.com/digitalocean/DOKS N: Edgeless Systems D: Edgeless Systems is using Cilium as the CNI for Edgeless System's Confidential Kubernetes Distribution (Constellation) U: Networking (CNI), Transparent Encryption (WG), L: https://docs.edgeless.systems/constellation/architecture/networking Q: @m1ghtym0 N: Eficode D: As a cloud-native and devops consulting firm, we have implemented Cilium on customer engagements U: CNI, CiliumNetworkPolicy at L7, Hubble L:"
},
{
"data": "Q: @Andy Allred N: Elastic Path D: Elastic Path is using Cilium in their CloudOps for Kubernetes production clusters U: CNI L: https://documentation.elasticpath.com/cloudops-kubernetes/docs/index.html Q: @Neil Seward N: Equinix D: Equinix Metal is using Cilium for production and non-production environments on bare metal U: CNI, CiliumClusterWideNetworkpolicy, CiliumNetworkPolicy, BGP advertisements, Hubble, network visibility L: https://metal.equinix.com/ Q: @matoszz N: Equinix D: Equinix NL Managed Services is using Cilium with their Managed Kubernetes offering U: CNI, network policies, visibility L: https://www.equinix.nl/products/support-services/managed-services/netherlands Q: @jonkerj N: Exoscale D: Exoscale is offering Cilium as a CNI option on its managed Kubernetes service named SKS (Scalable Kubernetes Service) U: CNI, Networking L: https://www.exoscale.com/sks/ Q: @Antoine N: finleap connect D: finleap connect is using Cilium in their production clusters (self-hosted, bare-metal, private cloud) U: CNI, NetworkPolicies Q: @chue N: Form3 D: Form3 is using Cilium in their production clusters (self-hosted, bare-metal, private cloud) U: Service load-balancing, Encryption, CNI, NetworkPolicies Q: @kevholditch-f3, samo-f3, ewilde-form3 N: FRSCA - Factory for Repeatable Secure Creation of Artifacts D: FRSCA is utilizing tetragon integrated with Tekton to create runtime attestation to attest artifact and builder attributes U: Runtime observability L: https://github.com/buildsec/frsca Q: @Parth Patel N: F5 Inc D: F5 helps customers with Cilium VXLAN tunnel integration with BIG-IP U: Networking L: https://github.com/f5devcentral/f5-ci-docs/blob/master/docs/cilium/cilium-bigip-info.rst Q: @vincentmli N: Gcore D: Gcore supports Cilium as CNI provider for Gcore Managed Kubernetes Service U: CNI, Networking, NetworkPolicy, Kubernetes Services L: https://gcore.com/news/cilium-cni-support Q: @rzdebskiy N: Giant Swarm D: Giant Swarm is using Cilium in their Cluster API based managed Kubernetes service (AWS, Azure, GCP, OpenStack, VMware Cloud Director and VMware vSphere) as CNI U: Networking L: https://www.giantswarm.io/ N: GitLab D: GitLab is using Cilium to implement network policies inside Auto DevOps deployed clusters for customers using k8s U: Network policies L: https://docs.gitlab.com/ee/user/clusters/applications.html#install-cilium-using-gitlab-ci Q: @ap4y @whaber N: Google D: Google is using Cilium in Anthos and Google Kubernetes Engine (GKE) as Dataplane V2 U: Networking, network policy, and network visibility L: https://cloud.google.com/blog/products/containers-kubernetes/bringing-ebpf-and-cilium-to-google-kubernetes-engine N: G DATA CyberDefense AG D: G DATA CyberDefense AG is using Cilium on our managed on premise clusters. U: Networking, network policy, security, and network visibility L: https://gdatasoftware.com Q: @farodin91 N: IDNIC | Kadabra D: IDNIC is the National Internet Registry administering IP addresses for INDONESIA, uses Cilium to powered Kadabra project runing services across multi data centers. 
U: Networking, Network Policies, kube-proxy Replacement, Service Load Balancing and Cluster Mesh L: https://ris.idnic.net/ Q: @ardikabs N: IKEA IT AB D: IKEA IT AB is using Cilium for production and non-production environments (self-hosted, bare-metal, private cloud) U: Networking, CiliumclusterWideNetworkPolicy, CiliumNetworkPolicy, kube-proxy replacement, Hubble, Direct routing, egress gateway, hubble-otel, Multi Nic XDP, BGP advertisements, Bandwidth Manager, Service Load Balancing, Cluster Mesh L: https://www.ingka.com/ N: Immerok D: Immerok uses Cilium for cross-cluster communication and network isolation; Immerok Cloud is a serverless platform for the full power of at any scale. U: Networking, network policy, observability, cluster mesh, kube-proxy replacement, security, CNI L: https://immerok.io Q: @austince, @dmvk N: Infomaniak D: Infomaniak is using Cilium in their production clusters (self-hosted, bare-metal and openstack) U: Networking, CiliumNetworkPolicy, BPF NodePort, Direct routing, kube-proxy replacement L: https://www.infomaniak.com/en Q: @reneluria N: innoQ Schweiz GmbH D: As a consulting company we added Cilium to a couple of our customers infrastructure U: Networking, CiliumNetworkPolicy at L7, kube-proxy replacement, encryption L: https://www.cloud-migration.ch/ Q: @fakod N: Isovalent D: Cilium is the platform that powers Isovalents enterprise networking, observability, and security solutions U: Networking, network policy, observability, cluster mesh, kube-proxy replacement, security, egress gateway, service load balancing, CNI L:"
},
{
"data": "Q: @BillMulligan N: JUMO D: JUMO is using Cilium as their CNI plugin for all of their AWS-hosted EKS clusters U: Networking, network policy, network visibility, cluster mesh Q: @Matthieu ANTOINE, @Carlos Castro, @Joao Coutinho (Slack) N: Keploy D: Keploy is using the Cilium to capture the network traffic to perform E2E Testing. U: Networking, network policy, Monitoring, E2E Testing L: https://keploy.io/ N: Kilo D: Cilium is a supported CNI for Kilo. When used together, Cilium + Kilo create a full mesh via WireGuard for Kubernetes in edge environments. U: CNI, Networking, Hubble, kube-proxy replacement, network policy L: https://kilo.squat.ai/ Q: @squat, @arpagon N: kOps D: kOps is using Cilium as one of the supported CNIs U: Networking, Hubble, Encryption, kube-proxy replacement L: kops.sigs.k8s.io/ Q: @olemarkus N: Kryptos Logic D: Kryptos is a cyber security company that is using Kubernetes on-prem in which Cilium is our CNI of choice. U: Networking, Observability, kube-proxy replacement N: kubeasz D: kubeasz, a certified kubernetes installer, is using Cilium as a one of the supported CNIs. U: Networking, network policy, Hubble for network visibility L: https://github.com/easzlab/kubeasz N: Kube-OVN D: Kube-OVN uses Cilium to enhance service performance, security and monitoring. U: CNI-Chaining, Hubble, kube-proxy replacement L: https://github.com/kubeovn/kube-ovn/blob/master/docs/IntegrateCiliumIntoKubeOVN.md Q: @oilbeater N: Kube-Hetzner D: Kube-Hetzner is a open-source Terraform project that uses Cilium as an possible CNI in its cluster deployment on Hetzner Cloud. U: Networking, Hubble, kube-proxy replacement L: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner#cni Q: @MysticalTech N: Kubermatic D: Kubermatic Kubernetes Platform is using Cilium as a one of the supported CNIs. U: Networking, network policy, Hubble for network visibility L: https://github.com/kubermatic/kubermatic N: KubeSphere - KubeKey D: KubeKey is an open-source lightweight tool for deploying Kubernetes clusters and addons efficiently. It uses Cilium as one of the supported CNIs. U: Networking, Security, Hubble UI for network visibility L: https://github.com/kubesphere/kubekey Q: @FeynmanZhou N: K8e - Simple Kubernetes Distribution D: Kubernetes Easy (k8e) is a lightweight, Extensible, Enterprise Kubernetes distribution. It uses Cilium as default CNI network. U: Networking, network policy, Hubble for network visibility L: https://github.com/xiaods/k8e Q: @xds2000 N: Liquid Reply D: Liquid Reply is a professional service provider and utilizes Cilium on suitable projects and implementations. 
U: Networking, network policy, Hubble for network visibility, Security L: http://liquidreply.com Q: @mkorbi N: Magic Leap D: Magic Leap is using Hubble plugged to GKE Dataplane v2 clusters U: Hubble Q: @romachalm N: Melenion Inc D: Melenion is using Cilium as the CNI for its on-premise production clusters U: Service Load Balancing, Hubble Q: @edude03 N: Meltwater D: Meltwater is using Cilium in AWS on self-hosted multi-tenant k8s clusters as the CNI plugin U: ENI Networking, Encryption, Monitoring via Prometheus metrics & Hubble Q: @recollir, @dezmodue N: Microsoft D: Microsoft is using Cilium in \"Azure CNI powered by Cilium\" AKS (Azure Kubernetes Services) clusters L: https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-cni-powered-by-cilium-for-azure-kubernetes-service-aks/ba-p/3662341 Q: @tamilmani1989 @chandanAggarwal N: Mobilab D: Mobilab uses Cilium as the CNI for its internal cloud U: CNI L: https://mobilabsolutions.com/2019/01/why-we-switched-to-cilium/ N: MyFitnessPal D: MyFitnessPal trusts Cilium with high volume user traffic in AWS on self-hosted k8s clusters as the CNI plugin and in GKE with Dataplane V2 U: Networking (CNI, Maglev, kube-proxy replacement, local redirect policy), Observability (Network metrics with Hubble, DNS proxy, service maps, policy troubleshooting) and Security (Network Policy) L: https://www.myfitnesspal.com N: Mux, Inc. D: Mux deploys Cilium on self-hosted k8s clusters (Cluster API) in GCP and AWS to run its video streaming/analytics platforms. U: Pod networking (CNI, IPAM, Host-reachable Services), Hubble, Cluster-mesh. TBD: Network Policy, Transparent Encryption (WG), Host Firewall. L: https://mux.com Q: @dilyevsky N: NetBird D: NetBird uses Cilium to compile BPF to Go for cross-platform DNS management and NAT traversal U: bpf2go to compile a C source file into eBPF bytecode and then to Go L:"
},
{
"data": "Q: @braginini N: NETWAYS Web Services D: NETWAYS Web Services offers Cilium to their clients as CNI option for their Managed Kubernetes clusters. U: Networking (CNI), Observability (Hubble) L: https://nws.netways.de/managed-kubernetes/ N: New York Times (the) D: The New York Times is using Cilium on EKS to build multi-region multi-tenant shared clusters U: Networking (CNI, EKS IPAM, Maglev, kube-proxy replacement, Direct Routing), Observability (Network metrics with Hubble, policy troubleshooting) and Security (Network Policy) L: https://www.nytimes.com/, https://youtu.be/9FDpMNvPrCw Q: @abebars N: Nexxiot D: Nexxiot is an IoT SaaS provider using Cilium as the main CNI plugin on AWS EKS clusters U: Networking (IPAM, CNI), Security (Network Policies), Visibility (hubble) L: https://nexxiot.com N: Nine Internet Solutions AG D: Nine uses Cilium on all Nine Kubernetes Engine clusters U: CNI, network policy, kube-proxy replacement, host firewall L: https://www.nine.ch/en/kubernetes N: Northflank D: Northflank is a PaaS and uses Cilium as the main CNI plugin across GCP, Azure, AWS and bare-metal U: Networking, network policy, hubble, packet monitoring and network visibility L: https://northflank.com Q: @NorthflankWill, @Champgoblem N: Overstock Inc. D: Overstock is using Cilium as the main CNI plugin on bare-metal clusters (self hosted k8s). U: Networking, network policy, hubble, observability N: Palantir Technologies Inc. D: Palantir is using Cilium as their main CNI plugin in all major cloud providers [AWS/Azure/GCP] (self hosted k8s). U: ENI networking, L3/L4 policies, FQDN based policy, FQDN filtering, IPSec Q: ungureanuvladvictor N: Palark GmbH D: Palark uses Cilium for networking in its Kubernetes platform provided to numerous customers as a part of its DevOps as a Service offering. U: CNI, Networking, Network policy, Security, Hubble UI L: https://blog.palark.com/why-cilium-for-kubernetes-networking/ Q: @shurup N: Parseable D: Parseable uses Tertragon for collecting and ingesting eBPF logs for Kubernetes clusters. U: Security, eBPF, Tetragon L: https://www.parseable.io/blog/ebpf-log-analytics Q: @nitisht N: Pionative D: Pionative supplies all its clients across cloud providers with Kubernetes running Cilium to deliver the best performance out there. U: CNI, Networking, Security, eBPF L: https://www.pionative.com Q: @Pionerd N: Plaid Inc D: Plaid is using Cilium as their CNI plugin in self-hosted Kubernetes on AWS. U: CNI, network policies L: Q: @diversario @jandersen-plaid N: PlanetScale D: PlanetScale is using Cilium as the CNI for its serverless database platform. U: Networking (CNI, IPAM, kube-proxy replacement, native routing), Network Security, Cluster Mesh, Load Balancing L: https://planetscale.com/ Q: @dctrwatson N: plusserver Kubernetes Engine (PSKE) D: PSKE uses Cilium for multiple scenarios, for examples for managed Kubernetes clusters provided with Gardener Project across AWS and OpenStack. U: CNI , Overlay Network, Network Policies L: https://www.plusserver.com/en/product/managed-kubernetes/, https://github.com/gardener/gardener-extension-networking-cilium N: Polar Signals D: Polar Signals uses Cilium as the CNI on its GKE dataplane v2 based clusters. U: Networking L: https://polarsignals.com Q: @polarsignals @brancz N: Polverio D: Polverio KubeLift is a single-node Kubernetes distribution optimized for Azure, using Cilium as the CNI. 
U: CNI, IPAM L: https://polverio.com Q: @polverio @stuartpreston N: Poseidon Laboratories D: Poseidon's Typhoon Kubernetes distro uses Cilium as the default CNI and its used internally U: Networking, policies, service load balancing L: https://github.com/poseidon/typhoon/ Q: @dghubble @typhoon8s N: PostFinance AG D: PostFinance is using Cilium as their CNI for all mission critical, on premise k8s clusters U: Networking, network policies, kube-proxy replacement L: https://github.com/postfinance N: Proton AG D: Proton is using Cilium as their CNI for all their Kubernetes clusters U: Networking, network policies, host firewall, kube-proxy replacement, Hubble L:"
},
{
"data": "Q: @j4m3s @MrFreezeex N: Radio France D: Radio France is using Cilium in their production clusters (self-hosted k8s with kops on AWS) U: Mainly Service load-balancing Q: @francoisj N: Rancher Labs, now part of SUSE D: Rancher Labs certified Kubernetes distribution RKE2 can be deployed with Cilium. U: Networking and network policy L: https://github.com/rancher/rke and https://github.com/rancher/rke2 N: Rapyuta Robotics. D: Rapyuta is using cilium as their main CNI plugin. (self hosted k8s) U: CiliumNetworkPolicy, Hubble, Service Load Balancing. Q: @Gowtham N: Rafay Systems D: Rafay's Kubernetes Operations Platform uses Cilium for centralized network visibility and network policy enforcement U: NetworkPolicy, Visibility via Prometheus metrics & Hubble L: https://rafay.co/platform/network-policy-manager/ Q: @cloudnativeboy @mohanatreya N: Robinhood Markets D: Robinhood uses Cilium for Kubernetes overlay networking in an environment where we run tests for backend services U: CNI, Overlay networking Q: @Madhu CS N: Santa Claus & the Elves D: All our infrastructure to process children's letters and wishes, toy making, and delivery, distributed over multiple clusters around the world, is now powered by Cilium. U: ClusterMesh, L4LB, XDP acceleration, Bandwidth manager, Encryption, Hubble L: https://qmonnet.github.io/whirl-offload/2024/01/02/santa-switches-to-cilium/ N: SAP D: SAP uses Cilium for multiple internal scenarios. For examples for self-hosted Kubernetes scenarios on AWS with SAP Concur and for managed Kubernetes clusters provided with Gardener Project across AWS, Azure, GCP, and OpenStack. U: CNI , Overlay Network, Network Policies L: https://www.concur.com, https://gardener.cloud/, https://github.com/gardener/gardener-extension-networking-cilium Q: @dragan (SAP Concur), @docktofuture & @ScheererJ (Gardener) N: Sapian D: Sapian uses Cilium as the default CNI in our product DialBox Cloud; DialBox cloud is an Edge Kubernetes cluster using for WireGuard mesh connectivity inter-nodes. Therefore, Cilium is crucial for low latency in real-time communications environments. U: CNI, Network Policies, Hubble, kube-proxy replacement L: https://sapian.com.co, https://arpagon.co/blog/k8s-edge Q: @arpagon N: Schenker AG D: Land transportation unit of Schenker uses Cilium as default CNI in self-managed kubernetes clusters running in AWS U: CNI, Monitoring, kube-proxy replacement L: https://www.dbschenker.com/global Q: @amirkkn N: Sealos D: Sealos is using Cilium as a consistent CNI for our Sealos Cloud. U: Networking, Service, kube-proxy replacement, Network Policy, Hubble L: https://sealos.io Q: @fanux, @yangchuansheng N: Seznam.cz D: Seznam.cz uses Cilium in multiple scenarios in on-prem DCs. At first as L4LB which loadbalances external traffic into k8s+openstack clusters then as CNI in multiple k8s and openstack clusters which are all connected in a clustermesh to enforce NetworkPolicies across pods/VMs. U: L4LB, L3/4 CNPs+CCNPs, KPR, Hubble, HostPolicy, Direct-routing, IPv4+IPv6, ClusterMesh Q: @oblazek N: Simple D: Simple uses cilium as default CNI in Kubernetes clusters (AWS EKS) for both development and production environments. U: CNI, Network Policies, Hubble L: https://simple.life Q: @sergeyshevch N: Scaleway D: Scaleway uses Cilium as the default CNI for Kubernetes Kapsule U: Networking, NetworkPolicy, Services L: @jtherin @remyleone N: Schuberg Philis D: Schuberg Philis uses Cilium as CNI for mission critical kubernetes clusters we run for our customers. 
U: CNI (instead of amazon-vpc-cni-k8s), DefaultDeny(Zero Trust), Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy, EKS L: https://schubergphilis.com/en Q: @stimmerman @shoekstra @mbaumann N: SI Analytics D: SI Analytics uses Cilium as CNI in self-managed Kubernetes clusters in on-prem DCs. And also use Cilium as CNI in its GKE dataplane v2 based clusters. U: CNI, Network Policies, Hubble L: https://si-analytics.ai, https://ovision.ai Q: @jholee N: SIGHUP D: SIGHUP integrated Cilium as a supported CNI for KFD (Kubernetes Fury Distribution), our enterprise-grade OSS reference architecture U: Available supported CNI L: https://sighup.io, https://github.com/sighupio/fury-kubernetes-networking Q: @jnardiello @nutellino N: SmileDirectClub D: SmileDirectClub is using Cilium in manufacturing clusters (self-hosted on vSphere and AWS EC2) U: CNI Q: @joey, @onur.gokkocabas N: Snapp D: Snapp is using Cilium in production for its on premise openshift clusters U: CNI, Network Policies, Hubble Q: @m-yosefpor N:"
},
{
"data": "D: Cilium is part of Gloo Application Networking platform, with a batteries included but swappable manner U: CNI, Network Policies Q: @linsun N: S&P Global D: S&P Global uses Cilium as their multi-cloud CNI U: CNI L: https://www.youtube.com/watch?v=6CZ_SSTqb4g N: Spectro Cloud D: Spectro Cloud uses & promotes Cilium for clusters its K8S management platform (Palette) deploys U: CNI, Overlay network, kube-proxy replacement Q: @Kevin Reeuwijk N: Spherity D: Spherity is using Cilium on AWS EKS U: CNI/ENI Networking, Network policies, Hubble Q: @solidnerd N: Sportradar D: Sportradar is using Cilium as their main CNI plugin in AWS (using kops) U: L3/L4 policies, Hubble, BPF NodePort, CiliumClusterwideNetworkPolicy Q: @Eric Bailey, @Ole Markus N: Sproutfi D: Sproutfi uses Cilium as the CNI on its GKE based clusters U: Service Load Balancing, Hubble, Datadog Integration for Prometheus metrics Q: @edude03 N: SuperOrbital D: As a Kubernetes-focused consulting firm, we have implemented Cilium on customer engagements U: CNI, CiliumNetworkPolicy at L7, Hubble L: https://superorbital.io/ Q: @jmcshane N: Syself D: Syself uses Cilium as the CNI for Syself Autopilot, a managed Kubernetes platform U: CNI, HostFirewall, Monitoring, CiliumClusterwideNetworkPolicy, Hubble L: https://syself.com Q: @sbaete N: Talos D: Cilium is one of the supported CNI's in Talos U: Networking, NetworkPolicy, Hubble, BPF NodePort L: https://github.com/talos-systems/talos Q: @frezbo, @smira, @Ulexus N: Tencent Cloud D: Tencent Cloud container team designed the TKE hybrid cloud container network solution with Cilium as the cluster network base U: Networking, CNI L: https://segmentfault.com/a/1190000040298428/en N: teuto.net Netzdienste GmbH D: teuto.net is using cilium for their managed k8s service, t8s U: CNI, CiliumNetworkPolicy, Hubble, Encryption, ... L: https://teuto.net/managed-kubernetes Q: @cwrau N: Trendyol D: Trendyol.com has recently implemented Cilium as the default CNI for its production Kubernetes clusters starting from version 1.26. U: Networking, kube-proxy replacement, eBPF, Network Visibility with Hubble and Grafana, Local Redirect Policy L: https://t.ly/FDCZK N: T-Systems International D: TSI uses Cilium for it's Open Sovereign Cloud product, including as a CNI for Gardener-based Kubernetes clusters and bare-metal infrastructure managed by OnMetal. 
U: CNI, overlay network, NetworkPolicies Q: @ManuStoessel N: uSwitch D: uSwitch is using Cilium in AWS for all their production clusters (self hosted k8s) U: ClusterMesh, CNI-Chaining (with amazon-vpc-cni-k8s) Q: @jirving N: United Cloud D: United Cloud is using Cilium for all non-production and production clusters (on-premises) U: CNI, Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy, ClusterMesh, Encryption L: https://united.cloud Q: @boris N: Utmost Software, Inc D: Utmost is using Cilium in all tiers of its Kubernetes ecosystem to implement zero trust U: CNI, DefaultDeny(Zero Trust), Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy L: https://blog.utmost.co/zero-trust-security-at-utmost Q: @andrewholt N: Trip.com D: Trip.com is using Cilium in their production clusters (self-hosted k8s, On-premises and AWS) U: ENI Networking, Service load-balancing, Direct routing (via Bird) L: https://ctripcloud.github.io/cilium/network/2020/01/19/trip-first-step-towards-cloud-native-networking.html Q: @ArthurChiao N: Tailor Brands D: Tailor Brands is using Cilium in their production, staging, and development clusters (AWS EKS) U: CNI (instead of amazon-vpc-cni-k8s), Hubble, Datadog Integration for Prometheus metrics Q: @liorrozen N: Twilio D: Twilio Segment is using Cilium across their k8s-based compute platform U: CNI, EKS direct routing, kube-proxy replacement, Hubble, CiliumNetworkPolicies Q: @msaah N: ungleich D: ungleich is using Cilium as part of IPv6-only Kubernetes deployments. U: CNI, IPv6 only networking, BGP, eBPF Q: @Nico Schottelius, @nico:ungleich.ch (Matrix) N: Veepee D: Veepee is using Cilium on their on-premise Kubernetes clusters, hosting majority of their applications. U. CNI, BGP, eBPF, Hubble, DirectRouting (via kube-router) Q: @nerzhul N: Wildlife Studios D: Wildlife Studios is using Cilium in AWS for all their game production clusters (self hosted k8s) U: ClusterMesh, Global Service Load Balancing. Q: @Oki @luanguimaraesla @rsafonseca N: Yahoo! D: Yahoo is using Cilium for L4 North-South Load Balancing for Kubernetes Services L:"
}
] | {
"category": "Runtime",
"file_name": "USERS.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Having a clearly defined scope of a project is important for ensuring consistency and focus. These following criteria will be used when reviewing pull requests, features, and changes for the project before being accepted. Components should not have tight dependencies on each other so that they are able to be used independently. The APIs for images and containers should be designed in a way that when used together the components have a natural flow but still be useful independently. An example for this design can be seen with the overlay filesystems and the container execution layer. The execution layer and overlay filesystems can be used independently but if you were to use both, they share a common `Mount` struct that the filesystems produce and the execution layer consumes. containerd should expose primitives to solve problems instead of building high level abstractions in the API. A common example of this is how build would be implemented. Instead of having a build API in containerd we should expose the lower level primitives that allow things required in build to work. Breaking up the filesystem APIs to allow snapshots, copy functionality, and mounts allow people implementing build at the higher levels with more flexibility. For the various components in containerd there should be defined extension points where implementations can be swapped for alternatives. The best example of this is that containerd will use `runc` from OCI as the default runtime in the execution layer but other runtimes conforming to the OCI Runtime specification can be easily added to containerd. containerd will come with a default implementation for the various components. These defaults will be chosen by the maintainers of the project and should not change unless better tech for that component comes out. Additional implementations will not be accepted into the core repository and should be developed in a separate repository not maintained by the containerd maintainers. The following table specifies the various components of containerd and general features of container runtimes. The table specifies whether the feature/component is in or out of"
},
{
"data": "| Name | Description | In/Out | Reason | ||--|--|-| | execution | Provide an extensible execution layer for executing a container | in | Create,start, stop pause, resume exec, signal, delete | | cow filesystem | Built in functionality for overlay and other copy on write filesystems for containers | in | | | distribution | Having the ability to push and pull images as well as operations on images as a first class API object | in | containerd will fully support the management and retrieval of images | | metrics | container-level metrics, cgroup stats, and OOM events | in | | networking | creation and management of network interfaces | out | Networking will be handled and provided to containerd via higher level systems. | | build | Building images as a first class API | out | Build is a higher level tooling feature and can be implemented in many different ways on top of containerd | | volumes | Volume management for external data | out | The API supports mounts, binds, etc where all volumes type systems can be built on top of containerd. | | logging | Persisting container logs | out | Logging can be build on top of containerd because the containers STDIO will be provided to the clients and they can persist any way they see fit. There is no io copying of container STDIO in containerd. | containerd is scoped to a single host and makes assumptions based on that fact. It can be used to build things like a node agent that launches containers but does not have any concepts of a distributed system. containerd is designed to be embedded into a larger system, hence it only includes a barebone CLI (`ctr`) specifically for development and debugging purpose, with no mandate to be human-friendly, and no guarantee of interface stability over time. The scope of this project is an allowed list. If it's not mentioned as being in scope, it is out of scope. For the scope of this project to change it requires a 100% vote from all maintainers of the project."
}
] | {
"category": "Runtime",
"file_name": "SCOPE.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
} |
[
{
"data": "This is the title of the enhancement. Keep it simple and descriptive. The file name should be lowercased and spaces/punctuation should be replaced with `-`. The Summary section is incredibly important for producing high-quality user-focused documentation such as release notes or a development roadmap. A good summary is probably at least a paragraph in length. The URL For the related enhancement issues in the Longhorn repository. List the specific goals of the enhancement. How will we know that this has succeeded? What is out of scope for this enhancement? Listing non-goals helps to focus discussion and make progress. This is where we get down to the nitty-gritty of what the proposal actually is. Detail the things that people will be able to do if this enhancement is implemented. A good practice is including a comparison of what the user cannot do before the enhancement is implemented, why the user would want an enhancement, and what the user needs to do after, to make it clear why the enhancement is beneficial to the user. The experience details should be in the `User Experience In Detail` later. Detail what the user needs to do to use this enhancement. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down. Overview of how the enhancement will be implemented. Integration test plan. For engine enhancement, also requires engine integration test plan. Anything that requires if the user wants to upgrade to this enhancement. Additional notes."
}
] | {
"category": "Runtime",
"file_name": "YYYYMMDD-template.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "GlusterFS 3.4 introduced the libgfapi client API for C programs. This page lists bindings to the libgfapi C library from other languages. Go -- - Go language bindings for libgfapi, aiming to provide an api consistent with the default Go file apis. Java - Low level JNI binding for libgfapi - High level NIO.2 FileSystem Provider implementation for the Java platform - Java bindings for libgfapi, similar to java.io Python - Libgfapi bindings for Python Ruby - Libgfapi bindings for Ruby using FFI Rust - Libgfapi bindings for Rust using FFI Perl - Libgfapi bindings for Perl using FFI"
}
] | {
"category": "Runtime",
"file_name": "Language-Bindings.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "The following containerd command creates a container. It is referred to throughout the architecture document to help explain various points: ```bash $ sudo ctr run --runtime \"io.containerd.kata.v2\" --rm -t \"quay.io/libpod/ubuntu:latest\" foo sh ``` This command requests that containerd: Create a container (`ctr run`). Use the Kata runtime (`--runtime \"io.containerd.kata.v2\"`). Delete the container when it (`--rm`). Attach the container to the user's terminal (`-t`). Use the Ubuntu Linux to create the container that will become the (`quay.io/libpod/ubuntu:latest`). Create the container with the name \"`foo`\". Run the `sh(1)` command in the Ubuntu rootfs based container environment. The command specified here is referred to as the . Note: For the purposes of this document and to keep explanations simpler, we assume the user is running this command in the ."
}
] | {
"category": "Runtime",
"file_name": "example-command.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
[
{
"data": "title: Object Storage Overview Object storage exposes an S3 API to the storage cluster for applications to put and get data. This guide assumes a Rook cluster as explained in the . Rook can configure the Ceph Object Store for several different scenarios. See each linked section for the configuration details. Create a with dedicated Ceph pools. This option is recommended if a single object store is required, and is the simplest to get started. Create . This option is recommended when multiple object stores are required. Connect to an , rather than create a local object store. Configure to synchronize buckets between object stores in different clusters. !!! note Updating the configuration of an object store between these types is not supported. The below sample will create a `CephObjectStore` that starts the RGW service in the cluster with an S3 API. !!! note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). See the , for more detail on the settings available for a `CephObjectStore`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: my-store namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: host erasureCoded: dataChunks: 2 codingChunks: 1 preservePoolsOnDelete: true gateway: sslCertificateRef: port: 80 instances: 1 ``` After the `CephObjectStore` is created, the Rook operator will then create all the pools and other resources necessary to start the service. This may take a minute to complete. Create an object store: ```console kubectl create -f object.yaml ``` To confirm the object store is configured, wait for the RGW pod(s) to start: ```console kubectl -n rook-ceph get pod -l app=rook-ceph-rgw ``` To consume the object store, continue below in the section to . The below sample will create one or more object stores. Shared Ceph pools will be created, which reduces the overhead of additional Ceph pools for each additional object store. Data isolation is enforced between the object stores with the use of Ceph RADOS namespaces. The separate RADOS namespaces do not allow access of the data across object stores. !!! note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). Create the shared pools that will be used by each of the object stores. !!! note If object stores have been previously created, the first pool below (`.rgw.root`) does not need to be defined again as it would have already been created with an existing object store. There is only one `.rgw.root` pool existing to store metadata for all object stores. ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: rgw-root namespace: rook-ceph # namespace:cluster spec: name: .rgw.root failureDomain: host replicated: size: 3 requireSafeReplicaSize: false parameters: pg_num: \"8\" application: rgw apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: rgw-meta-pool namespace: rook-ceph # namespace:cluster spec: failureDomain: host replicated: size: 3 requireSafeReplicaSize: false parameters: pg_num: \"8\" application: rgw apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: rgw-data-pool namespace:"
},
{
"data": "# namespace:cluster spec: failureDomain: osd erasureCoded: dataChunks: 2 codingChunks: 1 application: rgw ``` Create the shared pools: ```console kubectl create -f object-shared-pools.yaml ``` After the pools have been created above, create each object store to consume the shared pools. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: store-a namespace: rook-ceph # namespace:cluster spec: sharedPools: metadataPoolName: rgw-meta-pool dataPoolName: rgw-data-pool preserveRadosNamespaceDataOnDelete: true gateway: sslCertificateRef: port: 80 instances: 1 ``` Create the object store: ```console kubectl create -f object-a.yaml ``` To confirm the object store is configured, wait for the RGW pod(s) to start: ```console kubectl -n rook-ceph get pod -l rgw=store-a ``` Additional object stores can be created based on the same shared pools by simply changing the `name` of the CephObjectStore. In the example manifests folder, two object store examples are provided: `object-a.yaml` and `object-b.yaml`. To consume the object store, continue below in the section to . Modify the default example object store name from `my-store` to the alternate name of the object store such as `store-a` in this example. Rook can connect to existing RGW gateways to work in conjunction with the external mode of the `CephCluster` CRD. First, create a `rgw-admin-ops-user` user in the Ceph cluster with the necessary caps: ```console radosgw-admin user create --uid=rgw-admin-ops-user --display-name=\"RGW Admin Ops User\" --caps=\"buckets=;users=;usage=read;metadata=read;zone=read\" --rgw-realm=<realm-name> --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> ``` The `rgw-admin-ops-user` user is required by the Rook operator to manage buckets and users via the admin ops and s3 api. The multisite configuration needs to be specified only if the admin sets up multisite for RGW. Then create a secret with the user credentials: ```console kubectl -n rook-ceph create secret generic --type=\"kubernetes.io/rook\" rgw-admin-ops-user --from-literal=accessKey=<access key of the user> --from-literal=secretKey=<secret key of the user> ``` If you have an external `CephCluster` CR, you can instruct Rook to consume external gateways with the following: ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: external-store namespace: rook-ceph spec: gateway: port: 8080 externalRgwEndpoints: ip: 192.168.39.182 ``` Use the existing `object-external.yaml` file. Even though multiple endpoints can be specified, it is recommend to use only one endpoint. This endpoint is randomly added to `configmap` of OBC and secret of the `cephobjectstoreuser`. Rook never guarantees the randomly picked endpoint is a working one or not. If there are multiple endpoints, please add load balancer in front of them and use the load balancer endpoint in the `externalRgwEndpoints` list. When ready, the message in the `cephobjectstore` status similar to this one: ```console kubectl -n rook-ceph get cephobjectstore external-store NAME PHASE external-store Ready ``` Any pod from your cluster can now access this endpoint: ```console $ curl 10.100.28.138:8080 <?xml version=\"1.0\" encoding=\"UTF-8\"?><ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult> ``` !!! info This document is a guide for creating bucket with an Object Bucket Claim (OBC). 
To create a bucket with the experimental COSI Driver, see the . Now that the object store is configured, next we need to create a bucket where a client can read and write objects. A bucket can be created by defining a storage class, similar to the pattern used by block and file storage. First, define the storage class that will allow object clients to create a bucket. The storage class defines the object storage system, the bucket retention policy, and other properties required by the administrator. Save the following as `storageclass-bucket-delete.yaml` (the example is named as such due to the `Delete` reclaim policy). ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-bucket provisioner:"
},
{
"data": "reclaimPolicy: Delete parameters: objectStoreName: my-store objectStoreNamespace: rook-ceph ``` If youve deployed the Rook operator in a namespace other than `rook-ceph`, change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace `my-namespace` the provisioner value should be `my-namespace.ceph.rook.io/bucket`. ```console kubectl create -f storageclass-bucket-delete.yaml ``` Based on this storage class, an object client can now request a bucket by creating an Object Bucket Claim (OBC). When the OBC is created, the Rook bucket provisioner will create a new bucket. Notice that the OBC references the storage class that was created above. Save the following as `object-bucket-claim-delete.yaml` (the example is named as such due to the `Delete` reclaim policy): ```yaml apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: ceph-bucket spec: generateBucketName: ceph-bkt storageClassName: rook-ceph-bucket ``` ```console kubectl create -f object-bucket-claim-delete.yaml ``` Now that the claim is created, the operator will create the bucket as well as generate other artifacts to enable access to the bucket. A secret and ConfigMap are created with the same name as the OBC and in the same namespace. The secret contains credentials used by the application pod to access the bucket. The ConfigMap contains bucket endpoint information and is also consumed by the pod. See the for more details on the `CephObjectBucketClaims`. The following commands extract key pieces of information from the secret and configmap:\" ```console export AWSHOST=$(kubectl -n default get cm ceph-bucket -o jsonpath='{.data.BUCKETHOST}') export PORT=$(kubectl -n default get cm ceph-bucket -o jsonpath='{.data.BUCKET_PORT}') export BUCKETNAME=$(kubectl -n default get cm ceph-bucket -o jsonpath='{.data.BUCKETNAME}') export AWSACCESSKEYID=$(kubectl -n default get secret ceph-bucket -o jsonpath='{.data.AWSACCESSKEYID}' | base64 --decode) export AWSSECRETACCESSKEY=$(kubectl -n default get secret ceph-bucket -o jsonpath='{.data.AWSSECRETACCESSKEY}' | base64 --decode) ``` If any `hosting.dnsNames` are set in the `CephObjectStore` CRD, S3 clients can access buckets in . Otherwise, S3 clients must be configured to use path-style access. Now that you have the object store configured and a bucket created, you can consume the object storage from an S3 client. This section will guide you through testing the connection to the `CephObjectStore` and uploading and downloading from it. Run the following commands after you have connected to the . To simplify the s3 client commands, you will want to set the four environment variables for use by your client (ie. inside the toolbox). See above for retrieving the variables for a bucket created by an `ObjectBucketClaim`. ```console export AWS_HOST=<host> export PORT=<port> export AWSACCESSKEY_ID=<accessKey> export AWSSECRETACCESS_KEY=<secretKey> ``` `Host`: The DNS host name where the rgw service is found in the cluster. Assuming you are using the default `rook-ceph` cluster, it will be `rook-ceph-rgw-my-store.rook-ceph.svc`. `Port`: The endpoint where the rgw service is listening. Run `kubectl -n rook-ceph get svc rook-ceph-rgw-my-store`, to get the port. 
`Access key`: The user's `access_key` as printed above `Secret key`: The user's `secret_key` as printed above The variables for the user generated in this example might be: ```console export AWS_HOST=rook-ceph-rgw-my-store.rook-ceph.svc export PORT=80 export AWSACCESSKEY_ID=XEZDB3UJ6X7HVBE7X7MA export AWSSECRETACCESS_KEY=7yGIZON7EhFORz0I40BFniML36D2rl8CQQ5kXU6l ``` The access key and secret key can be retrieved as described in the section above on or below in the section if you are not creating the buckets with an `ObjectBucketClaim`. To test the `CephObjectStore`, set the object store credentials in the toolbox pod that contains the `s5cmd` tool. !!! important The default toolbox.yaml does not contain the s5cmd. The toolbox must be started with the rook operator image (toolbox-operator-image), which does contain s5cmd. ```console kubectl create -f deploy/examples/toolbox-operator-image.yaml mkdir ~/.aws cat >"
},
{
"data": "<< EOF [default] awsaccesskeyid = ${AWSACCESSKEYID} awssecretaccesskey = ${AWSSECRETACCESSKEY} EOF ``` Upload a file to the newly created bucket ```console echo \"Hello Rook\" > /tmp/rookObj s5cmd --endpoint-url http://$AWSHOST:$PORT cp /tmp/rookObj s3://$BUCKETNAME ``` Download and verify the file from the bucket ```console s5cmd --endpoint-url http://$AWSHOST:$PORT cp s3://$BUCKETNAME/rookObj /tmp/rookObj-download cat /tmp/rookObj-download ``` Rook configures health probes on the deployment created for CephObjectStore gateways. Refer to for information about configuring the probes and monitoring the deployment status. Rook sets up the object storage so pods will have access internal to the cluster. If your applications are running outside the cluster, you will need to setup an external service through a `NodePort`. First, note the service that exposes RGW internal to the cluster. We will leave this service intact and create a new service for external access. ```console $ kubectl -n rook-ceph get service rook-ceph-rgw-my-store NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-rgw-my-store 10.3.0.177 <none> 80/TCP 2m ``` Save the external service as `rgw-external.yaml`: ```yaml apiVersion: v1 kind: Service metadata: name: rook-ceph-rgw-my-store-external namespace: rook-ceph labels: app: rook-ceph-rgw rook_cluster: rook-ceph rookobjectstore: my-store spec: ports: name: rgw port: 80 protocol: TCP targetPort: 80 selector: app: rook-ceph-rgw rook_cluster: rook-ceph rookobjectstore: my-store sessionAffinity: None type: NodePort ``` Now create the external service. ```console kubectl create -f rgw-external.yaml ``` See both rgw services running and notice what port the external service is running on: ```console $ kubectl -n rook-ceph get service rook-ceph-rgw-my-store rook-ceph-rgw-my-store-external NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-rgw-my-store ClusterIP 10.104.82.228 <none> 80/TCP 4m rook-ceph-rgw-my-store-external NodePort 10.111.113.237 <none> 80:31536/TCP 39s ``` Internally the rgw service is running on port `80`. The external port in this case is `31536`. Now you can access the `CephObjectStore` from anywhere! All you need is the hostname for any machine in the cluster, the external port, and the user credentials. If you need to create an independent set of user credentials to access the S3 endpoint, create a `CephObjectStoreUser`. The user will be used to connect to the RGW service in the cluster using the S3 API. The user will be independent of any object bucket claims that you might have created in the earlier instructions in this document. See the for more detail on the settings available for a `CephObjectStoreUser`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: my-user namespace: rook-ceph spec: store: my-store displayName: \"my display name\" ``` When the `CephObjectStoreUser` is created, the Rook operator will then create the RGW user on the specified `CephObjectStore` and store the Access Key and Secret Key in a kubernetes secret in the same namespace as the `CephObjectStoreUser`. 
```console kubectl create -f object-user.yaml ``` ```console $ kubectl -n rook-ceph describe secret rook-ceph-object-user-my-store-my-user Name: rook-ceph-object-user-my-store-my-user Namespace: rook-ceph Labels: app=rook-ceph-rgw rook_cluster=rook-ceph rookobjectstore=my-store Annotations: <none> Type: kubernetes.io/rook Data ==== AccessKey: 20 bytes SecretKey: 40 bytes ``` The AccessKey and SecretKey data fields can be mounted in a pod as environment variables. More information on consuming kubernetes secrets can be found in the To directly retrieve the secrets: ```console kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.AccessKey}' | base64 --decode kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.SecretKey}' | base64 --decode ``` Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. For more information on multisite, please read the for how to run it."
}
] | {
"category": "Runtime",
"file_name": "object-storage.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
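After verifying the upload and download round trip above, a couple of optional follow-up commands can be useful. This is only a sketch: it assumes the same toolbox session, the credentials file written to `~/.aws` earlier in the guide, and the `AWSHOST`, `PORT`, and `BUCKETNAME` variables exported from the OBC ConfigMap.

```console
# List everything currently stored in the OBC-provisioned bucket.
s5cmd --endpoint-url http://$AWSHOST:$PORT ls s3://$BUCKETNAME/*

# Remove the test object once the round trip has been verified.
s5cmd --endpoint-url http://$AWSHOST:$PORT rm s3://$BUCKETNAME/rookObj
```

`s5cmd ls` with a trailing wildcard prints object keys and sizes, which is a quick way to confirm that the gateway, credentials, and bucket wiring all line up before pointing a real application at the endpoint.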
[
{
"data": "<!-- Please give your pull request a title like Please use this format for each git commit message: [A longer multiline description] Fixes: [ticket URL on tracker.ceph.com, create one if necessary] Signed-off-by: [Your Name] <[your email]> For examples, use \"git log\". --> To sign and title your commits, please refer to . If you are submitting a fix for a stable branch (e.g. \"quincy\"), please refer to for the proper workflow. When filling out the below checklist, you may click boxes directly in the GitHub web UI. When entering or editing the entire PR message in the GitHub web UI editor, you may also select a checklist item by adding an `x` between the brackets: `[x]`. Spaces and capitalization matter when checking off items this way. Tracker (select at least one) [ ] References tracker ticket [ ] Very recent bug; references commit where it was introduced [ ] New feature (ticket optional) [ ] Doc update (no ticket needed) [ ] Code cleanup (no ticket needed) Component impact , opened tracker ticket , opened tracker ticket [ ] No impact that needs to be tracked Documentation (select at least one) [ ] Updates relevant documentation [ ] No doc update is appropriate Tests (select at least one) - [ ] Includes bug reproducer [ ] No tests <details> <summary>Show available Jenkins commands</summary> `jenkins retest this please` `jenkins test classic perf` `jenkins test crimson perf` `jenkins test signed` `jenkins test make check` `jenkins test make check arm64` `jenkins test submodules` `jenkins test dashboard` `jenkins test dashboard cephadm` `jenkins test api` `jenkins test docs` `jenkins render docs` `jenkins test ceph-volume all` `jenkins test ceph-volume tox` `jenkins test windows` `jenkins test rook e2e` </details>"
}
] | {
"category": "Runtime",
"file_name": "pull_request_template.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "`Clustermgr` is a cluster management module, mainly responsible for disk registration, generation and allocation of logical volumes, and management of cluster resources (such as disks, nodes, and storage space units). Clustermgr configuration is based on the , and the following configuration instructions mainly apply to private configuration for Clustermgr. | Configuration Item | Description | Required | |:|:--|:| | Public Configuration | Such as server ports, running logs, and audit logs, refer to the section | Yes | | chunk_size | The size of each chunk in blobnode, that is, the size of the created file | Yes | | cluster_id | Cluster ID | Yes | | idc | IDC number | Yes | | region | Region name | Yes | | db_path | Metadata DB storage path. In production environments, it is recommended to store data on an SSD disk. | Yes | | codemodepolicies | Encoding mode configuration, refer to the encoding mode detailed configuration, specific encoding information can refer to appendix | Yes | | raft_config | Detailed Raft configuration | Yes | | diskmgrconfig | Detailed disk management configuration | Yes | | host_aware | Host awareness | Yes | | rack_aware | Rack awareness | Yes | | volumemgrconfig | Volume management module | No | ```json { \"cluster_id\": \"Cluster ID\", \"idc\": \"IDC ID\", \"region\": \"Region name. There can be multiple clusters under one region. This can be used in conjunction with access to select multiple regions and clusters when writing.\", \"unavailable_idc\": \"Unavailable IDC ID\", \"readonly\": \"Whether it is read-only\", \"db_path\": \"Default path for storing metadata DB. The corresponding DB data will be automatically created in this directory if there is no additional configuration for the DB.\", \"normaldbpath\": \"Path for common DB\", \"normaldboption\": \"Common DB option configuration, mainly for tuning rocksdb\", \"kvdbpath\": \"Path for storing kv DB\", \"kvdboption\": \"KV DB option configuration, mainly for tuning rocksdb\", \"codemodepolicies\": [ { \"mode_name\": \"Encoding name\", \"min_size\": \"Minimum size of writable object\", \"max_size\": \"Maximum size of writable object\", \"size_ratio\": \"The proportion occupied by this mode\", \"enable\": \"Whether it is enabled\" }], \"raft_config\": { \"raftdbpath\": \"Path for raft DB\", \"snapshotpatchnum\": \"A snapshot of a DB is divided into multiple patches. The snapshot data of a DB is very large and needs to be divided into multiple patches to be sent.\", \"raftdboption\": \"Mainly for tuning rocksdb\", \"server_config\": { \"nodeId\": \"Raft node ID\", \"listen_port\": \"Raft dedicated port\", \"raftwaldir\": \"WAL log path\", \"raftwalsync\": \"Whether to flush to disk immediately\", \"tick_interval\": \"Heartbeat interval\", \"heartbeat_tick\": \"Heartbeat clock, default is 1\", \"electiontick\": \"Election clock, it is recommended to set it to 5*heartbeattick, default is 5\", \"max_snapshots\": \"Maximum concurrency of snapshots, default is 10\", \"snapshot_timeout\":\"Snapshot timeout\", \"propose_timeout\": \"Timeout for proposing messages\" }, \"raftnodeconfig\": { \"flushnuminterval\": \"Number of logs that trigger flush\", \"FlushTimeIntervalS\": \"Flush cycle interval\", \"truncatenuminterval\": \"The leader retains the maximum number of log"
},
{
"data": "The number of log entries loaded when the service starts can also be understood as the difference between the log entries of the leader and the follower. If it exceeds this value, log synchronization needs to go through snapshot synchronization, so this value is generally kept above 100,000.\", \"node_protocol\": \"Raft node synchronization protocol, generally http:// protocol\", \"members\": [{ \"id\":\"Node ID\", \"host\":\"Raft host address, ip:Raft port\", \"learner\": \"Whether the node participates in the election of the main node\", \"node_host\":\"Service host address, ip:Service port\" }], \"apply_flush\": \"Whether to flush\" } }, \"volumemgrconfig\": { \"blobnodeconfig\": \"See rpc configuration\", \"volumedbpath\": \"Volume data DB path\", \"volumedboption\": \"Option configuration\", \"retaintimes\": \"Renewal time, used in conjunction with the renewal time of the proxy\", \"retain_threshold\": \"The health of the volume that can be renewed. The health segment of the volume must be greater than this value to be renewed\", \"flushintervals\": \"Flush time interval\", \"checkexpiredvolumeintervals\": \"Time interval for checking whether the volume has expired\", \"volumeslicemap_num\": \"ConcurrentMap used for volume management in cm, used to improve the performance of volume read and write. This value determines how many maps all volumes are divided into for management\", \"apply_concurrency\": \"Concurrency of applying wal logs\", \"minallocablevolume_count\": \"Minimum number of allocatable volumes\", \"allocatablediskload_threshold\": \"Load of the corresponding disk that the volume can be allocated to\", \"allocatable_size\": \"Minimum capacity threshold a volume can allocate, default 10G, if the volume capacity is small adjust it to a lower value such as 10MB.\" }, \"diskmgrconfig\": { \"refreshintervals\": \"Interval for refreshing disk status of the current cluster\", \"host_aware\": \"Host awareness. Whether to allocate volumes on the same machine when allocating volumes. Host isolation must be configured in production environment\", \"heartbeatexpireinterval_s\": \"Interval for heartbeat expiration, for the heartbeat time reported by BlobNode\", \"rack_aware\": \"Rack awareness. Whether to allocate volumes on the same rack when allocating volumes. Rack isolation is configured based on the storage environment conditions\", \"flushintervals\": \"Flush time interval\", \"apply_concurrency\": \"Concurrency of application\", \"blobnodeconfig\": \"\", \"ensure_index\": \"Used to establish disk index\" }, \"clusterreportinterval_s\": \"Interval for reporting to consul\", \"consulagentaddr\": \"Consul address\", \"heartbeatnotifyinterval_s\": \"Interval for heartbeat notification, used to process the disk information reported by BlobNode regularly. 
This time should be smaller than the time interval reported by BlobNode to avoid disk heartbeat timeout expiration\", \"maxheartbeatnotify_num\": \"Maximum number of heartbeat notifications\", \"chunk_size\": \"Size of each chunk in BlobNode, that is, the size of the created file\" } ``` ```json { \"bind_addr\":\":9998\", \"cluster_id\":1, \"idc\":[\"z0\"], \"chunk_size\": 16777216, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/clustermgr1.log\" }, \"auth\": { \"enable_auth\": false, \"secret\": \"testsecret\" }, \"region\": \"test-region\", \"db_path\": \"./run/db1\", \"codemodepolicies\": [ {\"modename\":\"EC3P3\",\"minsize\":0,\"maxsize\":5368709120,\"sizeratio\":1,\"enable\":true} ], \"raft_config\": { \"snapshotpatchnum\": 64, \"server_config\": { \"nodeId\": 1, \"listen_port\": 10110, \"raftwaldir\": \"./run/raftwal1\" }, \"raftnodeconfig\":{ \"flushnuminterval\": 10000, \"flushtimeinterval_s\": 10, \"truncatenuminterval\": 10, \"node_protocol\": \"http://\", \"members\": [ {\"id\":1, \"host\":\"127.0.0.1:10110\", \"learner\": false, \"node_host\":\"127.0.0.1:9998\"}, {\"id\":2, \"host\":\"127.0.0.1:10111\", \"learner\": false, \"node_host\":\"127.0.0.1:9999\"}, {\"id\":3, \"host\":\"127.0.0.1:10112\", \"learner\": false, \"node_host\":\"127.0.0.1:10000\"} ] } }, \"diskmgrconfig\": { \"refreshintervals\": 10, \"rack_aware\":false, \"host_aware\":false } } ```"
}
] | {
"category": "Runtime",
"file_name": "cm.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
} |
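Because the clustermgr configuration above is plain JSON, it can be worth syntax-checking an edited file before restarting the service. The commands below are only a sketch using generic tools: the file name `clustermgr1.conf` is an assumption, and `jq` must be installed separately.

```console
# Fail fast on malformed JSON (missing commas, unbalanced braces, ...).
python3 -m json.tool clustermgr1.conf > /dev/null && echo "JSON OK"

# Spot-check the raft member list and ports before rolling the change out.
jq '.raft_config.raftnodeconfig.members' clustermgr1.conf
jq '.raft_config.server_config | {nodeId, listen_port}' clustermgr1.conf
```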
[
{
"data": "[TOC] The resource model for gVisor does not assume a fixed number of threads of execution (i.e. vCPUs) or amount of physical memory. Where possible, decisions about underlying physical resources are delegated to the host system, where optimizations can be made with global information. This delegation allows the sandbox to be highly dynamic in terms of resource usage: spanning a large number of cores and large amount of memory when busy, and yielding those resources back to the host when not. In order words, the shape of the sandbox should closely track the shape of the sandboxed process: Much like a Virtual Machine (VM), a gVisor sandbox appears as an opaque process on the system. Processes within the sandbox do not manifest as processes on the host system, and process-level interactions within the sandbox require entering the sandbox (e.g. via a ). The sandbox attaches a network endpoint to the system, but runs its own network stack. All network resources, other than packets in flight on the host, exist only inside the sandbox, bound by relevant resource limits. You can interact with network endpoints exposed by the sandbox, just as you would any other container, but network introspection similarly requires entering the sandbox. Files in the sandbox may be backed by different implementations. For host-native files (where a file descriptor is available), the Gofer may return a file descriptor to the Sentry via [^1]. These files may be read from and written to through standard system calls, and also mapped into the associated application's address space. This allows the same host memory to be shared across multiple sandboxes, although this mechanism does not preclude the use of side-channels (see ). Note that some file systems exist only within the context of the sandbox. For example, in many cases a `tmpfs` mount will be available at `/tmp` or `/dev/shm`, which allocates memory directly from the sandbox memory file (see below). Ultimately, these will be accounted against relevant limits in a similar way as the host native case. The Sentry models individual task threads with . As a result, each task thread is a lightweight , and may not correspond to an underlying host thread. However, application execution is modelled as a blocking system call with the Sentry. This means that additional host threads may be created, *depending on the number of active application threads*. In practice, a busy application will converge on the number of active threads, and the host will be able to make scheduling decisions about all application threads. Time in the sandbox is provided by the Sentry, through its own and time-keeping implementation. This is distinct from the host time, and no state is shared with the host, although the time will be initialized with the host"
},
{
"data": "The Sentry runs timers to note the passage of time, much like a kernel running on hardware (though the timers are software timers, in this case). These timers provide updates to the vDSO, the time returned through system calls, and the time recorded for usage or limit tracking (e.g. ). When all application threads are idle, the Sentry disables timers until an event occurs that wakes either the Sentry or an application thread, similar to a . This allows the Sentry to achieve near zero CPU usage for idle applications. The Sentry implements its own memory management, including demand-paging and a Sentry internal page cache for files that cannot be used natively. A single backs all application memory. The creation of address spaces is platform-specific. For some platforms, additional \"stub\" processes may be created on the host in order to support additional address spaces. These stubs are subject to various limits applied at the sandbox level (e.g. PID limits). The host is able to manage physical memory using regular means (e.g. tracking working sets, reclaiming and swapping under pressure). The Sentry lazily populates host mappings for applications, and allow the host to demand-page those regions, which is critical for the functioning of those mechanisms. In order to avoid excessive overhead, the Sentry does not demand-page individual pages. Instead, it selects appropriate regions based on heuristics. There is a trade-off here: the Sentry is unable to trivially determine which pages are active and which are not. Even if pages were individually faulted, the host may select pages to be reclaimed or swapped without the Sentry's knowledge. Therefore, memory usage statistics within the sandbox (e.g. via `proc`) are approximations. The Sentry maintains an internal breakdown of memory usage, and can collect accurate information but only through a relatively expensive API call. In any case, it would likely be considered unwise to share precise information about how the host is managing memory with the sandbox. Finally, when an application marks a region of memory as no longer needed, for example via a call to , the Sentry *releases this memory back to the host*. There can be performance penalties for this, since it may be cheaper in many cases to retain the memory and use it to satisfy some other request. However, releasing it immediately to the host allows the host to more effectively multiplex resources and apply an efficient global policy. All Sentry threads and Sentry memory are subject to a container cgroup. However, application usage will not appear as anonymous memory usage, and will instead be accounted to the `memfd`. All anonymous memory will correspond to Sentry usage, and host memory charged to the container will work as standard. The cgroups can be monitored for standard signals: pressure indicators, threshold notifiers, etc. and can also be adjusted dynamically. Note that the Sentry itself may listen for pressure signals in its containing cgroup, in order to purge internal caches. open host file descriptors itself, it can only receive them in this way from the Gofer."
}
] | {
"category": "Runtime",
"file_name": "resources.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
} |
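As a concrete illustration of the cgroup-based monitoring mentioned above, the memory charged to a sandbox can be read directly from the host. This is only a sketch for a cgroup v2 host: the cgroup path depends on the container runtime and cgroup driver, so the `docker-<container-id>.scope` path below is an assumption to be adjusted for your environment.

```console
# Path layout varies by runtime and cgroup driver; adjust as needed.
CG=/sys/fs/cgroup/system.slice/docker-<container-id>.scope

# Total memory charged to the container (Sentry overhead plus application
# pages backed by the sandbox memory file).
cat $CG/memory.current

# PSI pressure indicators that a management agent could watch or act on.
cat $CG/memory.pressure
```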
[
{
"data": "dtrace-provider - Changes ========================= 0.8.8: Known support for v0.10.48, v0.12.16, v4.8.1, v6.17.0, v7.5.0, v8.16.0, v9.3.0, v10.16.0, v12.7.0 (#125) 0.8.7: Known support for v0.10.48, v0.12.16, v4.6.0, v7.5.0, v8.9.4, v10.3.0 (#119) Don't crash when attempting to fire unknown probes (#120) 0.8.6: Improved compilation failure behaviour (#96) 0.8.5: Reverted \"Install fails on Debian due to differently named node binary\" for now 0.8.4: Only log error once when DTraceProviderBindings can't be found Install fails on Debian due to differently named node binary 0.8.3: Install fails with yarn 0.8.2: Error installing in 64-bit SmartOS zones with 32-bit node 0.8.1: Support FreeBSD 10 & 11 0.8.0: Support passing additional arguments to probe function via `.fire()` 0.7.1: Update libusdt for chrisa/libusdt#12 fix 0.7.0: known support for v0.10.47, v0.12.16, v4.6.0. Updated NaN dependency to remove warnings on newer Node versions. 0.2.8: Add NODE_MODULE() declaration for compatibility with Node 0.9.1+ (reported by Trent Mick) Remove execSync dependency from tests. 0.2.7: Don't build on FreeBSD by default - DTrace is not yet built in releases. 0.2.6: Fall back to make(1) if gmake(1) is unavailable, still expected to be GNU Make (Trent Mick) 0.2.5: Add \"json\" probe argument type, automatically serialising objects as JSON Trust npm to set PATH appropriately when invoking node (reported by Dave Pacheco) libusdt update - allow provider memory to be freed (reported by Bryan Cantrill) Build libusdt with gmake by default (reported by Keith Wesolowski) Turn the various scripts in test/ into a TAP-based testsuite. 0.2.4: Improve Node architecture detection to support 0.6.x, and respect npm's prefix when choosing a node binary to use (reported by Trent Mick) 0.2.3: libusdt update - don't invoke ranlib on SunOS-derived systems Disambiguate module name in probe tuple, and optionally allow it to be specified when creating a provider. (Bryan Cantrill bcantrill@acm.org) 0.2.2: libusdt update for build fixes Respect MAKE variable in build script 0.2.1: Update binding.gyp for clang on Snow Leopard - no space after -L. 0.2.0: Update libusdt, and attempt to build it correctly for various platforms. Add support for disabling providers and removing probes. 0.1.1: Replace Node-specific implementation with wrappers for libusdt. Extend argument support to 32 primitives. Adds Solaris x86_64 support. 0.0.9: Force the build architecture to x86_64 for OS X. 0.0.8: Removed overridden \"scripts\" section from package.json, breaking Windows installs 0.0.7: Fix for multiple enable() calls breaking providers. 0.0.6: Fix for segfault trying to use non-enabled probes (Mark Cavage mcavage@gmail.com) 0.0.5: Revert changes to make probe objects available. 0.0.4: Remove unused \"sys\" import (Alex Whitman) No longer builds an empty extension on non-DTrace platforms Probe objects are made available to Javascript. 0.0.3: Builds to a stubbed-out version on non-DTrace platforms (Mark Cavage <mcavage@gmail.com>) 0.0.2: Solaris i386 support. Fixes memory leaks Improved performance, enabled- and disabled-probe. 0.0.1: First working version: OSX x86_64 only. [issue #157] Restore dtrace-provider as a dependency (in \"optionalDependencies\"). Dtrace-provider version 0.3.0 add build sugar that should eliminate the problems from older versions: The build is not attempted on Linux and Windows. The build spew is not emitted by default (use `V=1 npm install` to see it); instead a short warning is emitted if the build fails. 
Also, importantly, the new dtrace-provider fixes working with node v0.11/0.12. [issue #165] Include extra `err` fields in `bunyan` CLI output. Before this change, only the fields that are part of the typical node.js error stack (err.stack, err.message, err.name) would be emitted, even though the Bunyan library would typically include err.code and err.signal in the raw JSON log record. Fix a breakage in"
},
{
"data": "on a logger with no serializers. Note: Bad release. It breaks `log.info(err)` on a logger with no serializers. Use version 1.1.2. [pull #168] Fix handling of `log.info(err)` to use the `log` Logger's `err` serializer if it has one, instead of always using the core Bunyan err serializer. (By Mihai Tomescu.) . See . [issues #105, #138, #151] Export `<Logger>.addStream(...)` and `<Logger>.addSerializers(...)` to be able to add them after Logger creation. Thanks @andreineculau! [issue #159] Fix bad handling in construtor guard intending to allow creation without \"new\": `var log = Logger(...)`. Thanks @rmg! [issue #156] Smaller install size via .npmignore file. [issue #126, #161] Ignore SIGINT (Ctrl+C) when processing stdin. `...| bunyan` should expect the preceding process in the pipeline to handle SIGINT. While it is doing so, `bunyan` should continue to process any remaining output. Thanks @timborodin and @jnordberg! [issue #160] Stop using ANSI 'grey' in `bunyan` CLI output, because of the problems that causes with Solarized Dark themes (see <https://github.com/altercation/solarized/issues/220>). [issue #87] Backward incompatible change to `-c CODE` improving performance by over 10x (good!), with a backward incompatible change to semantics (unfortunate), and adding some sugar (good!). The `-c CODE` implementation was changed to use a JS function for processing rather than `vm.runInNewContext`. The latter was specatularly slow, so won't be missed. Unfortunately this does mean a few semantic differences in the `CODE`, the most noticeable of which is that `this` is required to access the object fields: $ bunyan -c 'pid === 123' foo.log ... $ bunyan -c 'this.pid === 123' foo.log ... The old behaviour of `-c` can be restored with the `BUNYAN_EXEC=vm` environment variable: $ BUNYAN_EXEC=vm bunyan -c 'pid === 123' foo.log ... Some sugar was also added: the TRACE, DEBUG, ... constants are defined, so one can: $ bunyan -c 'this.level >= ERROR && this.component === \"http\"' foo.log ... And example of the speed improvement on a 10 MiB log example: $ time BUNYAN_EXEC=vm bunyan -c 'this.level === ERROR' big.log | cat >slow real 0m6.349s user 0m6.292s sys 0m0.110s $ time bunyan -c 'this.level === ERROR' big.log | cat >fast real 0m0.333s user 0m0.303s sys 0m0.028s The change was courtesy Patrick Mooney (https://github.com/pfmooney). Thanks! Add `bunyan -0 ...` shortcut for `bunyan -o bunyan ...`. [issue #135] Backward incompatible. Drop dtrace-provider even from `optionalDependencies`. Dtrace-provider has proven a consistent barrier to installing bunyan, because it is a binary dep. Even as an optional dep it still caused confusion and install noise. Users of Bunyan on dtrace-y platforms (SmartOS, Mac, Illumos, Solaris) will need to manually `npm install dtrace-provider` themselves to get [Bunyan's dtrace support](https://github.com/trentm/node-bunyan#runtime-log-snooping-via-dtrace) to work. If not installed, bunyan should stub it out properly. [pull #125, pull #97, issue #73] Unref rotating-file timeout which was preventing processes from exiting (by https://github.com/chakrit and https://github.com/glenn-murray-bse). Note: this only fixes the issue for node 0.10 and above. [issue #139] Fix `bunyan` crash on a log record with `res.header` that is an object. A side effect of this improvement is that a record with `res.statusCode` but no header info will render a response block, for example: [2012-08-08T10:25:47.637Z] INFO: my-service/12859 on my-host: some message (...) ... 
-- HTTP/1.1 200 OK -- ... [pull #42] Fix `bunyan` crash on a log record with `req.headers` that is a string (by https://github.com/aexmachina). Drop node 0.6 support. I can't effectively `npm install` with a node 0.6 anymore. [issue #85] Ensure logging a non-object/non-string doesn't throw (by https://github.com/mhart). This change fixes: log.info(<bool>) # TypeError: Object.keys called on non-object"
},
{
"data": "# \"msg\":\"\" (instead of wanted \"msg\":\"[Function]\") log.info(<array>) # \"msg\":\"\" (instead of wanted \"msg\":util.format(<array>)) Republish the same code to npm. Note: Bad release. The published package in the npm registry got corrupted. Use 0.22.3 or later. [issue #131] Allow `log.info(<number>)` and, most importantly, don't crash on that. Update 'mv' optional dep to latest. [issue #111] Fix a crash when attempting to use `bunyan -p` on a platform without dtrace. [issue #101] Fix a crash in `bunyan` rendering a record with unexpected \"res.headers\". [issue #104] `log.reopenFileStreams()` convenience method to be used with external log rotation. [issue #96] Fix `bunyan` to default to paging (with `less`) by default in node 0.10.0. The intention has always been to default to paging for node >=0.8. [issue #90] Fix `bunyan -p '*'` breakage in version 0.21.2. Note: Bad release. The switchrate change below broke `bunyan -p '*'` usage (see issue #90). Use 0.21.3 or later. [issue #88] Should be able to efficiently combine \"-l\" with \"-p *\". Avoid DTrace buffer filling up, e.g. like this: $ bunyan -p 42241 > /tmp/all.log dtrace: error on enabled probe ID 3 (ID 75795: bunyan42241:mod-87ea640:log-trace:log-trace): out of scratch space in action #1 at DIF offset 12 dtrace: error on enabled probe ID 3 (ID 75795: bunyan42241:mod-87ea640:log-trace:log-trace): out of scratch space in action #1 at DIF offset 12 dtrace: 138 drops on CPU 4 ... From Bryan: \"the DTrace buffer is filling up because the string size is so large... by increasing the switchrate, you're increasing the rate at which that buffer is emptied.\" [pull #83] Support rendering 'client_res' key in bunyan CLI (by github.com/mcavage). 'make check' clean, 4-space indenting. No functional change here, just lots of code change. [issue #80, #82] Drop assert that broke using 'rotating-file' with a default `period` (by github.com/ricardograca). [Slight backward incompatibility] Fix serializer bug introduced in 0.18.3 (see below) to only apply serializers to log records when appropriate. This also makes a semantic change to custom serializers. Before this change a serializer function was called for a log record key when that value was truth-y. The semantic change is to call the serializer function as long as the value is not `undefined`. That means that a serializer function should handle falsey values such as `false` and `null`. Update to latest 'mv' dep (required for rotating-file support) to support node v0.10.0. WARNING: This release includes a bug introduced in bunyan 0.18.3 (see below). Please upgrade to bunyan 0.20.0. [Slight backward incompatibility] Change the default error serialization (a.k.a. `bunyan.stdSerializers.err`) to not serialize all additional attributes of the given error object. This is an open door to unsafe logging and logging should always be safe. With this change, error serialization will log these attributes: message, name, stack, code, signal. The latter two are added because some core node APIs include those fields (e.g. `child_process.exec`). Concrete examples where this has hurt have been the \"domain\" change necessitating 0.18.3 and a case where uses an error object as the response object. When logging the `err` and `res` in the same log statement (common for restify audit logging), the `res.body` would be JSON stringified as '[Circular]' as it had already been emitted for the `err` key. This results in a WTF with the bunyan CLI because the `err.body` is not rendered. 
If you need the old behaviour back you will need to do this: var bunyan = require('bunyan'); var errSkips = { // Skip domain keys. `domain` especially can have huge objects that can // OOM your app when trying to"
},
{
"data": "domain: true, domain_emitter: true, domain_bound: true, domain_thrown: true }; bunyan.stdSerializers.err = function err(err) { if (!err || !err.stack) return err; var obj = { message: err.message, name: err.name, stack: getFullErrorStack(err) } Object.keys(err).forEach(function (k) { if (err[k] !== undefined && !errSkips[k]) { obj[k] = err[k]; } }); return obj; }; \"long\" and \"bunyan\" output formats for the CLI. `bunyan -o long` is the default format, the same as before, just called \"long\" now instead of the cheesy \"paul\" name. The \"bunyan\" output format is the same as \"json-0\", just with a more convenient name. WARNING: This release introduced a bug such that all serializers are applied to all log records even if the log record did not contain the key for that serializer. If a logger serializer function does not handle being given `undefined`, then you'll get warnings like this on stderr: bunyan: ERROR: This should never happen. This is a bug in <https://github.com/trentm/node-bunyan> or in this application. Exception from \"foo\" Logger serializer: Error: ... at Object.bunyan.createLogger.serializers.foo (.../myapp.js:20:15) at Logger._applySerializers (.../lib/bunyan.js:644:46) at Array.forEach (native) at Logger._applySerializers (.../lib/bunyan.js:640:33) ... and the following junk in written log records: \"foo\":\"(Error in Bunyan log \"foo\" serializer broke field. See stderr for details.)\" Please upgrade to bunyan 0.20.0. Change the `bunyan.stdSerializers.err` serializer for errors to exclude . `err.domain` will include its assigned members which can arbitrarily large objects that are not intended for logging. Make the \"dtrace-provider\" dependency optional. I hate to do this, but installing bunyan on Windows is made very difficult with this as a required dep. Even though \"dtrace-provider\" stubs out for non-dtrace-y platforms, without a compiler and Python around, node-gyp just falls over. [pull #67] Remove debugging prints in rotating-file support. (by github.com/chad3814). Update to dtrace-provider@0.2.7. Get the `bunyan` CLI to not automatically page (i.e. pipe to `less`) if stdin isn't a TTY, or if following dtrace probe output (via `-p PID`), or if not given log file arguments. Automatic paging support in the `bunyan` CLI (similar to `git log` et al). IOW, `bunyan` will open your pager (by default `less`) and pipe rendered log output through it. A main benefit of this is getting colored logs with a pager without the pain. Before you had to explicit use `--color` to tell bunyan to color output when the output was not a TTY: bunyan foo.log --color | less -R # before bunyan foo.log # now Disable with the `--no-pager` option or the `BUNYANNOPAGER=1` environment variable. Limitations: Only supported for node >=0.8. Windows is not supported (at least not yet). Switch test suite to nodeunit (still using a node-tap'ish API via a helper). [issue #33] Log rotation support: var bunyan = require('bunyan'); var log = bunyan.createLogger({ name: 'myapp', streams: [{ type: 'rotating-file', path: '/var/log/myapp.log', count: 7, period: 'daily' }] }); Tweak to CLI default pretty output: don't special case \"latency\" field. The special casing was perhaps nice, but less self-explanatory. Before: [2012-12-27T21:17:38.218Z] INFO: audit/45769 on myserver: handled: 200 (15ms, audit=true, bar=baz) GET /foo ... After: [2012-12-27T21:17:38.218Z] INFO: audit/45769 on myserver: handled: 200 (audit=true, bar=baz, latency=15) GET /foo ... 
Exit CLI on EPIPE, otherwise we sit there useless processing a huge log file with, e.g. `bunyan huge.log | head`. Guards on `-c CONDITION` usage to attempt to be more user friendly. Bogus JS code will result in this: $ bunyan portal.log -c 'this.req.username==boo@foo' bunyan: error: illegal CONDITION code: SyntaxError: Unexpected token ILLEGAL CONDITION script: Object.prototype.TRACE = 10; Object.prototype.DEBUG = 20; Object.prototype.INFO = 30; Object.prototype.WARN = 40; Object.prototype.ERROR = 50; Object.prototype.FATAL = 60;"
},
{
"data": "Error: SyntaxError: Unexpected token ILLEGAL at new Script (vm.js:32:12) at Function.Script.createScript (vm.js:48:10) at parseArgv (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:465:27) at main (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:1252:16) at Object.<anonymous> (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:1330:3) at Module._compile (module.js:449:26) at Object.Module._extensions..js (module.js:467:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.runMain (module.js:492:10) And all CONDITION scripts will be run against a minimal valid Bunyan log record to ensure they properly guard against undefined values (at least as much as can reasonably be checked). For example: $ bunyan portal.log -c 'this.req.username==\"bob\"' bunyan: error: CONDITION code cannot safely filter a minimal Bunyan log record CONDITION script: Object.prototype.TRACE = 10; Object.prototype.DEBUG = 20; Object.prototype.INFO = 30; Object.prototype.WARN = 40; Object.prototype.ERROR = 50; Object.prototype.FATAL = 60; this.req.username==\"bob\" Minimal Bunyan log record: { \"v\": 0, \"level\": 30, \"name\": \"name\", \"hostname\": \"hostname\", \"pid\": 123, \"time\": 1355514346206, \"msg\": \"msg\" } Filter error: TypeError: Cannot read property 'username' of undefined at bunyan-condition-0:7:9 at Script.Object.keys.forEach.(anonymous function) [as runInNewContext] (vm.js:41:22) at parseArgv (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:477:18) at main (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:1252:16) at Object.<anonymous> (/Users/trentm/tm/node-bunyan-0.x/bin/bunyan:1330:3) at Module._compile (module.js:449:26) at Object.Module._extensions..js (module.js:467:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.runMain (module.js:492:10) A proper way to do that condition would be: $ bunyan portal.log -c 'this.req && this.req.username==\"bob\"' [issue #59] Clear a possibly interrupted ANSI color code on signal termination. [issue #56] Support `bunyan -p NAME` to dtrace all PIDs matching 'NAME' in their command and args (using `ps -A -o pid,command | grep NAME` or, on SunOS `pgrep -lf NAME`). E.g.: bunyan -p myappname This is useful for usage of node's [cluster module](http://nodejs.org/docs/latest/api/all.html#all_cluster) where you'll have multiple worker processes. Allow `bunyan -p ''` to capture bunyan dtrace probes from all* processes. issue #55: Add support for `BUNYANNOCOLOR` environment variable to turn off all output coloring. This is still overridden by the `--color` and `--no-color` options. issue #54: Ensure (again, see 0.16.2) that stderr from the dtrace child process (when using `bunyan -p PID`) gets through. There had been a race between exiting bunyan and the flushing of the dtrace process' stderr. Drop 'trentm-dtrace-provider' fork dep now that <https://github.com/chrisa/node-dtrace-provider/pull/24> has been resolved. Back to dtrace-provider. Ensure that stderr from the dtrace child process (when using `bunyan -p PID`) gets through. The `pipe` usage wasn't working on SmartOS. This is important to show the user if they need to 'sudo'. Ensure that a possible dtrace child process (with using `bunyan -p PID`) is terminated on signal termination of the bunyan CLI (at least for SIGINT, SIGQUIT, SIGTERM, SIGHUP). Add `bunyan -p PID` support. This is a convenience wrapper that effectively calls: dtrace -x strsize=4k -qn 'bunyan$PID:::log-*{printf(\"%s\", copyinstr(arg0))}' | bunyan issue #48: Dtrace support! 
The elevator pitch is you can watch all logging from all Bunyan-using process with something like this: dtrace -x strsize=4k -qn 'bunyan:::log-{printf(\"%d: %s: %s\", pid, probefunc, copyinstr(arg0))}' And this can include log levels below what the service is actually configured to log. E.g. if the service is only logging at INFO level and you need to see DEBUG log messages, with this you can. Obviously this only works on dtrace-y platforms: Illumos derivatives of SunOS (e.g. SmartOS, OmniOS), Mac, FreeBSD. Or get the bunyan CLI to render logs nicely: dtrace -x strsize=4k -qn 'bunyan:::log-{printf(\"%s\", copyinstr(arg0))}' | bunyan See <https://github.com/trentm/node-bunyan#dtrace-support> for details. By Bryan Cantrill. Export `bunyan.safeCycles()`. This may be useful for custom `type == \"raw\"` streams that may do JSON stringification of log records themselves. Usage: var str = JSON.stringify(rec, bunyan.safeCycles()); [issue #49] Allow a `log.child()` to specify the level of inherited streams. For example: var childLog = log.child({...}); childLog.level('debug'); var childLog ="
},
{
"data": "level: 'debug'}); Improve the Bunyan CLI crash message to make it easier to provide relevant details in a bug report. Fix a bug in the long-stack-trace error serialization added in 0.14.4. The symptom: bunyan@0.14.4: .../node_modules/bunyan/lib/bunyan.js:1002 var ret = ex.stack || ex.toString(); ^ TypeError: Cannot read property 'stack' of undefined at getFullErrorStack (.../node_modules/bunyan/lib/bunyan.js:1002:15) ... Bad release. Use 0.14.5 instead. Improve error serialization to walk the chain of `.cause()` errors from the likes of `WError` or `VError` error classes from and . Example: [2012-10-11T00:30:21.871Z] ERROR: imgapi/99612 on 0525989e-2086-4270-b960-41dd661ebd7d: my-message ValidationFailedError: my-message; caused by TypeError: cause-error-message at Server.apiPing (/opt/smartdc/imgapi/lib/app.js:45:23) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.setupReq (/opt/smartdc/imgapi/lib/app.js:178:9) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.parseBody (/opt/smartdc/imgapi/nodemodules/restify/lib/plugins/bodyparser.js:15:33) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.parseQueryString (/opt/smartdc/imgapi/node_modules/restify/lib/plugins/query.js:40:25) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.run (/opt/smartdc/imgapi/nodemodules/restify/lib/server.js:579:17) at Server.handle.log.trace.req (/opt/smartdc/imgapi/nodemodules/restify/lib/server.js:480:38) Caused by: TypeError: cause-error-message at Server.apiPing (/opt/smartdc/imgapi/lib/app.js:40:25) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.setupReq (/opt/smartdc/imgapi/lib/app.js:178:9) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.parseBody (/opt/smartdc/imgapi/nodemodules/restify/lib/plugins/bodyparser.js:15:33) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.parseQueryString (/opt/smartdc/imgapi/node_modules/restify/lib/plugins/query.js:40:25) at next (/opt/smartdc/imgapi/node_modules/restify/lib/server.js:550:50) at Server.run (/opt/smartdc/imgapi/nodemodules/restify/lib/server.js:579:17) at Server.handle.log.trace.req (/opt/smartdc/imgapi/nodemodules/restify/lib/server.js:480:38) [issue #45] Fix bunyan CLI (default output mode) to not crash on a 'res' field that isn't a response object, but a string. [issue #44] Fix the default `bunyan` CLI output of a `res.body` that is an object instead of a string. See issue#38 for the same with `req.body`. [pull #41] Safe `JSON.stringify`ing of emitted log records to avoid blowing up on circular objects (by Isaac Schlueter). [issue #39] Fix a bug with `client_req` handling in the default output of the `bunyan` CLI. [issue #38] Fix the default `bunyan` CLI output of a `req.body` that is an object instead of a string. Export `bunyan.resolveLevel(NAME-OR-NUM)` to resolve a level name or number to its log level number value: > bunyan.resolveLevel('INFO') 30 > bunyan.resolveLevel('debug') 20 A side-effect of this change is that the uppercase level name is now allowed in the logger constructor. [issue #35] Ensure that an accidental `log.info(BUFFER)`, where BUFFER is a node.js Buffer object, doesn't blow up. [issue #34] Ensure `req.body`, `res.body` and other request/response fields are emitted by the `bunyan` CLI (mostly by Rob Gulewich). [issue #31] Re-instate defines for the (uppercase) log level names (TRACE, DEBUG, etc.) 
in `bunyan -c \"...\"` filtering condition code. E.g.: $ ... | bunyan -c 'level >= ERROR' [pull #32] `bunyan -o short` for more concise output (by Dave Pacheco). E.g.: 22:56:52.856Z INFO myservice: My message instead of: [2012-02-08T22:56:52.856Z] INFO: myservice/123 on example.com: My message Add '--strict' option to `bunyan` CLI to suppress all but legal Bunyan JSON log lines. By default non-JSON, and non-Bunyan lines are passed through. [issue #30] Robust handling of 'req' field without a 'headers' subfield in `bunyan` CLI. [issue #31] Pull the TRACE, DEBUG, et al defines from `bunyan -c \"...\"` filtering code. This was added in v0.11.1, but has a significant adverse affect. Bad release. The TRACE et al names are bleeding into the log records when using '-c'. Add defines for the (uppercase) log level names (TRACE, DEBUG, etc.) in `bunyan -c \"...\"` filtering condition code. E.g.: $ ... | bunyan -c 'level >= ERROR' [pull #29] Add -l/--level for level filtering, and -c/--condition for arbitrary conditional filtering (by github.com/isaacs): $ ... | bunyan -l error # filter out log records below error $ ... | bunyan -l 50 # numeric value works too $ ... | bunyan -c 'level===50' # equiv with -c filtering $ ... | bunyan -c 'pid===123' # filter on any field $ ... | bunyan -c 'pid===123' -c '_audit' # multiple filters [pull #24] Support for gzip'ed log files in the bunyan CLI (by"
},
{
"data": "$ bunyan foo.log.gz ... [pull #16] Bullet proof the `bunyan.stdSerializers` (by github.com/rlidwka). [pull #15] The `bunyan` CLI will now chronologically merge multiple log streams when it is given multiple file arguments. (by github.com/davepacheco) $ bunyan foo.log bar.log ... merged log records ... [pull #15] A new `bunyan.RingBuffer` stream class that is useful for keeping the last N log messages in memory. This can be a fast way to keep recent, and thus hopefully relevant, log messages. (by @dapsays, github.com/davepacheco) Potential uses: Live debugging if a running process could inspect those messages. One could dump recent log messages at a finer log level than is typically logged on . var ringbuffer = new bunyan.RingBuffer({ limit: 100 }); var log = new bunyan({ name: 'foo', streams: [{ type: 'raw', stream: ringbuffer, level: 'debug' }] }); log.info('hello world'); console.log(ringbuffer.records); Add support for \"raw\" streams. This is a logging stream that is given raw log record objects instead of a JSON-stringified string. function Collector() { this.records = []; } Collector.prototype.write = function (rec) { this.records.push(rec); } var log = new Logger({ name: 'mylog', streams: [{ type: 'raw', stream: new Collector() }] }); See \"examples/raw-stream.js\". I expect raw streams to be useful for piping Bunyan logging to separate services (e.g. <http://www.loggly.com/>, <https://github.com/etsy/statsd>) or to separate in-process handling. Add test/corpus/*.log files (accidentally excluded) so the test suite actually works(!). [pull #21] Bunyan loggers now re-emit `fs.createWriteStream` error events. By github.com/EvanOxfeld. See \"examples/handle-fs-error.js\" and \"test/error-event.js\" for details. var log = new Logger({name: 'mylog', streams: [{path: FILENAME}]}); log.on('error', function (err, stream) { // Handle error writing to or creating FILENAME. }); jsstyle'ing (via `make check`) [issue #12] Add `bunyan.createLogger(OPTIONS)` form, as is more typical in node.js APIs. This'll eventually become the preferred form. Change `bunyan` CLI default output to color \"src\" info red. Before the \"src\" information was uncolored. The \"src\" info is the filename, line number and function name resulting from using `src: true` in `Logger` creation. I.e., the `(/Users/trentm/tm/node-bunyan/examples/hi.js:10)` in: [2012-04-10T22:28:58.237Z] INFO: myapp/39339 on banana.local (/Users/trentm/tm/node-bunyan/examples/hi.js:10): hi Tweak `bunyan` CLI default output to still show an \"err\" field if it doesn't have a \"stack\" attribute. Fix bad bug in `log.child({...}, true);` where the added child fields would be added to the parent's fields. This bug only existed for the \"fast child\" path (that second `true` argument). A side-effect of fixing this is that the \"fast child\" path is only 5 times as fast as the regular `log.child`, instead of 10 times faster. [issue #6] Fix bleeding 'type' var to global namespace. (Thanks Mike!) Add support to the `bunyan` CLI taking log file path args, `bunyan foo.log`, in addition to the usual `cat foo.log | bunyan`. Improve reliability of the default output formatting of the `bunyan` CLI. Before it could blow up processing log records missing some expected fields. ANSI coloring output from `bunyan` CLI tool (for the default output mode/style). Also add the '--color' option to force coloring if the output stream is not a TTY, e.g. `cat my.log | bunyan --color | less -R`. Use `--no-color` to disable coloring, e.g. 
if your terminal doesn't support ANSI codes. Add 'level' field to log record before custom fields for that record. This just means that the raw record JSON will show the 'level' field earlier, which is a bit nicer for raw reading. [issue #5] Fix `log.info() -> boolean` to work properly. Previous all were returning false. Ditto all trace/debug/.../fatal methods. Allow an optional `msg` and arguments to the `log.info(<Error> err)` logging"
},
{
"data": "For example, before: log.debug(myerrorinstance) // good log.debug(myerrorinstance, \"boom!\") // wasn't allowed Now the latter is allowed if you want to expliciting set the log msg. Of course this applies to all the `log.{trace|debug|info...}()` methods. `bunyan` cli output: clarify extra fields with quoting if empty or have spaces. E.g. 'cmd' and 'stderr' in the following: [2012-02-12T00:30:43.736Z] INFO: mo-docs/43194 on banana.local: buildDocs results (req_id=185edca2-2886-43dc-911c-fe41c09ec0f5, route=PutDocset, error=null, stderr=\"\", cmd=\"make docs\") Fix/guard against unintended inclusion of some files in npm published package due to <https://github.com/isaacs/npm/issues/2144> Internal: starting jsstyle usage. Internal: add .npmignore. Previous packages had reams of bunyan crud in them. Add 'pid' automatic log record field. Add 'client_req' (HTTP client request) standard formatting in `bunyan` CLI default output. Improve `bunyan` CLI default output to include all log record keys. Unknown keys are either included in the first line parenthetical (if short) or in the indented subsequent block (if long or multiline). [issue #3] More type checking of `new Logger(...)` and `log.child(...)` options. Start a test suite. [issue #2] Add guard on `JSON.stringify`ing of log records before emission. This will prevent `log.info` et al throwing on record fields that cannot be represented as JSON. An error will be printed on stderr and a clipped log record emitted with a 'bunyanMsg' key including error details. E.g.: bunyan: ERROR: could not stringify log record from /Users/trentm/tm/node-bunyan/examples/unstringifyable.js:12: TypeError: Converting circular structure to JSON { \"name\": \"foo\", \"hostname\": \"banana.local\", \"bunyanMsg\": \"bunyan: ERROR: could not stringify log record from /Users/trentm/tm/node-bunyan/examples/unstringifyable.js:12: TypeError: Converting circular structure to JSON\", ... Some timing shows this does effect log speed: $ node tools/timeguard.js # before Time try/catch-guard on JSON.stringify: log.info: 0.07365ms per iteration $ node tools/timeguard.js # after Time try/catch-guard on JSON.stringify: log.info: 0.07368ms per iteration Use 10/20/... instead of 1/2/... for level constant values. Ostensibly this allows for intermediary levels from the defined \"trace/debug/...\" set. However, that is discouraged. I'd need a strong user argument to add support for easily using alternative levels. Consider using a separate JSON field instead. s/service/name/ for Logger name field. \"service\" is unnecessarily tied to usage for a service. No need to differ from log4j Logger \"name\". Add `log.level(...)` and `log.levels(...)` API for changing logger stream levels. Add `TRACE|DEBUG|INFO|WARN|ERROR|FATAL` level constants to exports. Add `log.info(err)` special case for logging an `Error` instance. For example `log.info(new TypeError(\"boom\")` will produce: ... \"err\": { \"message\": \"boom\", \"name\": \"TypeError\", \"stack\": \"TypeError: boom\\n at Object.<anonymous> ...\" }, \"msg\": \"boom\", ... Add `new Logger({src: true})` config option to have a 'src' attribute be automatically added to log records with the log call source info. Example: \"src\": { \"file\": \"/Users/trentm/tm/node-bunyan/examples/src.js\", \"line\": 20, \"func\": \"Wuzzle.woos\" }, `log.child(options[, simple])` Added `simple` boolean arg. Set `true` to assert that options only add fields (no config changes). Results in a 10x speed increase in child creation. 
See \"tools/timechild.js\". On my Mac, \"fast child\" creation takes about 0.001ms. IOW, if your app is dishing 10,000 req/s, then creating a log child for each request will take about 1% of the request time. `log.clone` -> `log.child` to better reflect the relationship: streams and serializers are inherited. Streams can't be removed as part of the child creation. The child doesn't own the parent's streams (so can't close them). Clean up Logger creation. The goal here was to ensure `log.child` usage is fast. TODO: measure that. Add `Logger.stdSerializers.err` serializer which is necessary to get good Error object logging with node 0.6 (where core Error object properties are non-enumerable). Spec'ing core/recommended log record fields. Add `LOG_VERSION` to exports. Improvements to request/response serializations. First release."
}
] | {
"category": "Runtime",
"file_name": "CHANGES.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
} |
[
{
"data": "layout: global title: FUSE SDK with Local Cache Quick Start The followings are the basic requirements running ALLUXIO POSIX API. On one of the following supported operating systems MacOS 10.10 or later CentOS - 6.8 or 7 RHEL - 7.x Ubuntu - 16.04 Install JDK 11, or newer JDK 8 has been reported to have some bugs that may crash the FUSE applications, see for more details. Install libfuse On Linux, we support libfuse both version 2 and 3 To use with libfuse2, install 2.9.3 or newer (2.8.3 has been reported to also work with some warnings). For example on a Redhat, run `yum install fuse` To use with libfuse3 (Default), install 3.2.6 or newer (We are currently testing against 3.2.6). For example on a Redhat, run `yum install fuse3` See to learn more about the libfuse version used by alluxio On MacOS, install 3.7.1 or newer. For example, run `brew install osxfuse` Download the Alluxio tarball from . Unpack the downloaded file with the following commands. ```shell $ tar -xzf alluxio-{{site.ALLUXIOVERSIONSTRING}}-bin.tar.gz $ cd alluxio-{{site.ALLUXIOVERSIONSTRING}} ``` The `alluxio-fuse` launch command will be `dora/integration/fuse/bin/alluxio-fuse` Alluxio POSIX API allows accessing data from under storage as local directories. This is enabled by using the `mount` command to mount a dataset from under storage to local mount point: ```shell $ sudo yum install fuse3 $ alluxio-fuse mount <understoragedataset> <mount_point> -o option ``` `understoragedataset`: The full under storage dataset address. e.g. `s3://bucketname/path/to/dataset`, `hdfs://namenodeaddress:port/path/to/dataset` `mount_point`: The local mount point to mount the under storage dataset to. Note that the `<mount_point>` must be an existing and empty path in your local file system hierarchy. User that runs the `mount` command must own the mount point and have read and write permissions on it. `-o option`: All the `alluxio-fuse mount` options are provided using this format. Options include Alluxio property key value pair in `-o alluxiopropertykey=value` format Under storage credentials and configuration. Detailed configuration can be found under the `Storage Integrations` tap of the left of the doc page. Local cache configuration. Detailed usage can be found in the Generic mount options. Detailed supported mount options information can be found in the After mounting, `alluxio-fuse` mount can be found locally ```shell $ mount | grep \"alluxio-fuse\" alluxio-fuse on mountpoint type fuse.alluxio-fuse (rw,nosuid,nodev,relatime,userid=1000,group_id=1000) ``` `AlluxioFuse` process will be launched ```shell $ jps 34884 AlluxioFuse ``` All the fuse logs can be found at `logs/fuse.log` and all the fuse outputs can be found at `logs/fuse.out` which are useful for troubleshooting when errors happen on operations under the filesystem. Mounts the dataset in target S3 bucket to a local folder: ```shell $ alluxio-fuse mount s3://bucketname/path/to/dataset/ /path/to/mountpoint -o s3a.accessKeyId=<S3 ACCESS KEY> -o s3a.secretKey=<S3 SECRET KEY> ``` Other (e.g. `-o alluxio.underfs.s3.region=<region>`) can also be set via the `-o alluxiopropertykey=value`"
},
{
"data": "Mounts the dataset in target HDFS cluster to a local folder: ```shell $ alluxio-fuse mount hdfs://nameservice/path/to/dataset /path/to/mount_point -o alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/core-site.xml:/path/to/hdfs/conf/hdfs-site.xml ``` The supported versions of HDFS can be specified via `-o alluxio.underfs.version=2.7` or `-o alluxio.underfs.version=3.3`. Other can also be set via the `-o alluxiopropertykey=value` format. After mounting the dataset from under storage to local mount point, standard tools (for example, `ls`, `cat` or `mkdir`) have basic access to the under storage. With the POSIX API integration, applications can interact with the remote under storage no matter what language (C, C++, Python, Ruby, Perl, or Java) they are written in without any under storage library integrations. All the write operations happening inside the local mount point will be directly translated to write operations against the mounted under storage dataset ```shell $ cd /path/to/mount_point $ mkdir testfolder $ dd if=/dev/zero of=testfolder/testfile bs=5MB count=1 ``` `folder` will be directly created at `understoragedataset/testfolder` (e.g. `s3://bucket_name/path/to/dataset/testfolder` `testfolder/testfile` will be directly written to `understoragedataset/testfolder/testfile` (e.g. `s3://bucket_name/path/to/dataset/testfolder/testfile` Without the functionalities that we will talk about later, all the read operations via the local mount point will be translated to read operations against the underlying data storage: ```shell $ cd /path/to/mount_point $ cp -r /path/to/mount_point/testfolder /tmp/ $ ls /tmp/testfolder -rwx. 1 ec2-user ec2-user 5000000 Nov 22 00:27 testfile ``` The read from `/path/to/mountpoint/testfolder` will be translated to a read targeting `understoragedataset/testfolder/testfile` (e.g. `s3://bucketname/path/to/dataset/testfolder/testfile`. Data will be read from the under storage dataset directly. Unmount a mounted FUSE mount point ```shell $ alluxio-fuse unmount <mount_point> ``` After unmounting the FUSE mount point, the corresponding `AlluxioFuse` process should be killed and the mount point should be removed. For example: ```shell $ alluxio-fuse unmount /path/to/mount_point $ mount | grep \"alluxio-fuse\" $ jps | grep \"AlluxioFuse\" ``` Most basic file system operations are supported. 
However, some operations are under active development <table class=\"table table-striped\"> <tr> <th>Category</th> <th>Supported Operations</th> <th>Not Supported Operations</th> </tr> <tr> <td>Metadata Write</td> <td>Create file, delete file, create directory, delete directory, rename, change owner, change group, change mode</td> <td>Symlink, link, change access/modification time (utimens), change special file attributes (chattr), sticky bit</td> </tr> <tr> <td>Metadata Read</td> <td>Get file status, get directory status, list directory status</td> <td></td> </tr> <tr> <td>Data Write</td> <td>Sequential write</td> <td>Append write, random write, overwrite, truncate, concurrently write the same file by multiple threads/clients</td> </tr> <tr> <td>Data Read</td> <td>Sequential read, random read, multiple threads/clients concurrently read the same file</td> <td></td> </tr> <tr> <td>Combinations</td> <td></td> <td>FIFO special file type, Rename when writing the source file, reading and writing concurrently on the same file</td> </tr> </table> Note that all file/dir permissions are checked against the user launching the AlluxioFuse process instead of the end user running the operations. provides different local cache capabilities to speed up your workloads and reduce the pressure of storage services. [Local Kernel Data Cache Configuration]({{ 'en/fuse-sdk//Local-Cache-Tuning.html#local-kernel-data-cache-configuration' | relativize_url }}) [Local Userspace Data Cache Configuration]({{ 'en/fuse-sdk//Local-Cache-Tuning.html#local-userspace-data-cache-configuration' | relativize_url }}) [Local Kernel Metadata Cache Configuration]({{ 'en/fuse-sdk//Local-Cache-Tuning.html#local-kernel-metadata-cache-configuration' | relativize_url }}) [Local Userspace Metadata Cache Configuration]({{ 'en/fuse-sdk//Local-Cache-Tuning.html#local-userspace-metadata-cache-configuration' | relativize_url }})"
}
] | {
"category": "Runtime",
"file_name": "Local-Cache-Quick-Start.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
} |
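To tie the mount, read/write, verification, and unmount steps of the Alluxio FUSE quick start above into one session, a minimal sketch could look like the following. The bucket name, credentials, and local paths are placeholders; the options are the same ones shown in the S3 mount example.

```shell
# Placeholder bucket, keys, and mount point -- adjust for your environment.
mkdir -p /mnt/alluxio-fuse
alluxio-fuse mount s3://my-bucket/path/to/dataset/ /mnt/alluxio-fuse \
  -o s3a.accessKeyId=<S3 ACCESS KEY> -o s3a.secretKey=<S3 SECRET KEY>

# Confirm the FUSE mount and the AlluxioFuse process are up.
mount | grep alluxio-fuse
jps | grep AlluxioFuse

# Standard POSIX tools now read and write the under storage directly.
mkdir /mnt/alluxio-fuse/testfolder
dd if=/dev/zero of=/mnt/alluxio-fuse/testfolder/testfile bs=5MB count=1
ls -l /mnt/alluxio-fuse/testfolder

# Unmount when finished; the AlluxioFuse process should exit.
alluxio-fuse unmount /mnt/alluxio-fuse
```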
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Output the dependencies graph in graphviz dot format ``` cilium-agent hive dot-graph [flags] ``` ``` -h, --help help for dot-graph ``` ``` --agent-liveness-update-interval duration Interval at which the agent updates liveness time for the datapath (default 1s) --api-rate-limit stringToString API rate limiting configuration (example: --api-rate-limit endpoint-create=rate-limit:10/m,rate-burst:2) (default []) --bpf-node-map-max uint32 Sets size of node bpf map which will be the max number of unique Node IPs in the cluster (default 16384) --certificates-directory string Root directory to find certificates specified in L7 TLS policy enforcement (default \"/var/run/cilium/certs\") --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-ip-identities-sync-timeout duration Timeout waiting for the initial synchronization of IPs and identities from remote clusters before local endpoints regeneration (default 1m0s) --cni-chaining-mode string Enable CNI chaining with the specified plugin (default \"none\") --cni-chaining-target string CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network. --cni-exclusive Whether to remove other CNI configurations --cni-external-routing Whether the chained CNI plugin handles routing on the node --cni-log-file string Path where the CNI plugin should write logs (default \"/var/run/cilium/cilium-cni.log\") --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --devices strings List of devices facing cluster/external network (used for BPF NodePort, BPF masquerading and host firewall); supports '+' as wildcard in device name, e.g. 'eth+' --disable-envoy-version-check Do not perform Envoy version check --disable-iptables-feeder-rules strings Chains to ignore when installing feeder rules. --egress-gateway-policy-map-max int Maximum number of entries in egress gateway policy map (default 16384) --egress-gateway-reconciliation-trigger-interval duration Time between triggers of egress gateway state reconciliations (default 1s) --enable-bandwidth-manager Enable BPF bandwidth manager --enable-bbr Enable BBR for the bandwidth manager --enable-cilium-api-server-access strings List of cilium API APIs which are administratively enabled. Supports ''. (default []) --enable-cilium-health-api-server-access strings List of cilium health API APIs which are administratively enabled. Supports"
},
{
"data": "(default []) --enable-gateway-api Enables Envoy secret sync for Gateway API related TLS secrets --enable-ingress-controller Enables Envoy secret sync for Ingress controller related TLS secrets --enable-ipv4-big-tcp Enable IPv4 BIG TCP option which increases device's maximum GRO/GSO limits for IPv4 --enable-ipv6-big-tcp Enable IPv6 BIG TCP option which increases device's maximum GRO/GSO limits for IPv6 --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-l2-pod-announcements Enable announcing Pod IPs with Gratuitous ARP --enable-monitor Enable the monitor unix domain socket server (default true) --enable-service-topology Enable support for service topology aware hints --endpoint-bpf-prog-watchdog-interval duration Interval to trigger endpoint BPF programs load check watchdog (default 30s) --envoy-base-id uint Envoy base ID --envoy-config-retry-interval duration Interval in which an attempt is made to reconcile failed EnvoyConfigs. If the duration is zero, the retry is deactivated. (default 15s) --envoy-config-timeout duration Timeout that determines how long to wait for Envoy to N/ACK CiliumEnvoyConfig resources (default 2m0s) --envoy-keep-cap-netbindservice Keep capability NETBINDSERVICE for Envoy process --envoy-log string Path to a separate Envoy log file, if any --envoy-secrets-namespace string EnvoySecretsNamespace is the namespace having secrets used by CEC --gateway-api-secrets-namespace string GatewayAPISecretsNamespace is the namespace having tls secrets used by CEC, originating from Gateway API --gops-port uint16 Port for gops server to listen on (default 9890) --http-idle-timeout uint Time after which a non-gRPC HTTP stream is considered failed unless traffic in the stream has been processed (in seconds); defaults to 0 (unlimited) --http-max-grpc-timeout uint Time after which a forwarded gRPC request is considered failed unless completed (in seconds). A \"grpc-timeout\" header may override this with a shorter value; defaults to 0 (unlimited) --http-normalize-path Use Envoy HTTP path normalization options, which currently includes RFC 3986 path normalization, Envoy merge slashes option, and unescaping and redirecting for paths that contain escaped slashes. These are necessary to keep path based access control functional, and should not interfere with normal operation. Set this to false only with caution. 
(default true) --http-request-timeout uint Time after which a forwarded HTTP request is considered failed unless completed (in seconds); Use 0 for unlimited (default 3600) --http-retry-count uint Number of retries performed after a forwarded request attempt fails (default 3) --http-retry-timeout uint Time after which a forwarded but uncompleted request is retried (connection failures are retried immediately); defaults to 0 (never) --ingress-secrets-namespace string IngressSecretsNamespace is the namespace having tls secrets used by CEC, originating from Ingress controller --iptables-lock-timeout duration Time to pass to each iptables invocation to wait for xtables lock acquisition (default 5s) --iptables-random-fully Set iptables flag random-fully on masquerading rules --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --l2-pod-announcements-interface string Interface used for sending gratuitous arp messages --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255,"
},
{
"data": "(default 255) --mesh-auth-enabled Enable authentication processing & garbage collection (beta) (default true) --mesh-auth-gc-interval duration Interval in which auth entries are attempted to be garbage collected (default 5m0s) --mesh-auth-mutual-connect-timeout duration Timeout for connecting to the remote node TCP socket (default 5s) --mesh-auth-mutual-listener-port int Port on which the Cilium Agent will perform mutual authentication handshakes between other Agents --mesh-auth-queue-size int Queue size for the auth manager (default 1024) --mesh-auth-rotated-identities-queue-size int The size of the queue for signaling rotated identities. (default 1024) --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-admin-socket string The path for the SPIRE admin agent Unix socket. --metrics strings Metrics that should be enabled or disabled from the default metric list. (+metricfoo to enable metricfoo, -metricbar to disable metricbar) --monitor-queue-size int Size of the event queue when reading monitor events --multicast-enabled Enables multicast in Cilium --nodeport-addresses strings A whitelist of CIDRs to limit which IPs are used for NodePort. If not set, primary IPv4 and/or IPv6 address of each native device is used. --pprof Enable serving pprof debugging API --pprof-address string Address that pprof listens on (default \"localhost\") --pprof-port uint16 Port that pprof listens on (default 6060) --prepend-iptables-chains Prepend custom iptables chains instead of appending (default true) --procfs string Path to the host's proc filesystem mount (default \"/proc\") --prometheus-serve-addr string IP:Port on which to serve prometheus metrics (pass \":Port\" to bind on all interfaces, \"\" is off) --proxy-admin-port int Port to serve Envoy admin interface on. --proxy-connect-timeout uint Time after which a TCP connect attempt is considered failed unless completed (in seconds) (default 2) --proxy-gid uint Group ID for proxy control plane sockets. (default 1337) --proxy-idle-timeout-seconds int Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s (default 60) --proxy-max-connection-duration-seconds int Set Envoy HTTP option maxconnectionduration seconds. Default 0 (disable) --proxy-max-requests-per-connection int Set Envoy HTTP option maxrequestsper_connection. Default 0 (disable) --proxy-portrange-max uint16 End of port range that is used to allocate ports for L7 proxies. (default 20000) --proxy-portrange-min uint16 Start of port range that is used to allocate ports for L7 proxies. (default 10000) --proxy-prometheus-port int Port to serve Envoy metrics on. Default 0 (disabled). --proxy-xff-num-trusted-hops-egress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners. --proxy-xff-num-trusted-hops-ingress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the ingress L7 policy enforcement Envoy listeners. --read-cni-conf string CNI configuration file to use as a source for --write-cni-conf-when-ready. If not supplied, a suitable one will be generated. 
--tunnel-port uint16 Tunnel port (default 8472 for \"vxlan\" and 6081 for \"geneve\") --tunnel-protocol string Encapsulation protocol to use for the overlay (\"vxlan\" or \"geneve\") (default \"vxlan\") --use-full-tls-context If enabled, persist ca.crt keys into the Envoy config even in a terminatingTLS block on an L7 Cilium Policy. This is to enable compatibility with previously buggy behaviour. This flag is deprecated and will be removed in a future release. --write-cni-conf-when-ready string Write the CNI configuration to the specified path when agent is ready ``` - Inspect the hive"
}
] | {
"category": "Runtime",
"file_name": "cilium-agent_hive_dot-graph.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
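Because `cilium-agent hive dot-graph` writes graphviz dot to stdout, the usual next step is to render it with Graphviz. The sketch below assumes Graphviz (`dot`) is installed; the output file names are arbitrary.

```shell
# Render the hive dependency graph to SVG via an intermediate file...
cilium-agent hive dot-graph > hive.dot
dot -Tsvg hive.dot -o hive.svg

# ...or pipe it straight into dot.
cilium-agent hive dot-graph | dot -Tpng -o hive.png
```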
[
{
"data": "sidebar_position: 2 sidebar_label: \"CAS\" Container Attached Storage (CAS) is software that includes microservice based storage controllers that are orchestrated by Kubernetes. These storage controllers can run anywhere that Kubernetes can run which means any cloud or even bare metal server or on top of a traditional shared storage system. Critically, the data itself is also accessed via containers as opposed to being stored in an off platform shared scale out storage system. Because CAS leverages a microservices architecture, it keeps the storage solution closely tied to the application bound to the physical storage device, reducing I/O latency. CAS is a pattern very much in line with the trend towards disaggregated data and the rise of small, autonomous teams running small, loosely coupled workloads. For example, my team might need Postgres for our microservice, and yours might depend on Redis and MongoDB. Some of our use cases might require performance, some might be gone in 20 minutes, others are write intensive, others read intensive, and so on. In a large organization, the technology that teams depend on will vary more and more as the size of the organization grows and as organizations increasingly trust teams to select their own tools. CAS means that developers can work without worrying about the underlying requirements of their organizations' storage architecture. To CAS, a cloud disk is the same as a SAN which is the same as bare metal or virtualized hosts. Developers and Platform SREs dont have meetings to select the next storage vendor or to argue for settings to support their use case, instead Developers remain autonomous and can spin up their own CAS containers with whatever storage is available to the Kubernetes clusters. CAS reflects a broader trend of solutions many of which are now part of Cloud Native Foundation that reinvent particular categories or create new ones by being built on Kubernetes and microservice and that deliver capabilities to Kubernetes based microservice environments. For example, new projects for security, DNS, networking, network policy management, messaging, tracing, logging and more have emerged in the cloud-native ecosystem and often in CNCF itself. Each storage volume in CAS has a containerized storage controller and corresponding containerized replicas. Hence, maintenance and tuning of the resources around these components are truly agile. The capability of Kubernetes for rolling upgrades enables seamless upgrades of storage controllers and storage replicas. Resources such as CPU and memory can be tuned using container"
},
{
"data": "Containerizing the storage software and dedicating the storage controller to each volume brings maximum granularity in storage policies. With CAS architecture, you can configure all storage policies on a per-volume basis. In addition, you can monitor storage parameters of every volume and dynamically update storage policies to achieve the desired result for each workload. The control of storage throughput, IOPS, and latency increases with this additional level of granularity in the volume storage policies. Avoiding cloud vendor lock-in is a common goal for many Kubernetes users. However, the data of stateful applications often remains dependent on the cloud provider and technology or on an underlying traditional shared storage system, NAS or SAN. With the CAS approach, storage controllers can migrate the data in the background per workload and live migration becomes simpler. In other words, the granularity of control of CAS simplifies the movement of stateful workloads from one Kubernetes cluster to another in a non-disruptive way. CAS containerizes the storage software and uses Kubernetes Custom Resource Definitions (CRDs) to represent low-level storage resources, such as disks and storage pools. This model enables storage to be integrated into other cloud-native tools seamlessly. The storage resources can be provisioned, monitored, and managed using cloud-native tools such as Prometheus, Grafana, Fluentd, Weavescope, Jaeger, and others. Similar to hyperconverged systems, storage and performance of a volume in CAS are scalable. As each volume has it's own storage controller, the storage can scale up within the permissible limits of a storage capacity of a node. As the number of container applications increases in a given Kubernetes cluster, more nodes are added, which increases the overall availability of storage capacity and performance, thereby making the storage available to the new application containers. Because the CAS architecture is per workload and components are loosely coupled, CAS has a much smaller blast radius than a typical distributed storage architecture. CAS can deliver high availability through synchronous replication from storage controllers to storage replicas. The metadata required to maintain the replicas is simplified to store the information of the nodes that have replicas and information about the status of replicas to help with quorum. If a node fails, the storage controller, which is a stateless container in this case, is spun on a node where second or third replica is running and data continues to be available. Hence, with CAS the blast radius is much lower and also localized to the volumes that have replicas on that node."
}
] | {
"category": "Runtime",
"file_name": "cas.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Firecracker microVMs can execute actions that can be triggered via `PUT` requests on the `/actions` resource. Details about the required fields can be found in the . The `InstanceStart` action powers on the microVM and starts the guest OS. It does not have a payload. It can only be successfully called once. ```bash curl --unix-socket ${socket} -i \\ -X PUT \"http://localhost/actions\" \\ -d '{ \"action_type\": \"InstanceStart\" }' ``` The `FlushMetrics` action flushes the metrics on user demand. ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/actions\" \\ -d '{ \"action_type\": \"FlushMetrics\" }' ``` This action will send the CTRL+ALT+DEL key sequence to the microVM. By convention, this sequence has been used to trigger a soft reboot and, as such, most Linux distributions perform an orderly shutdown and reset upon receiving this keyboard input. Since Firecracker exits on CPU reset, `SendCtrlAltDel` can be used to trigger a clean shutdown of the microVM. For this action, Firecracker emulates a standard AT keyboard, connected via an i8042 controller. Driver support for both these devices needs to be present in the guest OS. For Linux, that means the guest kernel needs `CONFIGSERIOI8042` and `CONFIGKEYBOARDATKBD`. Note1: at boot time, the Linux driver for i8042 spends a few tens of milliseconds probing the device. This can be disabled by using these kernel command line parameters: ```console i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd ``` Note2 This action is only supported on `x86_64` architecture. ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/actions\" \\ -d '{ \"action_type\": \"SendCtrlAltDel\" }' ```"
}
] | {
"category": "Runtime",
"file_name": "actions.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
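Since Firecracker exits on CPU reset, a clean-shutdown helper only needs to send the documented `SendCtrlAltDel` request and wait for the process to disappear. The snippet below is a sketch; the socket path and the way the PID is discovered are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Placeholders: adjust the socket path and PID lookup for your setup.
FC_SOCKET=/tmp/firecracker.socket
FC_PID=$(pgrep -f firecracker | head -n1)

# Ask the guest to shut down cleanly (requires i8042/AT keyboard support in the guest).
curl --unix-socket "${FC_SOCKET}" -i \
  -X PUT "http://localhost/actions" \
  -d '{ "action_type": "SendCtrlAltDel" }'

# Firecracker exits on CPU reset, so waiting for the PID is enough.
while kill -0 "${FC_PID}" 2>/dev/null; do
  sleep 0.5
done
echo "microVM has shut down"
```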
[
{
"data": "Please DO the following steps before submitting a PR: Run `cargo fmt` to format the code, you can also run `cargo fmt -- --check` to review the format changes; Run `cargo clippy` to make sure the code has no warnings from clippy; Run `cargo test` in project root directory to make sure no tests break. Use `impl` whenever possible, since `impl` means static dispatch, not dynamic, which has less overhead. Also `impl` makes a lot succinct. Try to avoid trait object, or don't use `dyn`. Use `impl` instead. For example: ```Rust // not recommended fn foo(input: Box<dyn SomeTrait>) // preferred fn foo(input: impl SomeTrait) ``` Try to avoid redundant generic type definition, use `impl` instead. For example: ```Rust // not recommended fn foo<P: AsRef<Path>>(input: P) // preferred fn foo(input: impl AsRef<Path>) ``` Use functional style instead of imperative one, especially when doing iteration or future chaining. For example: ```Rust fn process(i: u8) { println!(\"u8={}\", i); } // not recommend for i in vec { process(i); } // preferred vec.iter().for_each(process); ``` When using functions from external dependent crates, reserve the mod name where a function belongs to. This is a Rust tradition. For example: ```Rust // not recommended use externalcrate::somemod::some_func; some_func(); // preferred use externalcrate::somemod; somemod::somefunc(); ``` As for using Struct or Trait from external dependent crate, just directly use it. For example: ```Rust // preferred use externalcrate::somemod::{SomeStruct, SomeTrait}; let val = SomeStruct::new(); fn foo(bar: impl SomeTrait) { todo!(); } ``` Except for using `Result`, since so many crates define `Result` for theirselves, so please highlight which `Result` is using. For example: ```Rust // not recommended use std::io::Result; fn foo() -> Result<()> { todo!(); } // preferred use std::io; fn foo() -> io::Result<()> { todo!(); } ``` When using internal mod's, please use relative mod path, instead of absolute path. For example: ```Rust mod AAA { mod BBB { mod CCC { struct SomeStruct(u8); } mod DDD { // not recommended use crate::AAA::BBB::CCC::SomeStruct; // preferred use super::CCC::SomeStruct; } } } ``` DO NOT defined customized error type, unless explicit reason. Use `anyhow` crate for default error handling. It's preferred to add context to `Result` by using `anyhow::Context`. For example: ```Rust use anyhow::Context; use std::io; fn foo() -> io::Result<()> { todo!(); } fn bar() -> anyhow::Result<()> { foo().context(\"more description for the error case\")?; } ``` Every `assert!`, `asserteq!` and `assertnq!` should have a explanation about assertion failure. For example: ```Rust // not recommended assert!(some_condition); // preferred assert!(some_condition, \"explanation about assertion failure\"); ``` DO NOT* use `unwrap()` of `Result` and `Option` directly, unless `Result` or `Option` are properly checked. If you believe using `unwrap()` is safe, please add a comment for each `unwrap()`. For example: ```Rust let val = some_result.unwrap(); // safe to use unwrap() here, because ... ``` Try to use safe code as much as possible, if no explicit reason, please don't use unsafe code."
}
] | {
"category": "Runtime",
"file_name": "coding_style.md",
"project_name": "DatenLord",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "The results of IO performance testing using are as follows: ::: tip Note Multiple clients mount the same volume, and the process refers to the fio process. ::: Tool Settings ``` bash fio -directory={} \\ -ioengine=psync \\ -rw=read \\ # sequential read -bs=128k \\ # block size -direct=1 \\ # enable direct IO -group_reporting=1 \\ -fallocate=none \\ -time_based=1 \\ -runtime=120 \\ -name=testfilec{} \\ -numjobs={} \\ -nrfiles=1 \\ -size=10G ``` Bandwidth (MB/s) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 319.000 | 1145.000 | 3496.000 | 2747.000 | | 2 Clients | 625.000 | 2237.000 | 6556.000 | 5300.000 | | 4 Clients | 1326.000 | 4433.000 | 8979.000 | 9713.000 | | 8 Clients | 2471.000 | 7963.000 | 11878.400 | 17510.400 | IOPS | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 2552 | 9158 | 27000 | 21000 | | 2 Clients | 5003 | 17900 | 52400 | 42400 | | 4 Clients | 10600 | 35500 | 71800 | 77700 | | 8 Clients | 19800 | 63700 | 94700 | 140000 | Latency (Microsecond) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 391.350 | 436.170 | 571.200 | 2910.960 | | 2 Clients | 404.030 | 459.330 | 602.270 | 3011.920 | | 4 Clients | 374.450 | 445.550 | 892.390 | 2948.990 | | 8 Clients | 404.530 | 503.590 | 1353.910 | 4160.620 | Tool Settings ``` bash fio -directory={} \\ -ioengine=psync \\ -rw=write \\ # sequential write -bs=128k \\ # block size -direct=1 \\ # enable direct IO -group_reporting=1 \\ -fallocate=none \\ -name=testfilec{} \\ -numjobs={} \\ -nrfiles=1 \\ -size=10G ``` Bandwidth (MB/s) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 119.000 | 473.000 | 1618.000 | 2903.000 | | 2 Clients | 203.000 | 886.000 | 2917.000 | 5465.000 | | 4 Clients | 397.000 | 1691.000 | 4708.000 | 7256.000 | | 8 Clients | 685.000 | 2648.000 | 6257.000 | 7166.000 | IOPS | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 948 | 3783 | 12900 | 23200 | | 2 Clients | 1625 | 7087 | 23300 | 43700 | | 4 Clients | 3179 | 13500 | 37700 | 58000 | | 8 Clients | 5482 | 21200 | 50100 | 57300 | Latency (Microsecond) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 1053.240 | 1051.450 | 1228.230 | 2745.800 | | 2 Clients | 1229.270 | 1109.490 | 1359.350 | 2893.780 | | 4 Clients | 1248.990 | 1164.050 | 1642.660 |"
},
{
"data": "| | 8 Clients | 1316.560 | 1357.940 | 2378.950 | 8040.250 | Tool Settings ``` bash fio -directory={} \\ -ioengine=psync \\ -rw=randread \\ # random read -bs=4k \\ # block size -direct=1 \\ # enable direct IO -group_reporting=1 \\ -fallocate=none \\ -time_based=1 \\ -runtime=120 \\ -name=testfilec{} \\ -numjobs={} \\ -nrfiles=1 \\ -size=10G ``` Bandwidth (MB/s) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 15.500 | 76.300 | 307.000 | 496.000 | | 2 Clients | 32.600 | 161.000 | 587.000 | 926.000 | | 4 Clients | 74.400 | 340.000 | 1088.000 | 1775.000 | | 8 Clients | 157.000 | 628.000 | 1723.000 | 2975.000 | IOPS | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 3979 | 19500 | 78700 | 127000 | | 2 Clients | 8345 | 41300 | 150000 | 237000 | | 4 Clients | 19000 | 86000 | 278000 | 454000 | | 8 Clients | 40200 | 161000 | 441000 | 762000 | Latency (Microsecond) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 250.720 | 203.960 | 202.480 | 502.940 | | 2 Clients | 250.990 | 204.100 | 219.750 | 558.010 | | 4 Clients | 211.240 | 180.720 | 226.840 | 551.470 | | 8 Clients | 192.660 | 196.560 | 288.090 | 691.920 | Tool Settings ``` bash fio -directory={} \\ -ioengine=psync \\ -rw=randwrite \\ # random write -bs=4k \\ # block size -direct=1 \\ # enable direct IO -group_reporting=1 \\ -fallocate=none \\ -time_based=1 \\ -runtime=120 \\ -name=testfilec{} \\ -numjobs={} \\ -nrfiles=1 \\ -size=10G ``` Bandwidth (MB/s) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 7.743 | 43.500 | 164.000 | 429.000 | | 2 Clients | 19.700 | 84.300 | 307.000 | 679.000 | | 4 Clients | 41.900 | 167.000 | 480.000 | 877.000 | | 8 Clients | 82.600 | 305.000 | 700.000 | 830.000 | IOPS | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 1982 | 11100 | 42100 | 110000 | | 2 Clients | 5050 | 21600 | 78600 | 174000 | | 4 Clients | 10700 | 42800 | 123000 | 225000 | | 8 Clients | 1100 | 78100 | 179000 | 212000 | Latency (Microsecond) | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 503.760 | 358.190 | 379.110 | 580.970 | | 2 Clients | 400.150 | 374.010 | 412.900 | 751.020 | | 4 Clients | 371.620 | 370.520 | 516.930 | 1139.920 | | 8 Clients | 380.650 | 403.510 | 718.900 | 2409.250 |"
}
] | {
"category": "Runtime",
"file_name": "io.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
} |
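To reproduce one block of the tables above, the documented fio settings can simply be looped over the process counts. This is a sketch; the mount directory and log file names are placeholders.

```shell
#!/usr/bin/env bash
# Sequential-read benchmark across 1/4/16/64 fio processes on a mounted volume.
MOUNT_DIR=/cfs/mountpoint   # placeholder
for jobs in 1 4 16 64; do
  fio -directory="${MOUNT_DIR}" \
      -ioengine=psync -rw=read -bs=128k -direct=1 \
      -group_reporting=1 -fallocate=none -time_based=1 -runtime=120 \
      -name="testfile_c${jobs}" -numjobs="${jobs}" -nrfiles=1 -size=10G \
      | tee "seqread_${jobs}jobs.log"
done
```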
[
{
"data": "The Runtime Monitoring feature provides an interface to observe runtime behavior of applications running inside gVisor. Although it can be used for many purposes, it was built with the primary focus on threat detection. NOTE: Runtime monitoring is about the ability to understand the behavior of sandboxed workloads. This differs from . Out of the box, gVisor comes with support to stream application actions (called trace points) to an external process, that is used to validate the actions and alert when abnormal behavior is detected. Trace points are available for all syscalls and other important events in the system, e.g. container start. More trace points can be easily added as needed. The trace points are sent to a process running alongside the sandbox, which is isolated from the sandbox for security reasons. Additionally, the monitoring process can be shared by many sandboxes. You can use the following links to learn more:"
}
] | {
"category": "Runtime",
"file_name": "runtime_monitoring.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
} |
[
{
"data": "Currently, as of v0.9.0, Carina separates node's local disks into different groups based on its type. User can request different storage disks using differents storageclasses. This works fine in general, but in some cases, user may prefer more flexiable usage. For example, 1#, Though all the disks are with the same type, workloads may prefer exclusively using disk groups, rather than racing with other workloads in the same pool, for better isolation. 2#, For really high performance disk types, like NVMe or Pmem, LVM and RAID are both not needed. Carina should provide raw disk usage. For now, user can configure disk groups via `diskSelector` and `diskGroupPolicy` in configmap. We should extend the current instruction and may add new one to get more flexibility. For reference, as of v0.9.0, carina configmap has below structure. ``` data: config.json: |- { \"diskSelector\": [\"loop\", \"vd\"], \"diskScanInterval\": \"300\", \"diskGroupPolicy\": \"type\", \"schedulerStrategy\": \"spreadout\" } ``` ```yaml diskSelectors: name: group1 re: sd[b-f] sd[m-o] policy: LVM nodeLabel: node-label name: group2 re: sd[h-g] policy: RAW nodeLabel: node-label ``` The `diskSelector` is a list of diskGroups. Each diskGroup has three parameters, which are all required, not optional. name User can assign a name to each diskGroup and then create one storageclass with a matching name for daily usage. Each diskGroup should have unique naming. re re is a list of reguare expression strings. Each element selects some of the local disks. All the disks selected by all re strings are grouped in this diskGroup. If a disk is selected by multiple re stings in one diskGroup, the disk will finially appears in this diskGroup. However, if one disk is selected by multiple re strings from multiple diskGroups, this disk is ignored by those diskGroups. Different types of disks can be grouped into one diskGroup by re strings. It's not RECOMMENDED, but carina allows user to do that. policy Policy specifies the way how to manage those disks in one diskGroup. Currently user can specify two policies, LVM or RAW. For LVM policy, it works as always. Disks from one diskGroup are treated as LVM-PVs and then grouped into one LVM-VG. When user requests PV from this diskGroup, carina allocates one LVM-LV as its real data backend. Raw policy is a new policy that user can comsume disks. Those disks may have different"
},
{
"data": "Assume user is requesting a PV with size of S, the procedure that carina picks up one disk works below: find out all unempty disks(already have partitions) and choose the disk with the minimum requirement that its free space is larger than S. If all unempty disks are not suitable, then choose the disk with the minimum requirement that its capacity is larger than S. If multile disks with same size is selected, then randomly choose one. If a disk is selected from above procedure, then carina create a partition as the PV's really data backend. Else, the PV binding will failed. User can specify an annotation in the PVC to claim a physical disk with exclusive usage. If this annotation is been set and its value is true, then carina will try to bind one empty disk(with the minimum requirement) as its data backend. ``` carina.storage.io/allow-pod-migration-if-node-notready: true ``` nodeLabel The configuration takes effect on all nodes. If the configuration is empty, the configuration takes effect on all nodes Deprecated. Carina will not automatically group disks by their type. For raw disk usage, to avoid the total free disk space is enough for one PV, but can't hold it in single disk device, carina need to maintain the largest single PV size it can allocate for each node. For LVM diskGroups, carina can still works as before. For raw diskGroups, carina should add extra informantion. For example, largest PV size each node can hold, total empty disks each node has, and so on. Although carina will not use any disks or partitions with any filesystem on it, eliminating misusage as much as possible, there is still possibility that some workloads are using raw devices directly. Carina now doesn't have a good default setting. User should group disks explicitly. As a typical environment, carina will use below setting as its default configmap. User should double check before put it in production environment. ```yaml diskSelectors: name: defaultGroup re: sd* policy: LVM ``` When carina starts, Old volumes can still be identified ``` data: config.json: |- { \"diskSelector\": [\"loop\", \"vd\"], \"diskScanInterval\": \"300\", \"diskGroupPolicy\": \"type\", \"schedulerStrategy\": \"spreadout\" } ``` ``` data: config.json: |- { \"diskSelector\": [ { \"name\": hdd ## if there are HDDs \"re\": [\"loop\", \"vd\"] \"policy\": \"LVM\" \"nodeLabel\": \"node-label\" }, { \"name\": ssd \"re\": [\"loop\", \"vd\"] \"policy\": \"LVM\" \"nodeLabel\": \"node-label\" } ], \"diskScanInterval\": \"300\", \"schedulerStrategy\": \"spreadout\" } ```"
}
] | {
"category": "Runtime",
"file_name": "design-diskGroup.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
} |
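To make the RAW-policy selection procedure above concrete, here is a small illustrative sketch (not Carina code) that applies the same minimum-fit rule to a made-up list of candidate disks: non-empty disks that fit are preferred, then the smallest empty disk that fits, otherwise binding fails.

```shell
#!/usr/bin/env bash
# Illustration only. Columns: disk, free space (GiB), empty? (yes = no partitions).
S=100
CANDIDATES="sdb 80 no
sdc 120 no
sdd 500 yes
sde 200 yes"

# 1) Non-empty disk with the smallest free space that still fits the request.
pick=$(echo "$CANDIDATES" | awk -v s="$S" '$3=="no" && $2+0 >= s+0' | sort -k2 -n | head -n1)
# 2) Otherwise the smallest empty disk whose capacity fits the request.
[ -z "$pick" ] && pick=$(echo "$CANDIDATES" | awk -v s="$S" '$3=="yes" && $2+0 >= s+0' | sort -k2 -n | head -n1)

echo "selected disk: ${pick:-none (PV binding fails)}"
```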
[
{
"data": "Network policies allow specifying whether and how different groups of Pods running in a Kubernetes cluster can communicate with one another. In other words, they can be used to control and limit the ingress and egress traffic to and from Pods. Naturally, network policies can be used to restrict which WireGuard peers have access to which Pods and vice-versa. Support for can be easily added to any cluster running Kilo by deploying a utility such as . The following command adds network policy support by deploying kube-router to work alongside Kilo: ```shell kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kube-router.yaml ``` Network policies could now be deployed to the cluster. Consider the following example scenarios. Imagine that an organization wants to limit access to a namespace to only allow traffic from the WireGuard VPN. Access to a namespace could be limited to only accept ingress from a CIDR range with: ```shell cat <<'EOF' | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-ingress-except-wireguard spec: podSelector: {} policyTypes: Ingress ingress: from: ipBlock: cidr: 10.5.0.0/16 # The WireGuard mesh/s CIDR. EOF ``` Consider the case where Pods running in one namespace should not have access to resources in the WireGuard mesh, e.g. because the Pods are potentially untrusted. In this scenario, a policy to restrict access to the WireGuard peers could be created with: ```shell cat <<'EOF' | kubectl apply -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-egress-to-wireguard spec: podSelector: {} policyTypes: Egress egress: to: ipBlock: cidr: 0.0.0.0/0 except: 10.5.0.0/16 # The WireGuard mesh's CIDR. EOF ```"
}
] | {
"category": "Runtime",
"file_name": "network-policies.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
} |
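After applying either policy it can be worth double-checking that it was admitted; the standard kubectl commands below are sufficient (the policies land in the current namespace, since the manifests above don't set one).

```shell
kubectl get networkpolicies --all-namespaces
kubectl describe networkpolicy deny-ingress-except-wireguard
kubectl describe networkpolicy deny-egress-to-wireguard
```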
[
{
"data": "Like most other pieces of software, GlusterFS is not perfect in how it manages its resources like memory, threads and the like. Gluster developers try hard to prevent leaking resources but releasing and unallocating the used structures. Unfortunately every now and then some resource leaks are unintentionally added. This document tries to explain a few helpful tricks to identify resource leaks so that they can be addressed. There are certain techniques used in GlusterFS that make it difficult to use tools like Valgrind for memory leak detection. There are some build options that make it more practical to use Valgrind and other tools. When running Valgrind, it is important to have GlusterFS builds that contain the debuginfo/symbols. Some distributions (try to) strip the debuginfo to get smaller executables. Fedora and RHEL based distributions have sub-packages called ...-debuginfo that need to be installed for symbol resolving. By using memory pools, there are no allocation/freeing of single structures needed. This improves performance, but also makes it impossible to track the allocation and freeing of srtuctures. It is possible to disable the use of memory pools, and use standard `malloc()` and `free()` functions provided by the C library. Valgrind is then able to track the allocated areas and verify if they have been free'd. In order to disable memory pools, the Gluster sources needs to be configured with the `--enable-debug` option: ```shell ./configure --enable-debug ``` When building RPMs, the `.spec` handles the `--with=debug` option too: ```shell make dist rpmbuild -ta --with=debug glusterfs-....tar.gz ``` Valgrind tracks the call chain of functions that do memory allocations. The addresses of the functions are stored and before Valgrind exits the addresses are resolved into human readable function names and offsets (line numbers in source files). Because Gluster loads xlators dynamically, and unloads then before exiting, Valgrind is not able to resolve the function addresses into symbols anymore. Whenever this happend, Valgrind shows `???` in the output, like ``` ==25170== 344 bytes in 1 blocks are definitely lost in loss record 233 of 324 ==25170== at 0x4C29975: calloc (vgreplacemalloc.c:711) ==25170== by 0x52C7C0B: gf_calloc (mem-pool.c:117) ==25170== by 0x12B0638A: ??? ==25170== by 0x528FCE6: xlator_init (xlator.c:472) ==25170== by 0x528FE16: xlator_init (xlator.c:498) ... ``` These `???` can be prevented by not calling `dlclose()` for unloading the xlator. This will cause a small leak of the handle that was returned with `dlopen()`, but for improved debugging this can be acceptible. For this and other Valgrind features, a `--enable-valgrind` option is available to `./configure`. When GlusterFS is built with this option, Valgrind will be able to resolve the symbol names of the functions that do memory allocations inside xlators. ```shell ./configure --enable-valgrind ``` When building RPMs, the `.spec` handles the `--with=valgrind` option too: ```shell make dist rpmbuild -ta --with=valgrind"
},
{
"data": "``` Debugging a single xlator is not trivial. But there are some tools to make it easier. The `sink` xlator does not do any memory allocations itself, but contains just enough functionality to mount a volume with only the `sink` xlator. There is a little gfapi application under `tests/basic/gfapi/` in the GlusterFS sources that can be used to run only gfapi and the core GlusterFS infrastructure with the `sink` xlator. By extending the `.vol` file to load more xlators, each xlator can be debugged pretty much separately (as long as the xlators have no dependencies on each other). A basic Valgrind run with the suitable configure options looks like this: ```shell ./autogen.sh ./configure --enable-debug --enable-valgrind make && make install cd tests/basic/gfapi/ make gfapi-load-volfile valgrind ./gfapi-load-volfile sink.vol ``` Combined with other very useful options to Valgrind, the following execution shows many more useful details: ```shell valgrind \\ --fullpath-after= --leak-check=full --show-leak-kinds=all \\ ./gfapi-load-volfile sink.vol ``` Note that the `--fullpath-after=` option is left empty, this makes Valgrind print the full path and filename that contains the functions: ``` ==2450== 80 bytes in 1 blocks are definitely lost in loss record 8 of 60 ==2450== at 0x4C29975: calloc (/builddir/build/BUILD/valgrind-3.11.0/coregrind/mreplacemalloc/vgreplace_malloc.c:711) ==2450== by 0x52C6F73: gf_calloc (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/mem-pool.c:117) ==2450== by 0x12F10CDA: init (/usr/src/debug/glusterfs-3.11dev/xlators/meta/src/meta.c:231) ==2450== by 0x528EFD5: xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:472) ==2450== by 0x528F105: xlator_init (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/xlator.c:498) ==2450== by 0x52D9D8B: glusterfsgraphinit (/usr/src/debug/glusterfs-3.11dev/libglusterfs/src/graph.c:321) ... ``` In the above example, the `init` function in `xlators/meta/src/meta.c` does a memory allocation on line 231. This memory is never free'd again, and hence Valgrind logs this call stack. When looking in the code, it seems that the allocation of `priv` is assigned to the `this->private` member of the `xlator_t` structure. Because the allocation is done in `init()`, free'ing is expected to happen in `fini()`. Both functions are shown below, with the inclusion of the empty `fini()`: ``` 226 int 227 init (xlator_t *this) 228 { 229 metaprivt *priv = NULL; 230 231 priv = GFCALLOC (sizeof(*priv), 1, gfmetamtpriv_t); 232 if (!priv) 233 return -1; 234 235 GFOPTIONINIT (\"meta-dir-name\", priv->metadirname, str, out); 236 237 this->private = priv; 238 out: 239 return 0; 240 } 241 242 243 int 244 fini (xlator_t *this) 245 { 246 return 0; 247 } ``` In this case, the resource leak can be addressed by adding a single line to the `fini()` function: ``` 243 int 244 fini (xlator_t *this) 245 { 246 GF_FREE (this->private); 247 return 0; 248 } ``` Running the same Valgrind command and comparing the output will show that the memory leak in `xlators/meta/src/meta.c:init` is not reported anymore. When configuring GlusterFS with: ```shell ./configure --enable-valgrind ``` the default Valgrind tool (Memcheck) is enabled. But it's also possble to select one of Memcheck or DRD by using: ```shell ./configure --enable-valgrind=memcheck ``` or: ```shell ./configure --enable-valgrind=drd ``` respectively. When using DRD, it's recommended to consult https://valgrind.org/docs/manual/drd-manual.html before running."
}
] | {
"category": "Runtime",
"file_name": "identifying-resource-leaks.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
} |
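For longer runs it is often easier to send the Valgrind report to a file and grep it afterwards. A small wrapper along these lines (the log file name is arbitrary) keeps before/after comparisons of a fix simple:

```shell
# Capture the full leak report, then summarise what leaked and where.
valgrind \
    --fullpath-after= --leak-check=full --show-leak-kinds=all \
    --log-file=valgrind-sink.log \
    ./gfapi-load-volfile sink.vol

grep -E 'definitely lost|indirectly lost' valgrind-sink.log
grep -A6 'definitely lost' valgrind-sink.log | less
```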
[
{
"data": "(network-bgp)= ```{note} The BGP server feature is available for the {ref}`network-bridge` and the {ref}`network-physical`. ``` {abbr}`BGP (Border Gateway Protocol)` is a protocol that allows exchanging routing information between autonomous systems. If you want to directly route external addresses to specific Incus servers or instances, you can configure Incus as a BGP server. Incus will then act as a BGP peer and advertise relevant routes and next hops to external routers, for example, your network router. It automatically establishes sessions with upstream BGP routers and announces the addresses and subnets that it's using. The BGP server feature can be used to allow an Incus server or cluster to directly use internal/external address space by getting the specific subnets or addresses routed to the correct host. This way, traffic can be forwarded to the target instance. For bridge networks, the following addresses and networks are being advertised: Network `ipv4.address` or `ipv6.address` subnets (if the matching `nat` property isn't set to `true`) Network `ipv4.nat.address` or `ipv6.nat.address` subnets (if the matching `nat` property is set to `true`) Network forward addresses Addresses or subnets specified in `ipv4.routes.external` or `ipv6.routes.external` on an instance NIC that is connected to the bridge network Make sure to add your subnets to the respective configuration options. Otherwise, they won't be advertised. For physical networks, no addresses are advertised directly at the level of the physical network. Instead, the networks, forwards and routes of all downstream networks (the networks that specify the physical network as their uplink network through the `network` option) are advertised in the same way as for bridge networks. ```{note} At this time, it is not possible to announce only some specific routes/addresses to particular peers. If you need this, filter prefixes on the upstream routers. ``` To configure Incus as a BGP server, set the following server configuration options on all cluster members: {config:option}`server-core:core.bgp_address` - the IP address for the BGP server {config:option}`server-core:core.bgp_asn` - the {abbr}`ASN (Autonomous System Number)` for the local server {config:option}`server-core:core.bgp_routerid` - the unique identifier for the BGP server For example, set the following values: ```bash incus config set core.bgp_address=192.0.2.50:179 incus config set core.bgp_asn=65536 incus config set core.bgp_routerid=192.0.2.50 ``` Once these configuration options are set, Incus starts listening for BGP sessions. For bridge networks, you can override the next-hop configuration. By default, the next-hop is set to the address used for the BGP session. To configure a different address, set `bgp.ipv4.nexthop` or `bgp.ipv6.nexthop`. If you run an OVN network with an uplink network (`physical` or `bridge`), the uplink network is the one that holds the list of allowed subnets and the BGP configuration. Therefore, you must configure BGP peers on the uplink network that contain the information that is required to connect to the BGP server. 
Set the following configuration options on the uplink network: `bgp.peers.<name>.address` - the peer address to be used by the downstream networks `bgp.peers.<name>.asn` - the {abbr}`ASN (Autonomous System Number)` for the local server `bgp.peers.<name>.password` - an optional password for the peer session `bgp.peers.<name>.holdtime` - an optional hold time for the peer session (in seconds) Once the uplink network is configured, downstream OVN networks will get their external subnets and addresses announced over BGP. The next-hop is set to the address of the OVN router on the uplink network."
}
] | {
"category": "Runtime",
"file_name": "network_bgp.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
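A hedged example of the uplink peer configuration described above, using `incus network set`; the network name, peer name, address, and ASN are placeholders.

```shell
# Configure a BGP peer on the uplink network used by downstream OVN networks.
incus network set UPLINK bgp.peers.tor1.address 192.0.2.1
incus network set UPLINK bgp.peers.tor1.asn 65010
incus network set UPLINK bgp.peers.tor1.password secret   # optional
incus network set UPLINK bgp.peers.tor1.holdtime 30       # optional, seconds

# Optionally override the next-hop announced for a bridge network.
incus network set incusbr0 bgp.ipv4.nexthop 192.0.2.50
```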
[
{
"data": "Most high-level container runtimes implement Kubernetes' CRI (Container Runtime Interface) spec so that they can be managed by Kubernetes tools. That means you can use Kubernetes tools to manage the WebAssembly app image in pods and namespaces. Check out specific instructions for different flavors of Kubernetes setup in this chapter."
}
] | {
"category": "Runtime",
"file_name": "kubernetes.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
} |
[
{
"data": "This is the standard configuration for version 1 containers. It includes namespaces, standard filesystem setup, a default Linux capability set, and information about resource reservations. It also has information about any populated environment settings for the processes running inside a container. Along with the configuration of how a container is created the standard also discusses actions that can be performed on a container to manage and inspect information about the processes running inside. The v1 profile is meant to be able to accommodate the majority of applications with a strong security configuration. Minimum requirements: Kernel version - 3.10 recommended 2.6.2x minimum(with backported patches) Mounted cgroups with each subsystem in its own hierarchy | Flag | Enabled | | | - | | CLONE_NEWPID | 1 | | CLONE_NEWUTS | 1 | | CLONE_NEWIPC | 1 | | CLONE_NEWNET | 1 | | CLONE_NEWNS | 1 | | CLONE_NEWUSER | 1 | | CLONE_NEWCGROUP | 1 | Namespaces are created for the container via the `unshare` syscall. A root filesystem must be provided to a container for execution. The container will use this root filesystem (rootfs) to jail and spawn processes inside where the binaries and system libraries are local to that directory. Any binaries to be executed must be contained within this rootfs. Mounts that happen inside the container are automatically cleaned up when the container exits as the mount namespace is destroyed and the kernel will unmount all the mounts that were setup within that namespace. For a container to execute properly there are certain filesystems that are required to be mounted within the rootfs that the runtime will setup. | Path | Type | Flags | Data | | -- | | -- | - | | /proc | proc | MSNOEXEC,MSNOSUID,MS_NODEV | | | /dev | tmpfs | MSNOEXEC,MSSTRICTATIME | mode=755 | | /dev/shm | tmpfs | MSNOEXEC,MSNOSUID,MS_NODEV | mode=1777,size=65536k | | /dev/mqueue | mqueue | MSNOEXEC,MSNOSUID,MS_NODEV | | | /dev/pts | devpts | MSNOEXEC,MSNOSUID | newinstance,ptmxmode=0666,mode=620,gid=5 | | /sys | sysfs | MSNOEXEC,MSNOSUID,MSNODEV,MSRDONLY | | After a container's filesystems are mounted within the newly created mount namespace `/dev` will need to be populated with a set of device nodes. It is expected that a rootfs does not need to have any device nodes specified for `/dev` within the rootfs as the container will setup the correct devices that are required for executing a container's process. | Path | Mode | Access | | | - | - | | /dev/null | 0666 | rwm | | /dev/zero | 0666 | rwm | | /dev/full | 0666 | rwm | | /dev/tty | 0666 | rwm | | /dev/random | 0666 | rwm | | /dev/urandom | 0666 | rwm | ptmx `/dev/ptmx` will need to be a symlink to the host's `/dev/ptmx` within the container. The use of a pseudo TTY is optional within a container and it should support both. If a pseudo is provided to the container `/dev/console` will need to be setup by binding the console in `/dev/` after it has been populated and mounted in"
},
{
"data": "| Source | Destination | UID GID | Mode | Type | | | | - | - | - | | pty host path | /dev/console | 0 0 | 0600 | bind | After `/dev/null` has been setup we check for any external links between the container's io, STDIN, STDOUT, STDERR. If the container's io is pointing to `/dev/null` outside the container we close and `dup2` the `/dev/null` that is local to the container's rootfs. After the container has `/proc` mounted a few standard symlinks are setup within `/dev/` for the io. | Source | Destination | | | -- | | /proc/self/fd | /dev/fd | | /proc/self/fd/0 | /dev/stdin | | /proc/self/fd/1 | /dev/stdout | | /proc/self/fd/2 | /dev/stderr | A `pivot_root` is used to change the root for the process, effectively jailing the process inside the rootfs. ```c put_old = mkdir(...); pivotroot(rootfs, putold); chdir(\"/\"); unmount(putold, MSDETACH); rmdir(put_old); ``` For container's running with a rootfs inside `ramfs` a `MS_MOVE` combined with a `chroot` is required as `pivot_root` is not supported in `ramfs`. ```c mount(rootfs, \"/\", NULL, MS_MOVE, NULL); chroot(\".\"); chdir(\"/\"); ``` The `umask` is set back to `0022` after the filesystem setup has been completed. Cgroups are used to handle resource allocation for containers. This includes system resources like cpu, memory, and device access. | Subsystem | Enabled | | - | - | | devices | 1 | | memory | 1 | | cpu | 1 | | cpuacct | 1 | | cpuset | 1 | | blkio | 1 | | perf_event | 1 | | freezer | 1 | | hugetlb | 1 | | pids | 1 | All cgroup subsystem are joined so that statistics can be collected from each of the subsystems. Freezer does not expose any stats but is joined so that containers can be paused and resumed. The parent process of the container's init must place the init pid inside the correct cgroups before the initialization begins. This is done so that no processes or threads escape the cgroups. This sync is done via a pipe ( specified in the runtime section below ) that the container's init process will block waiting for the parent to finish setup. Intel platforms with new Xeon CPU support Resource Director Technology (RDT). Cache Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA) are two sub-features of RDT. Cache Allocation Technology (CAT) provides a way for the software to restrict cache allocation to a defined 'subset' of L3 cache which may be overlapping with other 'subsets'. The different subsets are identified by class of service (CLOS) and each CLOS has a capacity bitmask (CBM). Memory Bandwidth Allocation (MBA) provides indirect and approximate throttle over memory bandwidth for the software. A user controls the resource by indicating the percentage of maximum memory bandwidth or memory bandwidth limit in MBps unit if MBA Software Controller is enabled. It can be used to handle L3 cache and memory bandwidth resources allocation for containers if hardware and kernel support Intel RDT CAT and MBA features. In Linux 4.10 kernel or newer, the interface is defined and exposed via \"resource control\" filesystem, which is a \"cgroup-like\" interface. Comparing with cgroups, it has similar process management lifecycle and interfaces in a container. But unlike cgroups' hierarchy, it has single level filesystem layout. CAT and MBA features are introduced in Linux"
},
{
"data": "and 4.12 kernel via \"resource control\" filesystem. Intel RDT \"resource control\" filesystem hierarchy: ``` mount -t resctrl resctrl /sys/fs/resctrl tree /sys/fs/resctrl /sys/fs/resctrl/ |-- info | |-- L3 | | |-- cbm_mask | | |-- mincbmbits | | |-- num_closids | |-- MB | |-- bandwidth_gran | |-- delay_linear | |-- min_bandwidth | |-- num_closids |-- ... |-- schemata |-- tasks |-- <container_id> |-- ... |-- schemata |-- tasks ``` For runc, we can make use of `tasks` and `schemata` configuration for L3 cache and memory bandwidth resources constraints. The file `tasks` has a list of tasks that belongs to this group (e.g., <container_id>\" group). Tasks can be added to a group by writing the task ID to the \"tasks\" file (which will automatically remove them from the previous group to which they belonged). New tasks created by fork(2) and clone(2) are added to the same group as their parent. The file `schemata` has a list of all the resources available to this group. Each resource (L3 cache, memory bandwidth) has its own line and format. L3 cache schema: It has allocation bitmasks/values for L3 cache on each socket, which contains L3 cache id and capacity bitmask (CBM). ``` Format: \"L3:<cacheid0>=<cbm0>;<cacheid1>=<cbm1>;...\" ``` For example, on a two-socket machine, the schema line could be \"L3:0=ff;1=c0\" which means L3 cache id 0's CBM is 0xff, and L3 cache id 1's CBM is 0xc0. The valid L3 cache CBM is a contiguous bits set and number of bits that can be set is less than the max bit. The max bits in the CBM is varied among supported Intel CPU models. Kernel will check if it is valid when writing. e.g., default value 0xfffff in root indicates the max bits of CBM is 20 bits, which mapping to entire L3 cache capacity. Some valid CBM values to set in a group: 0xf, 0xf0, 0x3ff, 0x1f00 and etc. Memory bandwidth schema: It has allocation values for memory bandwidth on each socket, which contains L3 cache id and memory bandwidth. ``` Format: \"MB:<cacheid0>=bandwidth0;<cacheid1>=bandwidth1;...\" ``` For example, on a two-socket machine, the schema line could be \"MB:0=20;1=70\" The minimum bandwidth percentage value for each CPU model is predefined and can be looked up through \"info/MB/min_bandwidth\". The bandwidth granularity that is allocated is also dependent on the CPU model and can be looked up at \"info/MB/bandwidth_gran\". The available bandwidth control steps are: minbw + N * bwgran. Intermediate values are rounded to the next control step available on the hardware. If MBA Software Controller is enabled through mount option \"-o mba_MBps\" mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl We could specify memory bandwidth in \"MBps\" (Mega Bytes per second) unit instead of \"percentages\". The kernel underneath would use a software feedback mechanism or a \"Software Controller\" which reads the actual bandwidth using MBM counters and adjust the memory bandwidth percentages to ensure: \"actual memory bandwidth < user specified memory bandwidth\". For example, on a two-socket machine, the schema line could be \"MB:0=5000;1=7000\" which means 5000 MBps memory bandwidth limit on socket 0 and 7000 MBps memory bandwidth limit on socket 1. For more information about Intel RDT kernel interface:"
},
{
"data": "``` An example for runc: Consider a two-socket machine with two L3 caches where the default CBM is 0x7ff and the max CBM length is 11 bits, and minimum memory bandwidth of 10% with a memory bandwidth granularity of 10%. Tasks inside the container only have access to the \"upper\" 7/11 of L3 cache on socket 0 and the \"lower\" 5/11 L3 cache on socket 1, and may use a maximum memory bandwidth of 20% on socket 0 and 70% on socket 1. \"linux\": { \"intelRdt\": { \"closID\": \"guaranteed_group\", \"l3CacheSchema\": \"L3:0=7f0;1=1f\", \"memBwSchema\": \"MB:0=20;1=70\" } } ``` The standard set of Linux capabilities that are set in a container provide a good default for security and flexibility for the applications. | Capability | Enabled | | -- | - | | CAPNETRAW | 1 | | CAPNETBIND_SERVICE | 1 | | CAPAUDITREAD | 1 | | CAPAUDITWRITE | 1 | | CAPDACOVERRIDE | 1 | | CAP_SETFCAP | 1 | | CAP_SETPCAP | 1 | | CAP_SETGID | 1 | | CAP_SETUID | 1 | | CAP_MKNOD | 1 | | CAP_CHOWN | 1 | | CAP_FOWNER | 1 | | CAP_FSETID | 1 | | CAP_KILL | 1 | | CAPSYSCHROOT | 1 | | CAPNETBROADCAST | 0 | | CAPSYSMODULE | 0 | | CAPSYSRAWIO | 0 | | CAPSYSPACCT | 0 | | CAPSYSADMIN | 0 | | CAPSYSNICE | 0 | | CAPSYSRESOURCE | 0 | | CAPSYSTIME | 0 | | CAPSYSTTY_CONFIG | 0 | | CAPAUDITCONTROL | 0 | | CAPMACOVERRIDE | 0 | | CAPMACADMIN | 0 | | CAPNETADMIN | 0 | | CAP_SYSLOG | 0 | | CAPDACREAD_SEARCH | 0 | | CAPLINUXIMMUTABLE | 0 | | CAPIPCLOCK | 0 | | CAPIPCOWNER | 0 | | CAPSYSPTRACE | 0 | | CAPSYSBOOT | 0 | | CAP_LEASE | 0 | | CAPWAKEALARM | 0 | | CAPBLOCKSUSPEND | 0 | Additional security layers like and can be used with the containers. A container should support setting an apparmor profile or selinux process and mount labels if provided in the configuration. Standard apparmor profile: ```c profile <profilename> flags=(attachdisconnected,mediate_deleted) { network, capability, file, umount, deny @{PROC}/sys/fs/ wklx, deny @{PROC}/sysrq-trigger rwklx, deny @{PROC}/mem rwklx, deny @{PROC}/kmem rwklx, deny @{PROC}/sys/kernel/[^m]* wklx, deny @{PROC}/sys/kernel//* wklx, deny mount, deny /sys/[^f]/* wklx, deny /sys/f[^s]/* wklx, deny /sys/fs/[^c]/* wklx, deny /sys/fs/c[^g]/* wklx, deny /sys/fs/cg[^r]/* wklx, deny /sys/firmware/efi/efivars/ rwklx, deny /sys/kernel/security/ rwklx, } ``` TODO: seccomp work is being done to find a good default config During container creation the parent process needs to talk to the container's init process and have a form of synchronization. This is accomplished by creating a pipe that is passed to the container's init. When the init process first spawns it will block on its side of the pipe until the parent closes its side. This allows the parent to have time to set the new process inside a cgroup hierarchy and/or write any uid/gid mappings required for user namespaces. The pipe is passed to the init process via FD 3. The application consuming libcontainer should be compiled statically. libcontainer does not define any init process and the arguments provided are used to `exec` the process inside the application. There should be no long running init within the container"
},
{
"data": "If a pseudo tty is provided to a container it will open and `dup2` the console as the container's STDIN, STDOUT, STDERR as well as mounting the console as `/dev/console`. An extra set of mounts are provided to a container and setup for use. A container's rootfs can contain some non portable files inside that can cause side effects during execution of a process. These files are usually created and populated with the container specific information via the runtime. Extra runtime files: /etc/hosts /etc/resolv.conf /etc/hostname /etc/localtime There are a few defaults that can be overridden by users, but in their omission these apply to processes within a container. | Type | Value | | - | | | Parent Death Signal | SIGKILL | | UID | 0 | | GID | 0 | | GROUPS | 0, NULL | | CWD | \"/\" | | $HOME | Current user's home dir or \"/\" | | Readonly rootfs | false | | Pseudo TTY | false | After a container is created there is a standard set of actions that can be done to the container. These actions are part of the public API for a container. | Action | Description | | -- | | | Get processes | Return all the pids for processes running inside a container | | Get Stats | Return resource statistics for the container as a whole | | Wait | Waits on the container's init process ( pid 1 ) | | Wait Process | Wait on any of the container's processes returning the exit status | | Destroy | Kill the container's init process and remove any filesystem state | | Signal | Send a signal to the container's init process | | Signal Process | Send a signal to any of the container's processes | | Pause | Pause all processes inside the container | | Resume | Resume all processes inside the container if paused | | Exec | Execute a new process inside of the container ( requires setns ) | | Set | Setup configs of the container after it's created | User can execute a new process inside of a running container. Any binaries to be executed must be accessible within the container's rootfs. The started process will run inside the container's rootfs. Any changes made by the process to the container's filesystem will persist after the process finished executing. The started process will join all the container's existing namespaces. When the container is paused, the process will also be paused and will resume when the container is unpaused. The started process will only run when the container's primary process (PID 1) is running, and will not be restarted when the container is restarted. The started process will have its own cgroups nested inside the container's cgroups. This is used for process tracking and optionally resource allocation handling for the new process. Freezer cgroup is required, the rest of the cgroups are optional. The process executor must place its pid inside the correct cgroups before starting the process. This is done so that no child processes or threads can escape the cgroups. When the process is stopped, the process executor will try (in a best-effort way) to stop all its children and remove the sub-cgroups."
}
] | {
"category": "Runtime",
"file_name": "SPEC.md",
"project_name": "runc",
"subcategory": "Container Runtime"
} |
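The runc specification above walks through how Intel RDT groups are managed via the resctrl filesystem (create a CLOS group, write `schemata`, add the init PID to `tasks`). A minimal shell sketch of that flow, reusing the CLOS name and schemata values from the spec's own example and assuming a kernel with CAT/MBA support, could look like:

```bash
# Mount the resctrl filesystem (Linux 4.10+/4.12+ with CAT/MBA support assumed).
mount -t resctrl resctrl /sys/fs/resctrl

# Create a CLOS group for the container and constrain L3 cache and memory bandwidth,
# using the example values from the spec above. Each write updates the listed resource.
mkdir /sys/fs/resctrl/guaranteed_group
echo "L3:0=7f0;1=1f" > /sys/fs/resctrl/guaranteed_group/schemata
echo "MB:0=20;1=70"  > /sys/fs/resctrl/guaranteed_group/schemata

# Move the container's init process into the group (12345 is a hypothetical PID);
# children created by fork/clone stay in the same group.
echo 12345 > /sys/fs/resctrl/guaranteed_group/tasks
```

This mirrors what a runtime does on the container's behalf when an `intelRdt` section is present in the config.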
[
{
"data": "\\[!WARNING\\] Support is currently in developer preview. See for more info. Firecracker supports backing the guest memory of a VM by 2MB hugetlbfs pages. This can be enabled by setting the `huge_pages` field of `PUT` or `PATCH` requests to the `/machine-config` endpoint to `2M`. Backing guest memory by huge pages can bring performance improvements for specific workloads, due to less TLB contention and less overhead during virtual->physical address resolution. It can also help reduce the number of KVM_EXITS required to rebuild extended page tables post snapshot restore, as well as improve boot times (by up to 50% as measured by Firecracker's ) Using hugetlbfs requires the host running Firecracker to have a pre-allocated pool of 2M pages. Should this pool be too small, Firecracker may behave erratically or receive the `SIGBUS` signal. This is because Firecracker uses the `MAP_NORESERVE` flag when mapping guest memory. This flag means the kernel will not try to reserve sufficient hugetlbfs pages at the time of the `mmap` call, trying to claim them from the pool on-demand. For details on how to manage this pool, please refer to the . Restoring a Firecracker snapshot of a microVM backed by huge pages will also use huge pages to back the restored guest. There is no option to flip between regular, 4K, pages and huge pages at restore time. Furthermore, snapshots of microVMs backed with huge pages can only be restored via UFFD. Lastly, note that even for guests backed by huge pages, differential snapshots will always track write accesses to guest memory at 4K granularity. When restoring snapshots via UFFD, Firecracker will send the configured page size (in KiB) for each memory region as part of the initial handshake, as described in our documentation on . Currently, hugetlbfs support is mutually exclusive with the following Firecracker features: Memory Ballooning via the Initrd Firecracker's guest memory is memfd based. Linux (as of 6.1) does not offer a way to dynamically enable THP for such memory regions. Additionally, UFFD does not integrate with THP (no transparent huge pages will be allocated during userfaulting). Please refer to the for more information."
}
] | {
"category": "Runtime",
"file_name": "hugepages.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
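The hugepages entry above requires a pre-allocated pool of 2M pages on the host and enables the feature through the `huge_pages` field of the `/machine-config` endpoint. A hedged sketch of both steps follows; the socket path, pool size, vCPU count, and memory size are illustrative values, not recommendations.

```bash
# Reserve a pool of 2 MiB hugetlbfs pages on the host (1024 pages = 2 GiB; size to your workload).
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages

# Ask Firecracker to back guest memory with 2M pages before starting the microVM.
curl --unix-socket /tmp/firecracker.socket -i \
  -X PUT 'http://localhost/machine-config' \
  -H 'Content-Type: application/json' \
  -d '{"vcpu_count": 2, "mem_size_mib": 1024, "huge_pages": "2M"}'
```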
[
{
"data": "title: \"Getting started\" layout: docs The following example sets up the Velero server and client, then backs up and restores a sample application. For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster. For additional functionality with this setup, see the docs on how to . NOTE The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope. See for how to configure Velero for a production environment. If you encounter issues with installing or configuring, see . Access to a Kubernetes cluster, version 1.7 or later. Note:* restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See . A DNS server on the cluster `kubectl` installed Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` We'll refer to the directory you extracted to as the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. On Mac, you can use to install the `velero` client: ```bash brew install velero ``` These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands. Create a Velero-specific credentials file (`credentials-velero`) in your local directory: ``` [default] awsaccesskey_id = minio awssecretaccess_key = minio123 ``` Start the server and the local storage service. In the Velero directory, run: ``` kubectl apply -f examples/minio/00-minio-deployment.yaml ``` ``` velero install \\ --provider aws \\ --bucket velero \\ --secret-file ./credentials-velero \\ --use-volume-snapshots=false \\ --backup-location-config region=minio,s3ForcePathStyle=\"true\",s3Url=http://minio.velero.svc:9000 ``` This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready. 
Deploy the example nginx application: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Check to see that both the Velero and nginx deployments are successfully created: ``` kubectl get deployments -l component=velero --namespace=velero kubectl get deployments --namespace=nginx-example ``` Create a backup for any object that matches the `app=nginx` label selector: ``` velero backup create nginx-backup --selector app=nginx ``` Alternatively if you want to backup all objects except those matching the label `backup=ignore`: ``` velero backup create nginx-backup --selector 'backup notin (ignore)' ``` (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector: ``` velero schedule create nginx-daily --schedule=\"0 1 *\" --selector app=nginx ``` Alternatively, you can use some non-standard shorthand cron expressions: ``` velero schedule create nginx-daily --schedule=\"@daily\" --selector app=nginx ``` See the for more usage examples. Simulate a disaster: ``` kubectl delete namespace nginx-example ``` To check that the nginx deployment and service are gone, run: ``` kubectl get deployments --namespace=nginx-example kubectl get services --namespace=nginx-example kubectl get namespace/nginx-example ``` You should get no results. NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned"
},
{
"data": "Run: ``` velero restore create --from-backup nginx-backup ``` Run: ``` velero restore get ``` After the restore finishes, the output looks like the following: ``` NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none> ``` NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`. After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them. If there are errors or warnings, you can look at them in detail: ``` velero restore describe <RESTORE_NAME> ``` For more information, see . If you want to delete any backups you created, including data in object storage and persistent volume snapshots, you can run: ``` velero backup delete BACKUP_NAME ``` This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector. Once fully removed, the backup is no longer visible when you run: ``` velero backup get BACKUP_NAME ``` To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster: ``` kubectl delete namespace/velero clusterrolebinding/velero kubectl delete crds -l component=velero kubectl delete -f examples/nginx-app/base.yaml ``` When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can: Change the Minio Service type from `ClusterIP` to `NodePort`. Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`. You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config. For basic instructions on how to install the Velero server and client, see . The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client. You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`. Get the Minio URL: if you're running Minikube: ```shell minikube service minio --namespace=velero --url ``` in any other environment: Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client. Append the value of the NodePort to get a complete URL. You can get this value by running: ```shell kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}' ``` Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URLFROMPREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix. Kubernetes in Docker currently does not have support for NodePort services (see ). In this case, you can use a port forward to access the Minio bucket. 
In a terminal, run the following: ```shell MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}') kubectl port-forward $MINIO_POD -n velero 9000:9000 ``` Then, in another terminal: ```shell kubectl edit backupstoragelocation default -n velero ``` Add `publicUrl: http://localhost:9000` under the `spec.config` section. Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you"
}
] | {
"category": "Runtime",
"file_name": "get-started.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
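The Velero entry above has you run `kubectl edit` to add `publicUrl` under `spec.config` of the default BackupStorageLocation. If you prefer a non-interactive step, a merge patch along these lines should achieve the same result, assuming the `default` location in the `velero` namespace and the Minio port-forward from the preceding step:

```bash
# Non-interactive alternative to "kubectl edit backupstoragelocation default -n velero":
kubectl patch backupstoragelocation default -n velero --type merge \
  -p '{"spec":{"config":{"publicUrl":"http://localhost:9000"}}}'
```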
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Run the cilium agent ``` cilium-agent [flags] ``` ``` --agent-health-port int TCP port for agent health status API (default 9879) --agent-labels strings Additional labels to identify this agent --agent-liveness-update-interval duration Interval at which the agent updates liveness time for the datapath (default 1s) --agent-not-ready-taint-key string Key of the taint indicating that Cilium is not ready on the node (default \"node.cilium.io/agent-not-ready\") --allocator-list-timeout duration Timeout for listing allocator state before exiting (default 3m0s) --allow-icmp-frag-needed Allow ICMP Fragmentation Needed type packets for purposes like TCP Path MTU. (default true) --allow-localhost string Policy when to allow local stack to reach local endpoints { auto | always | policy } (default \"auto\") --annotate-k8s-node Annotate Kubernetes node --api-rate-limit stringToString API rate limiting configuration (example: --api-rate-limit endpoint-create=rate-limit:10/m,rate-burst:2) (default []) --arping-refresh-period duration Period for remote node ARP entry refresh (set 0 to disable) (default 30s) --auto-create-cilium-node-resource Automatically create CiliumNode resource for own node on startup (default true) --auto-direct-node-routes Enable automatic L2 routing between nodes --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-announce-pod-cidr Announces the node's pod CIDR via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bpf-auth-map-max int Maximum number of entries in auth map (default 524288) --bpf-ct-global-any-max int Maximum number of entries in non-TCP CT table (default 262144) --bpf-ct-global-tcp-max int Maximum number of entries in TCP CT table (default 524288) --bpf-ct-timeout-regular-any duration Timeout for entries in non-TCP CT table (default 1m0s) --bpf-ct-timeout-regular-tcp duration Timeout for established entries in TCP CT table (default 2h13m20s) --bpf-ct-timeout-regular-tcp-fin duration Teardown timeout for entries in TCP CT table (default 10s) --bpf-ct-timeout-regular-tcp-syn duration Establishment timeout for entries in TCP CT table (default 1m0s) --bpf-ct-timeout-service-any duration Timeout for service entries in non-TCP CT table (default 1m0s) --bpf-ct-timeout-service-tcp duration Timeout for established service entries in TCP CT table (default 2h13m20s) --bpf-ct-timeout-service-tcp-grace duration Timeout for graceful shutdown of service entries in TCP CT table (default 1m0s) --bpf-events-drop-enabled Expose 'drop' events for Cilium monitor and/or Hubble (default true) --bpf-events-policy-verdict-enabled Expose 'policy verdict' events for Cilium monitor and/or Hubble (default true) --bpf-events-trace-enabled Expose 'trace' events for Cilium monitor and/or Hubble (default true) --bpf-fragments-map-max int Maximum number of entries in fragments tracking map (default 8192) --bpf-lb-acceleration string BPF load balancing acceleration via XDP (\"native\", \"disabled\") (default \"disabled\") --bpf-lb-algorithm string BPF load balancing algorithm (\"random\", \"maglev\") (default \"random\") --bpf-lb-dsr-dispatch string BPF load balancing DSR dispatch method (\"opt\", \"ipip\", \"geneve\") (default \"opt\") --bpf-lb-dsr-l4-xlate string BPF load balancing DSR L4 DNAT method for IPIP (\"frontend\", \"backend\") (default \"frontend\") --bpf-lb-external-clusterip Enable external access 
to ClusterIP services (default false) --bpf-lb-maglev-hash-seed string Maglev cluster-wide hash seed (base64 encoded) (default \"JLfvgnHc2kaSUFaI\") --bpf-lb-maglev-table-size uint Maglev per service backend table size (parameter M) (default 16381) --bpf-lb-map-max int Maximum number of entries in Cilium BPF lbmap (default 65536) --bpf-lb-mode string BPF load balancing mode (\"snat\", \"dsr\", \"hybrid\") (default \"snat\") --bpf-lb-rss-ipv4-src-cidr string BPF load balancing RSS outer source IPv4 CIDR prefix for IPIP --bpf-lb-rss-ipv6-src-cidr string BPF load balancing RSS outer source IPv6 CIDR prefix for IPIP --bpf-lb-sock Enable socket-based LB for E/W traffic --bpf-lb-sock-hostns-only Skip socket LB for services when inside a pod namespace, in favor of service LB at the pod interface. Socket LB is still used when in the host namespace. Required by service mesh (e.g., Istio, Linkerd). --bpf-map-dynamic-size-ratio float Ratio (0.0-1.0] of total system memory to use for dynamic sizing of CT, NAT and policy BPF maps (default"
},
{
"data": "--bpf-nat-global-max int Maximum number of entries for the global BPF NAT table (default 524288) --bpf-neigh-global-max int Maximum number of entries for the global BPF neighbor table (default 524288) --bpf-node-map-max uint32 Sets size of node bpf map which will be the max number of unique Node IPs in the cluster (default 16384) --bpf-policy-map-max int Maximum number of entries in endpoint policy map (per endpoint) (default 16384) --bpf-root string Path to BPF filesystem --bpf-sock-rev-map-max int Maximum number of entries for the SockRevNAT BPF map (default 262144) --certificates-directory string Root directory to find certificates specified in L7 TLS policy enforcement (default \"/var/run/cilium/certs\") --cgroup-root string Path to Cgroup2 filesystem --cluster-health-port int TCP port for cluster-wide network connectivity health API (default 4240) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-ip-identities-sync-timeout duration Timeout waiting for the initial synchronization of IPs and identities from remote clusters before local endpoints regeneration (default 1m0s) --cni-chaining-mode string Enable CNI chaining with the specified plugin (default \"none\") --cni-chaining-target string CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network. --cni-exclusive Whether to remove other CNI configurations --cni-external-routing Whether the chained CNI plugin handles routing on the node --cni-log-file string Path where the CNI plugin should write logs (default \"/var/run/cilium/cilium-cni.log\") --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --conntrack-gc-interval duration Overwrite the connection-tracking garbage collection interval --conntrack-gc-max-interval duration Set the maximum interval for the connection-tracking garbage collection --container-ip-local-reserved-ports string Instructs the Cilium CNI plugin to reserve the provided comma-separated list of ports in the container network namespace. Prevents the container from using these ports as ephemeral source ports (see Linux iplocalreserved_ports). Use this flag if you observe port conflicts between transparent DNS proxy requests and host network namespace services. Value \"auto\" reserves the WireGuard and VXLAN ports used by Cilium (default \"auto\") --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --crd-wait-timeout duration Cilium will exit if CRDs are not available within this duration upon startup (default 5m0s) --datapath-mode string Datapath mode name (default \"veth\") -D, --debug Enable debugging mode --debug-verbose strings List of enabled verbose debug groups --devices strings List of devices facing cluster/external network (used for BPF NodePort, BPF masquerading and host firewall); supports '+' as wildcard in device name, e.g. 
'eth+' --direct-routing-device string Device name used to connect nodes in direct routing mode (used by BPF NodePort, BPF host routing; if empty, automatically set to a device with k8s InternalIP/ExternalIP or with a default route) --disable-endpoint-crd Disable use of CiliumEndpoint CRD --disable-envoy-version-check Do not perform Envoy version check --disable-external-ip-mitigation Disable ExternalIP mitigation (CVE-2020-8554, default false) --disable-iptables-feeder-rules strings Chains to ignore when installing feeder rules. --dns-max-ips-per-restored-rule int Maximum number of IPs to maintain for each restored DNS rule (default 1000) --dns-policy-unload-on-shutdown Unload DNS policy rules on graceful shutdown --dnsproxy-concurrency-limit int Limit concurrency of DNS message processing --dnsproxy-concurrency-processing-grace-period duration Grace time to wait when DNS proxy concurrent limit has been reached during DNS message processing --dnsproxy-enable-transparent-mode Enable DNS proxy transparent mode --egress-gateway-policy-map-max int Maximum number of entries in egress gateway policy map (default 16384) --egress-gateway-reconciliation-trigger-interval duration Time between triggers of egress gateway state reconciliations (default 1s) --egress-masquerade-interfaces strings Limit iptables-based egress masquerading to interface selector --egress-multi-home-ip-rule-compat Offset routing table IDs under ENI IPAM mode to avoid collisions with reserved table"
},
{
"data": "If false, the offset is performed (new scheme), otherwise, the old scheme stays in-place. --enable-auto-protect-node-port-range Append NodePort range to net.ipv4.iplocalreservedports if it overlaps with ephemeral port range (net.ipv4.iplocalportrange) (default true) --enable-bandwidth-manager Enable BPF bandwidth manager --enable-bbr Enable BBR for the bandwidth manager --enable-bgp-control-plane Enable the BGP control plane. --enable-bpf-clock-probe Enable BPF clock source probing for more efficient tick retrieval --enable-bpf-masquerade Masquerade packets from endpoints leaving the host with BPF instead of iptables --enable-bpf-tproxy Enable BPF-based proxy redirection, if support available --enable-cilium-api-server-access strings List of cilium API APIs which are administratively enabled. Supports ''. (default []) --enable-cilium-endpoint-slice Enable the CiliumEndpointSlice watcher in place of the CiliumEndpoint watcher (beta) --enable-cilium-health-api-server-access strings List of cilium health API APIs which are administratively enabled. Supports ''. (default []) --enable-custom-calls Enable tail call hooks for custom eBPF programs --enable-encryption-strict-mode Enable encryption strict mode --enable-endpoint-health-checking Enable connectivity health checking between virtual endpoints (default true) --enable-endpoint-routes Use per endpoint routes instead of routing via cilium_host --enable-envoy-config Enable Envoy Config CRDs --enable-external-ips Enable k8s service externalIPs feature (requires enabling enable-node-port) --enable-gateway-api Enables Envoy secret sync for Gateway API related TLS secrets --enable-health-check-loadbalancer-ip Enable access of the healthcheck nodePort on the LoadBalancerIP. Needs --enable-health-check-nodeport to be enabled --enable-health-check-nodeport Enables a healthcheck nodePort server for NodePort services with 'healthCheckNodePort' being set (default true) --enable-health-checking Enable connectivity health checking (default true) --enable-high-scale-ipcache Enable the high scale mode for ipcache --enable-host-firewall Enable host network policies --enable-host-legacy-routing Enable the legacy host forwarding model which does not bypass upper stack in host namespace --enable-host-port Enable k8s hostPort mapping feature (requires enabling enable-node-port) --enable-hubble Enable hubble server --enable-hubble-recorder-api Enable the Hubble recorder API (default true) --enable-identity-mark Enable setting identity mark for local traffic (default true) --enable-ingress-controller Enables Envoy secret sync for Ingress controller related TLS secrets --enable-ip-masq-agent Enable BPF ip-masq-agent --enable-ipip-termination Enable plain IPIP/IP6IP6 termination --enable-ipsec Enable IPsec support --enable-ipsec-encrypted-overlay Enable IPsec encrypted overlay. If enabled tunnel traffic will be encrypted before leaving the host. --enable-ipsec-key-watcher Enable watcher for IPsec key. If disabled, a restart of the agent will be necessary on key"
},
{
"data": "(default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv4-big-tcp Enable IPv4 BIG TCP option which increases device's maximum GRO/GSO limits for IPv4 --enable-ipv4-egress-gateway Enable egress gateway for IPv4 --enable-ipv4-fragment-tracking Enable IPv4 fragments tracking for L4-based lookups (default true) --enable-ipv4-masquerade Masquerade IPv4 traffic from endpoints leaving the host (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-ipv6-big-tcp Enable IPv6 BIG TCP option which increases device's maximum GRO/GSO limits for IPv6 --enable-ipv6-masquerade Masquerade IPv6 traffic from endpoints leaving the host (default true) --enable-ipv6-ndp Enable IPv6 NDP support --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-k8s-terminating-endpoint Enable auto-detect of terminating endpoint condition (default true) --enable-l2-announcements Enable L2 announcements --enable-l2-neigh-discovery Enables L2 neighbor discovery used by kube-proxy-replacement and IPsec (default true) --enable-l2-pod-announcements Enable announcing Pod IPs with Gratuitous ARP --enable-l7-proxy Enable L7 proxy for L7 policy enforcement (default true) --enable-local-node-route Enable installation of the route which points the allocation prefix of the local node (default true) --enable-local-redirect-policy Enable Local Redirect Policy --enable-masquerade-to-route-source Masquerade packets to the source IP provided from the routing layer rather than interface address --enable-monitor Enable the monitor unix domain socket server (default true) --enable-nat46x64-gateway Enable NAT46 and NAT64 gateway --enable-node-port Enable NodePort type services by Cilium --enable-node-selector-labels Enable use of node label based identity --enable-pmtu-discovery Enable path MTU discovery to send ICMP fragmentation-needed replies to the client --enable-policy string Enable policy enforcement (default \"default\") --enable-recorder Enable BPF datapath pcap recorder --enable-sctp Enable SCTP support (beta) --enable-service-topology Enable support for service topology aware hints --enable-session-affinity Enable support for service session affinity --enable-svc-source-range-check Enable check of service source ranges (currently, only for LoadBalancer) (default true) --enable-tcx Attach endpoint programs using tcx if supported by the kernel (default true) --enable-tracing Enable tracing while determining policy (debugging) --enable-unreachable-routes Add unreachable routes on pod deletion --enable-vtep Enable VXLAN Tunnel Endpoint (VTEP) Integration (beta) --enable-well-known-identities Enable well-known identities for known Kubernetes components (default true) --enable-wireguard Enable WireGuard --enable-xdp-prefilter Enable XDP prefiltering --enable-xt-socket-fallback Enable fallback for missing xt_socket module (default true) --encrypt-interface string Transparent encryption interface --encrypt-node Enables encrypting traffic from non-Cilium pods and host networking (only supported with WireGuard, beta) --encryption-strict-mode-allow-remote-node-identities Allows unencrypted traffic from pods to remote node identities within the strict mode CIDR. This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap. 
--encryption-strict-mode-cidr string In strict-mode encryption, all unencrypted traffic coming from this CIDR and going to this same CIDR will be dropped --endpoint-bpf-prog-watchdog-interval duration Interval to trigger endpoint BPF programs load check watchdog (default 30s) --endpoint-queue-size int Size of EventQueue per-endpoint (default 25) --envoy-base-id uint Envoy base ID --envoy-config-retry-interval duration Interval in which an attempt is made to reconcile failed EnvoyConfigs. If the duration is zero, the retry is deactivated. (default 15s) --envoy-config-timeout duration Timeout that determines how long to wait for Envoy to N/ACK CiliumEnvoyConfig resources (default 2m0s) --envoy-keep-cap-netbindservice Keep capability NETBINDSERVICE for Envoy process --envoy-log string Path to a separate Envoy log file, if any --envoy-secrets-namespace string EnvoySecretsNamespace is the namespace having secrets used by CEC --exclude-local-address strings Exclude CIDR from being recognized as local address --exclude-node-label-patterns strings List of k8s node label regex patterns to be excluded from CiliumNode --external-envoy-proxy whether the Envoy is deployed externally in form of a DaemonSet or not --fixed-identity-mapping map Key-value for the fixed identity mapping which allows to use reserved label for fixed identities, e.g. 128=kv-store,129=kube-dns --gateway-api-secrets-namespace string GatewayAPISecretsNamespace is the namespace having tls secrets used by CEC, originating from Gateway API --gops-port uint16 Port for gops server to listen on (default 9890) -h, --help help for cilium-agent --http-idle-timeout uint Time after which a non-gRPC HTTP stream is considered failed unless traffic in the stream has been processed (in seconds); defaults to 0 (unlimited) --http-max-grpc-timeout uint Time after which a forwarded gRPC request is considered failed unless completed (in seconds). A \"grpc-timeout\" header may override this with a shorter value; defaults to 0 (unlimited) --http-normalize-path Use Envoy HTTP path normalization options, which currently includes RFC 3986 path normalization, Envoy merge slashes option, and unescaping and redirecting for paths that contain escaped slashes. These are necessary to keep path based access control functional, and should not interfere with normal operation. Set this to false only with caution. (default true) --http-request-timeout uint Time after which a forwarded HTTP request is considered failed unless completed (in seconds); Use 0 for unlimited (default 3600) --http-retry-count uint Number of retries performed after a forwarded request attempt fails (default 3) --http-retry-timeout uint Time after which a forwarded but uncompleted request is retried (connection failures are retried immediately); defaults to 0 (never) --hubble-disable-tls Allow Hubble server to run on the given listen address without TLS. --hubble-drop-events Emit packet drop Events related to pods --hubble-drop-events-interval duration Minimum time between emitting same events (default 2m0s) --hubble-drop-events-reasons string Drop reasons to emit events for (default \"authrequired,policydenied\") --hubble-event-buffer-capacity int Capacity of Hubble events"
},
{
"data": "The provided value must be one less than an integer power of two and no larger than 65535 (ie: 1, 3, ..., 2047, 4095, ..., 65535) (default 4095) --hubble-event-queue-size int Buffer size of the channel to receive monitor events. --hubble-export-allowlist strings Specify allowlist as JSON encoded FlowFilters to Hubble exporter. --hubble-export-denylist strings Specify denylist as JSON encoded FlowFilters to Hubble exporter. --hubble-export-fieldmask strings Specify list of fields to use for field mask in Hubble exporter. --hubble-export-file-compress Compress rotated Hubble export files. --hubble-export-file-max-backups int Number of rotated Hubble export files to keep. (default 5) --hubble-export-file-max-size-mb int Size in MB at which to rotate Hubble export file. (default 10) --hubble-export-file-path stdout Filepath to write Hubble events to. By specifying stdout the flows are logged instead of written to a rotated file. --hubble-flowlogs-config-path string Filepath with configuration of hubble flowlogs --hubble-listen-address string An additional address for Hubble server to listen to, e.g. \":4244\" --hubble-metrics strings List of Hubble metrics to enable. --hubble-metrics-server string Address to serve Hubble metrics on. --hubble-monitor-events strings Cilium monitor events for Hubble to observe: [drop debug capture trace policy-verdict recorder trace-sock l7 agent]. By default, Hubble observes all monitor events. --hubble-prefer-ipv6 Prefer IPv6 addresses for announcing nodes when both address types are available. --hubble-recorder-sink-queue-size int Queue size of each Hubble recorder sink (default 1024) --hubble-recorder-storage-path string Directory in which pcap files created via the Hubble Recorder API are stored (default \"/var/run/cilium/pcaps\") --hubble-redact-enabled Hubble redact sensitive information from flows --hubble-redact-http-headers-allow strings HTTP headers to keep visible in flows --hubble-redact-http-headers-deny strings HTTP headers to redact from flows --hubble-redact-http-urlquery Hubble redact http URL query from flows --hubble-redact-http-userinfo Hubble redact http user info from flows (default true) --hubble-redact-kafka-apikey Hubble redact Kafka API key from flows --hubble-skip-unknown-cgroup-ids Skip Hubble events with unknown cgroup ids (default true) --hubble-socket-path string Set hubble's socket path to listen for connections (default \"/var/run/cilium/hubble.sock\") --hubble-tls-cert-file string Path to the public key file for the Hubble server. The file must contain PEM encoded data. --hubble-tls-client-ca-files strings Paths to one or more public key files of client CA certificates to use for TLS with mutual authentication (mTLS). The files must contain PEM encoded data. When provided, this option effectively enables mTLS. --hubble-tls-key-file string Path to the private key file for the Hubble server. The file must contain PEM encoded data. 
--identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-change-grace-period duration Time to wait before using new identity on endpoint identity change (default 5s) --identity-restore-grace-period duration Time to wait before releasing unused restored CIDR identities during agent restart (default 10m0s) --ingress-secrets-namespace string IngressSecretsNamespace is the namespace having tls secrets used by CEC, originating from Ingress controller --install-no-conntrack-iptables-rules Install Iptables rules to skip netfilter connection tracking on all pod traffic. This option is only effective when Cilium is running in direct routing and full KPR mode. Moreover, this option cannot be enabled when Cilium is running in a managed Kubernetes environment or in a chained CNI setup. --ip-masq-agent-config-path string ip-masq-agent configuration file path (default \"/etc/config/ip-masq-agent\") --ipam string Backend to use for IPAM (default \"cluster-pool\") --ipam-cilium-node-update-rate duration Maximum rate at which the CiliumNode custom resource is updated (default 15s) --ipam-default-ip-pool string Name of the default IP Pool when using multi-pool (default \"default\") --ipam-multi-pool-pre-allocation map Defines the minimum number of IPs a node should pre-allocate from each pool (default default=8) --ipsec-key-file string Path to IPsec key file --ipsec-key-rotation-duration duration Maximum duration of the IPsec key rotation. The previous key will be removed after that"
},
{
"data": "(default 5m0s) --iptables-lock-timeout duration Time to pass to each iptables invocation to wait for xtables lock acquisition (default 5s) --iptables-random-fully Set iptables flag random-fully on masquerading rules --ipv4-native-routing-cidr string Allows to explicitly specify the IPv4 CIDR for native routing. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT. Generally speaking, specifying a native routing CIDR implies that Cilium can depend on the underlying networking stack to route packets to their destination. To offer a concrete example, if Cilium is configured to use direct routing and the Kubernetes CIDR is included in the native routing CIDR, the user must configure the routes to reach pods, either manually or by setting the auto-direct-node-routes flag. --ipv4-node string IPv4 address of node (default \"auto\") --ipv4-pod-subnets strings List of IPv4 pod subnets to preconfigure for encryption --ipv4-range string Per-node IPv4 endpoint prefix, e.g. 10.16.0.0/16 (default \"auto\") --ipv4-service-loopback-address string IPv4 address for service loopback SNAT (default \"169.254.42.1\") --ipv4-service-range string Kubernetes IPv4 services CIDR if not inside cluster prefix (default \"auto\") --ipv6-cluster-alloc-cidr string IPv6 /64 CIDR used to allocate per node endpoint /96 CIDR (default \"f00d::/64\") --ipv6-mcast-device string Device that joins a Solicited-Node multicast group for IPv6 --ipv6-native-routing-cidr string Allows to explicitly specify the IPv6 CIDR for native routing. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT. Generally speaking, specifying a native routing CIDR implies that Cilium can depend on the underlying networking stack to route packets to their destination. To offer a concrete example, if Cilium is configured to use direct routing and the Kubernetes CIDR is included in the native routing CIDR, the user must configure the routes to reach pods, either manually or by setting the auto-direct-node-routes flag. --ipv6-node string IPv6 address of node (default \"auto\") --ipv6-pod-subnets strings List of IPv6 pod subnets to preconfigure for encryption --ipv6-range string Per-node IPv6 endpoint prefix, e.g. 
fd02:1:1::/96 (default \"auto\") --ipv6-service-range string Kubernetes IPv6 services CIDR if not inside cluster prefix (default \"auto\") --join-cluster Join a Cilium cluster via kvstore registration --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium is deployed in --k8s-require-ipv4-pod-cidr Require IPv4 PodCIDR to be specified in node resource --k8s-require-ipv6-pod-cidr Require IPv6 PodCIDR to be specified in node resource --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --k8s-watcher-endpoint-selector string K8s endpoint watcher will watch for these k8s endpoints (default \"metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager\") --keep-config When restoring state, keeps containers' configuration in place --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kube-proxy-replacement-healthz-bind-address string The IP address with port for kube-proxy replacement health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable. --kvstore string Key-value store type --kvstore-connectivity-timeout duration Time after which an incomplete kvstore operation is considered failed (default 2m0s) --kvstore-max-consecutive-quorum-errors uint Max acceptable kvstore consecutive quorum errors before the agent assumes permanent failure (default 2) --kvstore-opt map Key-value store options e.g."
},
{
"data": "--kvstore-periodic-sync duration Periodic KVstore synchronization interval (default 5m0s) --l2-announcements-lease-duration duration Duration of inactivity after which a new leader is selected (default 15s) --l2-announcements-renew-deadline duration Interval at which the leader renews a lease (default 5s) --l2-announcements-retry-period duration Timeout after a renew failure, before the next retry (default 2s) --l2-pod-announcements-interface string Interface used for sending gratuitous arp messages --label-prefix-file string Valid label prefixes file path --labels strings List of label prefixes used to determine identity of an endpoint --lib-dir string Directory path to store runtime build environment (default \"/var/lib/cilium\") --local-router-ipv4 string Link-local IPv4 used for Cilium's router devices --local-router-ipv6 string Link-local IPv6 used for Cilium's router devices --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-agent, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local5\",\"syslog.tag\":\"cilium-agent\"} --log-system-load Enable periodic logging of system load --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-enabled Enable authentication processing & garbage collection (beta) (default true) --mesh-auth-gc-interval duration Interval in which auth entries are attempted to be garbage collected (default 5m0s) --mesh-auth-mutual-connect-timeout duration Timeout for connecting to the remote node TCP socket (default 5s) --mesh-auth-mutual-listener-port int Port on which the Cilium Agent will perform mutual authentication handshakes between other Agents --mesh-auth-queue-size int Queue size for the auth manager (default 1024) --mesh-auth-rotated-identities-queue-size int The size of the queue for signaling rotated identities. (default 1024) --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-admin-socket string The path for the SPIRE admin agent Unix socket. --metrics strings Metrics that should be enabled or disabled from the default metric list. 
(+metricfoo to enable metricfoo, -metricbar to disable metricbar) --monitor-aggregation string Level of monitor aggregation for traces from the datapath (default \"None\") --monitor-aggregation-flags strings TCP flags that trigger monitor reports when monitor aggregation is enabled (default [syn,fin,rst]) --monitor-aggregation-interval duration Monitor report interval when monitor aggregation is enabled (default 5s) --monitor-queue-size int Size of the event queue when reading monitor events --mtu int Overwrite auto-detected MTU of underlying network --multicast-enabled Enables multicast in Cilium --node-encryption-opt-out-labels string Label selector for nodes which will opt-out of node-to-node encryption (default \"node-role.kubernetes.io/control-plane\") --node-labels strings List of label prefixes used to determine identity of a node (used only when enable-node-selector-labels is enabled) --node-port-bind-protection Reject application bind(2) requests to service ports in the NodePort range (default true) --node-port-range strings Set the min/max NodePort port range (default [30000,32767]) --nodeport-addresses strings A whitelist of CIDRs to limit which IPs are used for NodePort. If not set, primary IPv4 and/or IPv6 address of each native device is used. --policy-accounting Enable policy accounting (default true) --policy-audit-mode Enable policy audit (non-drop) mode --policy-cidr-match-mode strings The entities that can be selected by CIDR policy. Supported values: 'nodes' --policy-queue-size int Size of queues for policy-related events (default 100) --pprof Enable serving pprof debugging API --pprof-address string Address that pprof listens on (default \"localhost\") --pprof-port uint16 Port that pprof listens on (default 6060) --preallocate-bpf-maps Enable BPF map pre-allocation (default true) --prepend-iptables-chains Prepend custom iptables chains instead of appending (default true) --procfs string Path to the host's proc filesystem mount (default \"/proc\") --prometheus-serve-addr string IP:Port on which to serve prometheus metrics (pass \":Port\" to bind on all interfaces, \"\" is off) --proxy-admin-port int Port to serve Envoy admin interface on. --proxy-connect-timeout uint Time after which a TCP connect attempt is considered failed unless completed (in seconds) (default 2) --proxy-gid uint Group ID for proxy control plane"
},
{
"data": "(default 1337) --proxy-idle-timeout-seconds int Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s (default 60) --proxy-max-connection-duration-seconds int Set Envoy HTTP option maxconnectionduration seconds. Default 0 (disable) --proxy-max-requests-per-connection int Set Envoy HTTP option maxrequestsper_connection. Default 0 (disable) --proxy-portrange-max uint16 End of port range that is used to allocate ports for L7 proxies. (default 20000) --proxy-portrange-min uint16 Start of port range that is used to allocate ports for L7 proxies. (default 10000) --proxy-prometheus-port int Port to serve Envoy metrics on. Default 0 (disabled). --proxy-xff-num-trusted-hops-egress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners. --proxy-xff-num-trusted-hops-ingress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the ingress L7 policy enforcement Envoy listeners. --read-cni-conf string CNI configuration file to use as a source for --write-cni-conf-when-ready. If not supplied, a suitable one will be generated. --restore Restores state, if possible, from previous daemon (default true) --route-metric int Overwrite the metric used by cilium when adding routes to its 'cilium_host' device --routing-mode string Routing mode (\"native\" or \"tunnel\") (default \"tunnel\") --service-no-backend-response string Response to traffic for a service without backends (default \"reject\") --socket-path string Sets daemon's socket path to listen for connections (default \"/var/run/cilium/cilium.sock\") --state-dir string Directory path to store runtime state (default \"/var/run/cilium\") --tofqdns-dns-reject-response-code string DNS response code for rejecting DNS requests, available options are '[nameError refused]' (default \"refused\") --tofqdns-enable-dns-compression Allow the DNS proxy to compress responses to endpoints that are larger than 512 Bytes or the EDNS0 option, if present (default true) --tofqdns-endpoint-max-ip-per-hostname int Maximum number of IPs to maintain per FQDN name for each endpoint (default 50) --tofqdns-idle-connection-grace-period duration Time during which idle but previously active connections with expired DNS lookups are still considered alive (default 0s) --tofqdns-max-deferred-connection-deletes int Maximum number of IPs to retain for expired DNS lookups with still-active connections (default 10000) --tofqdns-min-ttl int The minimum time, in seconds, to use DNS data for toFQDNs policies --tofqdns-pre-cache string DNS cache data at this path is preloaded on agent startup --tofqdns-proxy-port int Global port on which the in-agent DNS proxy should listen. Default 0 is a OS-assigned port. --tofqdns-proxy-response-max-delay duration The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information. (default 100ms) --trace-payloadlen int Length of payload to capture when tracing (default 128) --trace-sock Enable tracing for socket-based LB (default true) --tunnel-port uint16 Tunnel port (default 8472 for \"vxlan\" and 6081 for \"geneve\") --tunnel-protocol string Encapsulation protocol to use for the overlay (\"vxlan\" or \"geneve\") (default \"vxlan\") --use-full-tls-context If enabled, persist ca.crt keys into the Envoy config even in a terminatingTLS block on an L7 Cilium Policy. 
This is to enable compatibility with previously buggy behaviour. This flag is deprecated and will be removed in a future release. --version Print version information --vlan-bpf-bypass strings List of explicitly allowed VLAN IDs, '0' id will allow all VLAN IDs --vtep-cidr strings List of VTEP CIDRs that will be routed towards VTEPs for traffic cluster egress --vtep-endpoint strings List of VTEP IP addresses --vtep-mac strings List of VTEP MAC addresses for forwarding traffic outside the cluster --vtep-mask string VTEP CIDR Mask for all VTEP CIDRs (default \"255.255.255.0\") --wireguard-persistent-keepalive duration The Wireguard keepalive interval as a Go duration string --write-cni-conf-when-ready string Write the CNI configuration to the specified path when agent is ready ``` - Generate the autocompletion script for the specified shell - Inspect the hive"
}
] | {
"category": "Runtime",
"file_name": "cilium-agent.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Antrea may run in networkPolicyOnly mode in AKS and EKS clusters. This document describes the steps to create an EKS cluster with Antrea using terraform. To run EKS cluster, install and configure AWS cli(either version 1 or 2), see <https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html>, and <https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html> Install aws-iam-authenticator, see <https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html> Install terraform, see <https://learn.hashicorp.com/tutorials/terraform/install-cli> You must already have ssh key-pair created. This key pair will be used to access worker Node via ssh. ```bash ls ~/.ssh/ idrsa idrsa.pub ``` Ensures that you have permission to create EKS cluster, and have already created EKS cluster role as well as worker Node profile. ```bash export TFVAReksclusteriamrolename=YOUREKSROLE export TFVAReksiaminstanceprofilename=YOUREKSWORKERNODEPROFILE export TFVARekskeypairname=YOURKEYPAIRTOACCESSWORKER_NODE ``` Where TFVAReksclusteriamrolename may be created by following these TFVAReksiaminstanceprofilename may be created by following these TFVARekskeypair_name is the aws key pair name you have configured by following these , using ssh-pair created in Prerequisites item 4 Create EKS cluster ```bash ./hack/terraform-eks.sh create ``` Interact with EKS cluster ```bash ./hack/terraform-eks.sh kubectl ... # issue kubectl commands to EKS cluster ./hack/terraform-eks.sh load ... # load local built images to EKS cluster ./hack/terraform-eks.sh destroy # destroy EKS cluster ``` and worker Node can be accessed with ssh via their external IPs. Apply Antrea to EKS cluster ```bash ./hack/generate-manifest.sh --encap-mode networkPolicyOnly | ~/terraform/eks kubectl apply -f - ```"
}
] | {
"category": "Runtime",
"file_name": "eks-terraform.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "The specifications below quantify Firecracker's promise to enable minimal-overhead execution of container and serverless workloads. These specifications are enforced by integration tests (that run for each PR and main branch merge). On an (with hyperthreading disabled) and an and given host system resources are available (e.g., there are enough free CPU cycles, there is enough RAM, etc.), customers can rely on the following: Stability: The Firecracker virtual machine manager starts (up to API socket availability) within `8 CPU ms`[^1] and never crashes/halts/terminates for internal reasons once started. Note: The wall-clock time has a large standard deviation, spanning `6 ms to 60 ms`, with typical durations around `12 ms`. Failure Information: When failures occur due to external circumstances, they are logged[^2] by the Firecracker process. API Stability: The API socket is always available and the API conforms to the in-tree . API failures are logged in the Firecracker log. Overhead: For a Firecracker virtual machine manager running a microVM with `1 CPUs and 128 MiB of RAM`, and a guest OS with the Firecracker-tuned kernel: Firecracker's virtual machine manager threads have a memory overhead `<= 5 MiB`. The memory overhead is dependent on the workload (e.g. a workload with multiple connections might generate a memory overhead > 5MiB) and on the VMM configuration (the overhead does not include the memory used by the data store. The overhead is tested as part of the Firecracker CI using a . It takes `<= 125 ms` to go from receiving the Firecracker InstanceStart API call to the start of the Linux guest user-space `/sbin/init` process. The boot time is measured using a VM with the serial console disabled and a minimal kernel and root file system. For more details check the integration tests. The compute-only guest CPU performance is `> 95%` of the equivalent bare-metal performance. `[integration test pending]` IO Performance: With a host CPU core dedicated to the Firecracker device emulation thread, the guest achieves up to `14.5 Gbps` network throughput by using `<= 80%` of the host CPU core for emulation. `[integration test pending]` the guest achieves up to `25 Gbps` network throughput by using `100%` of the host CPU core for emulation. `[integration test pending]` the virtualization layer adds on average `0.06ms` of latency. `[integration test pending]` the guest achieves up to `1 GiB/s` storage throughput by using `<= 70%` of the host CPU core for emulation. `[integration test pending]` Telemetry: Firecracker emits logs and metrics to the named pipes passed to the logging API. Any logs and metrics emitted while their respective pipes are full will be lost. Any such events will be signaled through the `lost-logs` and `lost-metrics` counters. getting consistent measurements for some performance metrics. process start-up and the logging system initialization in the `firecracker` process."
}
] | {
"category": "Runtime",
"file_name": "SPECIFICATION.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} |
[
{
"data": "Start the ObjectNode object gateway by executing the following command: ```bash nohup cfs-server -c objectnode.json & ``` The configuration file is as follows: ```json { \"role\": \"objectnode\", \"listen\": \"17410\", \"domains\": [ \"object.cfs.local\" ], \"logDir\": \"/cfs/Logs/objectnode\", \"logLevel\": \"info\", \"masterAddr\": [ \"10.196.59.198:17010\", \"10.196.59.199:17010\", \"10.196.59.200:17010\" ], \"exporterPort\": 9503, \"prof\": \"7013\" } ``` The meaning of each parameter in the configuration file is shown in the following table: | Parameter | Type | Meaning | Required | |--|--|--|-| | role | string | Process role, must be set to `objectnode` | Yes | | listen | string | Port number that the object storage subsystem listens to.<br>Format: `PORT` | Yes | | domains | string slice | Configure domain names for S3-compatible interfaces to support DNS-style access to resources | No | | logDir | string | Log storage path | Yes | | logLevel | string | Log level. Default: `error` | No | | masterAddr | string slice | IP and port number of the resource management master.<br>Format: `IP:PORT` | Yes | | exporterPort | string | Port for Prometheus to obtain monitoring data | No | | prof | string | Debug and administrator API interface | Yes | ObjectNode provides S3-compatible object storage interfaces to operate on files in CubeFS, so you can use open source tools such as and or the native Amazon S3 SDK to operate on files in CubeFS. The main supported interfaces are as follows: | API | Reference | ||| | `HeadBucket` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html> | | `GetBucketLocation` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html> | | API | Reference | |--|--| | `PutObject` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html> | | `GetObject` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html> | | `HeadObject` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html> | | `CopyObject` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html> | | `ListObjects` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html> | | `ListObjectsV2` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html> | | `DeleteObject` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html> | | `DeleteObjects` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html> | | API | Reference | ||| | `CreateMultipartUpload` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html> | | `UploadPart` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html> | | `CompleteMultipartUpload` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html> | | `AbortMultipartUpload` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html> | | `ListParts` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html> | | `ListMultipartUploads` | <https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html> | | Name | Language | Link | |--|--|-| | AWS SDK for Java | `Java` | <https://aws.amazon.com/sdk-for-java/> | | AWS SDK for JavaScript | `JavaScript` | <https://aws.amazon.com/sdk-for-browser/> | | AWS SDK for JavaScript in Node.js | `JavaScript` | <https://aws.amazon.com/sdk-for-node-js/> | | AWS SDK for Go | `Go` | <https://docs.aws.amazon.com/sdk-for-go/> | | AWS SDK for PHP | `PHP` | 
<https://aws.amazon.com/sdk-for-php/> | | AWS SDK for Ruby | `Ruby` | <https://aws.amazon.com/sdk-for-ruby/> | | AWS SDK for .NET | `.NET` |"
},
{
"data": "| | AWS SDK for C++ | `C++` | <https://aws.amazon.com/sdk-for-cpp/> | | Boto3 | `Python` | <http://boto.cloudhackers.com> | You can refer to the link: to create a user. If the user has been created, the user can obtain the Access Key and Secret Key through the relevant API. Below is an example of using the object storage with the AWS GO SDK. The following shows how to create a bucket. ```go const ( Endpoint = \"127.0.0.1:17410\" // IP and listening port of the ObjectNode object storage Region = \"cfs_dev\" // Cluster name of the resource management master AccessKeyId = \"Qkr2zxKm8D6ZOh\" // User Access Key SecretKeyId = \"wygX0NzgshoezVNo\" // User Secret Key BucketName = \"BucketName\" // Bucket name key = \"key\" // File name ) func CreateBucket() { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) service := s3.New(sess) bucketConfig := s3.CreateBucketConfiguration{ LocationConstraint: aws.String(Region), } req, out := service.CreateBucketRequest(&s3.CreateBucketInput{ Bucket: aws.String(BucketName), CreateBucketConfiguration: &bucketConfig, }) err := req.Send() if err != nil { fmt.Println(\"Failed to CreateBucket \", err) } else { fmt.Println(\"CreateBucket succeed \", out) } } ``` Response: ```http HTTP/1.1 200 OK Connection: close Content-Length: 0 Date: Wed, 01 Mar 2023 07:55:35 GMT Location: /BucketName Server: CubeFS X-Amz-Request-Id: cb9bafab3e8a4e56a296b47604f6a415 CreateBucket succeed { Location: \"/BucketName\" } ``` The object storage subsystem supports two upload methods: normal upload and multipart upload. The following shows how to use the normal upload interface to upload an object. ```go func PutObject() { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) svc := s3.New(sess) input := &s3.PutObjectInput{ Body: aws.ReadSeekCloser(bytes.NewBuffer([]byte(\"test\"))), Bucket: aws.String(BucketName), Key: aws.String(key), } result, err := svc.PutObject(input) if err != nil { fmt.Println(\"Failed to Put object \", err) } else { fmt.Println(\"Put object succeed \", result) } } ``` Response: ```http HTTP/1.1 200 OK Content-Length: 0 Connection: keep-alive Date: Wed, 01 Mar 2023 08:03:44 GMT Etag: \"098f6bcd4621d373cade4e832627b4f6\" Server: CubeFS X-Amz-Request-Id: 7a2f7cd926f14284abf18716c17c01f9 Put object succeed { ETag: \"\\\"098f6bcd4621d373cade4e832627b4f6\\\"\" } ``` The following shows how to use the multipart upload interface to upload a large object. 
```go func UploadWithManager() { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) file, err := ioutil.ReadFile(\"D:/Users/80303220/Desktop/largeFile\") if err != nil { fmt.Println(\"Unable to read file \", err) return } uploader := s3manager.NewUploader(sess) upParams := &s3manager.UploadInput{ Bucket: aws.String(BucketName), Key: aws.String(key), Body: aws.ReadSeekCloser(strings.NewReader(string(file))), } uploaderResult, err := uploader.Upload(upParams, func(u *s3manager.Uploader) { //Set the part size to 16MB"
},
{
"data": "= 64 1024 1024 // Do not delete parts if the upload fails u.LeavePartsOnError = true //Set the concurrency. The concurrency is not the more the better. Please consider the network condition and device load comprehensively. u.Concurrency = 5 }) if err != nil { fmt.Println(\"Failed to upload \", err) } else { fmt.Println(\"upload succeed \", uploaderResult.Location, *uploaderResult.ETag) } } ``` Response: ```http HTTP/1.1 200 OK Content-Length: 227 Connection: keep-alive Content-Type: application/xml Date: Wed, 01 Mar 2023 08:15:43 GMT Server: CubeFS X-Amz-Request-Id: b20ba3fe9ab34321a3428bf69c1e98a4 ``` The following shows how to copy an object. ```go func CopyObject() { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) svc := s3.New(sess) input := &s3.CopyObjectInput{ Bucket: aws.String(BucketName), Key: aws.String(\"test-dst\"), CopySource: aws.String(BucketName + \"/\" + \"test\"), } result, err := svc.CopyObject(input) if err != nil { fmt.Println(err) return } fmt.Println(result) } ``` Response: ```http HTTP/1.1 200 OK Content-Length: 184 Connection: keep-alive Content-Type: application/xml Date: Wed, 01 Mar 2023 08:21:25 GMT Server: CubeFS X-Amz-Request-Id: 8889dd739a1a4c2492238e48724e491e { CopyObjectResult: { ETag: \"\\\"098f6bcd4621d373cade4e832627b4f6\\\"\", LastModified: 2023-03-01 08:21:23 +0000 UTC } } ``` The following shows how to download an object. ```go func GetObject(key string) error { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) svc := s3.New(sess) input := &s3.GetObjectInput{ Bucket: aws.String(BucketName), Key: aws.String(key), //Range: aws.String(\"bytes=0-1\"), } req, srcObject := svc.GetObjectRequest(input) err := req.Send() if srcObject.Body != nil { defer srcObject.Body.Close() } if err != nil { fmt.Println(\"Failed to Get object \", err) return err } else { file, err := os.Create(\"./test-dst\") if err != nil { return err } i, err := io.Copy(file, srcObject.Body) if err == nil { fmt.Println(\"Get object succeed \", i) return nil } fmt.Println(\"Failed to Get object \", err) return err } } ``` Response: ```http HTTP/1.1 200 OK Content-Length: 4 Accept-Ranges: bytes Connection: keep-alive Content-Type: application/octet-stream Date: Wed, 01 Mar 2023 08:28:48 GMT Etag: \"098f6bcd4621d373cade4e832627b4f6\" Last-Modified: Wed, 01 Mar 2023 08:03:43 GMT Server: CubeFS X-Amz-Request-Id: 71fecfb8e9bd4d3db8d4a71cb50c4c47 ``` The following shows how to delete an object. 
```go func DeleteObject() { conf := &aws.Config{ Region: aws.String(Region), Endpoint: aws.String(Endpoint), S3ForcePathStyle: aws.Bool(true), Credentials: credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, \"\"), LogLevel: aws.LogLevel(aws.LogDebug), } sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf})) service := s3.New(sess) req, out := service.DeleteObjectRequest(&s3.DeleteObjectInput{ Bucket: aws.String(BucketName), Key: aws.String(key), }) err := req.Send() if err != nil { fmt.Println(\"Failed to Delete \", err) } else { fmt.Println(\"Delete succeed \", out) } } ``` Response: ```http HTTP/1.1 204 No Content Connection: keep-alive Date: Wed, 01 Mar 2023 08:31:11 GMT Server: CubeFS X-Amz-Request-Id: a4a5d27d3cb64466837ba6324eb8b1c2 ```"
}
] | {
"category": "Runtime",
"file_name": "objectnode.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
} |
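The interface table above lists `ListObjectsV2` among the supported object APIs, but the walkthrough stops at object deletion. The sketch below is not part of the CubeFS documentation; it is a hedged example of listing bucket contents with the same AWS Go SDK and the same `Endpoint`/`Region`/credential constants used in the examples above. The `dir/` prefix and the 100-key page size are arbitrary assumptions.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Same constants as in the examples above.
const (
	Endpoint    = "127.0.0.1:17410"
	Region      = "cfs_dev"
	AccessKeyId = "Qkr2zxKm8D6ZOh"
	SecretKeyId = "wygX0NzgshoezVNo"
	BucketName  = "BucketName"
)

func ListObjects() {
	conf := &aws.Config{
		Region:           aws.String(Region),
		Endpoint:         aws.String(Endpoint),
		S3ForcePathStyle: aws.Bool(true),
		Credentials:      credentials.NewStaticCredentials(AccessKeyId, SecretKeyId, ""),
	}
	sess := session.Must(session.NewSessionWithOptions(session.Options{Config: *conf}))
	svc := s3.New(sess)

	// List up to 100 keys under an (assumed) "dir/" prefix.
	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket:  aws.String(BucketName),
		Prefix:  aws.String("dir/"),
		MaxKeys: aws.Int64(100),
	})
	if err != nil {
		fmt.Println("Failed to list objects ", err)
		return
	}
	for _, obj := range out.Contents {
		fmt.Println(*obj.Key, *obj.Size)
	}
}
```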
[
{
"data": "Each zone manages bucket resharding decisions independently With per-bucket replication policies, some zones may only replicate a subset of objects, so require fewer shards. Avoids having to coordinate reshards across zones. Resharding a bucket does not require a full sync of its objects Existing bilogs must be preserved and processed before new bilog shards. Backward compatibility No zone can reshard until all peer zones upgrade to a supported release. Requires a manual zonegroup change to enable resharding. A layout describes a set of rados objects, along with some strategy to distribute things across them. A bucket index layout distributes object names across some number of shards via `cephstrhashlinux()`. Resharding a bucket enacts a transition from one such layout to another. Each layout could represent data differently. For example, a bucket index layout would be used with clsrgw to write/delete keys. Whereas a datalog layout may be used with clslog to append and trim log entries, then later transition to a layout based on some other primitive like clsqueue or cls_fifo. To reshard a bucket, we currently create a new bucket instance with the desired sharding layout, and switch to that instance when resharding completes. In multisite, though, the metadata master zone is authoritative for all bucket metadata, including the sharding layout and reshard status. Any changes to metadata must take place on the metadata master zone and replicate from there to other zones. If we want to allow each zone to manage its bucket sharding independently, we can't allow them each to create a new bucket instance, because data sync relies on the consistency of instance ids between zones. We also can't allow metadata sync to overwrite our local sharding information with the metadata master's copy. That means that the bucket's sharding information needs to be kept private to the local zone's bucket instance, and that information also needs to track all reshard status that's currently spread between the old and new bucket instance metadata: old shard layout, new shard layout, and current reshard progress. To make this information private, we can just prevent metadata sync from overwriting these fields. This change also affects the rados object names of the bucket index shards, currently of the form `.dir.<instance-id>.<shard-id>`. Since we need to represent multiple sharding layouts for a single instance-id, we need to add some unique identifier to the object names. This comes in the form of a generation number, incremented with each reshard, like `.dir.<instance-id>.<generation>.<shard-id>`. The first generation number 0 would be omitted from the object names for backward compatibility. The bucket replication logs for multisite are stored in the same bucket index shards as the keys that they"
},
{
"data": "However, we can't reshard these log entries like we do with normal keys, because other zones need to track their position in the logs. If we shuffle the log entries around between shards, other zones no longer have a way to associate their old shard marker positions with the new shards, and their only recourse would be to restart a full sync. So when resharding buckets, we need to preserve the old bucket index logs so that other zones can finish processing their log entries, while any new events are recorded in the new bucket index logs. An additional goal is to move replication logs out of omap (so out of the bucket index) into separate rados objects. To enable this, the bucket instance metadata should be able to describe a bucket whose index layout is different from its log layout. For existing buckets, the two layouts would be identical and share the bucket index objects. Alternate log layouts are otherwise out of scope for this design. To support peer zones that are still processing old logs, the local bucket instance metadata must track the history of all log layouts that haven't been fully trimmed yet. Once bilog trimming advances past an old generation, it can delete the associated rados objects and remove that layout from the bucket instance metadata. To prevent this history from growing too large, we can refuse to reshard bucket index logs until trimming catches up. The distinction between index layout and log layout is important, because incremental sync only cares about changes to the log layout. Changes to the index layout would only affect full sync, which uses a custom RGWListBucket extension to list the objects of each index shard separately. But by changing the scope of full sync from per-bucket-shard to per-bucket and using a normal bucket listing to get all objects, we can make full sync independent of the index layout. And once the replication logs are moved out of the bucket index, dynamic resharding is free to change the index layout as much as it wants with no effect on multisite replication. Modify existing state machine for bucket reshard to mutate its existing bucket instance instead of creating a new one. Add fields for log layout. When resharding a bucket whose logs are in the index: Add a new log layout generation to the bucket instance Copy the bucket index entries into their new index layout Commit the log generation change so new entries will be written there Create a datalog entry with the new log generation When sync fetches a bucket instance from the master zone, preserve any private fields in the local instance. Use cls_version to guarantee that we write back the most recent version of those private fields. Datalog entries currently include a bucket shard"
},
{
"data": "We need to add the log generation number to these entries so we can tell which sharding layout it refers to. If we see a new generation number, that entry also implies an obligation to finish syncing all shards of prior generations. Add a per-bucket sync status object that tracks: full sync progress, the current generation of incremental sync, and the set of shards that have completed incremental sync of that generation Existing per-bucket-shard sync status objects continue to track incremental sync. their object names should include the generation number, except for generation 0 For backward compatibility, add special handling when we get ENOENT trying to read this per-bucket sync status: If the remote's oldest log layout has generation=0, read any existing per-shard sync status objects. If any are found, resume incremental sync from there. Otherwise, initialize for full sync. Full sync uses a single bucket-wide listing to fetch all objects. Use a cls_lock to prevent different shards from duplicating this work. When incremental sync gets to the end of a log shard (i.e. listing the log returns truncated=false): If the remote has a newer log generation, flag that shard as 'done' in the bucket sync status. Once all shards in the current generation reach that 'done' state, incremental bucket sync can advance to the next generation. Use cls_version on the bucket sync status object to detect racing writes from other shards. Reframe in terms of log generations, instead of handling SYNCSTOP events with a special Stopped state: radosgw-admin bucket sync enable: create a new log generation in the bucket instance metadata detect races with reshard: fail if reshard in progress, and write with cls_version to detect race with start of reshard if the current log generation is shared with the bucket index layout (BucketLogType::InIndex), the new log generation will point at the same index layout/generation. so the log generation increments, but the index objects keep the same generation SYNCSTOP in incremental sync: flag the shard as 'done' and ignore datalog events on that bucket until we see a new generation Use generation number from sync status to trim the right logs Once all shards of a log generation are trimmed: Remove their rados objects. Remove the associated incremental sync status objects. Remove the log generation from its bucket instance metadata. RGWOpBILogList response should include the bucket's highest log generation Allows incremental sync to determine whether truncated=false means that it's caught up, or that it needs to transition to the next generation. RGWOpBILogInfo response should include the bucket's lowest and highest log generations Allows bucket sync status initialization to decide whether it needs to scan for existing shard status, and where it should resume incremental sync after full sync completes. RGWOpBILogStatus response should include per-bucket status information For log trimming of old generations"
}
] | {
"category": "Runtime",
"file_name": "multisite-reshard.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
} |
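To make the shard-naming rule above concrete, here is a small illustrative sketch (in Go, not the C++ used in the Ceph tree) of how a bucket index shard's rados object name would be composed from instance id, reshard generation, and shard id, with generation 0 omitted for backward compatibility as described. The instance id value is made up.

```go
package main

import "fmt"

// bucketIndexObject builds the rados object name for one bucket index shard
// under the layout described above: ".dir.<instance-id>.<generation>.<shard-id>",
// omitting generation 0 so pre-reshard names stay unchanged.
// Illustrative only; not code from the Ceph tree.
func bucketIndexObject(instanceID string, generation uint64, shard int) string {
	if generation == 0 {
		return fmt.Sprintf(".dir.%s.%d", instanceID, shard)
	}
	return fmt.Sprintf(".dir.%s.%d.%d", instanceID, generation, shard)
}

func main() {
	fmt.Println(bucketIndexObject("abc123.4567.1", 0, 5)) // .dir.abc123.4567.1.5
	fmt.Println(bucketIndexObject("abc123.4567.1", 2, 5)) // .dir.abc123.4567.1.2.5
}
```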
[
{
"data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|"
}
] | {
"category": "Runtime",
"file_name": "fuzzy_mode_convert_table.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
} |
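The table above documents json-iterator/go's fuzzy decoding conversions. As a hedged illustration, the sketch below exercises a few of the table's rows; it assumes the `extra.RegisterFuzzyDecoders()` helper from the json-iterator/go `extra` package is how this fuzzy mode is enabled, which is not stated in the table itself.

```go
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"
	"github.com/json-iterator/go/extra"
)

type record struct {
	Count int     `json:"count"`
	Ratio float64 `json:"ratio"`
	Live  bool    `json:"live"`
}

func main() {
	// Assumed entry point: register the fuzzy decoders that implement the
	// conversions listed in the table above.
	extra.RegisterFuzzyDecoders()

	var r record
	// Per the table: string "23.2" => 23 (string to int), number 12.1 => 12.1
	// (number to float "as normal"), string "0" => false (string to bool).
	input := `{"count": "23.2", "ratio": 12.1, "live": "0"}`
	if err := jsoniter.Unmarshal([]byte(input), &r); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Printf("%+v\n", r) // expected: {Count:23 Ratio:12.1 Live:false}
}
```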
[
{
"data": "All URIs are relative to http://localhost/api/v1 Method | HTTP request | Description | - | - | Put /vm.boot | Boot the previously created VM instance. | Put /vm.create | Create the cloud-hypervisor Virtual Machine (VM) instance. The instance is not booted, only created. | Put /vm.delete | Delete the cloud-hypervisor Virtual Machine (VM) instance. | Put /vm.pause | Pause a previously booted VM instance. | Put /vm.power-button | Trigger a power button in the VM | Put /vm.reboot | Reboot the VM instance. | Put /vm.resume | Resume a previously paused VM instance. | Put /vm.shutdown | Shut the VM instance down. | Put /vmm.shutdown | Shuts the cloud-hypervisor VMM. | Put /vm.add-device | Add a new device to the VM | Put /vm.add-disk | Add a new disk to the VM | Put /vm.add-fs | Add a new virtio-fs device to the VM | Put /vm.add-net | Add a new network device to the VM | Put /vm.add-pmem | Add a new pmem device to the VM | Put /vm.add-vdpa | Add a new vDPA device to the VM | Put /vm.add-vsock | Add a new vsock device to the VM | Put /vm.coredump | Takes a VM coredump. | Get /vm.counters | Get counters from the VM | Get /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance. | Put /vm.receive-migration | Receive a VM migration from URL | Put /vm.remove-device | Remove a device from the VM | Put /vm.resize | Resize the VM | Put /vm.resize-zone | Resize a memory zone | Put /vm.restore | Restore a VM from a snapshot. | Put /vm.send-migration | Send a VM migration to URL | Put /vm.snapshot | Returns a VM snapshot. | Get /vmm.ping | Ping the VMM to check for API server availability BootVM(ctx).Execute() Boot the previously created VM instance. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.BootVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.BootVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiBootVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined CreateVM(ctx).VmConfig(vmConfig).Execute() Create the cloud-hypervisor Virtual Machine (VM) instance. The instance is not booted, only created. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmConfig := openapiclient.NewVmConfig(openapiclient.NewPayloadConfig()) // VmConfig | The VM configuration configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.CreateVM(context.Background()).VmConfig(vmConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.CreateVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiCreateVMRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vmConfig | | The VM configuration | (empty response body) No authorization required Content-Type: application/json Accept: Not defined DeleteVM(ctx).Execute() Delete the cloud-hypervisor Virtual Machine (VM) instance. 
```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration :="
},
{
"data": "api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.DeleteVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.DeleteVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiDeleteVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined PauseVM(ctx).Execute() Pause a previously booted VM instance. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.PauseVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.PauseVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiPauseVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined PowerButtonVM(ctx).Execute() Trigger a power button in the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.PowerButtonVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.PowerButtonVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiPowerButtonVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined RebootVM(ctx).Execute() Reboot the VM instance. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.RebootVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.RebootVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiRebootVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined ResumeVM(ctx).Execute() Resume a previously paused VM instance. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.ResumeVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.ResumeVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. 
Other parameters are passed through a pointer to a apiResumeVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined ShutdownVM(ctx).Execute() Shut the VM instance down. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.ShutdownVM(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.ShutdownVM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiShutdownVMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined ShutdownVMM(ctx).Execute() Shuts the cloud-hypervisor VMM. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err :="
},
{
"data": "if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.ShutdownVMM``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiShutdownVMMRequest struct via the builder pattern (empty response body) No authorization required Content-Type: Not defined Accept: Not defined PciDeviceInfo VmAddDevicePut(ctx).DeviceConfig(deviceConfig).Execute() Add a new device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { deviceConfig := *openapiclient.NewDeviceConfig(\"Path_example\") // DeviceConfig | The path of the new device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddDevicePut(context.Background()).DeviceConfig(deviceConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddDevicePut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddDevicePut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddDevicePut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddDevicePutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - deviceConfig | | The path of the new device | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddDiskPut(ctx).DiskConfig(diskConfig).Execute() Add a new disk to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { diskConfig := *openapiclient.NewDiskConfig(\"Path_example\") // DiskConfig | The details of the new disk configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddDiskPut(context.Background()).DiskConfig(diskConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddDiskPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddDiskPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddDiskPut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddDiskPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - diskConfig | | The details of the new disk | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddFsPut(ctx).FsConfig(fsConfig).Execute() Add a new virtio-fs device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { fsConfig := *openapiclient.NewFsConfig(\"Tagexample\", \"Socketexample\", int32(123), int32(123)) // FsConfig | The details of the new virtio-fs configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddFsPut(context.Background()).FsConfig(fsConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddFsPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddFsPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddFsPut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddFsPutRequest struct via the builder pattern Name | Type | 
Description | Notes | - | - | - fsConfig | | The details of the new virtio-fs | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddNetPut(ctx).NetConfig(netConfig).Execute() Add a new network device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { netConfig := *openapiclient.NewNetConfig() // NetConfig | The details of the new network device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddNetPut(context.Background()).NetConfig(netConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddNetPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddNetPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from"
},
{
"data": "%v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddNetPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - netConfig | | The details of the new network device | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddPmemPut(ctx).PmemConfig(pmemConfig).Execute() Add a new pmem device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { pmemConfig := *openapiclient.NewPmemConfig(\"File_example\") // PmemConfig | The details of the new pmem device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddPmemPut(context.Background()).PmemConfig(pmemConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddPmemPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddPmemPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddPmemPut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddPmemPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - pmemConfig | | The details of the new pmem device | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddVdpaPut(ctx).VdpaConfig(vdpaConfig).Execute() Add a new vDPA device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vdpaConfig := *openapiclient.NewVdpaConfig(\"Path_example\", int32(123)) // VdpaConfig | The details of the new vDPA device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddVdpaPut(context.Background()).VdpaConfig(vdpaConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddVdpaPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddVdpaPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddVdpaPut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddVdpaPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vdpaConfig | | The details of the new vDPA device | No authorization required Content-Type: application/json Accept: application/json PciDeviceInfo VmAddVsockPut(ctx).VsockConfig(vsockConfig).Execute() Add a new vsock device to the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vsockConfig := *openapiclient.NewVsockConfig(int64(123), \"Socket_example\") // VsockConfig | The details of the new vsock device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmAddVsockPut(context.Background()).VsockConfig(vsockConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmAddVsockPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmAddVsockPut`: PciDeviceInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmAddVsockPut`: %v\\n\", resp) } ``` Other parameters are passed through a pointer to a apiVmAddVsockPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vsockConfig | | The details of 
the new vsock device | No authorization required Content-Type: application/json Accept: application/json VmCoredumpPut(ctx).VmCoredumpData(vmCoredumpData).Execute() Takes a VM coredump. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmCoredumpData := *openapiclient.NewVmCoredumpData() // VmCoredumpData | The coredump configuration configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmCoredumpPut(context.Background()).VmCoredumpData(vmCoredumpData).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmCoredumpPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmCoredumpPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vmCoredumpData | | The coredump configuration | (empty response body) No authorization required Content-Type: application/json Accept: Not defined map[string]map[string]int64 VmCountersGet(ctx).Execute() Get counters from the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient"
},
{
"data": ") func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmCountersGet(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmCountersGet``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmCountersGet`: map[string]map[string]int64 fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmCountersGet`: %v\\n\", resp) } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiVmCountersGetRequest struct via the builder pattern No authorization required Content-Type: Not defined Accept: application/json VmInfo VmInfoGet(ctx).Execute() Returns general information about the cloud-hypervisor Virtual Machine (VM) instance. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmInfoGet(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmInfoGet``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmInfoGet`: VmInfo fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmInfoGet`: %v\\n\", resp) } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiVmInfoGetRequest struct via the builder pattern No authorization required Content-Type: Not defined Accept: application/json VmReceiveMigrationPut(ctx).ReceiveMigrationData(receiveMigrationData).Execute() Receive a VM migration from URL ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { receiveMigrationData := *openapiclient.NewReceiveMigrationData(\"ReceiverUrl_example\") // ReceiveMigrationData | The URL for the reception of migration state configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmReceiveMigrationPut(context.Background()).ReceiveMigrationData(receiveMigrationData).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmReceiveMigrationPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmReceiveMigrationPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - receiveMigrationData | | The URL for the reception of migration state | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmRemoveDevicePut(ctx).VmRemoveDevice(vmRemoveDevice).Execute() Remove a device from the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmRemoveDevice := *openapiclient.NewVmRemoveDevice() // VmRemoveDevice | The identifier of the device configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmRemoveDevicePut(context.Background()).VmRemoveDevice(vmRemoveDevice).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmRemoveDevicePut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmRemoveDevicePutRequest struct via 
the builder pattern Name | Type | Description | Notes | - | - | - vmRemoveDevice | | The identifier of the device | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmResizePut(ctx).VmResize(vmResize).Execute() Resize the VM ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmResize := *openapiclient.NewVmResize() // VmResize | The target size for the VM configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmResizePut(context.Background()).VmResize(vmResize).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmResizePut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmResizePutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vmResize | | The target size for the VM | (empty response body) No authorization required Content-Type: application/json Accept: Not defined"
},
{
"data": "Resize a memory zone ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmResizeZone := *openapiclient.NewVmResizeZone() // VmResizeZone | The target size for the memory zone configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmResizeZonePut(context.Background()).VmResizeZone(vmResizeZone).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmResizeZonePut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmResizeZonePutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vmResizeZone | | The target size for the memory zone | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmRestorePut(ctx).RestoreConfig(restoreConfig).Execute() Restore a VM from a snapshot. ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { restoreConfig := *openapiclient.NewRestoreConfig(\"SourceUrl_example\") // RestoreConfig | The restore configuration configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmRestorePut(context.Background()).RestoreConfig(restoreConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmRestorePut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmRestorePutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - restoreConfig | | The restore configuration | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmSendMigrationPut(ctx).SendMigrationData(sendMigrationData).Execute() Send a VM migration to URL ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { sendMigrationData := *openapiclient.NewSendMigrationData(\"DestinationUrl_example\") // SendMigrationData | The URL for sending the migration state configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmSendMigrationPut(context.Background()).SendMigrationData(sendMigrationData).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmSendMigrationPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmSendMigrationPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - sendMigrationData | | The URL for sending the migration state | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmSnapshotPut(ctx).VmSnapshotConfig(vmSnapshotConfig).Execute() Returns a VM snapshot. 
```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { vmSnapshotConfig := *openapiclient.NewVmSnapshotConfig() // VmSnapshotConfig | The snapshot configuration configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmSnapshotPut(context.Background()).VmSnapshotConfig(vmSnapshotConfig).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmSnapshotPut``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } } ``` Other parameters are passed through a pointer to a apiVmSnapshotPutRequest struct via the builder pattern Name | Type | Description | Notes | - | - | - vmSnapshotConfig | | The snapshot configuration | (empty response body) No authorization required Content-Type: application/json Accept: Not defined VmmPingResponse VmmPingGet(ctx).Execute() Ping the VMM to check for API server availability ```go package main import ( \"context\" \"fmt\" \"os\" openapiclient \"./openapi\" ) func main() { configuration := openapiclient.NewConfiguration() api_client := openapiclient.NewAPIClient(configuration) resp, r, err := api_client.DefaultApi.VmmPingGet(context.Background()).Execute() if err != nil { fmt.Fprintf(os.Stderr, \"Error when calling `DefaultApi.VmmPingGet``: %v\\n\", err) fmt.Fprintf(os.Stderr, \"Full HTTP response: %v\\n\", r) } // response from `VmmPingGet`: VmmPingResponse fmt.Fprintf(os.Stdout, \"Response from `DefaultApi.VmmPingGet`: %v\\n\", resp) } ``` This endpoint does not need any parameter. Other parameters are passed through a pointer to a apiVmmPingGetRequest struct via the builder pattern No authorization required Content-Type: Not defined Accept: application/json"
}
] | {
"category": "Runtime",
"file_name": "DefaultApi.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
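The generated examples above exercise each endpoint in isolation. Below is a hedged sketch, not taken from the generated docs, that chains two of the read-only calls — `VmmPingGet` to confirm the API server is available, then `VmInfoGet` to inspect the instance — using the same `./openapi` client package and builder pattern shown above. The same client would then drive `CreateVM`/`BootVM` exactly as in their individual examples.

```go
package main

import (
	"context"
	"fmt"
	"os"

	openapiclient "./openapi"
)

func main() {
	ctx := context.Background()
	configuration := openapiclient.NewConfiguration()
	api_client := openapiclient.NewAPIClient(configuration)

	// Confirm the VMM API server is reachable before issuing other calls.
	ping, r, err := api_client.DefaultApi.VmmPingGet(ctx).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "VmmPingGet failed: %v (full HTTP response: %v)\n", err, r)
		return
	}
	fmt.Fprintf(os.Stdout, "VMM ping response: %v\n", ping)

	// Query general information about the VM instance (config and state).
	info, r, err := api_client.DefaultApi.VmInfoGet(ctx).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "VmInfoGet failed: %v (full HTTP response: %v)\n", err, r)
		return
	}
	fmt.Fprintf(os.Stdout, "VM info: %v\n", info)
}
```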
[
{
"data": "title: CephObjectZoneGroup CRD Rook allows creation of zone groups in a configuration through a CRD. The following settings are available for Ceph object store zone groups. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectZoneGroup metadata: name: zonegroup-a namespace: rook-ceph spec: realm: realm-a ``` `name`: The name of the object zone group to create `namespace`: The namespace of the Rook cluster where the object zone group is created. `realm`: The object realm in which the zone group will be created. This matches the name of the object realm CRD."
}
] | {
"category": "Runtime",
"file_name": "ceph-object-zonegroup-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
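The CRD above would normally be applied with kubectl. As a hedged alternative, the sketch below creates the same `CephObjectZoneGroup` object programmatically with client-go's dynamic client; the resource plural `cephobjectzonegroups` and the kubeconfig handling are assumptions, not taken from the Rook documentation.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Resource plural is an assumption; verify against the installed CRD.
	gvr := schema.GroupVersionResource{Group: "ceph.rook.io", Version: "v1", Resource: "cephobjectzonegroups"}

	// Same fields as the YAML example above.
	zoneGroup := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "ceph.rook.io/v1",
		"kind":       "CephObjectZoneGroup",
		"metadata":   map[string]interface{}{"name": "zonegroup-a", "namespace": "rook-ceph"},
		"spec":       map[string]interface{}{"realm": "realm-a"},
	}}

	created, err := client.Resource(gvr).Namespace("rook-ceph").Create(context.TODO(), zoneGroup, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.GetName())
}
```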
[
{
"data": "title: Data Synchronization sidebar_position: 7 description: Learn how to use the data sync tool in JuiceFS. is a powerful data migration tool, which can copy data across all supported storages including object storage, JuiceFS itself, and local file systems, you can freely copy data between any of these systems. In addition, it supports remote directories through SSH, HDFS, WebDAV, etc. while providing advanced features such as incremental synchronization, and pattern matching (like rsync), and distributed syncing. ```shell juicefs sync [command options] SRC DST ``` Synchronize data from `SRC` to `DST`, capable for both directories and files. Arguments: `SRC` is the source data address or path; `DST` is the destination address or path; ` for more details. Address format: ```shell @]BUCKET minio://[ACCESSKEY:SECRETKEY[:TOKEN]@]ENDPOINT/BUCKET[/PREFIX] ``` Explanation: `NAME` is the storage type like `s3` or `oss`. See for more details; `ACCESSKEY` and `SECRETKEY` are the credentials for accessing object storage APIs; If special characters are included, it needs to be escaped and replaced manually. For example, `/` needs to be replaced with its escape character `%2F`. `TOKEN` token used to access the object storage, as some object storage supports the use of temporary token to obtain permission for a limited time `BUCKET[.ENDPOINT]` is the address of the object storage; `PREFIX` is the common prefix of the directories to synchronize, optional. Here is an example of the object storage address of Amazon S3. ``` s3://ABCDEFG:HIJKLMN@myjfs.s3.us-west-1.amazonaws.com ``` In particular, `SRC` and `DST` ending with a trailing `/` are treated as directories, e.g. `movie/`. Those don't end with a trailing `/` are treated as prefixes, and will be used for pattern matching. For example, assuming we have `test` and `text` directories in the current directory, the following command can synchronize them into the destination `~/mnt/`. ```shell juicefs sync ./te ~/mnt/te ``` In this way, the subcommand `sync` takes `te` as a prefix to find all the matching directories, i.e. `test` and `text`. `~/mnt/te` is also a prefix, and all directories and files synchronized to this destination will be renamed by replacing the original prefix `te` with the new prefix `te`. The changes in the names of directories and files before and after synchronization cannot be seen in the above example. However, if we take another prefix, for example, `ab`, ```shell juicefs sync ./te ~/mnt/ab ``` the `test` directory synchronized to the destination directory will be renamed as `abst`, and `text` will be `abxt`. Assume that we have the following storages. Object Storage A Bucket name: aaa Endpoint: `https://aaa.s3.us-west-1.amazonaws.com` Object Storage B Bucket name: bbb Endpoint: `https://bbb.oss-cn-hangzhou.aliyuncs.com` JuiceFS File System Metadata Storage: `redis://10.10.0.8:6379/1` Object Storage: `https://ccc-125000.cos.ap-beijing.myqcloud.com` All of the storages share the same secret key: ACCESS_KEY: `ABCDEFG` SECRET_KEY: `HIJKLMN` The following command synchronizes `movies` directory on to . ```shell juicefs mount -d redis://10.10.0.8:6379/1 /mnt/jfs juicefs sync s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/movies/ /mnt/jfs/movies/ ``` The following command synchronizes `images` directory from to . 
```shell juicefs mount -d redis://10.10.0.8:6379/1 /mnt/jfs juicefs sync /mnt/jfs/images/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/images/ ``` The following command synchronizes all of the data on to . ```shell juicefs sync s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com oss://ABCDEFG:HIJKLMN@bbb.oss-cn-hangzhou.aliyuncs.com ``` To copy files between directories on a local computer, simply specify the source and destination paths."
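Building on the address format and prefix rules described above, here is a hedged sketch that copies only the objects under one sub-directory between the two object storages; the `movies/2023/` prefix is an assumption for illustration, while the buckets and keys reuse the example values above:

```shell
# Copy only keys under movies/2023/ from Object Storage A to Object Storage B.
# The trailing "/" makes both sides directories; dropping it would turn "2023" into a prefix pattern.
juicefs sync \
  s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/movies/2023/ \
  oss://ABCDEFG:HIJKLMN@bbb.oss-cn-hangzhou.aliyuncs.com/movies/2023/
```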
},
{
"data": "For example, to synchronize the `/media/` directory with the `/backup/` directory: ```shell juicefs sync /media/ /backup/ ``` If you need to synchronize between servers, you can access the target server using the SFTP/SSH protocol. For example, to synchronize the local `/media/` directory with the `/backup/` directory on another server: ```shell juicefs sync /media/ username@192.168.1.100:/backup/ juicefs sync /media/ \"username:password\"@192.168.1.100:/backup/ ``` When using the SFTP/SSH protocol, if no password is specified, the sync task will prompt for the password. If you want to explicitly specify the username and password, you need to enclose them in double quotation marks, with a colon separating the username and password. For data migrations that involve JuiceFS, it's recommended use the `jfs://` protocol, rather than mount JuiceFS and access its local directory, which bypasses the FUSE mount point and access JuiceFS directly. Under large scale scenarios, bypassing FUSE can save precious resources and increase performance. ```shell myfs=redis://10.10.0.8:6379/1 juicefs sync s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/movies/ jfs://myfs/movies/ ``` Simply put, when using `sync` to transfer big files, progress bar might move slowly or get stuck. If this happens, you can observe the progress using other methods. `sync` assumes it's mainly used to copy a large amount of files, its progress bar is designed for this scenario: progress only updates when a file has been transferred. In a large file scenario, every file is transferred slowly, hence the slow or even static progress bar. This is worse for destinations without multipart upload support (e.g. `file`, `sftp`, `jfs`, `gluster` schemes), where every file is transferred single-threaded. If progress bar is not moving, use below methods to observe and troubleshoot: If either end is a JuiceFS mount point, you can use to quickly check current IO status. If destination is a local disk, look for temporary files that end with `.tmp.xxx`, these are the temp files created by `sync`, they will be renamed upon transfer complete. Look for size changes in temp files to verify the current IO status. If both end are object storage services, use tools like `nethogs` to check network IO. The subcommand `sync` works incrementally by default, which compares the differences between the source and target paths, and then synchronizes only the differences. You can add option `--update` or `-u` to keep updated the `mtime` of the synchronized directories and files. For full synchronization, i.e. synchronizing all the time no matter whether the destination files exist or not, you can add option `--force-update` or `-f`. For example, the following command fully synchronizes `movies` directory from to . ```shell juicefs mount -d redis://10.10.0.8:6379/1 /mnt/jfs juicefs sync --force-update s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/movies/ /mnt/jfs/movies/ ``` The pattern matching function of the subcommand `sync` is similar to that of `rsync`, which allows you to exclude or include certain classes of files by rules and synchronize any set of files by combining multiple rules. Now we have the following rules available. Patterns ending with `/` only matches directories; otherwise, they match files, links or devices. Patterns containing `*`, `?` or `[` match as wildcards, otherwise, they match as regular strings; `*` matches any non-empty path components (it stops at"
},
{
"data": "`?` matches any single character except `/`; `[` matches a set of characters, for example `[a-z]` or `[[:alpha:]]`; Backslashes can be used to escape characters in wildcard patterns, while they match literally when no wildcards are present. It is always matched recursively using patterns as prefixes. Option `--exclude` can be used to exclude patterns. The following example shows a full synchronization from to , excluding hidden directories and files: :::note Remark Linux regards a directory or a file with a name starts with `.` as hidden. ::: ```shell juicefs mount -d redis://10.10.0.8:6379/1 /mnt/jfs juicefs sync --exclude '.*' /mnt/jfs/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/ ``` You can use this option several times with different parameters in the command to exclude multiple patterns. For example, using the following command can exclude all hidden files, `pic/` directory and `4.png` file in synchronization: ```shell juicefs sync --exclude '.*' --exclude 'pic/' --exclude '4.png' /mnt/jfs/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com ``` Option `--include` can be used to include patterns you don't want to exclude. For example, only `pic/` and `4.png` are synchronized and all the others are excluded after executing the following command: ```shell juicefs sync --include 'pic/' --include '4.png' --exclude '*' /mnt/jfs/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com ``` :::info NOTICE The earlier options have higher priorities than the latter ones. Thus, the `--include` options should come before `--exclude`. Otherwise, all the `--include` options such as `--include 'pic/' --include '4.png'` which appear later than `--exclude '*'` will be ignored. ::: Filtering modes determine how multiple filtering patterns decide whether to synchronize a path. The `sync` command supports two filtering modes: one-time filtering and layered filtering. By default, the `sync` command uses the layered filtering mode. You can use the `--match-full-path` parameter to switch to one-time filtering mode. In one-time filtering, the full path of an object is matched against multiple patterns in sequence. For example, given the `a1./b1/c1.txt` object and the `inclusion`/`exclusion` rule `--include a.txt, --include c1.txt, --exclude c.txt`, the string `a1/b1/c1.txt` is matched against the three patterns `--include a.txt`, `--inlude c1.txt`, and `--exclude c.txt` in sequence. The specific steps are as follows: `a1/b1/c1.txt` matches against `--include a*.txt`, which fails to match. `a1/b1/c1.txt` is matched against `--inlude c1.txt`, which succeeds according to the matching rules. The final matching result for `a1/b1/c1.txt` is \"sync.\" The subsequent rule `--exclude c*.txt` would also match based on the suffix. But according to the sequential nature of `include`/`exclude` parameters, once a pattern is matched, no further patterns are evaluated. Therefore, the final result for matching `a1/b1/c1.txt` is determined by the `--inlude c1.txt` rule, which is \"sync.\" Here are some examples of `exclude`/`include` rules in the one-time filtering mode: `--exclude .o` excludes all files matching the pattern `.o`. `--exclude /foo` excludes any file or directory in the root named `foo`. `--exclude foo/` excludes all directories ending with `foo`. `--exclude /foo/*/bar` excludes any `bar` file located two levels down within the `foo` directory in the root. `--exclude /foo//bar` excludes any file named `bar` located within any level of the `foo` directory in the root. 
(`**` matches any number of directory levels) Using `--include '*/' --include '*.c' --exclude '*'` includes only directories and `c` source files, excluding all other files and directories. Using `--include 'foo/bar.c' --exclude '*'` includes only the `foo` directory and `foo/bar.c`. One-time filtering is easy to understand and use. It is recommended in most cases."
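As a runnable sketch of one-time filtering, the command below keeps only image files and drops everything else; the `--match-full-path` flag comes from the description above, while the source and destination paths are assumptions:

```shell
# Match patterns against the full object path (one-time filtering) and sync only JPEG/PNG files.
juicefs sync --match-full-path \
  --include '*.jpg' --include '*.png' --exclude '*' \
  /mnt/jfs/photos/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/photos/
```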
},
{
"data": "Layered filtering breaks down the object's path into sequential subpaths according to its layers. For example, the layer sequence for the path `a1/b1/c1.txt` is `a1`, `a1/b1`, and `a1/b1/c1.txt`. Each element in this sequence is treated as the original path in a one-time filtering process. During one-time filtering, if a pattern matches and it is an `exclude` rule, it immediately returns \"Exclude\" as the result for the entire layered filtering of the object. If the pattern matches and it is an `include` rule, it skips the remaining rules for the current layer and proceeds to the next layer of filtering. If no rules match at a given level, the filtering proceeds to the next layer. If all layers are processed without any match, the default action \"sync\" is returned. For example, given the object `a1/b1/c1.txt` and the rules `--include a.txt`, `--include c1.txt`, `--exclude c.txt`, the layered filtering process for this example with the subpath sequence `a1`, `a1/b1`, and `a1/b1/c1.txt` will be as follows: At the first layer, the path `a1` is evaluated against the rules `--include a.txt`, `--inlude c1.txt`, and `--exclude c.txt`. None of these rules match, so it proceeds to the next layer. At the second layer, the path `a1/b1` is evaluated against the same rules. Again, no rules match, so it proceeds to the next layer. At the third layer, the path `a1/b1/c1.txt` is evaluated. The rule `--include c1.txt` matches. The behavior of this pattern is \"sync.\" In layered filtering, if the filtering result for a level is \"sync,\" it directly proceeds to the next layer of filtering. There are no more layers. All layers of filtering have been completed, so the default behavior is \"sync.\" In the example above, the match was successful at the last layer. Besides this, there are two other scenarios: If a match occurs before the last layer, and it is an `exclude` pattern, the result is \"exclude\" for the entire filtering. If it is an `include` pattern, it proceeds to the next layer of filtering. If all layers are completed without any matches, the default behavior is \"sync.\" In summary, layered filtering sequentially applies one-time filtering from the top layer to the bottom. Each layer of filtering can result in either \"exclude,\" which is final, or \"sync,\" which requires proceeding to the next layer. Here are some examples of layered filtering with `exclude`/`include` rules: `--exclude .o` excludes all files matching \".o\". `--exclude /foo` excludes files or directories with the root directory name \"foo\" during transfer. `--exclude foo/` excludes all directories named \"foo\". `--exclude /foo/*/bar` excludes \"bar\" files under the \"foo\" directory, up to two layers deep from the root directory. `--exclude /foo//bar` excludes \"bar\" files under the \"foo\" directory recursively at any layer from the root directory. (\"\" matches any number of directory levels) Using `--include / --include .c --exclude *` includes all directories and `c` source code files, excluding all other files and directories. Using `--include foo/ --include foo/bar.c --exclude ` includes only the \"foo\" directory and \"foo/bar.c\" file. (The \"foo\" directory must be explicitly included, or it is excluded by the `--exclude ` rule.) For `dirname/***`, it matches all files at all layers under the `dirname`"
},
{
"data": "Note that each subpath element is recursively traversed from top to bottom, so `include`/`exclude` matching rules apply recursively to each full path element. For example, to include `/foo/bar/baz`, both `/foo` and `/foo/bar` should not be excluded. When a file is found to be transferred, the exclusion matching pattern short-circuits the exclusion traversal at that file's directory layer. If a parent directory is excluded, deeper include pattern matching is ineffective. This is crucial when using trailing `*`. For example, the following example will not work as expected: ``` --include='/some/path/this-file-will-not-be-found' --include='/file-is-included' --exclude='*' ``` Due to the `` rule excluding the parent directory `some`, it will fail. One solution is to request the inclusion of all directory structures by using a single rule like `--include /`, which must be placed before other `--include*` rules. Another solution is to add specific inclusion rules for all parent directories that need to be accessed. For example, the following rules can work correctly: ``` --include /some/ --include /some/path/ --include /some/path/this-file-is-found --include /file-also-included --exclude * ``` Understanding and using layered filtering can be quite complex. However, it is mostly compatible with rsync's `include`/`exclude` parameters. Therefore, it is generally recommended to use it in scenarios compatible with rsync behavior. The subcommand `sync` only synchronizes file objects and directories containing file objects, and skips empty directories by default. To synchronize empty directories, you can use `--dirs` option. In addition, when synchronizing between file systems such as local, SFTP and HDFS, option `--perms` can be used to synchronize file permissions from the source to the destination. You can use `--links` option to disable symbolic link resolving when synchronizing local directories. That is, synchronizing only the symbolic links themselves rather than the directories or files they are pointing to. The new symbolic links created by the synchronization refer to the same paths as the original symbolic links without any conversions, no matter whether their references are reachable before or after the synchronization. Some details need to be noticed The `mtime` of a symbolic link will not be synchronized; `--check-new` and `--perms` will be ignored when synchronizing symbolic links. `juicefs sync` by default starts 10 threads to run syncing jobs, you can set the `--threads` option to increase or decrease the number of threads as needed. But also note that due to various factors, blindly increasing `--threads` may not always work, and you should also consider: `SRC` and `DST` storage systems may have already reached their bandwidth limits, if this is indeed the bottleneck, further increasing concurrency will not improve the situation; Performing `juicefs sync` on a single host may be limited by host resources, e.g. CPU or network throttle, if this is the case, consider using (introduced below); If the synchronized data is mainly small files, and the `list` API of `SRC` storage system has excellent performance, then the default single-threaded `list` of `juicefs sync` may become a bottleneck. You can consider enabling (introduced below). 
From the output of `juicefs sync`, pay attention to the `Pending objects` count, if this value stays zero, consumption is faster than production and you should increase `--list-threads` to enable concurrent `list`, and then use `--list-depth` to control `list`"
},
{
"data": "For example, if you're dealing with a object storage bucket used by JuiceFS, directory structure will be `/<vol-name>/chunks/xxx/xxx/...`, using `--list-depth=2` will perform concurrent listing on `/<vol-name>/chunks` which usually renders the best performance. Synchronizing between two object storages is essentially pulling data from one and pushing it to the other. The efficiency of the synchronization will depend on the bandwidth between the client and the cloud. When copying large scale data, node bandwidth can easily bottleneck the synchronization process. For this scenario, `juicefs sync` provides a multi-machine concurrent solution, as shown in the figure below. Manager node executes `sync` command as the master, and defines multiple worker nodes by setting option `--worker` (manager node itself also serve as a worker node). JuiceFS will split the workload distribute to Workers for distributed synchronization. This increases the amount of data that can be processed per unit time, and the total bandwidth is also multiplied. When using distributed syncing, you should configure SSH logins so that the manager can access all worker nodes without password, if SSH port isn't the default 22, you'll also have to include that in the manager's `~/.ssh/config`. Manager will distribute the JuiceFS Client to all worker nodes, so they should all use the same architecture to avoid running into compatibility problems. For example, synchronize data from to concurrently with multiple machines. ```shell juicefs sync --worker bob@192.168.1.20,tom@192.168.8.10 s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com oss://ABCDEFG:HIJKLMN@bbb.oss-cn-hangzhou.aliyuncs.com ``` The synchronization workload between the two object storages is shared by the current machine and the two Workers `bob@192.168.1.20` and `tom@192.168.8.10`. Geo-disaster recovery backup backs up files, and thus the files stored in JuiceFS should be synchronized to other object storages. For example, synchronize files in to : ```shell juicefs mount -d redis://10.10.0.8:6379/1 /mnt/jfs juicefs sync /mnt/jfs/ s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com/ ``` After sync, you can see all the files in . Unlike the file-oriented disaster recovery backup, the purpose of creating a copy of JuiceFS data is to establish a mirror with exactly the same content and structure as the JuiceFS data storage. When the object storage in use fails, you can switch to the data copy by modifying the configurations. Note that only the file data of the JuiceFS file system is replicated, and the metadata stored in the metadata engine still needs to be backed up. This requires manipulating the underlying object storage directly to synchronize it with the target object storage. For example, to take the as the data copy of the : ```shell juicefs sync cos://ABCDEFG:HIJKLMN@ccc-125000.cos.ap-beijing.myqcloud.com oss://ABCDEFG:HIJKLMN@bbb.oss-cn-hangzhou.aliyuncs.com ``` After sync, the file content and hierarchy in the are exactly the same as the . When transferring a large amount of small files across different regions via FUSE mount points, clients will inevitably talk to the metadata service in the opposite region via public internet (or dedicated network connection with limited bandwidth). 
In such cases, metadata latency can become the bottleneck of the data transfer: S3 Gateway comes to rescue in these circumstances: deploy a gateway in the source region, and since this gateway accesses metadata via private network, metadata latency is eliminated to a minimum, bringing the best performance for small file intensive scenarios. Read to learn its deployment and use."
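Two hedged sketches tie the scaling options above together. First, a command that raises copy and listing concurrency; the thread counts are arbitrary starting points rather than recommendations from this document:

```shell
# More copy threads, plus concurrent listing two levels deep
# (suits the /<vol-name>/chunks/... layout noted earlier).
juicefs sync --threads 50 --list-threads 16 --list-depth 2 \
  s3://ABCDEFG:HIJKLMN@aaa.s3.us-west-1.amazonaws.com \
  oss://ABCDEFG:HIJKLMN@bbb.oss-cn-hangzhou.aliyuncs.com
```

Second, because distributed syncing requires passwordless SSH from the manager to every worker, a minimal `~/.ssh/config` on the manager might look like the following; the port and key path are assumptions:

```text
Host 192.168.1.20
    User bob
    Port 2222                 # only needed when sshd is not listening on the default port 22
    IdentityFile ~/.ssh/id_ed25519
Host 192.168.8.10
    User tom
    IdentityFile ~/.ssh/id_ed25519
```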
}
] | {
"category": "Runtime",
"file_name": "sync.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "title: CephBlockPoolRados Namespace CRD This guide assumes you have created a Rook cluster as explained in the main RADOS currently uses pools both for data distribution (pools are shared into PGs, which map to OSDs) and as the granularity for security (capabilities can restrict access by pool). Overloading pools for both purposes makes it hard to do multi-tenancy because it is not a good idea to have a very large number of pools. A namespace would be a division of a pool into separate logical namespaces. For more information about BlockPool and namespace refer to the [Ceph docs](https://docs.ceph.com/en/latest/man/8/rbd/) Having multiple namespaces in a pool would allow multiple Kubernetes clusters to share one unique ceph cluster without creating a pool per kubernetes cluster and it will also allow to have tenant isolation between multiple tenants in a single Kubernetes cluster without creating multiple pools for tenants. Rook allows creation of Ceph BlockPool through the custom resource definitions (CRDs). To get you started, here is a simple example of a CR to create a CephBlockPoolRadosNamespace on the CephBlockPool \"replicapool\". ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPoolRadosNamespace metadata: name: namespace-a namespace: rook-ceph # namespace:cluster spec: blockPoolName: replicapool ``` If any setting is unspecified, a suitable default will be used automatically. `name`: The name that will be used for the Ceph BlockPool rados namespace. `blockPoolName`: The metadata name of the CephBlockPool CR where the rados namespace will be created."
}
] | {
"category": "Runtime",
"file_name": "ceph-block-pool-rados-namespace-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Thank you for contributing to Velero! Fixes #(issue) . Commits without the DCO will delay acceptance. or added `/kind changelog-not-required` as a comment on this pull request. [ ] Updated the corresponding documentation in `site/content/docs/main`."
}
] | {
"category": "Runtime",
"file_name": "pull_request_template.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "relatedlinks: https://cloudinit.readthedocs.org/ (cloud-init)= is a tool for automatically initializing and customizing an instance of a Linux distribution. By adding `cloud-init` configuration to your instance, you can instruct `cloud-init` to execute specific actions at the first start of an instance. Possible actions include, for example: Updating and installing packages Applying certain configurations Adding users Enabling services Running commands or scripts Automatically growing the file system of a VM to the size of the disk See the {ref}`cloud-init:index` for detailed information. ```{note} The `cloud-init` actions are run only once on the first start of the instance. Rebooting the instance does not re-trigger the actions. ``` To use `cloud-init`, you must base your instance on an image that has `cloud-init` installed. Images from the have `cloud-init`-enabled variants, which are usually bigger in size than the default variant. The cloud variants use the `/cloud` suffix, for example, `images:ubuntu/22.04/cloud`. For `cloud-init` to work inside of a virtual machine, you need to either have a functional `incus-agent` in the VM or need to provide the `cloud-init` data through a special extra disk. All images coming from the `images:` remote will have the agent already setup and so are good to go from the start. For instances that do not have the `incus-agent`, you can pass in the extra `cloud-init` disk with: incus config device add INSTANCE cloud-init disk source=cloud-init:config Incus supports two different sets of configuration options for configuring `cloud-init`: `cloud-init.` and `user.`. Which of these sets you must use depends on the `cloud-init` support in the image that you use. As a rule of thumb, newer images support the `cloud-init.` configuration options, while older images support `user.`. However, there might be exceptions to that rule. The following configuration options are supported: `cloud-init.vendor-data` or `user.vendor-data` (see {ref}`cloud-init:vendordata`) `cloud-init.user-data` or `user.user-data` (see {ref}`cloud-init:userdataformats`) `cloud-init.network-config` or `user.network-config` (see {ref}`cloud-init:network_config`) For more information about the configuration options, see the , and the documentation for the {ref}`LXD data source <cloud-init:datasource_lxd>` in the `cloud-init` documentation. Both `vendor-data` and `user-data` are used to provide {ref}`cloud configuration data <explanation/format:cloud config data>` to `cloud-init`. The main idea is that `vendor-data` is used for the general default configuration, while `user-data` is used for instance-specific configuration. This means that you should specify `vendor-data` in a profile and `user-data` in the instance configuration. Incus does not enforce this method, but allows using both `vendor-data` and `user-data` in profiles and in the instance configuration. If both `vendor-data` and `user-data` are supplied for an instance, `cloud-init` merges the two configurations. However, if you use the same keys in both configurations, merging might not be possible. In this case, configure how `cloud-init` should merge the provided data. See {ref}`cloud-init:merginguserdata` for instructions. To configure `cloud-init` for an instance, add the corresponding configuration options to a {ref}`profile <profiles>` that the instance uses or directly to the {ref}`instance configuration <instances-configure>`. 
When configuring `cloud-init` directly for an instance, keep in mind that `cloud-init` runs only on the first start of the instance. That means that you must configure `cloud-init` before you start the instance. To do so, create the instance with instead of , and then start it after completing the configuration. The `cloud-init` options require YAML's"
},
{
"data": "You use a pipe symbol (`|`) to indicate that all indented text after the pipe should be passed to `cloud-init` as a single string, with new lines and indentation preserved. The `vendor-data` and `user-data` options usually start with `#cloud-config`. For example: ```yaml config: cloud-init.user-data: | package_upgrade: true packages: package1 package2 ``` ```{tip} See {ref}`How to validate user data <cloud-init:checkuserdatacloudconfig>` for information on how to check whether the syntax is correct. ``` `cloud-init` runs automatically on the first start of an instance. Depending on the configured actions, it might take a while until it finishes. To check the `cloud-init` status, log on to the instance and enter the following command: cloud-init status If the result is `status: running`, `cloud-init` is still working. If the result is `status: done`, it has finished. Alternatively, use the `--wait` flag to be notified only when `cloud-init` is finished: ```{terminal} :input: cloud-init status --wait :user: root :host: instance ..................................... status: done ``` The `user-data` and `vendor-data` configuration can be used to, for example, upgrade or install packages, add users, or run commands. The provided values must have a first line that indicates what type of {ref}`user data format <cloud-init:userdataformats>` is being passed to `cloud-init`. For activities like upgrading packages or setting up a user, `#cloud-config` is the data format to use. The configuration data is stored in the following files in the instance's root file system: `/var/lib/cloud/instance/cloud-config.txt` `/var/lib/cloud/instance/user-data.txt` See the following sections for the user data (or vendor data) configuration for different example use cases. You can find more advanced {ref}`examples <cloud-init:yaml_examples>` in the `cloud-init` documentation. To trigger a package upgrade from the repositories for the instance right after the instance is created, use the `package_upgrade` key: ```yaml config: cloud-init.user-data: | package_upgrade: true ``` To install specific packages when the instance is set up, use the `packages` key and specify the package names as a list: ```yaml config: cloud-init.user-data: | packages: git openssh-server ``` To set the time zone for the instance on instance creation, use the `timezone` key: ```yaml config: cloud-init.user-data: | timezone: Europe/Rome ``` To run a command (such as writing a marker file), use the `runcmd` key and specify the commands as a list: ```yaml config: cloud-init.user-data: | runcmd: [touch, /run/cloud.init.ran] ``` To add a user account, use the `user` key. See the {ref}`cloud-init:reference/examples:including users and groups` example in the `cloud-init` documentation for details about default users and which keys are supported. ```yaml config: cloud-init.user-data: | user: name: documentation_example ``` By default, `cloud-init` configures a DHCP client on an instance's `eth0` interface. You can define your own network configuration using the `network-config` option to override the default configuration (this is due to how the template is structured). `cloud-init` then renders the relevant network configuration on the system using either `ifupdown` or `netplan`, depending on the Ubuntu release. 
The configuration data is stored in the following files in the instance's root file system: `/var/lib/cloud/seed/nocloud-net/network-config` `/etc/network/interfaces.d/50-cloud-init.cfg` (if using `ifupdown`) `/etc/netplan/50-cloud-init.yaml` (if using `netplan`) To configure a specific network interface with a static IPv4 address and also use a custom name server, use the following configuration: ```yaml config: cloud-init.network-config: | version: 2 ethernets: eth1: addresses: 10.10.101.20/24 gateway4: 10.10.101.1 nameservers: addresses: 10.10.10.254 ```"
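As noted earlier, `cloud-init` only runs on the first boot, so the configuration has to be in place before the instance is started. The following is a hedged sketch of that create-configure-start workflow; the image alias, instance name, and file name are assumptions rather than commands quoted from this document:

```shell
# Create (but do not start) an instance from a cloud-init enabled image variant.
incus create images:ubuntu/22.04/cloud c1

# Attach user data before the first boot (user-data.yaml starts with "#cloud-config").
incus config set c1 cloud-init.user-data "$(cat user-data.yaml)"

# Start the instance; cloud-init applies the configuration on this first boot only.
incus start c1
```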
}
] | {
"category": "Runtime",
"file_name": "cloud-init.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
} |
[
{
"data": "<BR> OpenEBS is an \"umbrella project\". Every project, repository and file in the OpenEBS organization adopts and follows the policies found in the Community repo umbrella project files. <BR> This project follows the"
}
] | {
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Previous change logs can be found at Memcache support @ilixiaocui @Cyber-SiKu @fansehep @ilixiaocui @Cyber-SiKu curvefs new tools @shentupenghui @Sindweller @zyb521 @tsonglew @aspirer @Wine93 @tangwz @Tangruilin @fansehep Merge block storage and file storage compilation scripts [Merge block storage and file storage compilation scripts ](https://github.com/opencurve/curve/pull/2089) @linshiyx @linshiyx @SeanHai @tsonglew @shentupenghui aws s3 sdk revert @h0hmj @wuhongsong"
}
] | {
"category": "Runtime",
"file_name": "CHANGELOG-2.5.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "A NFS server located within the share-manager pod is a key component of a RWX volume. The share-manager controller will recreate the share-manager pod and attach the volume to another node while the node where the share-manager pod is running is down. However, the failover cannot work correctly because the NFS server lacks an appropriate recovery backend and the connection information cannot be persistent. As a result, the client's workload will be interrupted during the failover. To make the NFS server have failover capability, we want to implement a dedicated recovery backend and associated modifications for Longhorn. Implement a dedicated recovery backend for Longhorn and make the NFS server highly available. Active/active or Active/passive NFS server pattern To support NFS server's failover capability, we need to change both the client and server configurations. A dedicated recovery backend for Kubernetes and Longhorn is also necessary. In the implementation, we will not implement the active/active or active/passive server pattern. Longhorn currently supports local filesystems such as ext4 and xfs. Thus, any change in the node, which is providing service, cannot update to the standby node. The limitation will hinder the active/active design. Currently, the creation of an engine process needs at least one replica and then exports the iSCSI frontend. That is, the standby engine process of the active/passive configuration is not allowable in current Longhorn architecture. While the node where the share-manager pod is running is down, the share-manager controller will recreate the share-manager pod and attach the volume to another node. However, the failover cannot work correctly because the connection information are lost after restarting the NFS server. As a result, the locks cannot be reclaimed correctly, and the interruptions of the clients filesystem operations happen. The changes and improvements should not impact the usage of the RWX volumes. NFS filesystem operation will not be interrupted after the failover of the share-manager pod. After the crash of the share-manager pod, the application on the client sides IO operations will be stuck until the share-manager and NFS are recreated. To make the improvement work, users have to make sure the hostname of each node in the Longhorn system is unique by checking each node's hostname using `hostname` command. To shorten the failover time, users can Multiple coredns pods in the Kubernetes cluster to ensure the recovery backend be always accessible. Reduce the NFS server's `GracePeriod` and `LeaseLifetime`. By default, `GracePeriod` and `LeaseLifetime` are 90 and 60 seconds, respectively. However, the value can be reduced to a smaller value for early termination of the grace period at the expense of"
},
{
"data": "Please refer to . Reduce `node-monitor-period` and `node-monitor-grace-period` values in the Kubelet. The unresponsive node will be marked as `NotReady` and speed up the NFS server's failover process. longhorn-manager The speed of a share-manager pod and volume's failover is affected by the cluster's settings and resources, so it is unpredictable how long it takes to failover. Thus, the NFS client mount options `soft, timeo=30, retrans=3` are replaced with `hard`. share-manager To allow the NFSv4 clients to reclaim locks after the failover of the NFS server, the grace period is enabled by setting Lease_Lifetime = 60 Grace_Period = 90 Additionally, set `NFSCoreParam.Clustered` to `false`. The NFS server will use the hostname rather than such as `node0` in the share-manager pod, which is same as the name of the share-manager pod, to create a corresponding storage in the recovery backend. The unique hostname avoids the naming conflict in the recovery backend. nfs-ganesha (user-space NFS server) ``` service longhorn-nfs-recovery-backend HTTP API endpoint 1 endpoint N share-manager pod recovery-backend recovery-backend pod pod ... nfs server configMap ``` Introduce a recovery-backend service backed by multiple recovery-backend pods. The recover-backend is shared by multiple RWX volumes to reduce the costs of the resources. Implement a set of dedicating recovery-backend operations for Longhorn in nfs-ganesha recovery_init Create a configmap, `recovery-backend-${share-manager-pod-name}`, storing the client information end_grace Clean up the configmap recoveryreadclids Create the client reclaim list from the configmap add_clid Add the client key (clients hostname) into the configmap rm_clid Remove the client key (clients hostname) from the configmap addrevokefh Revoke the delegation Then, the data from the above operations are persisted by sending to the recovery-backend service. The data will be saved in the configmap, `recovery-backend-${share-manager-pod-name}`. Dedicating Configmap Format ``` name: `recovery-backend-${share-manager-pod-name}` labels: longhorn.io/component: nfs-recovery-backend ... annotations: version: 8-bytes random id, e.g. 6SVVI1LE data: 6SVVI1LE: {.json encoded content (containing the client identity information} ``` One example ``` apiVersion: v1 data: 6SVVI1LE: '{\"31:Linux NFSv4.1 rancher50-worker1\":[],\"31:Linux NFSv4.1 rancher50-worker2\":[],\"31:Linux NFSv4.1 rancher50-worker3\":[]}' kind: ConfigMap metadata: annotations: version: 6SVVI1LE creationTimestamp: \"2022-12-01T01:27:14Z\" labels: longhorn.io/component: share-manager-configmap longhorn.io/managed-by: longhorn-manager longhorn.io/share-manager: pvc-de201ca5-ec0b-42ea-9501-253a7935fc3e name: recovery-backend-share-manager-pvc-de201ca5-ec0b-42ea-9501-253a7935fc3e namespace: longhorn-system resourceVersion: \"47544\" uid: 60e29c30-38b8-4986-947b-68384fcbb9ef ``` In the event that the original share manager pod is unavailable, a new share manager pod cannot be created In the client side, IO to the RWX volume will hang until a share-manager pod replacement is successfully created on another node. Failed to reclaim locks in 90-seconds grace period If locks cannot be reclaimed after a grace period, the locks are discarded and return IO errors to the client. The client reestablishes a new lock. The application should handle the IO"
},
{
"data": "Nevertheless, not all applications can handle IO errors due to their implementation. Thus, it may result in the failure of the IO operation and the loss of data. Data consistency may be an issue. If the DNS service goes down, share-manager pod will not be able to communicate with longhorn-nfs-recovery-backend The NFS-ganesha server in the share-manager pod communicates with longhorn-nfs-recovery-backend via the service `longhorn-recovery-backend` IP. Thus, the high availability of the DNS services is recommended for avoiding the communication failure. Setup 3 worker nodes for the Longhorn cluster Attach 1 RWO volume to node-1 Attach 2 RWO volumes to node-2 Attach 3 RWO volumes to node-3 Tests Create 1 RWX volume and then run an app pod with the RWX volume on each worker node.Execute the command in each app pod `( exec 7<>/data/testfile-${i}; flock -x 7; while date | dd conv=fsync >&7 ; do sleep 1; done )` where ${i} is the node number. Turn off the node where share-manager is running. Once the share-manager pod is recreated on a different node, check Expect In the client side, IO to the RWX volume will hang until a share-manager pod replacement is successfully created on another node. During the grace period, the server rejects READ and WRITE operations and non-reclaim locking requests (i.e., other LOCK and OPEN operations) with an error of NFS4ERR_GRACE. The clients can continue working without IO error. Lock reclaim process can be finished earlier than the 90-seconds grace period. During the grace period, the server reject READ and WRITE operations and non-reclaim If locks cannot be reclaimed after a grace period, the locks are discarded and return IO errors to the client. The client reestablishes a new lock. Turn the deployment into a daemonset in) and disable`Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly`. Then, deploy the daemonset with a RWX volume. Turn off the node where share-manager is running. Once the share-manager pod is recreated on a different node, check Expect The other active clients should not run into the stale handle errors after the failover. Lock reclaim process can be finished earlier than the 90-seconds grace period. Multiple locks one single file tested by byte-range file locking Each client () in each app pod locks a different range of the same file. Afterwards, it writes data repeatedly into the file. Turn off the node where share-manager is running. Once the share-manager pod is recreated on a different node, check The clients continue the tasks after the server's failover without IO or stale handle errors. Lock reclaim process can be finished earlier than the 90-seconds grace period."
}
] | {
"category": "Runtime",
"file_name": "20220727-dedicated-recovery-backend-for-rwx-volume-nfs-server.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "title: How Weave Finds Containers menu_order: 10 search_type: Documentation The weaveDNS service running on every host acts as the nameserver for containers on that host. It learns about hostnames for local containers from the proxy and from the `weave attach` command. If a hostname is in the `.weave.local` domain, then weaveDNS records the association of that name with the container's Weave Net IP address(es) in its in-memory database, and then broadcasts the association to other Weave Net peers in the cluster. When weaveDNS is queried for a name in the `.weave.local` domain, it looks up the hostname in its memory database and responds with the IPs of all containers for that hostname across the entire cluster. When weaveDNS is queried for a name in a domain other than `.weave.local`, it queries the host's configured nameserver, which is the standard behaviour for Docker containers. So that containers can connect to a stable and always routable IP address, weaveDNS listens on port 53 to the Docker bridge device, which is assumed to be `docker0`. Some configurations may use a different Docker bridge device. To supply a different bridge device, use the environment variable `DOCKER_BRIDGE`, e.g., ``` $ sudo DOCKER_BRIDGE=someother weave launch ``` In the event that weaveDNS is launched in this way, it's important that other calls to `weave` also specify the bridge device: ``` $ sudo DOCKER_BRIDGE=someother weave attach ... ``` See Also *"
}
] | {
"category": "Runtime",
"file_name": "how-works-weavedns.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "[TOC] gVisor accesses the filesystem through a file proxy, called the Gofer. The gofer runs as a separate process, that is isolated from the sandbox. Gofer instances communicate with their respective sentry using the LISAFS protocol. Configuring the filesystem provides performance benefits, but isn't the only step to optimizing gVisor performance. See the [Production guide] for more. To isolate the host filesystem from the sandbox, you can set a writable tmpfs overlay on top of the entire filesystem. All modifications are made to the overlay, keeping the host filesystem unmodified. Note: All created and modified files are stored in memory inside the sandbox. To use the tmpfs overlay, add the following `runtimeArgs` to your Docker configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--overlay2=all:memory\" ] } } } ``` Any modifications to the root filesystem is destroyed with the container. So it almost always makes sense to apply an overlay on top of the root filesystem. This can drastically boost performance, as runsc will handle root filesystem changes completely in memory instead of making costly round trips to the gofer and make syscalls to modify the host. However, holding so much file data in memory for the root filesystem can bloat up container memory usage. To circumvent this, you can have root mount's upper layer (tmpfs) be backed by a host file, so all file data is stored on disk. The newer `--overlay2` flag allows you to achieve these. You can specify `--overlay2=root:self` in `runtimeArgs`. The overlay backing host file will be created in the container's root filesystem. This file will be hidden from the containerized application. Placing the host file in the container's root filesystem is important because k8s scans the container's root filesystem from the host to enforce local ephemeral storage limits. You can also place the overlay host file in another directory using `--overlay2=root:/path/dir`. The root filesystem is where the image is extracted and is not generally modified from outside the sandbox. This allows for some optimizations, like skipping checks to determine if a directory has changed since the last time it was cached, thus missing updates that may have happened. If you need to `docker cp` files inside the root filesystem, you may want to enable shared mode. Just be aware that file system access will be slower due to the extra checks that are required. Note: External mounts are always shared. To use set the root filesystem shared, add the following `runtimeArgs` to your Docker configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--file-access=shared\" ] } } } ```"
}
] | {
"category": "Runtime",
"file_name": "filesystem.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
} |
[
{
"data": "Multi-cluster Gateway works with Antrea `networkPolicyOnly` mode, in which cross-cluster traffic is routed by Multi-cluster Gateways of member clusters, and the traffic goes through Antrea overlay tunnels between Gateways and local cluster Pods. Pod traffic within a cluster is still handled by the primary CNI, not Antrea. This section describes steps to deploy Antrea in `networkPolicyOnly` mode with the Multi-cluster feature enabled on an EKS cluster. You can follow to create an EKS cluster, and follow the to deploy Antrea to an EKS cluster. Please note there are a few changes required by Antrea Multi-cluster. You should set the following configuration parameters in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster` feature and Antrea Multi-cluster Gateway: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true namespace: \"\" # Change to the Namespace where antrea-mc-controller is deployed. ``` Repeat the same steps to deploy Antrea for all member clusters in a ClusterSet. Besides the Antrea deployment, you also need to deploy Antrea Multi-cluster Controller in each member cluster. Make sure the Service CIDRs (ClusterIP ranges) must not overlap among the member clusters. Please refer to or to learn more information about how to configure a ClusterSet. When EKS clusters of a ClusterSet are in different VPCs, you may need to enable connectivity between VPCs to support Multi-cluster traffic. You can check the following steps to set up VPC connectivity for a ClusterSet. In the following descriptions, we take a ClusterSet with two member clusters in two VPCs as an example to describe the VPC configuration. | Cluster ID | PodCIDR | Gateway IP | | | - | | | west-cluster | 110.13.0.0/16 | 110.13.26.12 | | east-cluster | 110.14.0.0/16 | 110.14.18.50 | When the Gateway Nodes do not have public IPs, you may create a VPC peering connection between the two VPCs for the Gateways to reach each other. You can follow the to configure VPC peering. You also need to add a route to the route tables of the Gateway Nodes' subnets, to enable routing across the peering connection. For `west-cluster`, the route should have `east-cluster`'s Pod CIDR: `110.14.0.0/16` to be the destination, and the peering connection to be the target; for `east-cluster`, the route should have `west-cluster`'s Pod CIDR: `110.13.0.0/16` to be the destination. To learn more about VPC peering routes, please refer to the . AWS security groups may need to be configured to allow tunnel traffic to Multi-cluster Gateways, especially when the member clusters are in different VPCs. EKS should have already created a security group for each cluster, which should have a description like \"EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads.\". You can add a new rule to the security group for Gateway traffic. For `west-cluster`, add an inbound rule with source to be `east-cluster`'s Gateway IP `110.14.18.50/32`; for `east-cluster`, the source should be `west-cluster`'s Gateway IP `110.13.26.12/32`. By default, Multi-cluster Gateway IP should be the `InternalIP` of the Gateway Node, but you may configure Antrea Multi-cluster to use the Node `ExternalIP`. Please use the right Node IP address as the Gateway IP in the security group rule."
}
] | {
"category": "Runtime",
"file_name": "policy-only-mode.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} |
[
{
"data": "Name | Type | Description | Notes | - | - | - Cpus | Pointer to | | [optional] Memory | Pointer to | | [optional] Payload | | | Disks | Pointer to | | [optional] Net | Pointer to | | [optional] Rng | Pointer to | | [optional] Balloon | Pointer to | | [optional] Fs | Pointer to | | [optional] Pmem | Pointer to | | [optional] Serial | Pointer to | | [optional] Console | Pointer to | | [optional] Devices | Pointer to | | [optional] Vdpa | Pointer to | | [optional] Vsock | Pointer to | | [optional] SgxEpc | Pointer to | | [optional] Numa | Pointer to | | [optional] Iommu | Pointer to bool | | [optional] [default to false] Watchdog | Pointer to bool | | [optional] [default to false] Platform | Pointer to | | [optional] Tpm | Pointer to | | [optional] `func NewVmConfig(payload PayloadConfig, ) *VmConfig` NewVmConfig instantiates a new VmConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmConfigWithDefaults() *VmConfig` NewVmConfigWithDefaults instantiates a new VmConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmConfig) GetCpus() CpusConfig` GetCpus returns the Cpus field if non-nil, zero value otherwise. `func (o VmConfig) GetCpusOk() (CpusConfig, bool)` GetCpusOk returns a tuple with the Cpus field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetCpus(v CpusConfig)` SetCpus sets Cpus field to given value. `func (o *VmConfig) HasCpus() bool` HasCpus returns a boolean if a field has been set. `func (o *VmConfig) GetMemory() MemoryConfig` GetMemory returns the Memory field if non-nil, zero value otherwise. `func (o VmConfig) GetMemoryOk() (MemoryConfig, bool)` GetMemoryOk returns a tuple with the Memory field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetMemory(v MemoryConfig)` SetMemory sets Memory field to given value. `func (o *VmConfig) HasMemory() bool` HasMemory returns a boolean if a field has been set. `func (o *VmConfig) GetPayload() PayloadConfig` GetPayload returns the Payload field if non-nil, zero value otherwise. `func (o VmConfig) GetPayloadOk() (PayloadConfig, bool)` GetPayloadOk returns a tuple with the Payload field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetPayload(v PayloadConfig)` SetPayload sets Payload field to given value. `func (o *VmConfig) GetDisks() []DiskConfig` GetDisks returns the Disks field if non-nil, zero value otherwise. `func (o VmConfig) GetDisksOk() ([]DiskConfig, bool)` GetDisksOk returns a tuple with the Disks field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetDisks(v []DiskConfig)` SetDisks sets Disks field to given value. `func (o *VmConfig) HasDisks() bool` HasDisks returns a boolean if a field has been set. `func (o *VmConfig) GetNet() []NetConfig` GetNet returns the Net field if non-nil, zero value otherwise. `func (o VmConfig) GetNetOk() ([]NetConfig, bool)` GetNetOk returns a tuple with the Net field if it's non-nil, zero value otherwise and a boolean to check if the value has been"
},
{
"data": "`func (o *VmConfig) SetNet(v []NetConfig)` SetNet sets Net field to given value. `func (o *VmConfig) HasNet() bool` HasNet returns a boolean if a field has been set. `func (o *VmConfig) GetRng() RngConfig` GetRng returns the Rng field if non-nil, zero value otherwise. `func (o VmConfig) GetRngOk() (RngConfig, bool)` GetRngOk returns a tuple with the Rng field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetRng(v RngConfig)` SetRng sets Rng field to given value. `func (o *VmConfig) HasRng() bool` HasRng returns a boolean if a field has been set. `func (o *VmConfig) GetBalloon() BalloonConfig` GetBalloon returns the Balloon field if non-nil, zero value otherwise. `func (o VmConfig) GetBalloonOk() (BalloonConfig, bool)` GetBalloonOk returns a tuple with the Balloon field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetBalloon(v BalloonConfig)` SetBalloon sets Balloon field to given value. `func (o *VmConfig) HasBalloon() bool` HasBalloon returns a boolean if a field has been set. `func (o *VmConfig) GetFs() []FsConfig` GetFs returns the Fs field if non-nil, zero value otherwise. `func (o VmConfig) GetFsOk() ([]FsConfig, bool)` GetFsOk returns a tuple with the Fs field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetFs(v []FsConfig)` SetFs sets Fs field to given value. `func (o *VmConfig) HasFs() bool` HasFs returns a boolean if a field has been set. `func (o *VmConfig) GetPmem() []PmemConfig` GetPmem returns the Pmem field if non-nil, zero value otherwise. `func (o VmConfig) GetPmemOk() ([]PmemConfig, bool)` GetPmemOk returns a tuple with the Pmem field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetPmem(v []PmemConfig)` SetPmem sets Pmem field to given value. `func (o *VmConfig) HasPmem() bool` HasPmem returns a boolean if a field has been set. `func (o *VmConfig) GetSerial() ConsoleConfig` GetSerial returns the Serial field if non-nil, zero value otherwise. `func (o VmConfig) GetSerialOk() (ConsoleConfig, bool)` GetSerialOk returns a tuple with the Serial field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetSerial(v ConsoleConfig)` SetSerial sets Serial field to given value. `func (o *VmConfig) HasSerial() bool` HasSerial returns a boolean if a field has been set. `func (o *VmConfig) GetConsole() ConsoleConfig` GetConsole returns the Console field if non-nil, zero value otherwise. `func (o VmConfig) GetConsoleOk() (ConsoleConfig, bool)` GetConsoleOk returns a tuple with the Console field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetConsole(v ConsoleConfig)` SetConsole sets Console field to given value. `func (o *VmConfig) HasConsole() bool` HasConsole returns a boolean if a field has been set. `func (o *VmConfig) GetDevices() []DeviceConfig` GetDevices returns the Devices field if non-nil, zero value otherwise. `func (o VmConfig) GetDevicesOk() ([]DeviceConfig, bool)` GetDevicesOk returns a tuple with the Devices field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetDevices(v []DeviceConfig)` SetDevices sets Devices field to given value. `func (o *VmConfig) HasDevices() bool` HasDevices returns a boolean if a field has been set. 
`func (o *VmConfig) GetVdpa() []VdpaConfig` GetVdpa returns the Vdpa field if non-nil, zero value"
},
{
"data": "`func (o VmConfig) GetVdpaOk() ([]VdpaConfig, bool)` GetVdpaOk returns a tuple with the Vdpa field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetVdpa(v []VdpaConfig)` SetVdpa sets Vdpa field to given value. `func (o *VmConfig) HasVdpa() bool` HasVdpa returns a boolean if a field has been set. `func (o *VmConfig) GetVsock() VsockConfig` GetVsock returns the Vsock field if non-nil, zero value otherwise. `func (o VmConfig) GetVsockOk() (VsockConfig, bool)` GetVsockOk returns a tuple with the Vsock field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetVsock(v VsockConfig)` SetVsock sets Vsock field to given value. `func (o *VmConfig) HasVsock() bool` HasVsock returns a boolean if a field has been set. `func (o *VmConfig) GetSgxEpc() []SgxEpcConfig` GetSgxEpc returns the SgxEpc field if non-nil, zero value otherwise. `func (o VmConfig) GetSgxEpcOk() ([]SgxEpcConfig, bool)` GetSgxEpcOk returns a tuple with the SgxEpc field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetSgxEpc(v []SgxEpcConfig)` SetSgxEpc sets SgxEpc field to given value. `func (o *VmConfig) HasSgxEpc() bool` HasSgxEpc returns a boolean if a field has been set. `func (o *VmConfig) GetNuma() []NumaConfig` GetNuma returns the Numa field if non-nil, zero value otherwise. `func (o VmConfig) GetNumaOk() ([]NumaConfig, bool)` GetNumaOk returns a tuple with the Numa field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetNuma(v []NumaConfig)` SetNuma sets Numa field to given value. `func (o *VmConfig) HasNuma() bool` HasNuma returns a boolean if a field has been set. `func (o *VmConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o VmConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *VmConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *VmConfig) GetWatchdog() bool` GetWatchdog returns the Watchdog field if non-nil, zero value otherwise. `func (o VmConfig) GetWatchdogOk() (bool, bool)` GetWatchdogOk returns a tuple with the Watchdog field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetWatchdog(v bool)` SetWatchdog sets Watchdog field to given value. `func (o *VmConfig) HasWatchdog() bool` HasWatchdog returns a boolean if a field has been set. `func (o *VmConfig) GetPlatform() PlatformConfig` GetPlatform returns the Platform field if non-nil, zero value otherwise. `func (o VmConfig) GetPlatformOk() (PlatformConfig, bool)` GetPlatformOk returns a tuple with the Platform field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmConfig) SetPlatform(v PlatformConfig)` SetPlatform sets Platform field to given value. `func (o *VmConfig) HasPlatform() bool` HasPlatform returns a boolean if a field has been set. `func (o *VmConfig) GetTpm() TpmConfig` GetTpm returns the Tpm field if non-nil, zero value otherwise. `func (o VmConfig) GetTpmOk() (TpmConfig, bool)` GetTpmOk returns a tuple with the Tpm field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. 
`func (o *VmConfig) SetTpm(v TpmConfig)` SetTpm sets Tpm field to given value. `func (o *VmConfig) HasTpm() bool` HasTpm returns a boolean if a field has been set."
}
] | {
"category": "Runtime",
"file_name": "VmConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
} |
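
The VmConfig accessors above follow the usual openapi-generator pattern for optional fields: `GetX` returns a value, `GetXOk` returns a pointer plus a "was it set" flag, `SetX` stores the value, and `HasX` reports presence. The following is a minimal, self-contained Go sketch of that pattern using simplified stand-in types defined here for illustration only — it is not the actual Kata Containers generated client.

```go
package main

import "fmt"

// Hypothetical stand-ins mirroring the generated optional-field pattern.
type VsockConfig struct{ CID int64 }

type VmConfig struct {
	Vsock *VsockConfig // nil means "not set"
}

// GetVsockOk follows the generated convention: pointer plus a "has been set" flag.
func (o *VmConfig) GetVsockOk() (*VsockConfig, bool) {
	if o == nil || o.Vsock == nil {
		return nil, false
	}
	return o.Vsock, true
}

// SetVsock stores a copy of the value and marks the field as set.
func (o *VmConfig) SetVsock(v VsockConfig) { o.Vsock = &v }

// HasVsock reports whether the field has been set.
func (o *VmConfig) HasVsock() bool { return o != nil && o.Vsock != nil }

func main() {
	cfg := &VmConfig{}
	fmt.Println(cfg.HasVsock()) // false until SetVsock is called
	cfg.SetVsock(VsockConfig{CID: 3})
	if v, ok := cfg.GetVsockOk(); ok {
		fmt.Println("vsock CID:", v.CID)
	}
}
```

The pointer-returning `GetXOk` form is what lets callers distinguish "field absent" from "field set to its zero value", which a plain `GetX` cannot do.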
[
{
"data": "```bash go test -bench XXX -run XXX -benchtime 30s ``` ``` goos: linux goarch: amd64 pkg: github.com/go-openapi/swag cpu: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz BenchmarkToXXXName/ToGoName-4 862623 44101 ns/op 10450 B/op 732 allocs/op BenchmarkToXXXName/ToVarName-4 853656 40728 ns/op 10468 B/op 734 allocs/op BenchmarkToXXXName/ToFileName-4 1268312 27813 ns/op 9785 B/op 617 allocs/op BenchmarkToXXXName/ToCommandName-4 1276322 27903 ns/op 9785 B/op 617 allocs/op BenchmarkToXXXName/ToHumanNameLower-4 895334 40354 ns/op 10472 B/op 731 allocs/op BenchmarkToXXXName/ToHumanNameTitle-4 882441 40678 ns/op 10566 B/op 749 allocs/op ``` ~ x10 performance improvement and ~ /100 memory allocations. ``` goos: linux goarch: amd64 pkg: github.com/go-openapi/swag cpu: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz BenchmarkToXXXName/ToGoName-4 9595830 3991 ns/op 42 B/op 5 allocs/op BenchmarkToXXXName/ToVarName-4 9194276 3984 ns/op 62 B/op 7 allocs/op BenchmarkToXXXName/ToFileName-4 17002711 2123 ns/op 147 B/op 7 allocs/op BenchmarkToXXXName/ToCommandName-4 16772926 2111 ns/op 147 B/op 7 allocs/op BenchmarkToXXXName/ToHumanNameLower-4 9788331 3749 ns/op 92 B/op 6 allocs/op BenchmarkToXXXName/ToHumanNameTitle-4 9188260 3941 ns/op 104 B/op 6 allocs/op ``` ``` goos: linux goarch: amd64 pkg: github.com/go-openapi/swag cpu: AMD Ryzen 7 5800X 8-Core Processor BenchmarkToXXXName/ToGoName-16 18527378 1972 ns/op 42 B/op 5 allocs/op BenchmarkToXXXName/ToVarName-16 15552692 2093 ns/op 62 B/op 7 allocs/op BenchmarkToXXXName/ToFileName-16 32161176 1117 ns/op 147 B/op 7 allocs/op BenchmarkToXXXName/ToCommandName-16 32256634 1137 ns/op 147 B/op 7 allocs/op BenchmarkToXXXName/ToHumanNameLower-16 18599661 1946 ns/op 92 B/op 6 allocs/op BenchmarkToXXXName/ToHumanNameTitle-16 17581353 2054 ns/op 105 B/op 6 allocs/op ```"
}
] | {
"category": "Runtime",
"file_name": "BENCHMARK.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} |
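
The rows above (`BenchmarkToXXXName/ToFileName-4 ... ns/op ... B/op ... allocs/op`) are produced by Go's `testing` package using sub-benchmarks with allocation reporting. Below is a minimal sketch of how such a benchmark is structured; `toFileName` here is a simplified hypothetical helper for illustration, not the go-openapi/swag implementation.

```go
package example // save as example_test.go and run with `go test -bench BenchmarkToXXXName -benchmem`

import (
	"strings"
	"testing"
)

// toFileName is a stand-in for the kind of name-mangling helper being measured.
func toFileName(s string) string {
	return strings.ToLower(strings.ReplaceAll(s, " ", "-"))
}

// Each b.Run produces one BenchmarkToXXXName/<Name>-N row; ReportAllocs (or the
// -benchmem flag) adds the B/op and allocs/op columns shown in the output above.
func BenchmarkToXXXName(b *testing.B) {
	b.Run("ToFileName", func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			_ = toFileName("Sample Text With Spaces")
		}
	})
}
```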
[
{
"data": "title: \"ark create backup\" layout: docs Create a backup Create a backup ``` ark create backup NAME [flags] ``` ``` --exclude-namespaces stringArray namespaces to exclude from the backup --exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io -h, --help help for backup --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the backup --include-namespaces stringArray namespaces to include in the backup (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the backup -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. -l, --selector labelSelector only back up resources matching this label selector (default <none>) --show-labels show labels in the last column --snapshot-volumes optionalBool[=true] take snapshots of PersistentVolumes as part of the backup --ttl duration how long before the backup can be garbage collected (default 720h0m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Create ark resources"
}
] | {
"category": "Runtime",
"file_name": "ark_create_backup.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "Please refer to for details. Cobra 1.1 standardized its zsh completion support to align it with its other shell completions. Although the API was kept backwards-compatible, some small changes in behavior were introduced. See further below for more details on these deprecations. `cmd.MarkZshCompPositionalArgumentFile(pos, []string{})` is no longer needed. It is therefore deprecated* and silently ignored. `cmd.MarkZshCompPositionalArgumentFile(pos, glob[])` is deprecated* and silently ignored. Instead use `ValidArgsFunction` with `ShellCompDirectiveFilterFileExt`. `cmd.MarkZshCompPositionalArgumentWords()` is deprecated* and silently ignored. Instead use `ValidArgsFunction`. Noun completion |Old behavior|New behavior| ||| |No file completion by default (opposite of bash)|File completion by default; use `ValidArgsFunction` with `ShellCompDirectiveNoFileComp` to turn off file completion on a per-argument basis| |Completion of flag names without the `-` prefix having been typed|Flag names are only completed if the user has typed the first `-`| `cmd.MarkZshCompPositionalArgumentFile(pos, []string{})` used to turn on file completion on a per-argument position basis|File completion for all arguments by default; `cmd.MarkZshCompPositionalArgumentFile()` is deprecated and silently ignored| |`cmd.MarkZshCompPositionalArgumentFile(pos, glob[])` used to turn on file completion with glob filtering on a per-argument position basis (zsh-specific)|`cmd.MarkZshCompPositionalArgumentFile()` is deprecated and silently ignored; use `ValidArgsFunction` with `ShellCompDirectiveFilterFileExt` for file extension filtering (not full glob filtering)| |`cmd.MarkZshCompPositionalArgumentWords(pos, words[])` used to provide completion choices on a per-argument position basis (zsh-specific)|`cmd.MarkZshCompPositionalArgumentWords()` is deprecated and silently ignored; use `ValidArgsFunction` to achieve the same behavior| Flag-value completion |Old behavior|New behavior| ||| |No file completion by default (opposite of bash)|File completion by default; use `RegisterFlagCompletionFunc()` with `ShellCompDirectiveNoFileComp` to turn off file completion| |`cmd.MarkFlagFilename(flag, []string{})` and similar used to turn on file completion|File completion by default; `cmd.MarkFlagFilename(flag, []string{})` no longer needed in this context and silently ignored| |`cmd.MarkFlagFilename(flag, glob[])` used to turn on file completion with glob filtering (syntax of `[]string{\".yaml\", \".yml\"}` incompatible with bash)|Will continue to work, however, support for bash syntax is added and should be used instead so as to work for all shells (`[]string{\"yaml\", \"yml\"}`)| |`cmd.MarkFlagDirname(flag)` only completes directories (zsh-specific)|Has been added for all shells| |Completion of a flag name does not repeat, unless flag is of type `Array` or `Slice` (not supported by bash)|Retained for `zsh` and added to `fish`| |Completion of a flag name does not provide the `=` form (unlike bash)|Retained for `zsh` and added to `fish`| Improvements Custom completion support (`ValidArgsFunction` and `RegisterFlagCompletionFunc()`) File completion by default if no other completions found Handling of required flags File extension filtering no longer mutually exclusive with bash usage Completion of directory names within* another directory Support for `=` form of flags"
}
] | {
"category": "Runtime",
"file_name": "zsh_completions.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} |
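
The migration notes above repeatedly point to `ValidArgsFunction` and `RegisterFlagCompletionFunc()` as the shell-agnostic replacements for the old zsh-specific markers. A small sketch of that new-style Cobra API is shown below; the command and flag names (`get`, `--config`) and the noun list are made up for illustration.

```go
package main

import "github.com/spf13/cobra"

func main() {
	cmd := &cobra.Command{
		Use: "get",
		// Replaces MarkZshCompPositionalArgumentWords: offer nouns for the positional
		// argument and turn off the default file completion for it.
		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			return []string{"pods", "nodes", "services"}, cobra.ShellCompDirectiveNoFileComp
		},
		RunE: func(cmd *cobra.Command, args []string) error { return nil },
	}

	cmd.Flags().String("config", "", "config file")
	// Replaces glob-based MarkFlagFilename/MarkZshCompPositionalArgumentFile:
	// filter file completion by extension, using the bash-compatible syntax.
	_ = cmd.RegisterFlagCompletionFunc("config", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
		return []string{"yaml", "yml"}, cobra.ShellCompDirectiveFilterFileExt
	})

	_ = cmd.Execute()
}
```

Because these hooks are defined once on the command, the same completion logic is reused by the generated bash, zsh, fish, and PowerShell scripts.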
[
{
"data": "Kanister uses structured logging to ensure that its logs can be easily categorized, indexed and searched by downstream log aggregation software. The default logging level of Kanister is set to `info`. This logging level can be changed by modifying the value of the `LOG_LEVEL` environment variable of the Kanister container. When using Helm, this value can be configured using the `controller.logLevel` variable. For example, to set the logging level to `debug`: ``` bash helm -n kanister upgrade --install kanister \\ --set controller.logLevel=debug \\ --create-namespace kanister/kanister-operator ``` The supported logging levels are: `panic` `fatal` `error` `info` `debug` `trace`"
}
] | {
"category": "Runtime",
"file_name": "logs_level.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
} |
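
Kanister reads the `LOG_LEVEL` environment variable from the controller container at startup. Purely as an illustration (this is not Kanister's actual implementation), wiring a structured logger to such a variable could look roughly like the sketch below, assuming a logrus-style API and falling back to `info` when the variable is unset or invalid.

```go
package main

import (
	"os"

	"github.com/sirupsen/logrus"
)

func main() {
	// Structured JSON output so downstream aggregators can index and search fields.
	logrus.SetFormatter(&logrus.JSONFormatter{})

	// Map LOG_LEVEL to a level; default to info when unset or unrecognized.
	level, err := logrus.ParseLevel(os.Getenv("LOG_LEVEL"))
	if err != nil {
		level = logrus.InfoLevel
	}
	logrus.SetLevel(level)

	logrus.WithField("component", "controller").Debug("visible only at debug or trace")
	logrus.WithField("component", "controller").Info("visible at the default level")
}
```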
[
{
"data": ".. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 http://creativecommons.org/licenses/by/3.0/legalcode ==================================== OpenSDS SourthBound Interface Design ==================================== Problem description =================== Currently OpenSDS SourthBound interface is as follow: type VolumeDriver interface { //Any initialization the volume driver does while starting. Setup() //Any operation the volume driver does while stoping. Unset() CreateVolume(name string, volType string, size int32) (string, error) GetVolume(volID string) (string, error) GetAllVolumes(allowDetails bool) (string, error) DeleteVolume(volID string) (string, error) AttachVolume(volID string) (string, error) DetachVolume(device string) (string, error) MountVolume(mountDir, device, fsType string) (string, error) UnmountVolume(mountDir string) (string, error) } From the interface shown above, we came to a conclusion that MountVolume and UnmountVolume methods can not be handled by drivers, thus they should be from here. The reasons are as follows: 1) From the point of semantics, mount and unmount operations belong to host rather than storage backends, so there is no need for backends to receive these two requests. 2) From the point of architecture design, OpenSDS only contains volume and share resources. So when users want to mount a resource, just tell OpenSDS which type of resource it should choose and let OpenSDS do the remaining work. 3) From the point of implementation, if we move these two operations to dock module, a lot of redundant code will be removed. Proposed Change =============== The main changes are as follows: 1) Remove mount and unmount operation in VolumeDriver and ShareDriver interface. 2) Create two packages(volume and share) in dock module, and move the two operations above to new files(such as \"volume_mount.go\"). 3) Remove code of these two operation in all backend drivers. Data model impact -- After changed, the interface will be like this: type VolumeDriver interface { //Any initialization the volume driver does while starting. Setup() //Any operation the volume driver does while stoping. Unset() CreateVolume(name string, volType string, size int32) (string, error) GetVolume(volID string) (string, error) GetAllVolumes(allowDetails bool) (string, error) DeleteVolume(volID string) (string, error) AttachVolume(volID string) (string, error) DetachVolume(device string) (string, error) } REST API impact None Security impact None Other end user impact None Performance impact None Other deployer impact None Dependencies ============ None Testing ======= None Documentation Impact ==================== None References ========== None"
}
] | {
"category": "Runtime",
"file_name": "update_southbound_interface.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
} |
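
To make the proposal concrete, the sketch below shows the slimmed-down driver interface (taken directly from the "After the change" interface in the proposal) together with a host-side mount helper living in the dock module. The `MountVolume` helper, its package placement, and its body are illustrative assumptions, not OpenSDS source code.

```go
// Package dock illustrates the proposal: backends only implement the trimmed
// VolumeDriver interface, while mount/unmount are handled once, host-side.
package dock

// VolumeDriver is the reduced southbound interface from the proposal.
type VolumeDriver interface {
	Setup()
	Unset()
	CreateVolume(name string, volType string, size int32) (string, error)
	GetVolume(volID string) (string, error)
	GetAllVolumes(allowDetails bool) (string, error)
	DeleteVolume(volID string) (string, error)
	AttachVolume(volID string) (string, error)
	DetachVolume(device string) (string, error)
}

// MountVolume would live in a dock-side file (e.g. a hypothetical
// volume_mount.go) and run on the host, so no backend driver has to
// implement it; the actual mount(8)/syscall logic is omitted here.
func MountVolume(mountDir, device, fsType string) error {
	// Host-side mount logic (e.g. invoking mount -t <fsType> <device> <mountDir>)
	// would go here, shared by every backend.
	return nil
}
```

This keeps the host-specific logic in one place and removes the duplicated mount code from every backend driver, which is exactly the redundancy the proposal calls out.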
[
{
"data": "oep-number: CSI 0000/CSTOR 0010/JIVA 1010/OPERATOR 0001/LOCALPV 1011 title: My First OEP authors: \"@amitkumardas\" owners: \"@kmova\" \"@vishnuitta\" editor: TBD creation-date: yyyy-mm-dd last-updated: yyyy-mm-dd status: provisional/implementable/implemented/deferred/rejected/withdrawn/replaced see-also: OEP 1 OEP 2 replaces: OEP 3 superseded-by: OEP 100 This is the title of the OpenEBS Enhancement Proposal (OEP). Keep it simple and descriptive. A good title can help communicate what the OEP is and should be considered as part of any review. The title should be lowercased and spaces/punctuation should be replaced with `-`. To get started with this template: Make a copy of this template. Name it `YYYYMMDD-my-title.md`. Fill out the \"overview\" sections. This includes the Summary and Motivation sections. Create a PR. Name it `[OEP NUMBER] Title`, e.g. `[OEP 0001] Initial design on OpenEBS CSI`. Assign it to owner(s) that are sponsoring this process. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the OEP merged quickly. The best way to do this is to just start with the \"Overview\" sections and fill out details incrementally in follow on PRs. View anything marked as a `provisional` as a working document and subject to change. Aim for single topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes. The canonical place for the latest set of instructions (and the likely source of this file) is . The `Metadata` section above is intended to support the creation of tooling around the OEP process. This will be a YAML section that is fenced as a code block. See the for details on each of these items. A table of contents is helpful for quickly jumping to sections of a OEP and for highlighting any additional information provided beyond the standard OEP template. a table of contents from markdown are available. * * * * The `Summary` section is incredibly important for producing high quality user focused documentation such as release notes or a development road map. It should be possible to collect this information before implementation begins in order to avoid requiring implementors to split their attention between writing release notes and implementing the feature itself. OEP editors should help to ensure that the tone and content of the `Summary` section is useful for a wide audience. A good summary is probably at least a paragraph in"
},
{
"data": "This section is for explicitly listing the motivation, goals and non-goals of this OEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to to demonstrate the interest in a OEP within the wider OpenEBS community. List the specific goals of the OEP. How will we know that this has succeeded? What is out of scope for his OEP? Listing non-goals helps to focus discussion and make progress. This is where we get down to the nitty gritty of what the proposal actually is. Detail the things that people will be able to do if this OEP is implemented. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down. What are the caveats to the implementation? What are some important details that didn't come across above. Go in to as much detail as necessary here. This might be a good place to talk about core concepts and how they relate. What are the risks of this proposal and how do we mitigate. Think broadly. For example, consider both security and how this will impact the larger kubernetes ecosystem. How will we know that this has succeeded? Gathering user feedback is crucial for building high quality experiences and owners have the important responsibility of setting milestones for stability and completeness. Hopefully the content previously contained in will be tracked in the `Graduation Criteria` section. Major milestones in the life cycle of a OEP should be tracked in `Implementation History`. Major milestones might include the `Summary` and `Motivation` sections being merged signaling owner acceptance the `Proposal` section being merged signaling agreement on a proposed design the date implementation started the first OpenEBS release where an initial version of the OEP was available the version of OpenEBS where the OEP graduated to general availability when the OEP was retired or superseded Why should this OEP not be implemented. Similar to the `Drawbacks` section the `Alternatives` section is used to highlight and record other possible approaches to delivering the value proposed by a OEP. Use this section if you need things from the project/owner. Examples include a new subproject, repos requested, github details. Listing these here allows a owner to get the process for these resources started right away."
}
] | {
"category": "Runtime",
"file_name": "oep-template.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
} |
[
{
"data": "You need to install the following dependencies and configure the PATH environment variable to point to the binary directory where gclang is located: ```bash $ yum makecache $ yum install -y git unzip patch golang llvm clang compiler-rt libasan-static libasan $ go env -w GO111MODULE=auto $ go env -w GOPROXY=\"https://goproxy.io,direct\" $ go get -v github.com/SRI-CSL/gllvm/cmd/... $ export PATH=/root/go/bin:$PATH ``` ```bash $ mkdir build && cd build $ cmake -DCMAKEBUILDTYPE=Debug -DENABLEASAN=ON -DENABLEFUZZ=ON .. $ cmake -DCMAKEBUILDTYPE=Debug -DGCOV=ON -DENABLEASAN=ON -DENABLEFUZZ=ON .. $ make -j $(nproc) ``` Execute all fuzz test cases: ```bash $ cd test/fuzz/ $ ./fuzz.sh ``` Execute some fuzz test cases: ```bash $ cd test/fuzz/ $ ./fuzz.sh testgrobjparserfuzz testpwobjparserfuzz testvolumemountspecfuzz testvolumeparsevolumefuzz ``` If the test is successful, a log will be generated in the iSulad root directory: ```bash $ ls -la *.log -rw-. 1 root root 2357 Jul 4 10:23 testgrobjparserfuzz.log -rw-. 1 root root 3046 Jul 4 10:23 testpwobjparserfuzz.log -rw-. 1 root root 1167 Jul 4 10:23 testvolumemountspecfuzz.log -rw-. 1 root root 3411 Jul 4 10:23 testvolumeparsevolumefuzz.log ``` You can use third-party tools to collect coverage information and analyze it. For example, you can run the following command to let lcov collect cover.info, where ISULADSRCPATH is filled with iSulad source code path: ```bash $ lcov -gcov-tool /usr/bin/llvm-gcov.sh -c -d -m $ISULADSRCPATH -o cover.info ```"
}
] | {
"category": "Runtime",
"file_name": "fuzz_test_guide.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
} |