[ { "data": "title: \"Node-agent Concurrency\" layout: docs Velero node-agent is a daemonset hosting modules to complete the concrete tasks of backups/restores, i.e., file system backup/restore, CSI snapshot data movement. Varying from the data size, data complexity, resource availability, the tasks may take a long time and remarkable resources (CPU, memory, network bandwidth, etc.). These tasks make the loads of node-agent. Node-agent concurrency configurations allow you to configure the concurrent number of node-agent loads per node. When the resources are sufficient in nodes, you can set a large concurrent number, so as to reduce the backup/restore time; otherwise, the concurrency should be reduced, otherwise, the backup/restore may encounter problems, i.e., time lagging, hang or OOM kill. To set Node-agent concurrency configurations, a configMap named ```node-agent-config``` should be created manually. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only. Node-agent server checks these configurations at startup time. Therefore, you could edit this configMap any time, but in order to make the changes effective, node-agent server needs to be restarted. You can specify a concurrent number that will be applied to all nodes if the per-node number is not specified. This number is set through ```globalConfig``` field in ```loadConcurrency```. The number starts from 1 which means there is no concurrency, only one load is allowed. There is no roof limit. If this number is not specified or not valid, a hard-coded default value will be used, the value is set to 1. You can specify different concurrent number per node, for example, you can set 3 concurrent instances in Node-1, 2 instances in Node-2 and 1 instance in Node-3. The range of Per-node concurrent number is the same with Global concurrent number. Per-node concurrent number is preferable to Global concurrent number, so it will overwrite the Global concurrent number for that node. Per-node concurrent number is implemented through ```perNodeConfig``` field in ```loadConcurrency```. ```perNodeConfig``` is a list of ```RuledConfigs``` each item of which matches one or more nodes by label selectors and specify the concurrent number for the matched nodes. Here is an example of the ```perNodeConfig``: ``` \"nodeSelector: kubernetes.io/hostname=node1; number: 3\" \"nodeSelector: beta.kubernetes.io/instance-type=Standard_B4ms; number: 5\" ``` The first element means the node with host name ```node1``` gets the Per-node concurrent number of 3. The second element means all the nodes with label ```beta.kubernetes.io/instance-type``` of value ```Standard_B4ms``` get the Per-node concurrent number of 5. At least one node is expected to have a label with the specified ```RuledConfigs``` element (rule). If no node is with this label, the Per-node rule makes no effect. If one node falls into more than one rules, e.g., if node1 also has the label ```beta.kubernetes.io/instance-type=Standard_B4ms```, the smallest number (3) will be used. 
A sample of the complete ```node-agent-config``` configMap is as below: ```json { \"loadConcurrency\": { \"globalConfig\": 2, \"perNodeConfig\": [ { \"nodeSelector\": { \"matchLabels\": { \"kubernetes.io/hostname\": \"node1\" } }, \"number\": 3 }, { \"nodeSelector\": { \"matchLabels\": { \"beta.kubernetes.io/instance-type\": \"Standard_B4ms\" } }, \"number\": 5 } ] } } ``` To create the configMap, save something like the above sample to a json file and then run below command: ``` kubectl create cm node-agent-config -n velero --from-file=<json file name> ```" } ]
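Because the node-agent server only reads ```node-agent-config``` at startup, the daemonset has to be restarted after the configMap is created or edited. A minimal sketch of that restart, assuming the daemonset is named `node-agent` and Velero is installed in the `velero` namespace:

```
kubectl -n velero rollout restart daemonset node-agent
kubectl -n velero rollout status daemonset node-agent
```

The second command simply waits until all node-agent pods have been recreated with the new configuration.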
{ "category": "Runtime", "file_name": "node-agent-concurrency.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Firecracker implements a framework for instrumentation based tracing with the aim to improve its debugability. Instrumentation based tracing was defined by as: There are two ways to obtain profiling information: either statistical sampling or code instrumentation. Statistical sampling is less disruptive to program execution, but cannot provide completely accurate information. Code instrumentation, on the other hand, may be more disruptive, but allows the profiler to record all the events it is interested in. Specifically in CPU time profiling, statistical sampling may reveal, for example, the relative percentage of time spent in frequently-called methods, whereas code instrumentation can report the exact number of time each method is invoked. Enabling tracing adds logs output on each functions entry and exit. This assists debugging problems that relate to deadlocks or high latencies by quickly identifying elongated function calls. Firecracker implements instrumentation based tracing via and , outputting a `Trace` level log when entering and exiting every function. Adding traces impacts Firecracker binary size and its performance, so instrumentation is not present by default. Instrumentation is also not present on the release binaries. You can use `cargo run --bin clippy-tracing --` to build and run the latest version in the repo or you can run `cargo install --path src/clippy-tracing` to install the binary then use `clippy-tracing` to run this binary. You can run `clippy-tracing --help` for help. To enable tracing in Firecracker, add instrumentation with: ``` clippy-tracing \\ --action fix \\ --path ./src \\ --exclude benches \\ --exclude virtio/gen,bindings.rs,net/gen \\ --exclude log-instrument-macros/,log-instrument/,clippy-tracing/ \\ --exclude vmmconfig/logger.rs,logger/,signalhandler.rs,time.rs ``` `--exclude` can be used to avoid adding instrumentation to specific files, here it is used to avoid adding instrumentation in: tests. bindings. the instrumentation tooling. logger functionality that may form an infinite loop. After adding instrumentation re-compile with `--features tracing`: ``` cargo build --features tracing ``` This will result in an increase in the binary size (~100kb) and a significant regression in performance (>10x). To mitigate the performance impact you can filter the tracing output as described in the next section. You can filter tracing output both at run-time and compile-time. This can be used to mitigate the performance impact of logging many traces. Run-time filtering is implemented with the `/logger` API call, this can significantly mitigate the impact on execution time but cannot mitigate the impact on memory usage. Execution time impact is mitigated by avoiding constructing and writing the trace log, it still needs to check the condition in every place it would output a log. Memory usage impact is not mitigated as the instrumentation remains in the binary unchanged. Compile-time filtering is a manual process using the tool. This can almost entirely mitigate the impact on execution time and the impact on memory usage. You can filter by module path and/or file path at runtime, e.g.: ```bash curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"level\\\": \\\"Trace\\\", \\\"module\\\": \\\"api_server::request\\\", }\" \\ \"http://localhost/logger\" ``` Instrumentation logs are `Trace` level logs, at runtime the level must be set to `Trace` to see them. 
The module filter applied here ensures only logs from the `request` modules within the `api_server` crate will be output. This will mitigate most of the performance" }, { "data": "Specific environments can restrict run-time configuration. In these environments it becomes necessary to support targeted tracing without run-time re-configuration, for this compile-time filtering must be used. To reproduce the same filtering as run-time at compile-time, you can use at compile-time like: ```bash clippy-tracing --action strip --path ./src clippy-tracing --action fix --path ./src/firecracker/src/api_server/src/request cargo build --features tracing ``` Then at run-time: ```bash curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"level\\\": \\\"Trace\\\", }\" \\ \"http://localhost/logger\" ``` The instrumentation has been stripped from all files other than those at `./src/firecracker/src/api_server/src/request` so we do not need to apply a run-time filter. Runtime filtering could be applied but in this case it yields no additional benefit. In this example we start Firecracker with tracing then make a simple API call. ``` ~/Projects/firecracker$ sudo curl -X GET --unix-socket \"/run/firecracker.socket\" \"http://localhost/\" {\"id\":\"anonymous-instance\",\"state\":\"Not started\",\"vmmversion\":\"1.6.0-dev\",\"appname\":\"Firecracker\"} ``` ``` ~/Projects/firecracker$ sudo ./firecracker/build/cargo_target/release/firecracker --level Trace 2023-10-13T14:15:38.851263983 [anonymous-instance:main] Running Firecracker v1.6.0-dev 2023-10-13T14:15:38.851316122 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851322264 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851325119 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851328776 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851331351 [anonymous-instance:main] ThreadId(1)::main::mainexec>>flagpresent 2023-10-13T14:15:38.851335809 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent>>value_of 2023-10-13T14:15:38.851338254 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent<<value_of 2023-10-13T14:15:38.851342091 [anonymous-instance:main] ThreadId(1)::main::mainexec<<flagpresent 2023-10-13T14:15:38.851345638 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851349245 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851352721 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851355827 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851359444 [anonymous-instance:main] ThreadId(1)::main::mainexec>>fromargs 2023-10-13T14:15:38.851362931 [anonymous-instance:main] ThreadId(1)::main::mainexec<<fromargs 2023-10-13T14:15:38.851366207 [anonymous-instance:main] ThreadId(1)::main::mainexec>>getfilters 2023-10-13T14:15:38.851368401 [anonymous-instance:main] ThreadId(1)::main::mainexec::getfilters>>getdefaultfilters 2023-10-13T14:15:38.851372068 [anonymous-instance:main] ThreadId(1)::main::mainexec::getfilters::getdefaultfilters>>deserialize_binary 2023-10-13T14:15:38.851380033 [anonymous-instance:main] ThreadId(1)::main::mainexec::getfilters::getdefaultfilters<<deserialize_binary 2023-10-13T14:15:38.851383990 [anonymous-instance:main] 
ThreadId(1)::main::mainexec::getfilters::getdefaultfilters>>filterthreadcategories 2023-10-13T14:15:38.851388098 [anonymous-instance:main] ThreadId(1)::main::mainexec::getfilters::getdefaultfilters<<filterthreadcategories 2023-10-13T14:15:38.851391845 [anonymous-instance:main] ThreadId(1)::main::mainexec::getfilters<<getdefaultfilters 2023-10-13T14:15:38.851394360 [anonymous-instance:main] ThreadId(1)::main::mainexec<<getfilters 2023-10-13T14:15:38.851398077 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851400462 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851403507 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851410961 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851414107 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851417955 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851420650 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851426130 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851428434 [anonymous-instance:main] ThreadId(1)::main::mainexec>>flagpresent 2023-10-13T14:15:38.851430949 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent>>value_of 2023-10-13T14:15:38.851434766 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent<<value_of 2023-10-13T14:15:38.851438133 [anonymous-instance:main] ThreadId(1)::main::mainexec<<flagpresent 2023-10-13T14:15:38.851440577 [anonymous-instance:main] ThreadId(1)::main::mainexec>>flagpresent 2023-10-13T14:15:38.851444575 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent>>value_of 2023-10-13T14:15:38.851447941 [anonymous-instance:main] ThreadId(1)::main::mainexec::flagpresent<<value_of 2023-10-13T14:15:38.851450005 [anonymous-instance:main] ThreadId(1)::main::mainexec<<flagpresent 2023-10-13T14:15:38.851453772 [anonymous-instance:main] ThreadId(1)::main::main_exec>>arguments 2023-10-13T14:15:38.851456488 [anonymous-instance:main] ThreadId(1)::main::main_exec<<arguments 2023-10-13T14:15:38.851458902 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851462679 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851466587 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851470324 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>assinglevalue 2023-10-13T14:15:38.851473239 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<assinglevalue 2023-10-13T14:15:38.851476896 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851479521 [anonymous-instance:main] ThreadId(1)::main::main_exec>>arguments 2023-10-13T14:15:38.851485062 [anonymous-instance:main] ThreadId(1)::main::main_exec<<arguments 2023-10-13T14:15:38.851488398 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851491925 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851494900 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851496934 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851499689 [anonymous-instance:main] 
ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851502374 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851504629 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851507234 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>assinglevalue 2023-10-13T14:15:38.851508897 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<assinglevalue 2023-10-13T14:15:38.851510630 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851513576 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851515559 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851517503 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851520068 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851521731 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851525628 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851529045 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851533153 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851536339 [anonymous-instance:main] ThreadId(1)::main::mainexec>>singlevalue 2023-10-13T14:15:38.851538883 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue>>value_of 2023-10-13T14:15:38.851542330 [anonymous-instance:main] ThreadId(1)::main::mainexec::singlevalue<<value_of 2023-10-13T14:15:38.851544704 [anonymous-instance:main] ThreadId(1)::main::mainexec<<singlevalue 2023-10-13T14:15:38.851548572 [anonymous-instance:main] ThreadId(1)::main::mainexec>>runwith_api 2023-10-13T14:15:38.851664621 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwith_api>>new 2023-10-13T14:15:38.851672586 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwith_api<<new 2023-10-13T14:15:38.851677876 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwith_api>>init 2023-10-13T14:15:38.851684739 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwith_api<<init 2023-10-13T14:15:38.851724064 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi>>buildmicrovmfromrequests 2023-10-13T14:15:38.851728171 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests>>default 2023-10-13T14:15:38.851731888 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests<<default 2023-10-13T14:15:38.851734634 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests>>new 2023-10-13T14:15:38.851737830 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests<<new 2023-10-13T14:15:38.851748550 [anonymous-instance:fc_api] ThreadId(2)>>new 2023-10-13T14:15:38.851761404 [anonymous-instance:fc_api] ThreadId(2)<<new 2023-10-13T14:15:38.851764861 [anonymous-instance:fc_api] ThreadId(2)>>run 2023-10-13T14:15:38.851775200 [anonymous-instance:fcapi] ThreadId(2)::run>>applyfilter 2023-10-13T14:15:38.851823462 [anonymous-instance:fcapi] ThreadId(2)::run<<applyfilter 2023-10-13T14:15:55.422397039 [anonymous-instance:fcapi] ThreadId(2)::run>>handlerequest 2023-10-13T14:15:55.422417909 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest>>try_from 
2023-10-13T14:15:55.422420554 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::try_from>>describe 2023-10-13T14:15:55.422424551 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::try_from<<describe 2023-10-13T14:15:55.422426395 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom>>logreceivedapirequest 2023-10-13T14:15:55.422429270 [anonymous-instance:fc_api] The API server received a Get request on \"/\". 2023-10-13T14:15:55.422431354 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom<<logreceivedapirequest 2023-10-13T14:15:55.422433298 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom>>parsegetinstanceinfo 2023-10-13T14:15:55.422435211 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom::parsegetinstanceinfo>>new_sync 2023-10-13T14:15:55.422437165 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom::parsegetinstanceinfo::new_sync>>new 2023-10-13T14:15:55.422439289 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom::parsegetinstanceinfo::new_sync<<new 2023-10-13T14:15:55.422441123 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom::parsegetinstanceinfo<<new_sync 2023-10-13T14:15:55.422444459 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::tryfrom<<parsegetinstanceinfo 2023-10-13T14:15:55.422446833 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest<<try_from 2023-10-13T14:15:55.422448837 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest>>into_parts 2023-10-13T14:15:55.422450921 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest<<into_parts 2023-10-13T14:15:55.422453967 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest>>servevmmaction_request 2023-10-13T14:15:55.422472552 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests>>handleprebootrequest 2023-10-13T14:15:55.422480477 [anonymous-instance:main] ThreadId(1)::main::mainexec::runwithapi::buildmicrovmfromrequests<<handleprebootrequest 2023-10-13T14:15:55.422488963 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest>>convertto_response 2023-10-13T14:15:55.422492289 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse>>successresponsewithdata 2023-10-13T14:15:55.422493983 [anonymous-instance:fc_api] The request was executed successfully. Status code: 200 OK. 
2023-10-13T14:15:55.422498331 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse::successresponsewithdata>>serialize 2023-10-13T14:15:55.422501387 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse::successresponsewithdata::serialize>>fmt 2023-10-13T14:15:55.422506086 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse::successresponsewithdata::serialize<<fmt 2023-10-13T14:15:55.422509171 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse::successresponsewithdata<<serialize 2023-10-13T14:15:55.422511776 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest::converttoresponse<<successresponsewithdata 2023-10-13T14:15:55.422514371 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest::servevmmactionrequest<<convertto_response 2023-10-13T14:15:55.422516385 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest<<servevmmaction_request 2023-10-13T14:15:55.422518719 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest>>takedeprecationmessage 2023-10-13T14:15:55.422520533 [anonymous-instance:fcapi] ThreadId(2)::run::handlerequest<<takedeprecationmessage 2023-10-13T14:15:55.422522847 [anonymous-instance:fcapi] ThreadId(2)::run<<handlerequest 2023-10-13T14:15:55.422525422 [anonymous-instance:fc_api] Total previous API call duration: 132 us. ```" } ]
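Once the investigation is complete, the instrumentation can be removed again with the same `clippy-tracing` action shown in the compile-time filtering section, followed by a rebuild without the feature flag; a rough sketch, using the same paths as the examples above:

```bash
# Strip the generated instrumentation from the sources.
clippy-tracing --action strip --path ./src
# Rebuild without --features tracing to restore normal binary size and performance.
cargo build
```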
{ "category": "Runtime", "file_name": "tracing.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Flush all NAT mapping entries ``` cilium-dbg bpf nat flush [flags] ``` ``` -h, --help help for flush ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - NAT mapping tables" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_nat_flush.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "layout: global title: Running Presto with Alluxio This guide describes how to configure to access Alluxio. is an open source distributed SQL query engine for running interactive analytic queries on data at a large scale. This guide describes how to run queries against Presto with Alluxio as a distributed caching layer, for any data storage systems that Alluxio supports (AWS S3, HDFS, Azure Blob Store, NFS, and more). Alluxio allows Presto access data regardless of the data source and transparently cache frequently accessed data (e.g., tables commonly used) into Alluxio distributed storage. Co-locating Alluxio workers with Presto workers improves data locality and reduces the I/O access latency when other storage systems are remote or the network is slow or congested. Setup Java for Java 8 Update 161 or higher (8u161+), 64-bit. . This guide is tested with PrestoDB 0.247. Alluxio has been set up and is running. Make sure that the Alluxio client jar is available. This Alluxio client jar file can be found at `{{site.ALLUXIOCLIENTJAR_PATH}}` in the tarball downloaded from Alluxio . Make sure that Hive Metastore is running to serve metadata information of Hive tables. Presto gets the database and table metadata information (including file system locations) from the Hive Metastore, via Presto's Hive connector. Here is a example Presto configuration file `${PRESTO_HOME}/etc/catalog/hive.properties`, for a catalog using the Hive connector, where the metastore is located on `localhost`. ```properties connector.name=hive-hadoop2 hive.metastore.uri=thrift://localhost:9083 ``` In order for Presto to be able to communicate with the Alluxio servers, the Alluxio client jar must be in the classpath of Presto servers. Put the Alluxio client jar `{{site.ALLUXIOCLIENTJAR_PATH}}` into the directory `${PRESTO_HOME}/plugin/hive-hadoop2/` (this directory may differ across versions) on all Presto servers. Restart the Presto workers and coordinator: ```shell $ ${PRESTO_HOME}/bin/launcher restart ``` After completing the basic configuration, Presto should be able to access data in Alluxio. To configure more advanced features for Presto (e.g., connect to Alluxio with HA), please follow the instructions at . Ensure your Hive Metastore service is running. Hive Metastore listens on port `9083` by default. If it is not running, execute the following command to start the metastore: ```shell $ ${HIVE_HOME}/bin/hive --service metastore ``` Here is an example to create an internal table in Hive backed by files in Alluxio. You can download a data file (e.g., `ml-100k.zip`) from . Unzip this file and upload the file `u.user` into `/ml-100k/` in Alluxio: ```shell $ ./bin/alluxio fs mkdir /ml-100k $ ./bin/alluxio fs cp file:///path/to/ml-100k/u.user alluxio:///ml-100k ``` Create an external Hive table pointing to the Alluxio file" }, { "data": "```sql hive> CREATE TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION 'alluxio://master_hostname:port/ml-100k'; ``` View the Alluxio WebUI at `http://master_hostname:19999` and you can see the directory and files that Hive creates: Start your Presto server. 
Presto server runs on port `8080` by default (configurable with `http-server.http.port` in `${PRESTO_HOME}/etc/config.properties` ): ```shell $ ${PRESTO_HOME}/bin/launcher run ``` Follow to download the `presto-cli-<PRESTO_VERSION>-executable.jar`, rename it to `presto`, and make it executable with `chmod +x` (sometimes the executable `presto` exists in `${PRESTO_HOME}/bin/presto` and you can use it directly). Run a single query (replace `localhost:8080` with your actual Presto server hostname and port): ```shell $ ./presto --server localhost:8080 --execute \"use default; select * from u_user limit 10;\" --catalog hive --debug ``` And you can see the query results from shell: You can also find some of the Alluxio client log messages in the Presto Server log: To configure additional Alluxio properties, you can append the conf path (i.e. `${ALLUXIO_HOME}/conf`) containing to Presto's JVM config at `etc/jvm.config` under Presto folder. The advantage of this approach is to have all the Alluxio properties set within the same file of `alluxio-site.properties`. ```shell $ -Xbootclasspath/a:<path-to-alluxio-conf> ``` Alternatively, add Alluxio properties to the Hadoop configuration files (`core-site.xml`, `hdfs-site.xml`), and use the Presto property `hive.config.resources` in the file `${PRESTO_HOME}/etc/catalog/hive.properties` to point to the Hadoop resource locations for every Presto worker. ```properties hive.config.resources=/<PATHTOCONF>/core-site.xml,/<PATHTOCONF>/hdfs-site.xml ``` If the Alluxio HA cluster uses internal leader election, set the Alluxio cluster property appropriately in the `alluxio-site.properties` file which is on the classpath. ```properties alluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Alternatively you can add the property to the Hadoop `core-site.xml` configuration which is contained by `hive.config.resources`. ```xml <configuration> <property> <name>alluxio.master.rpc.addresses</name> <value>masterhostname1:19998,masterhostname2:19998,masterhostname3:19998</value> </property> </configuration> ``` For information about how to connect to Alluxio HA cluster using Zookeeper-based leader election, please refer to . For example, change `alluxio.user.file.writetype.default` from default `ASYNCTHROUGH` to `CACHETHROUGH`. Specify the property in `alluxio-site.properties` and distribute this file to the classpath of each Presto node: ```properties alluxio.user.file.writetype.default=CACHE_THROUGH ``` Alternatively, modify `conf/hive-site.xml` to include: ```xml <property> <name>alluxio.user.file.writetype.default</name> <value>CACHE_THROUGH</value> </property> ``` Presto's Hive connector uses the config `hive.max-split-size` to control the parallelism of the query. For Alluxio 1.6 or earlier, it is recommended to set this size no less than Alluxio's block size to avoid the read contention within the same block. For later Alluxio versions, this is no longer an issue because of Alluxio's async caching abilities. It is recommended to increase `alluxio.user.streaming.data.timeout` to a bigger value (e.g `10min`) to avoid a timeout failure when reading large files from remote workers. When you see error messages like the following, it is likely that Alluxio client jar is not on the classpath of the Presto worker. Please follow to fix this issue. 
``` Query 2018090706343000001_cm7xe failed: No FileSystem for scheme: alluxio com.facebook.presto.spi.PrestoException: No FileSystem for scheme: alluxio at com.facebook.presto.hive.BackgroundHiveSplitLoader$HiveSplitLoaderTask.process(BackgroundHiveSplitLoader.java:189) at com.facebook.presto.hive.util.ResumableTasks.safeProcessTask(ResumableTasks.java:47) at com.facebook.presto.hive.util.ResumableTasks.access$000(ResumableTasks.java:20) at com.facebook.presto.hive.util.ResumableTasks$1.run(ResumableTasks.java:35) at io.airlift.concurrent.BoundedExecutor.drainQueue(BoundedExecutor.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ```" } ]
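A quick remediation sketch for the error above: copy the Alluxio client jar into the Presto Hive plugin directory on every Presto server and restart them, as described in the setup section. The jar location below (`client/alluxio-*-client.jar` inside the Alluxio tarball) is an assumption; substitute the actual `{{site.ALLUXIOCLIENTJAR_PATH}}` for your release.

```shell
$ cp ${ALLUXIO_HOME}/client/alluxio-*-client.jar ${PRESTO_HOME}/plugin/hive-hadoop2/
$ ${PRESTO_HOME}/bin/launcher restart
```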
{ "category": "Runtime", "file_name": "Presto.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> BPF datapath bandwidth settings ``` -h, --help help for bandwidth ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List BPF datapath bandwidth settings" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_bandwidth.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "sidebar_label: Redis sidebar_position: 1 slug: /redisbestpractices To ensure metadata service performance, we recommend use Redis service managed by public cloud provider, see . The space used by the JuiceFS metadata engine is mainly related to the number of files in the file system. According to our experience, the metadata of each file occupies approximately 300 bytes of memory. Therefore, if you want to store 100 million files, approximately 30 GiB of memory is required. You can check the specific memory usage through Redis' command, for example: ``` INFO memory used_memory: 19167628056 usedmemoryhuman: 17.85G usedmemoryrss: 20684886016 usedmemoryrss_human: 19.26G ... usedmemoryoverhead: 5727954464 ... usedmemorydataset: 13439673592 usedmemorydataset_perc: 70.12% ``` Among them, `usedmemoryrss` is the total memory size actually used by Redis, which includes not only the size of data stored in Redis (that is, `usedmemorydataset` above) but also some Redis (that is, `usedmemoryoverhead` above). As mentioned earlier that the metadata of each file occupies about 300 bytes, this is actually calculated by `usedmemorydataset`. If you find that the metadata of a single file in your JuiceFS file system occupies much more than 300 bytes, you can try to run command to clean up possible redundant data. is the official solution to high availability for Redis. It provides following capabilities: Monitoring. Sentinel constantly checks if your master and replica instances are working as expected. Notification. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances. Automatic failover. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. Configuration provider. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address. A stable release of Redis Sentinel is shipped since Redis 2.8. Redis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used. Before start using Redis sentinel, learn the : You need at least three Sentinel instances for a robust deployment. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way. So for example different physical servers or Virtual Machines executed on different availability zones. Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it. There is no HA setup which is safe if you don't test from time to time in development environments, or even better if you can, in production environments, if they work. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working). 
Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Read the for more" }, { "data": "Once Redis servers and Sentinels are deployed, `META-URL` can be specified as `redis[s]://[[USER]:PASSWORD@]MASTERNAME,SENTINELADDR[,SENTINELADDR]:SENTINELPORT[/DB]`, for example: ```shell ./juicefs mount redis://:password@masterName,1.2.3.4,1.2.5.6:26379/2 ~/jfs ``` :::tip For JuiceFS v0.16+, the `PASSWORD` in the URL will be used to connect Redis server, and the password for Sentinel should be provided using the environment variable `SENTINELPASSWORD`. For early versions of JuiceFS, the `PASSWORD` is used for both Redis server and Sentinel, which can be overwritten by the environment variables `SENTINELPASSWORD` and `REDIS_PASSWORD`. ::: Since JuiceFS v1.0.0, it is supported to use Redis replica when mounting file systems, to reduce the load on Redis master. In order to achieve this, you must mount the JuiceFS file system in read-only mode (that is, set the `--read-only` mount option), and connect to the metadata engine through Redis Sentinel. Finally, you need to add `?route-read=replica` to the end of the metadata URL. For example: `redis://:password@masterName,1.2.3.4,1.2.5.6:26379/2?route-read=replica`. It should be noted that since the data of the Redis master node is asynchronously replicated to the replica nodes, the read metadata may not be the latest. :::note This feature requires JuiceFS v1.0.0 or higher ::: JuiceFS also supports Redis Cluster as a metadata engine, the `META-URL` format is `redis`. For example: ```shell juicefs format redis://127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002/1 myjfs ``` :::tip Redis Cluster does not support multiple databases. However, it splits the key space into 16384 hash slots, and distributes the slots to several nodes. Based on Redis Cluster's feature, JuiceFS adds `{DB}` before all file system keys to ensure they will be hashed to the same hash slot, assuring that transactions can still work. Besides, one Redis Cluster can serve for multiple JuiceFS file systems as long as they use different db numbers. ::: Redis provides various options for in different ranges: RDB: The RDB persistence performs point-in-time snapshots of your dataset at specified intervals. AOF: The AOF persistence logs every write operation received by the server, which will be played again at server startup, meaning that the original dataset will be reconstructed each time server is restarted. Commands are logged using the same format as the Redis protocol in an append-only fashion. Redis is able to rewrite logs in the background when it gets too big. RDB+AOF <Badge type=\"success\">Recommended</Badge>: It is possible to combine AOF and RDB in the same instance. Notice that, in this case, when Redis restarts the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete. When using AOF, you can have different fsync policies: No fsync fsync every second <Badge type=\"primary\">Default</Badge> fsync at every query With the default policy of fsync every second write performance is good enough (fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress.), but you may lose the writes from the last second. 
In addition, be aware that, even if the RBD+AOF mode is adopted, the disk may be damaged and the virtual machine may disappear. Thus, Redis data needs to be backed up regularly. Redis is very data backup friendly since you can copy RDB files while the database is running. The RDB is never modified once produced: while RDB is produced, a temporary name is assigned to it and will be renamed into its final destination atomically using `rename` only when the new snapshot is complete. You can also copy the AOF file to create backups. Please read the for more information. Make Sure to Back up Your" }, { "data": "as Disks break, instances in the cloud disappear, and so forth. By default Redis saves snapshots of the dataset on disk as a binary file called `dump.rdb`. You can configure Redis to save the dataset every N seconds if there are at least M changes in the dataset, or manually call the or commands as needed. As we mentioned above, Redis is very data backup friendly. This means that copying the RDB file is completely safe while the server is running. The following are our suggestions: Create a cron job in your server, and create hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory. Every time running the cron script, call the `find` command to check if old snapshots have been deleted: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with data and time information. Make sure to transfer an RDB snapshot outside your data center or at least outside the physical machine running your Redis instance at least one time every day. Please read the for more information. After generating the AOF or RDB backup file, you can restore the data by copying the backup file to the path corresponding to the `dir` configuration of the new Redis instance. The instance configuration information can be obtained by the command. If both AOF and RDB persistence are enabled, Redis will use the AOF file first on starting to recover the data because AOF is guaranteed to be the most complete data. After recovering Redis data, you can continue to use the JuiceFS file system via the new Redis address. It is recommended to run command to check the integrity of the file system data. is a durable, in-memory database service that delivers ultra-fast performance. MemoryDB is compatible with Redis, with MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts. is a fully managed Redis service for the Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments. is a fully managed, in-memory cache that enables high-performance and scalable architectures. It is used to create cloud or hybrid deployments that handle millions of requests per second at sub-millisecond latency, with the advantages of configuration, security, and availability of a managed service. is a database service compatible with native Redis protocols. It supports hybrid of memory and hard disks for data persistence. 
ApsaraDB for Redis provides a highly available hot standby architecture and are scalable to meet requirements for high-performance and low-latency read/write operations. is a caching and storage service compatible with the Redis protocol. It features a rich variety of data structure options to help you develop different types of business scenarios, and offers a complete set of database services such as primary-secondary hot backup, automatic switchover for disaster recovery, data backup, failover, instance monitoring, online scaling and data rollback. If you want to use a Redis compatible product as the metadata engine, you need to confirm whether the following Redis data types and commands required by JuiceFS are fully supported. (need to support setting multiple fields and values) (optional)" } ]
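As a rough illustration of the snapshot rotation suggested in the backup section above (the `dump.rdb` location, backup directories, and retention windows are all assumptions for this sketch, not JuiceFS or Redis defaults):

```shell
#!/bin/bash
# Hourly cron job: copy the current RDB snapshot and prune old copies.
ts=$(date +%Y%m%d-%H%M)
cp /var/lib/redis/dump.rdb /backup/hourly/dump-$ts.rdb
cp /var/lib/redis/dump.rdb /backup/daily/dump-$(date +%Y%m%d).rdb
# Keep roughly 48 hours of hourly snapshots and 60 days of daily snapshots.
find /backup/hourly -name 'dump-*.rdb' -mmin +2880 -delete
find /backup/daily -name 'dump-*.rdb' -mtime +60 -delete
```

Remember to also copy at least one snapshot per day to a machine outside the data center, as recommended above.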
{ "category": "Runtime", "file_name": "redis_best_practices.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "| feature | description | Alpha release | Beta release | GA release | |--|--||--|| | SpiderIppool | ip settings | v0.2.0 | v0.4.0 | v0.6.0 | | | namespace affinity | v0.4.0 | v0.6.0 | | | | application affinity | v0.4.0 | v0.6.0 | | | | multiple default ippool | v0.6.0 | | | | | multusname affinity | v0.6.0 | | | | | nodename affinity | v0.6.0 | v0.6.0 | | | | default cluster ippool | v0.2.0 | v0.4.0 | v0.6.0 | | | default namespace ippool | v0.4.0 | v0.5.0 | | | | default CNI ippool | v0.4.0 | v0.4.0 | | | | annotation ippool | v0.2.0 | v0.5.0 | | | | annotation route | v0.2.0 | v0.5.0 | | | | ippools for multi-interfaces without specified interface name in annotation | v0.9.0 | | | | SpiderSubnet | automatically create ippool | v0.4.0 | | | | | automatically scaling and deletion ip according to application | v0.4.0 | | | | | automatically delete ippool | v0.5.0 | | | | | annotation for multiple interface | v0.4.0 | | | | | keep ippool after deleting application | v0.5.0 | | | | | support deployment, statefulset, job, replicaset | v0.4.0 | | | | | support operator controller | v0.4.0 | | | | | flexible ip number | v0.5.0 | | | | | ippool inherit route and gateway attribute from its subnet | v0.6.0 | | | | reservedIP | reservedIP | v0.4.0 | v0.6.0 | | | Fixed IP | fixed ip for each pod of statefulset | v0.5.0 | | | | | fixed ip ranges for statefulset, deployment, replicaset | v0.4.0 | v0.6.0 | | | | fixed ip for kubevirt | v0.8.0 | | | | | support calico | v0.5.0 | v0.6.0 | | | | support weave | v0.5.0 | v0.6.0 | | | Spidermultusconfig | support macvlan ipvlan sriov custom | v0.6.0 | v0.7.0 | | | | support ovs-cni | v0.7.0 | | | | CNI version | cni v1.0.0 | v0.4.0 | v0.5.0 | | | ifacer | bond interface | v0.6.0 | v0.8.0 | | | | vlan interface | v0.6.0 | v0.8.0 | | | SpiderCoordinator | Sync podCIDR for calico | v0.6.0 | v0.8.0 | | | | Sync podCIDR for cilium | v0.6.0 | v0.8.0 | | | | sync clusterIP CIDR from serviceCIDR to support k8s 1.29 | | v0.10.0 | | | Coordinator | support underlay mode | v0.6.0 | v0.7.0 | | | | support overlay mode | v0.6.0 |" }, { "data": "| | | | CRD spidercoordinators for multus configuration | v0.6.0 | v0.8.0 | | | | detect ip conflict and gateway | v0.6.0 | v0.6.0 | | | | specify the MAC of pod | v0.6.0 | v0.8.0 | | | | tune the default route of pod multiple interfaces | v0.6.0 | v0.8.0 | | | Connectivity | visit service based on kube-proxy | v0.6.0 | v0.7.0 | | | | visit local node to guarantee the pod health check | v0.6.0 | v0.7.0 | | | | visit nodePort with spec.externalTrafficPolicy=local or spec.externalTrafficPolicy=cluster | v0.6.0 | | | | Observability | eBPF: pod stats | In plan | | | | Network Policy | ipvlan | v0.8.0 | | | | | macvlan | In plan | | | | | sriov | In plan | | | | Bandwidth | ipvlan | v0.8.0 | | | | | macvlan | In plan | | | | | sriov | In plan | | | | eBPF | implement service by cgroup eBPF | v0.8.0 | | | | | accelerate communication of pods on a same node | In plan | | | | Recycle IP | recycle IP taken by deleted pod | v0.4.0 | v0.6.0 | | | | recycle IP taken by deleting pod | v0.4.0 | v0.6.0 | | | | recycle IP when detected IP conflict | v0.10.0 | | | | Dual Stack | dual-stack | v0.2.0 | v0.4.0 | | | CLI | debug and operate. 
check which pod an IP is taken by, check IP usage , trigger GC | In plan | | | | Multi-cluster | a broker cluster could synchronize ippool resource within a same subnet from all member clusters, which could help avoid IP conflict | In plan | | | | | support submariner | v0.8.0 | | | | Dual CNI | underlay cooperate with cilium | v0.7.0 | | | | | underlay cooperate with calico | v0.7.0 | | | | RDMA | support macvlan and ipvlan CNI for RoCE device | v0.8.0 | | | | | support sriov CNI for RoCE device | v0.8.0 | | | | | support ipoib CNI for infiniband device | v0.9.0 | | | | | support ib-sriov CNI for infiniband device | v0.9.0 | | | | EgressGateway | egressGateway | v0.8.0 | | | | Dynamic-Resource-Allocation | implement dra framework | v1.0.0 | | | | | support for SpiderClaimParameter's rdmaAcc feature | v1.0.0 | | | | | support for schedule pod by SpiderMultusConfig or SpiderIPPool | Todo | | | | | unify the way device-plugin declares resources | Todo | | |" } ]
{ "category": "Runtime", "file_name": "roadmap.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This document covers how to contribute to the kube-router project. Kube-router uses github PRs to manage contributions (could be anything from documentation, bug fixes, manifests etc.). Please read and for the functionality and internals of kube-router. If you have a question about Kube-router or have a problem using it, please start with contacting us on for quick help. If that doesn't answer your questions, or if you think you found a bug, please . Navigate to: and fork the repository. Follow these steps to setup a local repository for working on Kube-router: ``` bash $ git clone https://github.com/YOUR_ACCOUNT/kube-router.git $ cd kube-router $ git remote add upstream https://github.com/cloudnativelabs/kube-router $ git checkout master $ git fetch upstream $ git rebase upstream/master ``` Create a new branch to make changes on and that branch. ``` bash $ git checkout -b feature_x (make your changes) $ git status $ git add . $ git commit -a -m \"descriptive commit message for your changes\" ``` get update from upstream ``` bash $ git checkout master $ git fetch upstream $ git rebase upstream/master $ git checkout feature_x $ git rebase master ``` Now your `feature_x` branch is up-to-date with all the code in `upstream/master`, so push to your fork ``` bash $ git push origin master $ git push origin feature_x ``` Now that the `feature_x` branch has been pushed to your GitHub repository, you can initiate the pull request." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Output the dependencies graph in graphviz dot format ``` cilium-operator hive dot-graph [flags] ``` ``` -h, --help help for dot-graph ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. 
--enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in" }, { "data": "(default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Inspect the hive" } ]
{ "category": "Runtime", "file_name": "cilium-operator_hive_dot-graph.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "orphan: true nosearch: true ```{important} This page shows how to output configuration option documentation. The content in this page is for demonstration purposes only. ``` Some instance options: ```{config:option} agent.nic_config instance :shortdesc: Set the name and MTU to be the same as the instance devices :default: \"`false`\" :type: bool :liveupdate: \"`no`\" :condition: Virtual machine Controls whether to set the name and MTU of the default network interfaces to be the same as the instance devices (this happens automatically for containers) ``` ```{config:option} migration.incremental.memory.iterations instance :shortdesc: Maximum number of transfer operations :condition: container :default: 10 :type: integer :liveupdate: \"yes\" Maximum number of transfer operations to go through before stopping the instance ``` ```{config:option} cluster.evacuate instance :shortdesc: What to do when evacuating the instance :default: \"`auto`\" :type: string :liveupdate: \"no\" Controls what to do when evacuating the instance (`auto`, `migrate`, `live-migrate`, or `stop`) ``` These need the `instance` scope to be specified as second argument. The default scope is `server`, so this argument isn't required. Some server options: ```{config:option} backups.compression_algorithm server :shortdesc: Compression algorithm for images :type: string :scope: global :default: \"`gzip`\" Compression algorithm to use for new images (`bzip2`, `gzip`, `lzma`, `xz` or `none`) ``` ```{config:option} instances.nic.host_name :shortdesc: How to generate a host name :type: string :scope: global :default: \"`random`\" If set to `random`, use the random host interface name as the host name; if set to `mac`, generate a host name in the form `inc<mac_address>` (MAC without leading two digits) ``` ```{config:option} instances.placement.scriptlet :shortdesc: Custom automatic instance placement logic :type: string :scope: global Stores the {ref}`clustering-instance-placement-scriptlet` for custom automatic instance placement logic ``` Any other scope is also possible. This scope shows that you can use formatting, mainly in the short description and the description, and the available options. ```{config:option} test1 something :shortdesc: testing Testing. ``` ```{config:option} test2 something :shortdesc: Hello! bold and `code` This is the real text. With two paragraphs. And a list: Item Item Item And a table: Key | Type | Scope | Default | Description :-- | : | :- | : | :- `acme.agree_tos` | bool | global | `false` | Agree to ACME terms of service `acme.ca_url` | string | global | `https://acme-v02.api.letsencrypt.org/directory` | URL to the directory resource of the ACME service `acme.domain` | string | global | - | Domain for which the certificate is issued `acme.email` | string | global | - | Email address used for the account registration ``` ```{config:option} test3 something :shortdesc: testing :default: \"`false`\" :type: Type :liveupdate: Python parses the options, so \"no\" is converted to \"False\" - to prevent this, put quotes around the text (\"no\" or \"`no`\") :condition: \"yes\" :readonly: \"`maybe` - also add quotes if the option starts with code\" :resource: Resource, :managed: Managed :required: Required :scope: (this is something like \"global\" or \"local\", not the scope of the option (`server`, `instance`, ...) Content ``` To reference an option, use `{config:option}`. It is not possible to override the link text. Except for server options (default), you must specify the scope. 
{config:option}`instance:migration.incremental.memory.iterations` {config:option}`something:test1` The index is here: {ref}`config-options`" } ]
{ "category": "Runtime", "file_name": "config_options_cheat_sheet.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Update individual pcap recorder ``` cilium-dbg recorder update [flags] ``` ``` --caplen uint Capture Length (0 is full capture) --filters strings List of filters ('<srcCIDR> <srcPort> <dstCIDR> <dstPort> <proto>') -h, --help help for update --id uint Identifier ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Introspect or mangle pcap recorder" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_recorder_update.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This is for fixing bugs! For pull requesting new features, improvements and changes use https://github.com/openebs/openebs/compare/?template=features.md --> <!-- Don't forget to follow code style, and update documentation and tests if needed --> <!-- If you can't answer some sections, please delete them --> <!-- Describe your changes in detail --> <!-- Why is this change required? What problem does it solve? --> <!-- If it fixes an open issue, please link to the issue here --> <!-- Please describe in detail how you tested your changes --> <!-- Include details of your testing environment, and the tests you ran to see how your change affects other areas of the code, etc.--> <!-- Will your changes brake backward compatibility or not? --> <!-- Add screenshots of your changes -->" } ]
{ "category": "Runtime", "file_name": "bugs.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Documentation We are using with the . Thanks to the MkDocs Material theme we have certain \"markdown syntax extensions\" available: And .. For a whole list of features . To locally preview the documentation, you can run the following command (in the root of the repository): ```console make docs-preview ``` When previewing, now you can navigate your browser to to open the preview of the documentation. !!! hint Should you encounter a `command not found` error while trying to preview the docs for the first time on a machine, you probably need to install the dependencies for MkDocs and extensions used: `pip3 install -r build/release/requirements_docs.txt`. Make sure that your Python binary path is included in your `PATH`. is a tool that generates the documentation for a helm chart automatically. If there are changes in the helm chart, the developer needs to run `make docs` (to run helm-docs) and check in the resulting autogenerated files. To make it easy to check locally for uncommitted changes generated by `make docs`, an additional `make` target exists: simply running `make check-docs` will run the docs auto-generation and will complain if this produces uncommitted changes to doc files. It is therefore a good habit to always run `make check-docs` locally before creating or updating a PR." } ]
{ "category": "Runtime", "file_name": "documentation.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "This file lists the dependencies used in this repository. | Dependency | License | |-|-| | Go | BSD 3-Clause \"New\" or \"Revised\" License | | golang.org/x/crypto v0.3.0 | BSD 3-Clause \"New\" or \"Revised\" License | | golang.org/x/net v0.2.0 | BSD 3-Clause \"New\" or \"Revised\" License | | golang.org/x/sys v0.2.0 | BSD 3-Clause \"New\" or \"Revised\" License | | golang.org/x/term v0.2.0 | BSD 3-Clause \"New\" or \"Revised\" License | | golang.org/x/text v0.4.0 | BSD 3-Clause \"New\" or \"Revised\" License |" } ]
{ "category": "Runtime", "file_name": "dependencies.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "oep-number: draft Migrate 20191115 title: Migration of SPC to CSPC authors: \"@shubham14bajpai\" owners: \"@amitkumardas\" \"@vishnuitta\" \"@kmova\" editor: \"@shubham14bajpai\" creation-date: 2019-11-15 last-updated: 2020-06-18 status: implementable see-also: NA replaces: current SPC with CSPC superseded-by: NA * * This design is aimed at providing a design for migrating SPC to CSPC via kubernetes job which will take SPC-name as input. This proposed design will be rolled out in phases. At a high level the design is implemented in the following phases: Phase 1: Ability to perform migration from SPC to CSPC using a Kubernetes Job. Phase 2: Allow for migration of old CStor volumes to CSI CStor volumes. Ease the process of migrating SPC to CSPC by automating it via kubernetes job. Enable day 2 operations on CStor Volumes by migrating them to CSI volumes. This design proposes the following key changes while migrating from a SPC to CSPC: The SPC label and finalizer on BDCs will be replaced by CSPC label and finalizer. This is done to avoid webhook validation failure while creating equivalent CSPC. Equivalent CSPC CR will be created for the migrating SPC. Equivalent CSPI CRs will be created by operator which will replace the CSP for given SPC. The CSPI will be created by disabling the reconciliation to avoid double import. The SPC owner reference on the BDCs will be replaced by equivalent CSPC owner reference. Sequentially for each CSP Old CSP deployment will be scaled down and sync will be enabled on equivalent CSPI. Old pool will be imported by renaming it from `cstor-cspuid` to `cstor-cspcuid`. The CVR with CSP labels and annotations will be replaced by CSPI labels and annotations. Finalizers from CSP will be removed to for proper cleanup. Clean up old SPC and CSP after successful creation of CSPC and CSPI objects. For migrating non-csi volumes to csi volumes following changes are proposed: New StorageClass with CSPC will be created and this will replace the old strageclass in the PVC of CStorVolume. Set the PV to `Retain` reclaim policy to prevent deletion of OpenEBS resources. A temporary CStorVolumePolicy with target deploy configurations and CSPI names from old CVRs will be created. This is facilitate the creation `cstor/v1` CVRs on the same pool as the old `openebs.io/v1alpha1` CVRs were. It also help preserve any configuration set on the target deployment of old volume. Recreate PVC with volumeName already populated & csi driver info Recreate PV with claimRef of PVC and csi spec Delete old target deployment. Create new CVC for volume with basic provision info of volume and annotation to temporary policy Update ownerreferences of target service with CVC. Wait for new `cstor/v1` CV to be Healthy. Patch the pod affinity if present on policy to the new target deployment. Remove the policy annotation from the CVC and clean up temporary policy and old CV and CVRs. If snapshots are present for the given PVC migrate them from old schema to new schema. Check whether the snapshotClass `csi-cstor-snapshotclass` is installed. Create equivalent `volumesnapshotcontent` for old `volumesnapshotdata`. Create equivalent csi `volumesnapshot` for old `volumesnapshot`. Check whether the `volumesnapshotcontent` and the csi `volumesnapshot` are bound. Delete the old `volumesnapshot` which should automatically remove corresponding" }, { "data": "The migration of SPC will be performed via a job which takes SPC name as one of its argument. 
The CSPC CR created via job will have `reconcile.openebs.io/disable-dependants` annotation set to `true`. This will help in disabling reconciliation of on all CSPIs created for the CSPC. The reconciliation is set off on CSPIs to avoid import while the old CSP pods are still running. Once all CSPI are successfully created the annotation will be removed. Sequentially one CSPI is taken and the corresponding CSP is found using `kubernetes/hostname` label. The CSP deployment is scaled down to avoid multiple pods trying to import the same pool. Next the all the BDC for given CSPI are updated with CSPC information. The CSPI will be patched with the annotation `cstorpoolinstance.openebs.io/oldname`. When CSPI reconciliation is enabled then the pool manager will import the pool The import command will be modified to import with or without the oldname. For example for renaming the command would look like: ```sh /usr/local/bin/zpool import -o cachefile=/var/openebs/cstor-pool/pool.cache -d /var/openebs/sparse cstor-08bfced5-6a28-4a63-a76b-c48c69be6ad5 cstor-6b3bbf9e-451d-4333-b119-1bb5217e5bc2 ``` For importing without renaming the command would look like: ```sh /usr/local/bin/zpool import -o cachefile=/var/openebs/cstor-pool/pool.cache -d /var/openebs/sparse cstor-6b3bbf9e-451d-4333-b119-1bb5217e5bc2 ``` The `cstorpoolinstance.openebs.io/oldname` annotation will be used to rename the pool which was named as `cstor-cspuid` to `cstor-cspcuid`. This annotation will be removed after successful import of the pool. Note: Before migrating make sure: OpenEBS version of old resources(control-plane and data-plane) should be at least 1.11.0. Apply new cstor-operators and csi driver which also should be at least in 1.11.0 version. Identify the nodes on which the `CSP` are present and run `fdisk -l` command on that node. If the command is hung then bad block file exists on that node. If such case occurs, please resolve this issue before proceeding with the migration. To identify the nodes look for the `kubernetes.io/hostname` label on the `CSP`. The Job spec for migrating SPC is: ```yaml apiVersion: batch/v1 kind: Job metadata: name: migrate-spc-cstor-sparse-pool namespace: openebs spec: backoffLimit: 4 template: spec: serviceAccountName: openebs-maya-operator containers: name: migrate args: \"cstor-spc\" \"--spc-name=cstor-sparse-pool\" \"--v=4\" env: name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace tty: true image: quay.io/openebs/migrate:ci restartPolicy: OnFailure ``` After successful completion of pool migration the logs will list out all applications having volumes on the given pool. Note: Before proceeding to the below steps the pool must be migrated successfully and the application using the volume to be migrated must be scale down. The migration of volumes will be performed via a job which takes the migrated CSPC name as one of its argument. The Job spec for migrating SPC is: ```yaml apiVersion: batch/v1 kind: Job metadata: name: migrate-cstor-volume-pvc-b265427e-6a62-470a-a841-5a36be371e14 namespace: openebs spec: backoffLimit: 4 template: spec: serviceAccountName: openebs-maya-operator containers: name: migrate args: \"cstor-volume\" \"--pv-name=pvc-b265427e-6a62-470a-a841-5a36be371e14\" \"--v=4\" env: name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace tty: true image: quay.io/openebs/migrate:ci restartPolicy: OnFailure ``` After the successful completion of the job the applications can be scaled up to verify the migration of volumes. 
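Before scaling the application back up, you can confirm that the volume migration Job ran to completion — a minimal check, reusing the Job name from the sample spec above:

```bash
kubectl -n openebs get jobs
kubectl -n openebs logs job/migrate-cstor-volume-pvc-b265427e-6a62-470a-a841-5a36be371e14 -f
```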
Once the application is up new `csivolume` CR will be generated for the volume. Kubernetes version should be 1.14 or above. OpenEBS version should be 1.11 or above. CSPC operator 1.11 or above should be installed. CStor CSI operator 1.11 or above should be installed." } ]
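A quick way to sanity-check these prerequisites before running the migration Jobs — the pod-name patterns in the `grep` are assumptions about a typical install and may need adjusting:

```bash
kubectl version --short
kubectl -n openebs get pods | grep -Ei 'cspc-operator|cstor-csi'
```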
{ "category": "Runtime", "file_name": "spc-to-cspc-migration.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 9 sidebar_label: \"Fast Failover\" When the stateful application (i.e. Pod with HwameiStor volume) runs into a problem, especially caused by the node issue, it's important to reschedule the Pod to another healthy node and keep running. However, due to the design of the Kubernetes' StatefulSet and Deployment, it will wait a long time (e.g. 5 mins) before rescheduling the Pod. Especially, it will never reschedule the Pod automatically for the StatefulSet Pod. This will cause the application stop, and even cause a huge business loss. HwameiStor provides a feature of fast failover to solve this problem. When identifying the application issue, it will reschedule the Pod immediately without waiting for a very long time. HwameiStor will fail the Pod over to another healthy node, and ensure the required data volumes are also located at the node. So, the application can continue to work. HwameiStor provides the fast failover considering the two cases: Node Failure When a node fails, all the Pods on this node can't work any moreAs to the Pod using HwameiStor volume it's necessary to reschedule to another healthy node with the associated data volume replica. You can trigger the fast failover for this node by: Add a label to this node: ```bash kubectl label node <nodeName> hwameistor.io/failover=start ``` When the fast failover completes, the label will be modified as: ```console hwameistor.io/failover=completed ``` Pod Failure When a Pod fails, you can trigger the fast failover for it by adding a lable to this Pod: ```bash kubectl label pod <podName> hwameistor.io/failover=start ``` When the fast failover completes, the old Pod will be deleted and then the new one will be created on a new node." } ]
{ "category": "Runtime", "file_name": "fast_failover.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Releasing Velero plugins layout: docs toc: \"true\" Velero plugins maintained by the core maintainers do not have any shipped binaries, only container images, so there is no need to invoke a GoReleaser script. Container images are built via a CI job on git push. Plugins the Velero core team is responsible include all those listed in except the vSphere plugin. Update the README.md file to update the compatibility matrix and `velero install` instructions with the expected version number and open a PR. Determining the version number is based on semantic versioning and whether the plugin uses any newly introduced, changed, or removed methods or variables from Velero. Roll all unreleased changelogs into a new `CHANGELOG-v<version>.md` file and delete the content of the `unreleased` folder. Edit the new changelog file as needed. Once the PR is merged, checkout the upstream `main` branch. Your local upstream might be named `upstream` or `origin`, so use this command: `git checkout <upstream-name>/main`. Tag the git version - `git tag v<version>`. Push the git tag - `git push --tags <upstream-name>` to trigger the image build. Wait for the container images to build. You may check the progress of the GH action that triggers the image build at `https://github.com/vmware-tanzu/<plugin-name>/actions` Verify that an image with the new tag is available at `https://hub.docker.com/repository/docker/velero/<plugin-name>/`. Run the Velero using the new image. Until it is made configurable, you will have to edit the in the test. If all e2e tests pass, go to the GitHub release page of the plugin (`https://github.com/vmware-tanzu/<plugin-name>/releases`) and manually create a release for the new tag. Copy and paste the content of the new changelog file into the release description field." } ]
{ "category": "Runtime", "file_name": "plugin-release-instructions.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "In this article, we will show you two serverless functions in Rust and WasmEdge deployed on Vercel. One is the image processing function, the other one is the TensorFlow inference function. For more insights on why WasmEdge on Vercel, please refer to the article . Since our demo WebAssembly functions are written in Rust, you will need a . Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode. ```bash rustup target add wasm32-wasi ``` The demo application front end is written in , and deployed on Vercel. We will assume that you already have the basic knowledge of how to work with Vercel. Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A deployed on Vercel is available. Fork the to get started. To deploy the application on Vercel, just from web page. This repo is a standard Next.js application for the Vercel platform. The backend serverless function is in the folder. The file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the black-white image to the `STDOUT`. ```rust use hex; use std::io::{self, Read}; use image::{ImageOutputFormat, ImageFormat}; fn main() { let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); let imageformatdetected: ImageFormat = image::guess_format(&buf).unwrap(); let img = image::loadfrommemory(&buf).unwrap(); let filtered = img.grayscale(); let mut buf = vec![]; match imageformatdetected { ImageFormat::Gif => { filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap(); }, _ => { filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap(); }, }; io::stdout().write_all(&buf).unwrap(); io::stdout().flush().unwrap(); } ``` You can use Rusts `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-grayscale/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/grayscale.wasm ../../ ``` Vercel runs upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution. The file conforms Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice runs the compiled `grayscale.so` file generated by for better performance. ```javascript const fs = require('fs'); const { spawn } = require('child_process'); const path = require('path'); module.exports = (req, res) => { const wasmedge = spawn( path.join(dirname, 'wasmedge'), [path.join(dirname, 'grayscale.so')]); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { let buf = Buffer.concat(d); res.setHeader('Content-Type', req.headers['image-type']); res.send(buf); }); wasmedge.stdin.write(req.body); wasmedge.stdin.end(''); } ``` That's it. and you now have a Vercel Jamstack app with a high-performance Rust and WebAssembly based serverless" }, { "data": "The application allows users to upload an image and then invoke a serverless function to classify the main subject on the image. It is in as the previous example but in the `tensorflow` branch. Note: when you on the Vercel website, it will create a for each branch. The `tensorflow` branch would have its own deployment URL. 
The backend serverless function for image classification is in the folder in the `tensorflow` branch. The file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the text output to the `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference. ```rust pub fn main() { // Step 1: Load the TFLite model let modeldata: &[u8] = includebytes!(\"models/mobilenetv11.0224/mobilenetv11.0224_quant.tflite\"); let labels = includestr!(\"models/mobilenetv11.0224/labelsmobilenetquantv1224.txt\"); // Step 2: Read image from STDIN let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); // Step 3: Resize the input image for the tensorflow model let flatimg = wasmedgetensorflowinterface::loadjpgimageto_rgb8(&buf, 224, 224); // Step 4: AI inference let mut session = wasmedgetensorflowinterface::Session::new(&modeldata, wasmedgetensorflow_interface::ModelType::TensorFlowLite); session.addinput(\"input\", &flatimg, &[1, 224, 224, 3]) .run(); let resvec: Vec<u8> = session.getoutput(\"MobilenetV1/Predictions/Reshape_1\"); // Step 5: Find the food label that responds to the highest probability in res_vec // ... ... let mut label_lines = labels.lines(); for i in 0..maxindex { label_lines.next(); } // Step 6: Generate the output text let classname = labellines.next().unwrap().to_string(); if max_value > 50 { println!(\"It {} a <a href='https://www.google.com/search?q={}'>{}</a> in the picture\", confidence.tostring(), classname, class_name); } else { println!(\"It does not appears to be any food item in the picture.\"); } } ``` You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-classification/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/classify.wasm ../../ ``` Again, the script installs WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment. The file conforms Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice runs the compiled `classify.so` file generated by for better performance. ```javascript const fs = require('fs'); const { spawn } = require('child_process'); const path = require('path'); module.exports = (req, res) => { const wasmedge = spawn( path.join(dirname, 'wasmedge-tensorflow-lite'), [path.join(dirname, 'classify.so')], {env: {'LDLIBRARYPATH': dirname}} ); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { res.setHeader('Content-Type', `text/plain`); res.send(d.join('')); }); wasmedge.stdin.write(req.body); wasmedge.stdin.end(''); } ``` You can now and have a web app for subject classification. Next, it's your turn to use as a template to develop your own Rust serverless functions in Vercel. Looking forward to your great work." } ]
{ "category": "Runtime", "file_name": "vercel.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "Starting November 2019, there will be two separate active versions of Manta: \"mantav1\" - a long term support branch of Manta that maintains current Manta features. \"mantav2\" - a new major version of Manta that adds (Buckets API, storage \"rebalancer\" service) and removes (jobs, snaplinks, etc.) some major features, and becomes the focus of future Manta development. At this time, mantav1 is the recommended version for production usage, but that is expected to change to mantav2 during 2020. \"Mantav2\" is a new major version of Manta. Its purpose is to focus on: improved API latency; exploring alternative storage backends that improve efficiency; improved operation and stability at larger scales. It is a backward incompatible change, because it drops some API features. Significant changes are: The following features of the current API (now called the \"Directory API\") are being removed. jobs (a.k.a. compute jobs) snaplinks metering data under `/<account>/reports/...` Otherwise the Directory API remains a part of Manta. A new \"Buckets API\" (S3-like) is added. This is the API for which latency improvements are being made. A \"rebalancer\" system is added for storage tier maintenance. The garbage collection (GC) system is improved for larger scale. Improved per-account usage data for operators. The \"master\" branch of Manta-related git repos is for mantav2. Mantav1 development has moved to \"mantav1\" branches. A user can tell from the \"Server\" header in Manta API responses. A mantav1 API responds with `Server: Manta`: $ curl -is $MANTA_URL/ | grep -i server server: Manta and a mantav2 API responds with `Server: Manta/2`: $ curl -is $MANTA_URL/ | grep -i server server: Manta/2 An operator can tell from the `MANTAV` metadatum on the \"manta\" SAPI application. If `MANTAV` is `1` or empty, this is a mantav1: [root@headnode (mydc1) ~]# sdc-sapi /applications?name=manta | json -H 0.metadata.MANTAV 1 If `MANTAV` is `2`, this is a mantav2: [root@headnode (mydc2) ~]# sdc-sapi /applications?name=manta | json -H 0.metadata.MANTAV 2 The Node.js and Java clients for mantav2 are still under development. They are currently available in a feature branch of their respective git repositories. The Node.js Manta client is developed in the repository. mantav1: Currently on the of joyent/node-manta, and published to npm as -- i.e. `npm install manta`. mantav2: Currently on the of joyent/node-manta. It is not yet published to npm. *(The intent is to eventually move mantav2 to the \"master\" branch and publish it to npm as \"mantav2\". Mantav1 dev would move to the \"mantav1\" branch and continue to publish to npm as \"manta\".)* The Java Manta client is developed in the repository. mantav1: Currently on the of joyent/java-manta. Current release versions are 3.x. mantav2: Currently on the of joyent/java-manta. *(The intent is to eventually move mantav2 to the \"master\" branch and release it as a new \"4.x\" major version. Mantav1 dev would move to the \"3.x\" branch and continue to release as 3.x versions.)* Operation of a Mantav1 per the [mantav1 Operator Guide](https://github.com/TritonDataCenter/manta/blob/mantav1/docs/operator-guide.md) remains unchanged, other than that operators should look for images named `mantav1-$servicename` rather than `manta-$servicename`. For example: ``` $ updates-imgadm list name=~mantav1- --latest UUID NAME VERSION FLAGS OS PUBLISHED ... 
26515c9e-94c4-4204-99dd-d068c0c2ed3e mantav1-postgres mantav1-20200226T135432Z-gcff3bea I smartos 2020-02-26T14:08:42Z 5c8c8735-4c2c-489b-83ff-4e8bee124f63 mantav1-storage mantav1-20200304T221656Z-g1ba6beb I smartos 2020-03-04T22:21:22Z ``` There are \"mantav1\" branches of all the relevant repositories, from which \"mantav1-$servicename\" images are created for Mantav1 setup and operation. Joyent offers paid support for on premise mantav1. While mantav1 work is done to support particular customer issues, and PRs are accepted for mantav1 branches, the focus of current work is on mantav2. This work is still in development. At this time a Mantav2 deployment must start from scratch." } ]
{ "category": "Runtime", "file_name": "mantav2.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "title: RBD Mirroring Disaster recovery (DR) is an organization's ability to react to and recover from an incident that negatively affects business operations. This plan comprises strategies for minimizing the consequences of a disaster, so an organization can continue to operate or quickly resume the key operations. Thus, disaster recovery is one of the aspects of . One of the solutions, to achieve the same, is . is an asynchronous replication of RBD images between multiple Ceph clusters. This capability is available in two modes: Journal-based: Every write to the RBD image is first recorded to the associated journal before modifying the actual image. The remote cluster will read from this associated journal and replay the updates to its local image. Snapshot-based: This mode uses periodically scheduled or manually created RBD image mirror-snapshots to replicate crash-consistent RBD images between clusters. !!! note This document sheds light on rbd mirroring and how to set it up using rook. See also the topic on In this section, we create specific RBD pools that are RBD mirroring enabled for use with the DR use case. Execute the following steps on each peer cluster to create mirror enabled pools: Create a RBD pool that is enabled for mirroring by adding the section `spec.mirroring` in the CephBlockPool CR: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: mirrored-pool namespace: rook-ceph spec: replicated: size: 1 mirroring: enabled: true mode: image ``` ```console kubectl create -f pool-mirrored.yaml ``` Repeat the steps on the peer cluster. !!! note Pool name across the cluster peers must be the same for RBD replication to function. See the for more details. !!! note It is also feasible to edit existing pools and enable them for replication. In order for the rbd-mirror daemon to discover its peer cluster, the peer must be registered and a user account must be created. The following steps enable bootstrapping peers to discover and authenticate to each other: For Bootstrapping a peer cluster its bootstrap secret is required. To determine the name of the secret that contains the bootstrap secret execute the following command on the remote cluster (cluster-2) ```console [cluster-2]$ kubectl get cephblockpool.ceph.rook.io/mirrored-pool -n rook-ceph -ojsonpath='{.status.info.rbdMirrorBootstrapPeerSecretName}' ``` Here, `pool-peer-token-mirrored-pool` is the desired bootstrap secret name. The secret pool-peer-token-mirrored-pool contains all the information related to the token and needs to be injected to the peer, to fetch the decoded secret: ```console [cluster-2]$ kubectl get secret -n rook-ceph pool-peer-token-mirrored-pool -o jsonpath='{.data.token}'|base64 -d eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= ``` With this Decoded value, create a secret on the primary site (cluster-1): ```console [cluster-1]$ kubectl -n rook-ceph create secret generic rbd-primary-site-secret --from-literal=token=eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= --from-literal=pool=mirrored-pool ``` This completes the bootstrap process for cluster-1 to be peered with cluster-2. 
Repeat the process switching cluster-2 in place of cluster-1, to complete the bootstrap process across both peer clusters. For more details, refer to the official rbd mirror documentation on . Replication is handled by the rbd-mirror" }, { "data": "The rbd-mirror daemon is responsible for pulling image updates from the remote, peer cluster, and applying them to image within the local cluster. Creation of the rbd-mirror daemon(s) is done through the custom resource definitions (CRDs), as follows: Create mirror.yaml, to deploy the rbd-mirror daemon ```yaml apiVersion: ceph.rook.io/v1 kind: CephRBDMirror metadata: name: my-rbd-mirror namespace: rook-ceph spec: count: 1 ``` Create the RBD mirror daemon ```console [cluster-1]$ kubectl create -f mirror.yaml -n rook-ceph ``` Validate if `rbd-mirror` daemon pod is now up ```console [cluster-1]$ kubectl get pods -n rook-ceph rook-ceph-rbd-mirror-a-6985b47c8c-dpv4k 1/1 Running 0 10s ``` Verify that daemon health is OK ```console kubectl get cephblockpools.ceph.rook.io mirrored-pool -n rook-ceph -o jsonpath='{.status.mirroringStatus.summary}' {\"daemonhealth\":\"OK\",\"health\":\"OK\",\"imagehealth\":\"OK\",\"states\":{\"replaying\":1}} ``` Repeat the above steps on the peer cluster. See the for more details on the mirroring settings. Each pool can have its own peer. To add the peer information, patch the already created mirroring enabled pool to update the CephBlockPool CRD. ```console [cluster-1]$ kubectl -n rook-ceph patch cephblockpool mirrored-pool --type merge -p '{\"spec\":{\"mirroring\":{\"peers\": {\"secretNames\": [\"rbd-primary-site-secret\"]}}}}' ``` Volume Replication Operator follows controller pattern and provides extended APIs for storage disaster recovery. The extended APIs are provided via Custom Resource Definition(CRD). Create the VolumeReplication CRDs on all the peer clusters. ```console kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.5.0/config/crd/bases/replication.storage.openshift.io_volumereplicationclasses.yaml kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.5.0/config/crd/bases/replication.storage.openshift.io_volumereplications.yaml ``` To achieve RBD Mirroring, `csi-omap-generator` and `csi-addons` containers need to be deployed in the RBD provisioner pods, which are not enabled by default. Omap Generator*: Omap generator is a sidecar container that when deployed with the CSI provisioner pod, generates the internal CSI omaps between the PV and the RBD image. This is required as static PVs are transferred across peer clusters in the DR use case, and hence is needed to preserve PVC to storage mappings. Volume Replication Operator*: Volume Replication Operator is a kubernetes operator that provides common and reusable APIs for storage disaster recovery. The volume replication operation is supported by the It is based on specification and can be used by any storage provider. Execute the following steps on each peer cluster to enable the OMap generator and CSIADDONS sidecars: Edit the `rook-ceph-operator-config` configmap and add the following configurations ```console kubectl edit cm rook-ceph-operator-config -n rook-ceph ``` Add the following properties if not present: ```yaml data: CSIENABLEOMAP_GENERATOR: \"true\" CSIENABLECSIADDONS: \"true\" ``` After updating the configmap with those settings, two new sidecars should now start automatically in the CSI provisioner pod. Repeat the steps on the peer cluster. 
VolumeReplication CRDs provide support for two custom resources: VolumeReplicationClass: VolumeReplicationClass* is a cluster scoped resource that contains driver related configuration parameters. It holds the storage admin information required for the volume replication operator. VolumeReplication: VolumeReplication* is a namespaced resource that contains references to storage object to be replicated and VolumeReplicationClass corresponding to the driver providing replication. Below guide assumes that we have a PVC (rbd-pvc) in BOUND state; created using StorageClass with `Retain`" }, { "data": "```console [cluster-1]$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rbd-pvc Bound pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec 1Gi RWO csi-rbd-sc 44s ``` In this case, we create a Volume Replication Class on cluster-1 ```console [cluster-1]$ kubectl apply -f deploy/examples/volume-replication-class.yaml ``` !!! note The `schedulingInterval` can be specified in formats of minutes, hours or days using suffix `m`, `h` and `d` respectively. The optional schedulingStartTime can be specified using the ISO 8601 time format. Once VolumeReplicationClass is created, create a Volume Replication for the PVC which we intend to replicate to secondary cluster. ```console [cluster-1]$ kubectl apply -f deploy/examples/volume-replication.yaml ``` !!! note :memo: `VolumeReplication` is a namespace scoped object. Thus, it should be created in the same namespace as of PVC. `replicationState` is the state of the volume being referenced. Possible values are primary, secondary, and resync. `primary` denotes that the volume is primary. `secondary` denotes that the volume is secondary. `resync` denotes that the volume needs to be resynced. To check VolumeReplication CR status: ```console [cluster-1]$kubectl get volumereplication pvc-volumereplication -oyaml ``` ```yaml ... spec: dataSource: apiGroup: \"\" kind: PersistentVolumeClaim name: rbd-pvc replicationState: primary volumeReplicationClass: rbd-volumereplicationclass status: conditions: lastTransitionTime: \"2021-05-04T07:39:00Z\" message: \"\" observedGeneration: 1 reason: Promoted status: \"True\" type: Completed lastTransitionTime: \"2021-05-04T07:39:00Z\" message: \"\" observedGeneration: 1 reason: Healthy status: \"False\" type: Degraded lastTransitionTime: \"2021-05-04T07:39:00Z\" message: \"\" observedGeneration: 1 reason: NotResyncing status: \"False\" type: Resyncing lastCompletionTime: \"2021-05-04T07:39:00Z\" lastStartTime: \"2021-05-04T07:38:59Z\" message: volume is marked primary observedGeneration: 1 state: Primary ``` !!! note To effectively resume operations after a failover/relocation, backup of the kubernetes artifacts like deployment, PVC, PV, etc need to be created beforehand by the admin; so that the application can be restored on the peer cluster. Here, we take a backup of PVC and PV object on one site, so that they can be restored later to the peer cluster. Take backup of the PVC `rbd-pvc` ```console [cluster-1]$ kubectl get pvc rbd-pvc -oyaml > pvc-backup.yaml ``` Take a backup of the PV, corresponding to the PVC ```console [cluster-1]$ kubectl get pv/pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec -oyaml > pv_backup.yaml ``` !!! note We can also take backup using external tools like Velero. See for more information. 
Create storageclass on the secondary cluster ```console [cluster-2]$ kubectl create -f deploy/examples/csi/rbd/storageclass.yaml ``` Create VolumeReplicationClass on the secondary cluster ```console [cluster-1]$ kubectl apply -f deploy/examples/volume-replication-class.yaml volumereplicationclass.replication.storage.openshift.io/rbd-volumereplicationclass created ``` If Persistent Volumes and Claims are created manually on the secondary cluster, remove the `claimRef` on the backed up PV objects in yaml files; so that the PV can get bound to the new claim on the secondary cluster. ```yaml ... spec: accessModes: ReadWriteOnce capacity: storage: 1Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: rbd-pvc namespace: default resourceVersion: \"64252\" uid: 65dc0aac-5e15-4474-90f4-7a3532c621ec csi: ... ``` Apply the Persistent Volume backup from the primary cluster ```console [cluster-2]$ kubectl create -f pv-backup.yaml ``` Apply the Persistent Volume claim from the restored backup ```console [cluster-2]$ kubectl create -f pvc-backup.yaml ``` ```console [cluster-2]$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rbd-pvc Bound pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec 1Gi RWO rook-ceph-block 44s ```" } ]
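Once the PVC is bound on the secondary cluster, the mirroring health check shown earlier can be repeated there to confirm that replication is still healthy, for example:

```console
[cluster-2]$ kubectl get cephblockpools.ceph.rook.io mirrored-pool -n rook-ceph -o jsonpath='{.status.mirroringStatus.summary}'
```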
{ "category": "Runtime", "file_name": "rbd-mirroring.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Managing Domain Entries menu_order: 40 search_type: Documentation The following topics are discussed: If you want to give the container a name in DNS other than its hostname, you can register it using the `dns-add` command. For example: ``` $ C=$(docker run -ti weaveworks/ubuntu) $ weave dns-add $C -h pingme2.weave.local ``` You can also use `dns-add` to add the container's configured hostname and domain simply by omitting `-h <fqdn>`, or specify additional IP addresses to be registered against the container's hostname e.g. `weave dns-add 10.2.1.27 $C`. The inverse operation can be carried out using the `dns-remove` command: ``` $ weave dns-remove $C ``` By omitting the container name it is possible to add/remove DNS records that associate names in the weaveDNS domain with IP addresses that do not belong to containers, e.g. non-weave addresses of external services: ``` $ weave dns-add 192.128.16.45 -h db.weave.local ``` Note that such records get removed when stopping the weave peer on which they were added. You can resolve entries from any host running weaveDNS with `weave dns-lookup`: host1$ weave dns-lookup pingme 10.40.0.1 If you would like to deploy a new version of a service, keep the old one running because it has active connections but make all new requests go to the new version, then you can simply start the new server container and then the entry for the old server container. Later, when all connections to the old server have terminated, stop the container as normal. By default, weaveDNS specifies a TTL of 30 seconds in responses to DNS requests. However, you can force a different TTL value by launching weave with the `--dns-ttl` argument: ``` $ weave launch --dns-ttl=10 ``` This will shorten the lifespan of answers sent to clients, so you will be effectively reducing the probability of them having stale information, but you will also be increasing the number of request this weaveDNS instance will receive. See Also *" } ]
{ "category": "Runtime", "file_name": "managing-entries-weavedns.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Access userspace cached content of BPF maps ``` -h, --help help for map ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Display cached list of events for a BPF map - Display cached content of given BPF map - List all open BPF maps" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_map.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: Usage description: Run a Virtual Kubelet either on or external to a Kubernetes cluster weight: 2 You can Virtual Kubelet either or to a Kubernetes cluster using the command-line tool. If you run Virtual Kubelet on a Kubernetes cluster, you can also deploy it using . For `virtual-kubelet` installation instructions, see the guide. To run Virtual Kubelet external to a Kubernetes cluster (not on the Kubernetes cluster you are connecting it to), run the binary with your chosen . Here's an example: ```bash virtual-kubelet --provider aws ``` Once Virtual Kubelet is deployed, run `kubectl get nodes` and you should see a `virtual-kubelet` node (unless you've named it something else using the flag). <!-- The CLI docs are generated using the shortcode in layouts/shortcodes/cli.html and the YAML config in data/cli.yaml --> {{< cli >}} It's possible to run the Virtual Kubelet as a Kubernetes Pod in a or Kubernetes cluster. At this time, automation of this deployment is supported only for the provider. In order to deploy the Virtual Kubelet, you need to install , a Kubernetes development tool. You also need to make sure that your current is either `minikube` or `docker-for-desktop` (depending on which Kubernetes platform you're using). First, clone the Virtual Kubelet repository: ```bash git clone https://github.com/virtual-kubelet/virtual-kubelet cd virtual-kubelet ``` Then: ```bash make skaffold ``` By default, this will run Skaffold in , which will make Skaffold watch and its dependencies for changes and re-deploy the Virtual Kubelet when changes happen. It will also make Skaffold stream logs from the Virtual Kubelet Pod. Alternative, you can run Skaffold outside of development modeif you aren't concerned about continuous deployment and log streamingby running: ```bash make skaffold MODE=run ``` This will build and deploy the Virtual Kubelet and return. {{< info >}} is a package manager that enables you to easily deploy complex systems on Kubernetes using configuration bundles called . {{< /info >}} You can use the Virtual Kubelet to deploy Virtual Kubelet on Kubernetes. First, add the Chart repository (the Chart is currently hosted on ): ```bash helm repo add virtual-kubelet \\ https://raw.githubusercontent.com/virtual-kubelet/virtual-kubelet/master/charts ``` {{< success >}} You can check to make sure that the repo is listed amongst your current repos using `helm repo list`. {{< /success >}} Now you can install Virtual Kubelet using `helm install`. Here's an example command: ```bash helm install virtual-kubelet/virtual-kubelet \\ --name virtual-kubelet-azure \\ --namespace virtual-kubelet \\ --set provider=azure ``` This would install the in the `virtual-kubelet` namespace. To verify that Virtual Kubelet has been installed, run this command, which will list the available nodes and watch for changes: ```bash kubectl get nodes \\ --namespace virtual-kubelet \\ --watch ```" } ]
{ "category": "Runtime", "file_name": "usage.md", "project_name": "Virtual Kubelet", "subcategory": "Container Runtime" }
[ { "data": "Starting from , the import path will be: \"github.com/golang-jwt/jwt/v4\" The `/v4` version will be backwards compatible with existing `v3.x.y` tags in this repo, as well as `github.com/dgrijalva/jwt-go`. For most users this should be a drop-in replacement, if you're having troubles migrating, please open an issue. You can replace all occurrences of `github.com/dgrijalva/jwt-go` or `github.com/golang-jwt/jwt` with `github.com/golang-jwt/jwt/v4`, either manually or by using tools such as `sed` or `gofmt`. And then you'd typically run: ``` go get github.com/golang-jwt/jwt/v4 go mod tidy ``` The original migration guide for older releases can be found at https://github.com/dgrijalva/jwt-go/blob/master/MIGRATION_GUIDE.md." } ]
{ "category": "Runtime", "file_name": "MIGRATION_GUIDE.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 12 sidebar_label: \"FAQs\" The hwameistor-scheduler is deployed as a pod in the hwameistor namespace. Once the applications (Deployment or StatefulSet) are created, the pod will be scheduled to the worker nodes on which HwameiStor is already configured. This question can be extended to: How does HwameiStor schedule applications with multi-replica workloads and how does it differ from traditional shared storage (NFS/block)? To efficiently schedule applications with multi-replica workloads, we highly recommend using StatefulSet. StatefulSet ensures that replicas are deployed on the same worker node as the original pod. It also creates a PV data volume for each replica. If you need to deploy replicas on different worker nodes, manual configuration with `pod affinity` is necessary. We suggest using a single pod for deployment because the block data volumes can not be shared. HwameiStor provides the volume eviction/migration functions to keep the Pods and HwameiStor volumes' data running when retiring/rebooting a node. Before you remove a node from a Kubernetes cluster, the Pods and volumes on the node should be rescheduled and migrated to another available node, and keep the Pods/volumes running. Follow these steps to remove a node: Drain node. ```bash kubectl drain NODE --ignore-daemonsets=true. --ignore-daemonsets=true ``` This command can evict and reschedule Pods on the node. It also automatically triggers HwameiStor's data volume eviction behavior. HwameiStor will automatically migrate all replicas of the data volumes from that node to other nodes, ensuring data availability. Check the migration progress. ```bash kubectl get localstoragenode NODE -o yaml ``` The output may look like: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: NODE spec: hostname: NODE storageIP: 10.6.113.22 topogoly: region: default zone: default status: ... pools: LocalStorage_PoolHDD: class: HDD disks: capacityBytes: 17175674880 devPath: /dev/sdb state: InUse type: HDD freeCapacityBytes: 16101933056 freeVolumeCount: 999 name: LocalStorage_PoolHDD totalCapacityBytes: 17175674880 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 1073741824 usedVolumeCount: 1 volumeCapacityBytesLimit: 17175674880 volumes: state: Ready ``` At the same time, HwameiStor will automatically reschedule the evicted Pods to the other node which has the associated volume replica, and continue to run. Remove the NODE from the cluster. ```bash kubectl delete nodes NODE ``` It usually takes a long time (~10minutes) to reboot a node. All the Pods and volumes on the node will not work until the node is back online. For some applications like DataBase, the long downtime is very costly and even unacceptable. HwameiStor can immediately reschedule the Pod to another available node with associated volume data and bring the Pod back to running in very short time (~ 10 seconds for the Pod using a HA volume, and longer time for the Pod with non-HA volume depends on the data size). If users wish to keep data volumes on a specific node, accessible even after the node restarts, they can add the following labels to the node. This prevents the system from migrating the data volumes from that node. However, the system will still immediately schedule Pods on other nodes that have replicas of the data volumes. Add a label (optional) If it is not required to migrate the volumes during the node reboots, you can add the following label to the node before draining it. 
```bash kubectl label node NODE hwameistor.io/eviction=disable ``` Drain the node. ```bash kubectl drain NODE" }, { "data": "--ignore-daemonsets=true ``` If Step 1 has been performed, you can reboot the node after Step 2 is successful. If Step 1 has not been performed, you should check if the data migration is complete after Step 2 is successful (similar to Step 2 in ). After the data migration is complete, you can reboot the node. After the first two steps are successful, you can reboot the node and wait for the node system to return to normal. Bring the node back to normal. ```bash kubectl uncordon NODE ``` StatefulSet, which is used for stateful applications, prioritizes deploying replicated replicas to different nodes to distribute the workload. However, it creates a PV data volume for each Pod replica. Only when the number of replicas exceeds the number of worker nodes, multiple replicas will be deployed on the same node. On the other hand, Deployments, which are used for stateless applications, prioritize deploying replicated replicas to different nodes to distribute the workload. All Pods share a single PV data volume (currently only supports NFS). Similar to StatefulSets, multiple replicas will be deployed on the same node only when the number of replicas exceeds the number of worker nodes. For block storage, as data volumes cannot be shared, it is recommended to use a single replica. When encountering the following error while inspecting `LocalStorageNode`: Possible causes of the error: The node does not have LVM2 installed. You can install it using the following command: ```bash rpm -qa | grep lvm2 # Check if LVM2 is installed yum install lvm2 # Install LVM on each node ``` Ensure that the proper disk on the node has GPT partitioning. ```bash blkid /dev/sd* # Confirm if the disk partitions are clean wipefs -a /dev/sd* # Clean the disk ``` Probable reasons: The node has no remaining bare disks that can be automatically managed. You can check it by running the following command: ```bash kubectl get ld # Check disk kubectl get lsn <node-name> -o yaml # Check whether the disk is managed normally ``` The hwameistor related components are not working properly. You can check it by running the following command: > `drbd-adapter` is only needed when HA is enabled, if not, ignore the related error. ```bash kubectl get pod -n hwameistor # Confirm whether the pod is running kubectl get hmcluster -o yaml # View the health field ``` When is manually expanding storage needed: To use the disk partition() Same serial number is shared between different disks(,) Manual expansion steps: Create and expand storage pool ```bash $ vgcreate LocalStorage_PoolHDD /dev/sdb ``` > `LocalStorage_PoolHDD` is the StoragePool name for `HDD` type disk. Other optional names are `LocalStoragePoolSSD` for `SSD` type and `LocalStoragePoolNVMe` for `NVMe` type. If you want to expand the storage pool with disk partition, you can use the following command: ```bash $ vgcreate LocalStorage_PoolHDD /dev/sdb1 ``` If storage pool is already exist, you can use the following command: ```bash $ vgextend LocalStorage_PoolHDD /dev/sdb1 ``` Check the status of the node storage pool and confirm that the disk is added to the storage pool like this: ```bash $ kubectl get lsn node1 -oyaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode ... pools: LocalStorage_PoolHDD: class: HDD disks: capacityBytes: 17175674880 devPath: /dev/sdb ... ```" } ]
{ "category": "Runtime", "file_name": "faqs.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List BPF datapath bandwidth settings ``` cilium-dbg bpf bandwidth list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - BPF datapath bandwidth settings" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_bandwidth_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "The high-level container runtime, such as and , pulls container images from registries (e.g., Docker Hub), manages them on disk, and launches a lower-level runtime to run container processes. From this chapter, you can check out specific tutorials for CRI-O and containerd." } ]
{ "category": "Runtime", "file_name": "cri.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "is a distributed scheduler which supports using a variety of different backends to execute tasks. As of the v0.2.0 release, Nomad includes experimental support for using `rkt` as a task execution driver. For more details, check out the ." } ]
{ "category": "Runtime", "file_name": "using-rkt-with-nomad.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage local redirect policies ``` -h, --help help for lrp ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - List local redirect policies" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_lrp.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "OpenEBS Volumes (storage controller) functionality is delivered through containers (say OVCs). Each OpenEBS Volume can comprise of a single or multiple OVCs depending on the redundancy requirements etc. Each of these OVCs will persist data to the attached volume-mounts. The volume-mounts attached to OVCs can range from local directory, a single disk, mirrored (lvm) disk to cloud disks. The volume-mounts also can vary in terms of their performance characteristics like - SAS, SSD or Cache OVCs are designed to run (in hyperconverged) mode on any Container Orchestrators. Maya - will provide the functionality to abstract the different types of storage (also called henceforth as \"Raw Storage\") and convert them into a \"Storage Backend\" that can be associated to the OVCs as Volumes. In case of Kubernetes, the Maya will convert the Storage Backends into Persistent Volumes and associate them with OVC (Pod)." } ]
{ "category": "Runtime", "file_name": "proposal-openebs-volume-volume-mounts.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "This directory contains the config for CSI RBAC (necessary if you are using KinD). Required binaries for the test are downloaded automatically when you use the `justfile`. Right now we only run these things on Linux because the Registrar is still missing [MacOS support](https://github.com/kubernetes-csi/node-driver-registrar/pull/133) and the mock driver would need to handle Windows, which would reuse our ugly code. It may be worth making some sort of more fully featured and separate mock driver in the future Those would work here for our e2e tests since we are running a KinD node. However, we want these tests to reflect more real world usage and there is no guarantee that a Krustlet node will have a container runtime (nor do we want to require one). There is also the additional benefit of having this be some simple instructions of how you can build some of the key components to get CSI running for a Real World cluster. (https://github.com/jedisct1/as-wasi) dependency, which is a helpful set of wrappers around the low level WASI bindings provided in AssemblyScript. If you are interested in starting your own AssemblyScript project, visit the AssemblyScript . If you don't have Krustlet with the WASI provider running locally, see the instructions in the for running locally. Run: ```shell $ npm install && npm run asbuild ``` Detailed instructions for pushing a module can be found in the ." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Krustlet", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete individual pcap recorder ``` cilium-dbg recorder delete <recorder id> [flags] ``` ``` -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Introspect or mangle pcap recorder" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_recorder_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Fix a deadlock issue in NetworkPolicy Controller which causes a FQDN resolution failure. ( , [@Dyanngg] [@tnqn]) Fix NetworkPolicy span calculation to avoid out-dated data when multiple NetworkPolicies have the same selector. (, [@tnqn]) Fix SSL library downloading failure in Install-OVS.ps1 on Windows. (, [@XinShuYang]) Fix rollback invocation after CmdAdd failure in CNI server and improve logging. (, [@antoninbas]) Do not apply Egress to traffic destined for ServiceCIDRs to avoid performance issue and unexpected behaviors. (, [@tnqn]) Do not delete IPv6 link-local route in route reconciler to fix cross-Node Pod traffic or Pod-to-external traffic. (, [@wenyingd]) Fix discovered Service CIDR flapping on Agent start. (, [@tnqn]) Change the default flow's action to `drop` in ARPSpoofGuardTable to effectively prevent ARP spoofing. (, [@hongliangl]) Stop using `/bin/sh` and invoke the binary directly for OVS commands in Antrea Agent. (, [@antoninbas]) Increase the rate limit setting of `PacketInMeter` and the size of `PacketInQueue`. (, [@GraysonWu]) Upgrade Open vSwitch to 2.17.7. (, [@antoninbas]) Fix IPv4 groups containing IPv6 endpoints mistakenly in dual-stack clusters in AntreaProxy implementation. (, [@tnqn]) Fix ClusterClaim webhook bug to avoid ClusterClaim deletion failure. (, [@luolanzone]) Ensure the Egress IP is always correctly advertised to the network, including when the userspace ARP responder is not running or when the Egress IP is temporarily claimed by multiple Nodes. (, [@tnqn]) Fix status report when no-op changes are applied to Antrea-native policies. (, [@tnqn]) Bump up libOpenflow version to fix a PacketIn response parse error. (, [@wenyingd]) Remove NetworkPolicyStats dependency of MulticastGroup API to fix the empty list issue when users run `kubectl get multicastgroups` even when the Multicast is enabled. (, [@ceclinux]) Fix an Antrea Controller crash issue in handling empty Pod labels for LabelIdentity when the config `enableStretchedNetworkPolicy` is enabled for Antrea Multi-cluster. ( , [@Dyanngg]) Do not attempt to join Windows agents to the memberlist cluster to avoid misleading error logs. (, [@tnqn]) Fix the burst setting of the `PacketInQueue` to reduce the DNS response delay when a Pod has any FQDN policy applied. (, [@tnqn]) Update Open vSwitch to 2.17.6. (, [@tnqn]) Bump up whereabouts to v0.6.1. (, [@hjiajing]) In Antrea Agent Service CIDR discovery, prevent headless Services from updating the discovered Service CIDR to avoid overwriting the default route of host network unexpectedly. (, [@hongliangl]) Use LOCAL instead of CONTROLLER as the in_port of packet-out messages to fix a Windows agent crash issue. (, [@tnqn]) Fix a bug that a deleted NetworkPolicy is still enforced when a new NetworkPolicy with the same name exists. (, [@tnqn]) Improve Windows cleanup scripts to avoid unexpected failures. (, [@wenyingd]) Fix a race condition between stale controller and ResourceImport reconcilers in Antrea Multi-cluster controller. (, [@Dyanngg]) Make FQDN NetworkPolicy work for upper case FQDNs. (, [@GraysonWu]) Run agent modules that rely on Services access after AntreaProxy is ready to fix a Windows agent crash issue. (, [@tnqn]) Fix the Antrea Agent crash issue which is caused by a concurrency bug in Multicast feature with encap mode. 
(, [@ceclinux]) Document the limit of maximum receiver group number on a Linux Node for" }, { "data": "(, [@ceclinux]) Fix Service not being updated correctly when stickyMaxAgeSeconds or InternalTrafficPolicy is updated. (, [@tnqn]) Fix EndpointSlice API availablility check to resolve the issue that AntreaProxy always falls back to the Endpoints API when EndpointSlice is enabled (, [@tnqn]) Fix the Antrea Agent crash issue when large amount of multicast receivers with different multicast IPs on one Node start together.(, [@ceclinux]) The EndpointSlice feature is graduated from Alpha to Beta and is therefore enabled by default. Add the following capabilities to Antrea-native policies: ClusterSet scoped policy rules now support with the `namespaces` field. (, [@Dyanngg]) Layer 7 policy rules now support traffic logging. (, [@qiyueyao]) The implementation of FQDN policy rules has been extended to process DNS packets over TCP. ( , [@GraysonWu] [@tnqn]) Add the following capabilities to the AntreaProxy feature: Graduate EndpointSlice from Alpha to Beta; antrea-agent now listens to EndpointSlice events by default. (, [@hongliangl]) Support ProxyTerminatingEndpoints in AntreaProxy. (, [@hongliangl]) Support rejecting requests to Services without available Endpoints. (, [@hongliangl]) Add the following capabilities to Egress policies: Support limiting the number of Egress IPs that can be assigned to a Node via new configuration option `egress.maxEgressIPsPerNode` or Node annotation \"node.antrea.io/max-egress-ips\". ( , [@tnqn]) Add `antctl get memberlist` CLI command to get memberlist state. (, [@Atish-iaf]) Support \"noEncap\", \"hybrid\", and \"networkPolicyOnly\" in-cluster traffic encapsulation modes with Multi-cluster Gateway. (, [@luolanzone]) Enhance CI to validate Antrea with Rancher clusters. (, [@jainpulkit22]) Ensure cni folders are created when starting antrea-agent with containerd on Windows. (, [@XinShuYang]) Decrease log verbosity value for antrea-agent specified in the Windows manifest for containerd from 4 to 0. (, [@XinShuYang]) Bump up cni and plugins libraries to v1.1.1. (, [@wenyingd]) Upgrade OVS version to 2.17.5. (, [@antoninbas]) Extend the message length limitation in the Conditions of Antrea-native policies to 256 characters. (, [@wenyingd]) Stop using ClusterFirstWithHostNet DNSPolicy for antrea-agent; revert it to the default value. (, [@antoninbas]) Perform Service load balancing within OVS for Multi-cluster Service traffic, when the local member Service of the Multi-cluster Service is selected as the destination. (, [@luolanzone]) Rename the `multicluster.enable` configuration parameter to `multicluster.enableGateway`. (, [@jianjuns]) Add the `multicluster.enablePodToPodConnectivity` configuration parameter for antrea-agent to enable Multi-cluster Pod-to-Pod connectivity. (, [@hjiajing]) No longer install Whereabouts CNI to host. (, [@jianjuns]) Add an explicit Secret for the `vm-agent` ServiceAccount to the manifest for non-Kubernetes Nodes. (, [@wenyingd]) Change the `toService.scope` field of Antrea ClusterNetworkPolicy to an enum. (, [@GraysonWu]) Fix route deletion for Service ClusterIP and LoadBalancerIP when AntreaProxy is enabled. (, [@tnqn]) Fix Service routes being deleted on Agent startup on Windows. (, [@hongliangl]) Avoid duplicate Node Results in Live Traceflow Status. (, [@antoninbas]) Fix OpenFlow Group being reused with wrong type because groupDb cache was not cleaned up. 
(, [@ceclinux]) Ensure NO_FLOOD is always set for IPsec tunnel ports and TrafficControl ports. ( , [@xliuxu]) Fix Agent crash in dual-stack clusters when any Node is not configured with an IP address for each address family. (, [@hongliangl]) Fix antctl not being able to talk with GCP kube-apiserver due to missing platforms specific imports. (, [@luolanzone])" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.11.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "name: Document Defect about: Report a Documentation defect title: '' labels: type-docs assignees: '' Page Include the link to the page you are reporting a defect Summary What is the defect and suggested improvement" } ]
{ "category": "Runtime", "file_name": "docs_defect.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Short summary of what's included in the PR Give special note to breaking changes: List the exact changes or provide links to documentation. [ ] Categorize the PR by setting a good title and adding one of the labels: `bug`, `enhancement`, `documentation`, `change`, `breaking`, `dependency` as they show up in the changelog [ ] PR contains the label `area:operator` [ ] Link this PR to related issues [ ] I have not made any changes in the `charts/` directory. [ ] Categorize the PR by setting a good title and adding one of the labels: `bug`, `enhancement`, `documentation`, `change`, `breaking`, `dependency` as they show up in the changelog [ ] PR contains the label `area:chart` [ ] PR contains the chart label, e.g. `chart:k8up` . [ ] Chart Version bumped if immediate release after merging is planned [ ] I have run `make chart-docs` [ ] Link this PR to related code release or other issues. <!-- NOTE: Do not mix code changes with chart changes, it will break the release process. Delete the checklist section that doesn't apply to the change. NOTE: These things are not required to open a PR and can be done afterwards, while the PR is open. -->" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "K8up", "subcategory": "Cloud Native Storage" }
[ { "data": "Install the lvm logical volume management package: ```shell ``` View available block devices on the host: ```shell ``` Use `isuladlvmconf.sh` to configure isulad-thinpool ```sh ``` The contents of `isuladlvmconf.sh` are as follows: ```shell current_dir=$(cd `dirname $0` && pwd) disk=\"/dev/$1\" rm -rf /var/lib/isulad/* dmsetup remove_all lvremove -f isulad/thinpool lvremove -f isulad/thinpoolmeta vgremove -f isulad pvremove -f $disk mount | grep $disk | grep /var/lib/isulad if [ x\"$?\" == x\"0\" ];then umount /var/lib/isulad fi echo y | mkfs.ext4 $disk touch /etc/lvm/profile/isulad-thinpool.profile cat > /etc/lvm/profile/isulad-thinpool.profile <<EOF activation { thinpoolautoextend_threshold=80 thinpoolautoextend_percent=20 } EOF pvcreate -y $disk vgcreate isulad $disk echo y | lvcreate --wipesignatures y -n thinpool isulad -l 80%VG echo y | lvcreate --wipesignatures y -n thinpoolmeta isulad -l 1%VG lvconvert -y --zero n -c 512K --thinpool isulad/thinpool --poolmetadata isulad/thinpoolmeta lvchange --metadataprofile isulad-thinpool isulad/thinpool lvs -o+seg_monitor exit 0 ``` Configure `isulad` Configure the `storage-driver` and `storage-opts` in `/etc/isulad/daemon.json`: ```txt \"storage-driver\": \"devicemapper\", \"storage-opts\": [ \"dm.thinpooldev=/dev/mapper/isulad-thinpool\", \"dm.fs=ext4\", \"dm.minfreespace=10%\" ], ``` Restart `isulad`. ```bash $ sudo systemctl restart isulad ```" } ]
{ "category": "Runtime", "file_name": "devicemapper_environmental_preparation.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "Firecracker makes certain modifications to the guest's registers regardless of whether a CPU template is used to comply with the boot protocol. If a CPU template is used the boot protocol settings are performed after the CPU template is applied. That means that if the CPU template configures CPUID bits used in the boot protocol settings, they will be overwritten. See also: On x86_64, the following MSRs are set to `0`: MSRIA32SYSENTER_CS MSRIA32SYSENTER_ESP MSRIA32SYSENTER_EIP MSR_STAR MSR_CSTAR MSRKERNELGS_BASE MSRSYSCALLMASK MSR_LSTAR MSRIA32TSC and MSRIA32MISC_ENABLE is set to `1`. On aarch64, the following registers are set: PSTATE to PSRMODEEL1h | PSRABIT | PSRFBIT | PSRIBIT | PSRDBIT PC to kernel load address (vCPU0 only) X0 to DTB/FDT address (vCPU0 only)" } ]
{ "category": "Runtime", "file_name": "boot-protocol.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Kedro Vineyard Plugin ===================== The Kedro vineyard plugin contains components (e.g., `DataSet` and `Runner`) to share intermediate data among nodes in Kedro pipelines using vineyard. Kedro on Vineyard -- Vineyard works as the DataSet provider for kedro workers to allow transferring large-scale data objects between tasks that cannot be efficiently serialized and is not suitable for `pickle`, without involving external storage systems like AWS S3 (or Minio as an alternative). The Kedro vineyard plugin handles object migration as well when the required inputs are not located where the task is scheduled to execute. Requirements The following packages are needed to run Kedro on vineyard, kedro >= 0.18 vineyard >= 0.14.5 Configuration Install required packages: pip3 install vineyard-kedro Configure Vineyard locally The vineyard server can be easier launched locally with the following command: python3 -m vineyard --socket=/tmp/vineyard.sock See also our documentation about . Configure the environment variable to tell Kedro vineyard plugin how to connect to the vineyardd server: export VINEYARDIPCSOCKET=/tmp/vineyard.sock Usage -- After installing the dependencies and preparing the vineyard server, you can execute the Kedro workflows as usual and benefits from vineyard for intermediate data sharing. We take the as an example, ```bash $ kedro new --starter=pandas-iris ``` The nodes in this pipeline look like ```python def split_data( data: pd.DataFrame, parameters: Dict[str, Any] ) -> Tuple[pd.DataFrame, pd.DataFrame, pd.Series, pd.Series]: data_train = data.sample( frac=parameters[\"trainfraction\"], randomstate=parameters[\"random_state\"] ) datatest = data.drop(datatrain.index) Xtrain = datatrain.drop(columns=parameters[\"target_column\"]) Xtest = datatest.drop(columns=parameters[\"target_column\"]) ytrain = datatrain[parameters[\"target_column\"]] ytest = datatest[parameters[\"target_column\"]] return Xtrain, Xtest, ytrain, ytest def make_predictions( Xtrain: pd.DataFrame, Xtest: pd.DataFrame, y_train: pd.Series ) -> pd.Series: Xtrainnumpy = Xtrain.tonumpy() Xtestnumpy = Xtest.tonumpy() squared_distances = np.sum( (Xtrainnumpy[:, None, :] - Xtestnumpy[None, :, :]) 2, axis=-1 ) nearestneighbour = squareddistances.argmin(axis=0) ypred = ytrain.iloc[nearest_neighbour] ypred.index = Xtest.index return y_pred ``` You can see that the intermediate data between `splitdata` and `makepredictions` is some pandas dataframes and series. Try running the pipeline without vineyard, ```bash $ cd iris $ kedro run [05/25/23 11:38:56] INFO Kedro project iris session.py:355 [05/25/23 11:38:57] INFO Loading data from 'exampleirisdata' (CSVDataSet)... data_catalog.py:343 INFO Loading data from 'parameters' (MemoryDataSet)... data_catalog.py:343 INFO Running node: split: splitdata([exampleirisdata,parameters]) -> [Xtrain,Xtest,ytrain,y_test] node.py:329 INFO Saving data to 'Xtrain' (MemoryDataSet)... datacatalog.py:382 INFO Saving data to 'Xtest' (MemoryDataSet)... datacatalog.py:382 INFO Saving data to 'ytrain' (MemoryDataSet)... datacatalog.py:382 INFO Saving data to 'ytest' (MemoryDataSet)... datacatalog.py:382 INFO Completed 1 out of 3 tasks sequential_runner.py:85 INFO Loading data from 'Xtrain' (MemoryDataSet)... datacatalog.py:343 INFO Loading data from 'Xtest' (MemoryDataSet)... datacatalog.py:343 INFO Loading data from 'ytrain' (MemoryDataSet)... datacatalog.py:343 INFO Running node: makepredictions: makepredictions([Xtrain,Xtest,ytrain]) -> [ypred] node.py:329 ... 
``` You can see that the intermediate data is shared with memory. When kedro is deploy to a cluster, e.g., to , the `MemoryDataSet` is not applicable anymore and you will need to setup the AWS S3 or Minio service and sharing those intermediate data as CSV files. ```yaml X_train: type: pandas.CSVDataSet filepath: s3://testing/data/02intermediate/Xtrain.csv credentials: minio X_test: type: pandas.CSVDataSet filepath: s3://testing/data/02intermediate/Xtest.csv credentials: minio y_train: type: pandas.CSVDataSet filepath: s3://testing/data/02intermediate/ytrain.csv credentials: minio ``` It might be inefficient for pickling pandas dataframes when data become" }, { "data": "With the kedro vineyard plugin, you can run the pipeline with vineyard as the intermediate data medium by ```bash $ kedro run --runner vineyard.contrib.kedro.runner.SequentialRunner [05/25/23 11:45:34] INFO Kedro project iris session.py:355 INFO Loading data from 'exampleirisdata' (CSVDataSet)... data_catalog.py:343 INFO Loading data from 'parameters' (MemoryDataSet)... data_catalog.py:343 INFO Running node: split: splitdata([exampleirisdata,parameters]) -> [Xtrain,Xtest,ytrain,y_test] node.py:329 INFO Saving data to 'Xtrain' (VineyardDataSet)... datacatalog.py:382 INFO Saving data to 'Xtest' (VineyardDataSet)... datacatalog.py:382 INFO Saving data to 'ytrain' (VineyardDataSet)... datacatalog.py:382 INFO Saving data to 'ytest' (VineyardDataSet)... datacatalog.py:382 INFO Loading data from 'Xtrain' (VineyardDataSet)... datacatalog.py:343 INFO Loading data from 'Xtest' (VineyardDataSet)... datacatalog.py:343 INFO Loading data from 'ytrain' (VineyardDataSet)... datacatalog.py:343 INFO Running node: makepredictions: makepredictions([Xtrain,Xtest,ytrain]) -> [ypred] node.py:329 ... ``` Without any modification to your pipeline code, you can see that the intermediate data is shared with vineyard using the `VineyardDataSet` and no longer suffers from the overhead of (de)serialization and the I/O cost between external AWS S3 or Minio services. Like `kedro catalog create`, the Kedro vineyard plugin provides a command-line interface to generate the catalog configuration for given pipeline, which will rewrite the unspecified intermediate data to `VineyardDataSet`, e.g., ```bash $ kedro vineyard catalog create -p default ``` You will get ```yaml X_test: dsname: Xtest type: vineyard.contrib.kedro.io.dataset.VineyardDataSet X_train: dsname: Xtrain type: vineyard.contrib.kedro.io.dataset.VineyardDataSet y_pred: dsname: ypred type: vineyard.contrib.kedro.io.dataset.VineyardDataSet y_test: dsname: ytest type: vineyard.contrib.kedro.io.dataset.VineyardDataSet y_train: dsname: ytrain type: vineyard.contrib.kedro.io.dataset.VineyardDataSet ``` Deploy to Kubernetes -- When the pipeline scales to Kubernetes, the interaction with the Kedro vineyard plugin is still simple and non-intrusive. The plugin provides tools to prepare the docker image and generate Argo workflow specification file for the Kedro pipeline. Next, we'll demonstrate how to deploy pipelines to Kubernetes while leverage Vineyard for efficient intermediate sharing between tasks step-by-step. 
Prepare the vineyard cluster (see also ): ```bash $ export KUBECONFIG=/path/to/your/kubeconfig $ go run k8s/cmd/main.go deploy vineyard-cluster --create-namespace ``` Install the argo server: ```bash $ kubectl create namespace argo $ kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.8/install.yaml ``` Generate the iris demo project from the official template: ```bash $ kedro new --starter=pandas-iris ``` Build the Docker image for this iris demo project: ```bash $ cd iris $ kedro vineyard docker build ``` A Docker image named `iris` will be built successfully. The docker image need to be pushed to your image registry, or loaded to the kind/minikube cluster, to be available in Kubernetes. ```bash $ docker images | grep iris iris latest 3c92da8241c6 About a minute ago 690MB ``` Next, generate the Argo workflow YAML file from the iris demo project: ```bash $ kedro vineyard argo generate -i iris $ ls -l argo-iris.yml -rw-rw-r-- 1 root root 3685 Jun 12 23:55 argo-iris.yml ``` Finally, submit the Argo workflow to Kubernetes: ```bash $ argo submit -n argo argo-iris.yml ``` You can interact with the Argo workflow using the `argo` command-line tool, e.g., ```bash $ argo list workflows -n argo NAME STATUS AGE DURATION PRIORITY MESSAGE iris-sg6qf Succeeded 18m 30s 0 ``` We have prepared a benchmark to evaluate the performance gain brought by vineyard for data sharing when data scales, for more details, please refer to ." } ]
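Once submitted, the workflow can be watched with the standard Argo CLI; a short sketch (the workflow name `iris-sg6qf` comes from the listing above and will differ on every run):

```bash
# Show the DAG and per-step status of the submitted workflow
argo get iris-sg6qf -n argo

# Stream logs from the workflow's pods while it runs
argo logs iris-sg6qf -n argo --follow
```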
{ "category": "Runtime", "file_name": "kedro.md", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Cloud Provider Specifics\" layout: docs NOTE: Documentation may change between releases. See the for links to previous versions of this repository and its docs. To ensure that you are working off a specific release, `git checkout <VERSIONTAG>` where `<VERSIONTAG>` is the appropriate tag for the Ark version you wish to use (e.g. \"v0.3.3\"). You should `git checkout main` only if you're planning on . While the uses a local storage service to quickly set up Heptio Ark as a demonstration, this document details additional configurations that are required when integrating with the cloud providers below: * * * * To integrate Heptio Ark with AWS, you should follow the instructions below to create an Ark-specific . If you do not have the AWS CLI locally installed, follow the to set it up. Create an IAM user: ```bash aws iam create-user --user-name heptio-ark ``` Attach a policy to give `heptio-ark` the necessary permissions: ```bash aws iam attach-user-policy \\ --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \\ --user-name heptio-ark aws iam attach-user-policy \\ --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess \\ --user-name heptio-ark ``` Create an access key for the user: ```bash aws iam create-access-key --user-name heptio-ark ``` The result should look like: ```json { \"AccessKey\": { \"UserName\": \"heptio-ark\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWSSECRETACCESS_KEY>, \"AccessKeyId\": <AWSACCESSKEY_ID> } } ``` Using the output from the previous command, create an Ark-specific credentials file (`credentials-ark`) in your local directory that looks like the following: ``` [default] awsaccesskeyid=<AWSACCESSKEYID> awssecretaccesskey=<AWSSECRETACCESSKEY> ``` In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding: ```bash kubectl apply -f examples/common/00-prereqs.yaml ``` Create a Secret, running this command in the local directory of the credentials file you just created: ```bash kubectl create secret generic cloud-credentials \\ --namespace heptio-ark \\ --from-file cloud=credentials-ark ``` Now that you have your IAM user credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following: In file `examples/aws/00-ark-config.yaml`: Replace `<YOURBUCKET>` and `<YOURREGION>`. See the for details. In file `examples/common/10-deployment.yaml`: Make sure that `spec.template.spec.containers[].env.name` is \"AWSSHAREDCREDENTIALS_FILE\". (Optional) If you are running the Nginx example, in file `examples/nginx-app/with-pv.yaml`: Replace `<YOURSTORAGECLASS_NAME>` with `gp2`. This is AWS's default `StorageClass` name. To integrate Heptio Ark with GCP, you should follow the instructions below to create an Ark-specific . If you do not have the gcloud CLI locally installed, follow the to set it up. View your current config settings: ```bash gcloud config list ``` Store the `project` value from the results in the environment variable `$PROJECT_ID`. Create a service account: ```bash gcloud iam service-accounts create heptio-ark \\ --display-name \"Heptio Ark service account\" ``` Then list all accounts and find the `heptio-ark` account you just created: ```bash gcloud iam service-accounts list ``` Set the `$SERVICEACCOUNTEMAIL` variable to match its `email` value. 
Attach policies to give `heptio-ark` the necessary permissions to function (replacing placeholders appropriately): ```bash gcloud projects add-iam-policy-binding $PROJECT_ID \\ --member serviceAccount:$SERVICEACCOUNTEMAIL \\ --role roles/compute.storageAdmin gcloud projects add-iam-policy-binding $PROJECT_ID \\ --member serviceAccount:$SERVICEACCOUNTEMAIL \\ --role" }, { "data": "``` Create a service account key, specifying an output file (`credentials-ark`) in your local directory: ```bash gcloud iam service-accounts keys create credentials-ark \\ --iam-account $SERVICEACCOUNTEMAIL ``` In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding: ```bash kubectl apply -f examples/common/00-prereqs.yaml ``` Create a Secret, running this command in the local directory of the credentials file you just created: ```bash kubectl create secret generic cloud-credentials \\ --namespace heptio-ark \\ --from-file cloud=credentials-ark ``` Now that you have your Google Cloud credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following: In file `examples/gcp/00-ark-config.yaml`: Replace `<YOURBUCKET>` and `<YOURPROJECT>`. See the for details. In file `examples/common/10-deployment.yaml`: Change `spec.template.spec.containers[].env.name` to \"GOOGLEAPPLICATIONCREDENTIALS\". (Optional) If you are running the Nginx example, in file `examples/nginx-app/with-pv.yaml`: Replace `<YOURSTORAGECLASS_NAME>` with `standard`. This is GCP's default `StorageClass` name. Ensure that the VMs for your agent pool allow Managed Disks. If I/O performance is critical, consider using Premium Managed Disks, as these are SSD backed. To integrate Heptio Ark with Azure, you should follow the instructions below to create an Ark-specific . If you do not have the `az` Azure CLI 2.0 locally installed, follow the to set it up. Once done, run: ```bash az login ``` There are seven environment variables that need to be set for Heptio Ark to work properly. The following steps detail how to acquire these, in the process of setting up the necessary RBAC. Obtain your Azure Account Subscription ID and Tenant ID: ```bash AZURESUBSCRIPTIONID=`az account list --query '[?isDefault].id' -o tsv` AZURETENANTID=`az account list --query '[?isDefault].tenantId' -o tsv` ``` Set the name of the Resource Group that contains your Kubernetes cluster. ```bash AZURERESOURCEGROUP=Kubernetes ``` If you are unsure of the Resource Group name, run the following command to get a list that you can select from. Then set the `AZURERESOURCEGROUP` environment variable to the appropriate value. ```bash az group list --query '[].{ ResourceGroup: name, Location:location }' ``` Get your cluster's Resource Group name from the `ResourceGroup` value in the response, and use it to set `$AZURERESOURCEGROUP`. (Also note the `Location` value in the response -- this is later used in the Azure-specific portion of the Ark Config). Create a service principal with `Contributor` role. This will have subscription-wide access, so protect this credential. You can specify a password or let the `az ad sp create-for-rbac` command create one for you. 
```bash AZURECLIENTSECRET=supersecretandhighentropypasswordreplacemewithyourown az ad sp create-for-rbac --name \"heptio-ark\" --role \"Contributor\" --password $AZURECLIENTSECRET AZURECLIENTSECRET=`az ad sp create-for-rbac --name \"heptio-ark\" --role \"Contributor\" --query 'password' -o tsv` AZURECLIENTID=`az ad sp list --display-name \"heptio-ark\" --query '[0].appId' -o tsv` ``` Create the storage account and blob container for Ark to store the backups in. The storage account can be created in the same Resource Group as your Kubernetes cluster or separated into its own Resource Group. The example below shows the storage account created in a separate `Ark_Backups` Resource Group. The storage account needs to be created with a globally unique id since this is used for dns. The random function ensures you don't have to come up with a unique name. The storage account is created with encryption at rest capabilities (Microsoft managed keys) and is configured to only allow access via" }, { "data": "```bash AZUREBACKUPRESOURCEGROUP=ArkBackups az group create -n $AZUREBACKUPRESOURCE_GROUP --location WestUS AZURESTORAGEACCOUNT_ID=\"ark`cat /proc/sys/kernel/random/uuid | cut -d '-' -f5`\" az storage account create \\ --name $AZURESTORAGEACCOUNT_ID \\ --resource-group $AZUREBACKUPRESOURCE_GROUP \\ --sku Standard_GRS \\ --encryption-services blob \\ --https-only true \\ --kind BlobStorage \\ --access-tier Hot az storage container create -n ark --public-access off --account-name $AZURESTORAGEACCOUNT_ID AZURESTORAGEKEY=`az storage account keys list \\ --account-name $AZURESTORAGEACCOUNT_ID \\ --resource-group $AZUREBACKUPRESOURCE_GROUP \\ --query [0].value \\ -o tsv` ``` In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding: ```bash kubectl apply -f examples/common/00-prereqs.yaml ``` Now you need to create a Secret that contains all the seven environment variables you just set. The command looks like the following: ```bash kubectl create secret generic cloud-credentials \\ --namespace heptio-ark \\ --from-literal AZURESUBSCRIPTIONID=${AZURESUBSCRIPTIONID} \\ --from-literal AZURETENANTID=${AZURETENANTID} \\ --from-literal AZURERESOURCEGROUP=${AZURERESOURCEGROUP} \\ --from-literal AZURECLIENTID=${AZURECLIENTID} \\ --from-literal AZURECLIENTSECRET=${AZURECLIENTSECRET} \\ --from-literal AZURESTORAGEACCOUNTID=${AZURESTORAGEACCOUNTID} \\ --from-literal AZURESTORAGEKEY=${AZURESTORAGEKEY} ``` Now that you have your Azure credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following: In file `examples/azure/10-ark-config.yaml`: Replace `<YOURBUCKET>`, `<YOURLOCATION>`, and `<YOUR_TIMEOUT>`. See the for details. Here is an example of a completed file. ```yaml apiVersion: ark.heptio.com/v1 kind: Config metadata: namespace: heptio-ark name: default persistentVolumeProvider: name: azure config: location: \"West US\" apiTimeout: 15m backupStorageProvider: name: azure bucket: ark backupSyncPeriod: 30m gcSyncPeriod: 30m scheduleSyncPeriod: 1m restoreOnlyMode: false ``` You can get a complete list of Azure locations with the following command: ```bash az account list-locations --query \"sort([].displayName)\" -o tsv ``` Make sure that you have run `kubectl apply -f examples/common/00-prereqs.yaml` first (this command is incorporated in the previous setup instructions because it creates the necessary namespaces). 
AWS and GCP* Start the Ark server itself, using the Config from the appropriate cloud-provider-specific directory: ```bash kubectl apply -f examples/common/10-deployment.yaml kubectl apply -f examples/<CLOUD-PROVIDER>/ ``` Azure* Because Azure loads its credentials differently (from environment variables rather than a file), you need to instead run: ```bash kubectl apply -f examples/azure/ ``` Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Now create a backup: ```bash ark backup create nginx-backup --selector app=nginx ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example ``` Now restore your lost resources: ```bash ark restore create nginx-backup ``` NOTE: For Azure, your Kubernetes cluster needs to be version 1.7.2+ in order to support PV snapshotting of its managed disks. Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/with-pv.yaml ``` Because Kubernetes does not automatically transfer labels from PVCs to dynamically generated PVs, you need to do so manually: ```bash nginxpvname=$(kubectl get pv -o jsonpath='{.items[?(@.spec.claimRef.name==\"nginx-logs\")].metadata.name}') kubectl label pv $nginxpvname app=nginx ``` Now create a backup with PV snapshotting: ```bash ark backup create nginx-backup --selector app=nginx ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example kubectl delete pv $nginxpvname ``` Because the default for dynamically-provisioned PVs is \"Delete\", the above commands should trigger your cloud provider to delete the disk backing the PV. The deletion process is asynchronous so this may take some time. Before continuing to the next step, check your cloud provider (via dashboard or CLI) to confirm that the disk no longer exists. Now restore your lost resources: ```bash ark restore create nginx-backup ```" } ]
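To confirm each phase while walking through the scenarios above, the Ark CLI can report backup and restore state; a sketch (subcommand availability may vary slightly between Ark releases):

```bash
# Verify the backup reached the Completed phase
ark backup get

# After triggering the restore, watch for it to complete as well
ark restore get
```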
{ "category": "Runtime", "file_name": "cloud-provider-specifics.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Filesystems internally refer to files and directories via inodes. Inodes are unique identifiers of the entities stored in a filesystem. Whenever an application has to operate on a file/directory (read/modify), the filesystem maps that file/directory to the right inode and start referring to that inode whenever an operation has to be performed on the file/directory. In GlusterFS a new inode gets created whenever a new file/directory is created OR when a successful lookup is done on a file/directory for the first time. Inodes in GlusterFS are maintained by the inode table which gets initiated when the filesystem daemon is started (both for the brick process as well as the mount process). Below are some important data structures for inode management. ``` struct inodetable { pthreadmutext lock; size_t hashsize; / bucket size of inode hash and dentry hash / char name; / name of the inode table, just for gf_log() */ inode_t root; / root directory inode, with inode number and gfid 1 */ xlator_t xl; / xlator to be called to do purge and the xlator which maintains the inode table*/ uint32t lrulimit; / maximum LRU cache size / struct listhead *inodehash; / buckets for inode hash table / struct listhead *namehash; / buckets for dentry hash table / struct list_head active; / list of inodes currently active (in an fop) / uint32t activesize; / count of inodes in active list / struct list_head lru; /* list of inodes recently used. lru.next most recent */ uint32t lrusize; / count of inodes in lru list / struct list_head purge; / list of inodes to be purged soon / uint32t purgesize; / count of inodes in purge list / struct mempool *fdmempool; /* memory pool for fdt */ int ctxcount; / number of slots in inode->ctx / }; ``` ``` inodetablenew (sizet lrulimit, xlator_t *xl) ``` This is a function which allocates a new inode table. Usually the top xlators in the graph such as protocol/server (for bricks), fuse and nfs (for fuse and nfs mounts) and libgfapi do inode managements. Hence they are the ones which will allocate a new inode table by calling the above function. Each xlator graph in glusterfs maintains an inode table. So in fuse clients, whenever there is a graph change due to add brick/remove brick or addition/removal of some other xlators, a new graph is created which creates a new inode table. Thus an allocated inode table is destroyed only when the filesystem daemon is killed or unmounted. Inode table in glusterfs mainly contains a hash table for maintaining inodes. In general a file/directory is considered to be existing if there is a corresponding inode present in the inode" }, { "data": "If a inode for a file/directory cannot be found in the inode table, glusterfs tries to resolve it by sending a lookup on the entry for which the inode is needed. If lookup is successful, then a new inode correponding to the entry is added to the hash table present in the inode table. Thus an inode present in the hash-table means, its an existing file/directory within the filesystem. The inode table also contains the hash size of the hash table (as of now it is hard coded to 14057. The hash value of a inode is calculated using its gfid). Apart from the hash table, inode table also maintains 3 important list of inodes Active list: Active list contains all the active inodes (i.e inodes which are currently part of some fop). Lru list: Least recently used inodes list. A limit can be set for the size of the lru list. For bricks it is 16384 and for clients it is infinity. 
Purge list: List of all the inodes which have to be purged (i.e inodes which have to be deleted from the inode table due to unlink/rmdir/forget). And at last it also contains the mem-pool for allocating inodes, dentries so that frequent malloc/calloc and free of the data structures can be avoided. ``` struct _inode { inodetablet table; / the table this inode belongs to */ uuid_t gfid; / unique identifier of the inode / gflockt lock; uint64_t nlookup; uint32t fdcount; / Open fd count / uint32_t ref; / reference count on this inode / iatypet ia_type; / what kind of file / struct listhead fdlist; / list of open files on this inode / struct listhead dentrylist; / list of directory entries for this inode / struct list_head hash; / hash table pointers / struct list_head list; / active/lru/purge / struct inodectx _ctx; / place holder for keeping the information about the inode by different xlators */ }; ``` As said above, inodes are internal way of identifying the files/directories. A inode uniquely represents a file/directory. A new inode is created whenever a create/mkdir/symlink/mknod operations are performed. Apart from that a new inode is created upon the successful fresh lookup of a file/directory. Say the filesystem contained some file \"a\" within root and the filesystem was unmounted. Now when glusterfs is mounted and some operation is perfomed on \"/a\", glusterfs tries to get the inode for the entry \"a\" with parent inode as root. But, since glusterfs just came up, it will not be able to find the inode for \"a\" and will send a lookup on \"/a\". If the lookup operation succeeds (i.e. the root of glusterfs contains an entry called \"a\"), then a new inode for \"/a\" is created and added to the inode" }, { "data": "Depending upon the situation, an inode can be in one of the 3 lists maintained by the inode table. If some fop is happening on the inode, then the inode will be present in the active inodes list maintained by the inode table. Active inodes are those inodes whose refcount is greater than zero. Whenever some operation comes on a file/directory, and the resolver tries to find the inode for it, it increments the refcount of the inode before returning the inode. The refcount of an inode can be incremented by calling the below function ``` inoderef (inodet *inode) ``` Any xlator which wants to operate on a inode as part of some fop (or wants the inode in the callback), should hold a ref on the inode. Once the fop is completed before sending the reply of the fop to the above layers , the inode has to be unrefed. When the refcount of an inode becomes zero, it is removed from the active inodes list and put into LRU list maintained by the inode table. Thus in short if some fop is happening on a file/directory, the corresponding inode will be in the active list or it will be in the LRU list. A new inode is created whenever a new file/directory/symlink is created OR a successful lookup of an existing entry is done. The xlators which does inode management (as of now protocol/server, fuse, nfs, gfapi) will perform inode_link operation upon successful lookup or successful creation of a new entry. ``` inodelink (inodet inode, inode_t parent, const char *name, struct iatt *buf); ``` inode_link actually adds the inode to the inode table (to be precise it adds the inode to the hash table maintained by the inode table. The hash value is calculated based on the gfid). Copies the gfid to the inode (the gfid is present in the iatt structure). Creates a dentry with the new name. 
A inode is removed from the inode table and eventually destroyed when unlink or rmdir operation is performed on a file/directory, or the the lru limit of the inode table has been exceeded. ``` struct _dentry { struct listhead inodelist; / list of dentries of inode / struct list_head hash; / hash table pointers / inode_t inode; / inode of this directory entry */ char name; / name of the directory entry */ inode_t parent; / directory of the entry */ }; ``` A dentry is the presence of an entry for a file/directory within its parent directory. A dentry usually points to the inode to which it belongs to. In glusterfs a dentry contains the following fields. a hook using which it can add itself to the list of the dentries maintained by the inode to which it points" }, { "data": "A hash table pointer. Pointer to the inode to which it belongs to. Name of the dentry Pointer to the inode of the parent directory in which the dentry is present A new dentry is created when a new file/directory/symlink is created or a hard link to an existing file is created. ``` dentrycreate (inodet inode, inode_t parent, const char *name); ``` A dentry holds a refcount on the parent directory so that the parent inode is never removed from the active inode's list and put to the lru list (If the lru limit of the lru list is exceeded, there is a chance of parent inode being destroyed. To avoid it, the dentries hold a reference to the parent inode). A dentry is removed whenevern a unlink/rmdir is perfomed on a file/directory. Or when the lru limit has been exceeded, the oldest inodes are purged out of the inode table, during which all the dentries of the inode are removed. Whenever a unlink/rmdir comes on a file/directory, the corresponding inode should be removed from the inode table. So upon unlink/rmdir, the inode will be moved to the purge list maintained by the inode table and from there it is destroyed. To be more specific, if a inode has to be destroyed, its refcount and nlookup count both should become 0. For refcount to become 0, the inode should not be part of any fop (there should not be any open fds). Or if the inode belongs to a directory, then there should not be any fop happening on the directory and it should not contain any dentries within it. For nlookup count to become zero, a forget has to be sent on the inode with nlookup count set to 0 as an argument. For fuse clients, forget is sent by the kernel itself whenever a unlink/rmdir is performed. But for brick processes, upon unlink/rmdir, the protocol/server itself has to do inode_forget. Whenever the inode has to be deleted due to file removal or lru limit being exceeded the inode is retired (i.e. all the dentries of the inode are deleted and the inode is moved to the purge list maintained by the inode table), the nlookup count is set to 0 via inode_forget api. The inode table, then prunes all the inodes from the purge list by destroying the inode contexts maintained by each xlator. ``` unlinking of the dentry is done via inode_unlink; void inodeunlink (inodet inode, inode_t parent, const char *name); ``` If the inode has multiple hard links, then the unlink operation performed by the application results just in the removal of the dentry with the name provided by the application. For the inode to be removed, all the dentries of the inode should be unlinked." } ]
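As a rough illustration of the ref/unref discipline described above, a translator that keeps an inode alive across an asynchronous fop might look like the sketch below (simplified pseudo-GlusterFS C; exact fop signatures and the frame/local plumbing differ across versions):

```c
/* Sketch only: assumes inode_ref()/inode_unref() as described above and a
 * translator-private "local" struct attached to the call frame. */
typedef struct {
        inode_t *inode;       /* inode pinned for the duration of the fop */
} my_local_t;

int32_t
my_open (call_frame_t *frame, xlator_t *this, loc_t *loc, int32_t flags, fd_t *fd)
{
        my_local_t *local = calloc (1, sizeof (*local));

        /* Take a reference so the inode stays on the active list (and is not
         * moved to the LRU/purge lists) while the fop is in flight. */
        local->inode = inode_ref (loc->inode);
        frame->local = local;

        STACK_WIND (frame, my_open_cbk, FIRST_CHILD (this),
                    FIRST_CHILD (this)->fops->open, loc, flags, fd);
        return 0;
}

int32_t
my_open_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
             int32_t op_ret, int32_t op_errno, fd_t *fd, dict_t *xdata)
{
        my_local_t *local = frame->local;

        /* Drop the reference before unwinding; once the refcount hits zero the
         * inode table moves the inode from the active list to the LRU list. */
        inode_unref (local->inode);
        free (local);
        frame->local = NULL;

        STACK_UNWIND_STRICT (open, frame, op_ret, op_errno, fd, xdata);
        return 0;
}
```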
{ "category": "Runtime", "file_name": "datastructure-inode.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "This page describes CLI options and ENV of spiderpool-controller. Run the spiderpool controller daemon. ``` --config-dir string config file path (default /tmp/spiderpool/config-map) ``` | env | default | description | |-||--| | SPIDERPOOLLOGLEVEL | info | Log level, optional values are \"debug\", \"info\", \"warn\", \"error\", \"fatal\", \"panic\". | | SPIDERPOOLENABLEDMETRIC | false | Enable/disable metrics. | | SPIDERPOOLENABLEDDEBUG_METRIC | false | Enable spiderpool agent to collect debug level metrics. | | SPIDERPOOLMETRICHTTP_PORT | false | The metrics port of spiderpool agent. | | SPIDERPOOLGOPSLISTEN_PORT | 5724 | The gops port of spiderpool Controller. | | SPIDERPOOLWEBHOOKPORT | 5722 | Webhook HTTP server port. | | SPIDERPOOLHEALTHPORT | 5720 | The http Port for spiderpoolController, for health checking and http service. | | SPIDERPOOLGCIP_ENABLED | true | Enable/disable IP GC. | | SPIDERPOOLGCSTATELESSTERMINATINGPODONREADYNODEENABLED | true | Enable/disable IP GC for stateless Terminating pod when the pod corresponding node is ready. | | SPIDERPOOLGCSTATELESSTERMINATINGPODONNOTREADYNODE_ENABLED | true | Enable/disable IP GC for stateless Terminating pod when the pod corresponding node is not ready. | | SPIDERPOOLGCADDITIONALGRACEDELAY | true | The gc delay seconds after the pod times out of deleting graceful period. | | SPIDERPOOLGCDEFAULTINTERVALDURATION | true | The gc all interval duration. | | SPIDERPOOLMULTUSCONFIG_ENABLED | true | Enable/disable SpiderMultusConfig. | | SPIDERPOOLCNICONFIG_DIR | /etc/cni/net.d | The host path of the cni config directory. | | SPIDERPOOLCILIUMCONFIGMAPNAMESPACENAME | kube-system/cilium-config. | The cilium's configMap, default is kube-system/cilium-config. | | SPIDERPOOLCOORDINATORDEFAULT_NAME | default | the name of default spidercoordinator CR | Notify of stopping spiderpool-controller daemon. Get local metrics. ``` --port string http server port of local metric (default to 5721) ``` Show status: Whether local is controller leader ... ``` --port string http server port of local metric (default to 5720) ```" } ]
{ "category": "Runtime", "file_name": "spiderpool-controller.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "% crio.conf.d(5) crio.conf.d - directory for drop-in configuration files for CRI-O Additionally to configuration in crio.conf(5), CRI-O allows to drop configuration snippets into the crio.conf.d directory. The default directory is /etc/crio/crio.conf.d/. The path can be changed via CRIO's --config-dir command line option. When it exists, the main configuration file (/etc/crio/crio.conf by default) is read before any file in the configuration directory (/etc/crio/crio.conf.d). Settings in that file have the lowest precedence. Files in the configuration directory are sorted by name in lexical order and applied in that order. If multiple configuration files specify the same configuration option the setting specified in file sorted last takes precedence over any other value. That is if both 00-default.conf and 10-custom.conf exist in crio.conf.d and both specify different values for a certain configuration option the value from 10-custom.conf will be applied. crio.conf(5), crio(8)" } ]
{ "category": "Runtime", "file_name": "crio.conf.d.5.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "Velero Restore Hooks - PRD (Product Requirements Document) MVP Feature Set Relates to: This is a live document, you can reach me on the following channels for more information or for any questions: Email: Kubernetes slack: Channel: #velero @Stephanie Bauman VMware internal slack: @bstephanie * Background Velero supports restore operations but there are gaps in the process. Gaps in the restore process require users to manually carry out steps to start, clean up, and end the restore process. Other gaps in the restore process can cause issues with application performance for applications running in a pod when a restore operation is carried out. On a restore, Velero currently does not include hooks to execute a pre- or -post restore script. As a result, users are required to perform additional actions following a velero restore operation. Some gaps that currently exist in the Velero restore process are: Users can create a restore operation but has no option to customize or automate commands during the start of the restore operation Users can perform post restore operations but have no option to customize or automate commands during the end of the restore operation Strategic Fit Adding a restore hook action today would allow Velero to unpack the data that was backed up in an automated way by enabling Velero to execute commands in containers during a restore. This will also improve the restore operations on a container and mitigate against any negative performance impacts of apps running in the container during restore. Purpose / Goal The purpose of this feature is to improve the extensibility and user experience of pre and post restore operations for Velero users. Goals for this feature include: Enhance application performance during and following restore events Provide pre-restore hooks for customizing start restore operations Provide actions for things like retry actions, event logging, etc... during restore operations Provide observability/status of restore commands run in restored pods Extend restore logs to include status, error codes and necessary metadata for restore commands run during restore operations for enhanced troubleshooting capabilities Provide post-restore hooks for customizing end of restore operations Include pre-populated data that has been serialized by a Velero backup hook into a container or external service prior to allowing a restore to be marked as completed. Non-goals Feature Description This feature will automate the restore operations/processes in Velero, and will provide restore hook actions in Velero that allows users to execute restore commands on a container. Restore hooks include pre-hook actions and post-hook actions. This feature will mirror the actions of a Velero backup by allowing Velero to check for any restore hooks specified for a pod. Assumptions Restore operations will be run at the pod level instead of at the volume level. Some databases require the pod to be running and in some cases a user cannot manipulate a volume without the pod running. We need to support more than one type of database for this feature and so we need to ensure that this works broadly as opposed to providing support only for specific dbs. Velero will be responsible for invoking the restore hook. MVP Use Cases The following use cases must be included as part of the Velero restore hooks MVP (minimum viable" }, { "data": "Note: Processing of concurrent vs sequential workloads is slated later in the Velero roadmap (see ). 
The MVP for this feature set will align with restore of single workloads vs concurrent workload restores. A second epic will be created to address the concurrent restore operations and will be added to the backlog for priority visibility. Note: Please refer to the Requirements section of this document for more details on what items are P0 (must have in the MVP), P1 (should not ship without for the MVP), P2 (nice to haves). <span style=\"text-decoration:underline;\">USE CASE 1</span> Title: Run restore hook before pod restart. Description: As a user, I would like to run a restore hook before the applications in my pod are restarted. <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 2</span> Title: Allow restore hook to run on non-kubernetes databases Description: As a user, I would like to run restore hook operations even on databases that are external to Kubernetes (such as postgres, elastic, etc). <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 3</span> Title: Run restore at pod level. Description: As a user, I would like to make sure that I can run the restore hook at the pod level. And, I would like the option to run this with an annotation flag in line or using an init container.<span style=\"text-decoration:underline;\"> </span>The restore pre-hook should allow the user to run the command on the container where the pre-hook should be executed. Similar to the backup hooks, this hook should run to default to run on the first container in the pod. <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 4</span> Title: Specify container in which to run pre-hook Description: As a user, if I do not want to run the pre-hook command on the first container in the pod (default container), I would like the option to annotate the specific container that i would like the hook to run in. <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 5</span> Title: Check for latest snapshot Description: As a user, I would like Velero to run a check for the latest snapshot in object storage prior to starting restore operations on a pod. <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 6</span> Title: Display/surface output from restore hooks/restore status Description: As a user, I would like to see the output of the restore hooks/status of my restore surfaced from the pod volume restore status. Including statuses: Pending, Running/In Progress, Succeeded, Failed, Unknown. <span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 7</span> Title: Restore metadata Description: As a user, I would like to have the metadata of the contents of what was restored using Velero. Note: Kubernetes patterns may result in some snapshot metadata being overwritten during restore operations. 
<span style=\"text-decoration:underline;\">______________________________________________________________</span> <span style=\"text-decoration:underline;\">USE CASE 8</span>* Title: Increase default restore and retry limits. Description: As a user, I would like to increase the default restore retry and timeout limits from the default values to some other value I would like to specify. Note: See use case 11 for the default value specifications. <span style=\"text-decoration:underline;\">______________________________________________________________</span> User Experience The following is representative of what the user experience could look like for Velero restore pre-hooks and" }, { "data": "_Note: These examples are representative and are not to be considered for use in pre- and post- restore hook operations until the technical design is complete._ Restore Pre-Hooks <span style=\"text-decoration:underline;\">Container Command</span> ``` pre.hook.restore.velero.io/container kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \\ --namespace velero \\ --type merge \\ --patch '{\"spec\":{\"accessMode\":\"ReadOnly\"}}' ``` <span style=\"text-decoration:underline;\">Command Execute</span> ``` pre.hook.restore.velero.io/command ``` Includes commands for: Create Create from most recent backup Create from specific backup - allow user to list backups Set backup storage location to read only ``` kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \\ --namespace velero \\ --type merge \\ --patch '{\"spec\":{\"accessMode\":\"ReadOnly\"}} ``` Set backup storage location to read-write ``` kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \\ --namespace velero \\ --type merge \\ --patch '{\"spec\":{\"accessMode\":\"ReadWrite\"}}' ``` <span style=\"text-decoration:underline;\">Error handling </span> ``` pre.hook.restore.velero.io/on-error ``` <span style=\"text-decoration:underline;\">Timeout </span> ``` pre.hook.restore.velero.io/retry ``` Requirements _P0_ = must not ship without (absolute requirement for MVP, engineering requirements for long term viability usually fall in here for ex., and incompletion nearing or by deadline means delaying code freeze and GA date) _P1_ = should not ship without (required for feature to achieve general adoption, should be scoped into feature but can be pushed back to later iterations if needed) _P2_ = nice to have (opportunistic, should not be scoped into the overall feature if it has dependencies and could cause delay) P0 Requirements P0. Use Case 1 - Run restore hook before pod restart. P0. Use Case 3- Run restore at pod level. P0. Use Case 5 - Check for latest snapshot P0. Use Case 9 - Display/surface restore status P0. Use Case 10 - Restore metadata P0. Use Case 11 - Retry restore upon restore failure/error/timeout P0. Use Case 12 - Increase default restore and retry limits. P1 Requirements P1. Use Case 2 - Allow restore hook to run on non-kubernetes databases P1. Use Case 4 - Specify container in which to run pre-hook P1. Use Case 6 - Specify backup snapshot to use for restore P1. Use Case 7 - Include or exclude namespaces from restore P2 Requirements P2. Out of scope The following requirements are out of scope for the Velero Restore Hooks MVP: Verifying the integrity of a backup, resource, or other artifact will not be included in the scope of this effort. Verifying the integrity of a snapshot using Kubernetes hash checks. 
Running concurrent restore operations (for the MVP); a secondary epic will be opened to align better with the concurrent workload operations currently planned on the Velero roadmap for the Q4 timeframe. Questions For USE CASE 1: Init vs. app containers - if multiple containers are specified for a pod, kubelet runs each init container sequentially - does this have an impact on things like concurrent workload processing? Can Velero allow a user to specify a specific backup if the most recent backup is not desired in a restore? If a backup specified for a restore operation fails, can Velero retry and pick up the next most recent backup in the restore? Can Velero provide a delta between the two backups if a different backup needs to be picked up (other than the most recent, because the most recent backup cannot be accessed)? What types of errors can Velero surface about backups, namespaces, pods, and resources if a backup has an issue that prevents a restore from being done? For questions, please contact michaelmi@vmware.com," } ]
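To make the annotation-driven user experience sketched above more concrete, the example below applies the draft `pre.hook.restore.velero.io/*` keys from the User Experience section to a hypothetical Pod. It is illustrative only: the key names are the representative ones proposed in this document (not a finalized API), and the Pod name, container name, retry value, and script path are placeholders.

```shell
# Illustrative annotation-based configuration of a restore pre-hook
kubectl annotate pod mysql-0 \
  pre.hook.restore.velero.io/container=mysql \
  pre.hook.restore.velero.io/command='["/bin/bash", "-c", "/scripts/prepare-restore.sh"]' \
  pre.hook.restore.velero.io/on-error=Fail \
  pre.hook.restore.velero.io/retry=3
```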
{ "category": "Runtime", "file_name": "restore-hooks_product-requirements.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "(devices-unix-block)= ```{note} The `unix-block` device type is supported for containers. It supports hotplugging. ``` Unix block devices make the specified block device appear as a device in the instance (under `/dev`). You can read from the device and write to it. `unix-block` devices have the following device options: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group devices-unix-char-block start --> :end-before: <!-- config group devices-unix-char-block end --> ``` (devices-unix-block-hotplugging)= Hotplugging is enabled if you set `required=false` and specify the `source` option for the device. In this case, the device is automatically passed into the container when it appears on the host, even after the container starts. If the device disappears from the host system, it is removed from the container as well." } ]
{ "category": "Runtime", "file_name": "devices_unix_block.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "This document describes the way for debugging and profiling in StratoVirt and how to use it. First, you need to modify or crate toml file in the trace/trace_info directory to add a new event or scope in order to generate the trace function. For example: ```toml [[events]] name = \"virtioreceiverequest\" args = \"device: String, behaviour: String\" message = \"{}: Request received from guest {}, ready to start processing.\" enabled = true [[scopes]] name = \"update_cursor\" args = \"\" message = \"\" enabled = true ``` In the above configuration, \"name\" is used to represent the only trace, and duplication is not allowed; \"message\" and \"args\" will be formatted as information output by trace; \"enabled\" indicates whether it is enabled during compilation. Just call the trace function where needed. ```rust fn process_queue(&mut self) -> Result<()> { trace::virtioreceiverequest(\"Rng\".tostring(), \"to IO\".tostring()); let mut queue_lock = self.queue.lock().unwrap(); let mut need_interrupt = false; ...... } fn updatecursor(&mut self, infocursor: &VirtioGpuUpdateCursor, hdr_type: u32) -> Result<()> { // Trace starts from here, and end when it leaves this scope trace::tracescopestart!(update_cursor); ...... } ``` Trace state in StratoVirt are disabled by default. Users can control whether the trace state is enabled through the command line or qmp command. Before starting, you can prepare the trace list that needs to be enabled and pass it to StratoVirt through . During the running, you can send the command through the qmp socket to enable or disable trace state. Similarly, using the command can check whether the setting is successful. By setting different features during compilation, trace can generate specified code to support different trace tools. StratoVirt currently supports two kinds of settings. StratoVirt supports outputting trace to the log file at trace level. Turn on the trace_to_logger feature to use is. Ftrace is a tracer provided by Linux kernel, which can help linux developers to debug or analyze issues. As ftrace can avoid performance penalty, it's especially suited for performance issues. It can be enabled by turning on the trace_to_ftrace feature during compilation. StratoVirt use ftrace by writing trace data to ftrace marker, and developers can read trace records from trace file under mounted ftrace director, e.g. /sys/kernel/debug/tracing/trace. HiTraceMeter(https://gitee.com/openharmony/hiviewdfx_hitrace) is tool used by developers to trace process and measure performance. Based on the Ftrace, it provides the ability to measure the execution time of user-mode application code. After turning on the trace_to_hitrace feature, it can be used on HarmonyOS." } ]
{ "category": "Runtime", "file_name": "trace.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark create schedule\" layout: docs Create a schedule The --schedule flag is required, in cron notation: | Character Position | Character Period | Acceptable Values | | -|:-:| --:| | 1 | Minute | 0-59,* | | 2 | Hour | 0-23,* | | 3 | Day of Month | 1-31,* | | 4 | Month | 1-12,* | | 5 | Day of Week | 0-7,* | ``` ark create schedule NAME --schedule [flags] ``` ``` ark create schedule NAME --schedule=\"0 /6 \" ``` ``` --exclude-namespaces stringArray namespaces to exclude from the backup --exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io -h, --help help for schedule --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the backup --include-namespaces stringArray namespaces to include in the backup (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the backup -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. --schedule string a cron expression specifying a recurring schedule for this backup to run -l, --selector labelSelector only back up resources matching this label selector (default <none>) --show-labels show labels in the last column --snapshot-volumes optionalBool[=true] take snapshots of PersistentVolumes as part of the backup --ttl duration how long before the backup can be garbage collected (default 720h0m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Create ark resources" } ]
{ "category": "Runtime", "file_name": "ark_create_schedule.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "All notable changes to this project will be documented in this file. The format is based on , and this project adheres to . Default PriorityClasses for all components New DRBD loader detection for: Ubuntu Noble Numbat (24.04) Updated images: LINSTOR 1.27.1 LINSTOR CSI 1.6.0 DRBD 9.2.9 Support for managing ZFS Storage Pools using `LinstorSatelliteConfiguration`. Updated images: LINSTOR 1.27.0 LINSTOR CSI 1.5.0 High Availability Controller 1.2.1 Latest CSI sidecars New DRBD loader detection for: Debian 12 (Bookworm) Rocky Linux 8 & 9 Report `seLinuxMount` capability for the CSI Driver, speeding up volume mounts with SELinux relabelling enabled. Alerts for offline LINSTOR Controller and Satellites. Use node label instead of pod name for Prometheus alerting descriptions. Updated images: LINSTOR 1.26.2 DRBD 9.2.8 Validating Webhook for Piraeus StorageClasses. Use DaemonSet to manage Satellite resources instead of bare Pods. This enables better integration with common administrative tasks such as node draining. This change should be transparent for users, any patches applied on the satellite Pods are internally converted to work on the new DaemonSet instead. Change default monitoring address for DRBD Reactor to support systems with IPv6 completely disabled. Updated images: LINSTOR 1.26.1 LINSTOR CSI 1.4.0 DRBD 9.2.7 High Availability Controller 1.2.0 Latest CSI sidecars Add a new option `.spec.internalTLS.tlsHandshakeDaemon` to enable deployment of `tlshd` alongside the LINSTOR Satellite. Shortcuts to configure specific components. Components can be disabled by setting `enabled: false`, and the deployed workload can be influenced using the `podTemplate` value. Available components: `LinstorCluster.spec.controller` `LinstorCluster.spec.csiController` `LinstorCluster.spec.csiNode` `LinstorCluster.spec.highAvailabilityController` Shortcut to modify the pod of a satellite by adding a `LinstorSatelliteConfiguration.spec.podTemplate`, which is a shortcut for creating a `.spec.patches` patch. Fixed service resources relying on default protocol version. Moved NetworkPolicy for DRBD out of default deployed resources. Updated images: LINSTOR 1.25.1 LINSTOR CSI 1.3.0 DRBD Reactor 1.4.0 Latest CSI sidecars Add a default toleration for the HA Controller taints to the operator. A new `LinstorNodeConnection` resource, used to configure the LINSTOR Node Connection feature in a Kubernetes way. Allow image configuration to be customized by adding additional items to the config map. Items using a \"greater\" key take precedence when referencing the same images. Add image configuration for CSI sidecars. Check kernel module parameters for DRBD on load. Automatically set SELinux labels when loading kernel modules. Allow more complex node selection by adding `LinstorCluster.spec.nodeAffinity`. Upgrade to operator-sdk 1.29 Upgrade to kubebuilder v4 layout Updated images: LINSTOR 1.24.2 LINSTOR CSI 1.2.3 DRBD 9.2.5 Disable operator metrics by default. This removes a dependency on an external container. Dependency on cert-manager for initial deployment. A crash caused by insufficient permissions on the LINSTOR Controller. Satellite will now restart if the Pods terminated for unexpected reasons. LINSTOR Controller deployment now runs DB migrations as a separate init container, creating a backup of current DB state if needed. Apply global rate limit to LINSTOR API, defaulting to 100 qps. Store LINSTOR Satellite logs on the host. 
Updated images: LINSTOR 1.23.0 LINSTOR CSI 1.1.0 DRBD Reactor 1.2.0 HA Controller" }, { "data": "external CSI images upgraded to latest versions Fixed a bug where `LinstorSatellite` resources would be not be cleaned up when the satellite is already gone. Fixed a bug where the LINSTOR Controller would never report readiness when TLS is enabled. Fixed order in which patches are applied. Always apply user patches last. Ability to skip deploying the LINSTOR Controller by setting `LinstorCluster.spec.externalController`. Automatically reconcile changed image configuration. Fix an issue where the CSI node driver would use the CSI socket not through the expected path in the container. Updated images: LINSTOR 1.22.0 LINSTOR CSI 1.0.1 DRBD 9.2.3 DRBD Reactor 1.1.0 HA Controller 1.1.3 external CSI images upgraded to latest versions `drbd-shutdown-guard` init-container, installing a systemd unit that runs on shutdown. It's purpose is to run `drbdsetup secondary --force` during shutdown, so that systemd can unmount all volumes, even with `suspend-io`. Updated LINSTOR CSI to 1.0.0, mounting the `/run/mount` directory from the host to enable the `_netdev` mount option. HA Controller deployment requires additional rules to run on OpenShift. Removed existing CRD `LinstorController`, `LinstorSatelliteSet` and `LinstorCSIDriver`. Helm chart deprecated in favor of new `kustomize` deployment. Helm chart changed to only deploy the Operator. The LinstorCluster resource to create the storage cluster needs to be created separately. New CRDs to control storage cluster: `LinstorCluster` and `LinstorSatelliteConfiguration`. on how to get started. Automatic selection of loader images based on operating system of node. Customization of single nodes or groups of nodes. Possibility to run DRBD replication using the container network. Support for file system backed storage pools Default deployment for HA Controller. Since we switch to defaulting to `suspend-io` for lost quorum, we should include a way for Pods to get unstuck. Can set the variable `mountDrbdResourceDirectoriesFromHost` in the Helm chart to create hostPath Volumes for DRBD and LINSTOR configuration directories for the satellite set. Change default bind address for satellite monitoring to use IPv6 anylocal `[::]`. This will still to work on IPv4 only systems with IPv6 disabled via sysctl. Default images: LINSTOR 1.20.0 DRBD 9.1.11 DRBD Reactor 0.9.0 external CSI images upgraded to latest versions Comparing IP addresses for registered components uses golang's built-in net.IP type. Restoring satellites after eviction only happens if the node is ready. ServiceMonitor resources can't be patched, instead we recreate them. Support for custom labels and annotations with added options to `values.yaml`. Instructions for deploying the affinity controller. Satellite operator now reports basic satellite status even if controller is not reachable Query single satellite devices to receive errors when the satellite is offline instead of assuming devices are already configured. Disabled the legacy HA Controller deployment by default. It has been replaced a separate . Default images: LINSTOR 1.19.1 LINSTOR CSI 0.20.0 DRBD 9.1.8 DRBD Reactor 0.8.0 external CSI images to latest versions `last-applied-configuration `annotation was never updated, so updating of some fields was not performed correctly. 
Option to disable creating monitoring resources (Services and ServiceMonitors) Add options `csi.controllerSidecars`, `csi.controllerExtraVolumes`, `csi.nodeSidecars`, `csi.nodeExtraVolumes`, `operator.controller.sidecars`, `operator.controller.extraVolumes`, `operator.satelliteSet.sidecars`, `operator.satelliteSet.extraVolumes` to allow specifying extra sidecar containers. Add options `operator.controller.httpBindAddress`, `operator.controller.httpsBindAddress`, `operator.satelliteSet.monitoringBindAddress` to allow specifying bind address. Add example values and doc reference to run piraeus-operator with rbac-proxy. Default images: LINSTOR 1.18.2 LINSTOR CSI 0.19.1 DRBD Reactor" }, { "data": "Get rid of operator-sdk binary, use native controller-gen instead Default images: LINSTOR 1.18.1 LINSTOR CSI 0.19.0 DRBD 9.1.7 DRBD Reactor 0.6.1 Upgrades involving k8s databases no long require manually confirming a backup exists using `--set IHaveBackedUpAllMyLinstorResources=true`. Allow setting the number of parallel requests created by the CSI sidecars. This limits the load on the LINSTOR backend, which could easily overload when creating many volumes at once. Unify certificates format for SSL enabled installation, no more java tooling required. Automatic certificates generation using Helm or Cert-manager HA Controller and CSI components now wait for the LINSTOR API to be initialized using InitContainers. Create backups of LINSTOR resource if the \"k8s\" database backend is used and an image change is detected. Backups are stored in Secret resources as a `tar.gz`. If the secret would get too big, the backup can be downloaded from the operator pod. Default images: LINSTOR 1.18.0 LINSTOR CSI 0.18.0 DRBD 9.1.6 DRBD Reactor 0.5.3 LINSTOR HA Controller 0.3.0 CSI Attacher v3.4.0 CSI Node Driver Registrar v2.4.0 CSI Provisioner v3.1.0 CSI Snapshotter v5.0.1 CSI Resizer v1.4.0 Stork v2.8.2 Stork updated to support Kubernetes v1.22+. Satellites no longer have a readiness probe defined. This caused issues in the satellites by repeatedly opening unexpected connections, especially when using SSL. Only query node devices if a storage pool needs to be created. Use cached storage pool response to avoid causing excessive load on LINSTOR satellites. Protect LINSTOR passphrase from accidental deletion by using a finalizer. If you have SSL configured, then the certificates must be regenerated in PEM format. Learn more in the . Allow the external-provisioner and external-snapshotter access to secrets. This is required to support StorageClass and SnapshotClass . Instruct external-provisioner to pass PVC name+namespace to the CSI driver, enabling optional support for PVC based names for LINSTOR volumes. Allow setting the log level of LINSTOR components via CRs. Other components are left using their default log level. The new default log level is INFO (was DEBUG previously, which was often too verbose). Override the kernel source directory used when compiling DRBD (defaults to /usr/src). See etcd-chart: add option to set priorityClassName. Use correct secret name when setting up TLS for satellites Correctly configure ServiceMonitor resource if TLS is enabled for LINSTOR Controller. `pv-hostpath`: automatically determine on which nodes PVs should be created if no override is given. Automatically add labels on Kubernetes Nodes to LINSTOR satellites as Auxiliary Properties. This enables using Kubernetes labels for volume scheduling, for example using `replicasOnSame: topology.kubernetes.io/zone`. 
Support LINSTORs `k8s` backend by adding the necessary RBAC resources and . Automatically create a LINSTOR passphrase when none is configured. Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted. Default images: `quay.io/piraeusdatastore/piraeus-server:v1.17.0` `quay.io/piraeusdatastore/piraeus-csi:v0.17.0` `quay.io/piraeusdatastore/drbd9-bionic:v9.1.4` `quay.io/piraeusdatastore/drbd-reactor:v0.4.4` Recreates or updates to the satellite pods are now applied at once, instead of waiting for a node to complete before moving to the next. Enable CSI topology by default, allowing better volume scheduling with `volumeBindingMode: WaitForFirstConsumer`. Disable STORK by default. Instead, we recommend using `volumeBindingMode: WaitForFirstConsumer` in storage classes. Allow CSI to work with distributions that use a kubelet working directory other than `/var/lib/kubelet`. See the" }, { "data": "Enable [Storage Capacity Tacking]. This enables Kubernetes to base Pod scheduling decisions on remaining storage capacity. The feature is in beta and enabled by default starting with Kubernetes 1.21. Disable Stork Health Monitoring by default. Stork cannot distinguish between control plane and data plane issues, which can lead to instances where Stork will migrate a volume that is still mounted on another node, making the volume effectively unusable. Updated operator to kubernetes v1.21 components. Default images: `quay.io/piraeusdatastore/piraeus-server:v1.14.0` `quay.io/piraeusdatastore/drbd9-bionic:v9.0.30` `quay.io/piraeusdatastore/drbd-reactor:v0.4.3` `quay.io/piraeusdatastore/piraeus-ha-controller:v0.2.0` external CSI images The cluster-wide snapshot controller is no longer deployed as a dependency of the piraeus-operator chart. Instead, separate charts are available on that deploy the snapshot controller and extra validation for snapshot resources. The subchart was removed, as it unnecessarily tied updates of the snapshot controller to piraeus and vice versa. With the tightened validation starting with snapshot CRDs `v1`, moving the snapshot controller to a proper chart seems like a good solution. Default images: Piraeus Server v1.13.0 Piraeus CSI v0.13.1 CSI Provisioner v2.1.2 All operator-managed workloads apply recommended labels. This requires the recreation of Deployments and DaemonSets on upgrade. This is automatically handled by the operator, however any customizations applied to the deployments not managed by the operator will be reverted in the process. Use to expose Prometheus endpoints on each satellite. Configure `ServiceMonitor` resources if they are supported by the cluster (i.e. prometheus operator is configured) CSI Nodes no longer use `hostNetwork: true`. The pods already got the correct hostname via the downwardAPI and do not talk to DRBD's netlink interface directly. External: CSI snapshotter subchart now packages `v1` CRDs. Fixes deprecation warnings when installing the snapshot controller. Default images: Piraeus Server v1.12.3 Piraeus CSI v0.13.0 DRBD v9.0.29 Additional environment variables and Linstor properties can now be set in the `LinstorController` CRD. Set node name variable for Controller Pods, enabling [k8s-await-election] to correctly set up the endpoint for hairpin mode. Update the network address of controller pods if they diverged between Linstor and kubernetes. This can happen after a node restart, where a pod is recreated with the same name but different IP address. 
New guide on host preparation Default image updated: `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.27` `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.1` `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.1` `haController.image`: `quay.io/piraeusdatastore/piraeus-ha-controller:v0.1.3` `pv-hostpath`: `chownerImage`: `quay.io/centos/centos:8` New component: `haController` will deploy the [Piraeus High Availability Controller]. More information is available in the Enable strict checking of DRBD parameter to disable usermode helper in container environments. Override the image used in \"chown\" jobs in the `pv-hostpath` chart by using `--set chownerImage=<my-image>`. Updated `operator-sdk` to v0.19.4 Set CSI component timeout to 1 minute to reduce the number of retries in the CSI driver Default images updated: `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.0` `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.11.0` `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.26` `csi.pluginImage`: `quay.io/piraeusdatastore/piraeus-csi:v0.11.0` Fixed Helm warnings when setting \"csi.controllerAffinity\", \"operator.controller.affinity\" and \"operator.satelliteSet.storagePools\". `storagePools` can now also set up devices similar to `automaticStorageType`, but with more fine grained control. See the New Helm options to disable creation of LinstorController and LinstorSatelliteSet resource `operator.controller.enabled` and `operator.satelliteSet.enabled`. New Helm option to override the generated controller endpoint: `controllerEndpoint` Allow overriding the default `securityContext` on a component basis: `etcd.podsecuritycontext` sets the securityContext of etcd pods" }, { "data": "sets the securityContext of stork plugin and scheduler pods `csi-snapshotter.podsecuritycontext` sets the securityContext of the CSI-Snapshotter pods `operator.podsecuritycontext` sets the securityContext of the operator pods Example settings for openshift LINSTOR controller runs with additional GID 1000, to ensure write access to log directory Fixed a bug in `pv-hostpath` where permissions on the created directory are not applied on all nodes. Volumes created by `pv-hostpath` are now group writable. This makes them easier to integrate with `fsGroup` settings. Default value for affinity on LINSTOR controller and CSI controller changed. The new default is to distribute the pods across all available nodes. Default value for tolerations for etcd pods changed. They are now able to run on master nodes. Updates to LinstorController, LinstorSatelliteSet and LinstorCSIDriver are now propagated across all created resources Updated default images: csi sidecar containers updated (compatible with Kubernetes v1.17+) LINSTOR 1.10.0 LINSTOR CSI 0.10.0 Using `automaticStorageType` is deprecated. Use the `storagePools` values instead. The LINSTOR controller image given in `operator.controller.controllerImage` has to have its entrypoint set to or newer. Learn more in the . LINSTOR controller can be started with multiple replicas. See . NOTE: This requires support from the container. You need `piraeus-server:v1.8.0` or newer. The `pv-hostpath` helper chart automatically sets up permissions for non-root etcd containers. Disable securityContext enforcement by setting `global.setSecurityContext=false`. 
Add cluster roles to work with OpenShift's SCC system. Control volume placement and accessibility by using CSIs Topology feature. Controlled by setting . All pods use a dedicated service account to allow for fine-grained permission control. The new can automatically configure the ServiceAccount of all components to use the appropriate roles. Default values: `operator.controller.controllerImage`: `quay.io/piraeusdatastore/piraeus-server:v1.9.0` `operator.satelliteSet.satelliteImage`: `quay.io/piraeusdatastore/piraeus-server:v1.9.0` `operator.satelliteSet.kernelModuleInjectionImage`: `quay.io/piraeusdatastore/drbd9-bionic:v9.0.25` `stork.storkImage`: `docker.io/openstorage/stork:2.5.0` linstor-controller no longer starts in a privileged container. legacy CRDs (LinstorControllerSet, LinstorNodeSet) have been removed. `v1alpha` CRD versions have been removed. default pull secret `drbdiocred` removed. To keep using it, use `--set drbdRepoCred=drbdiocred`. `v1` of all CRDs Central value for controller image pull policy of all pods. Use `--set global.imagePullPolicy=<value>` on helm deployment. `charts/piraeus/values.cn.yaml` a set of helm values for faster image download for CN users. Allow specifying [resource requirements] for all pods. In helm you can set: `etcd.resources` for etcd containers `stork.storkResources` for stork plugin resources `stork.schedulerResources` for the kube-scheduler deployed for use with stork `csi-snapshotter.resources` for the cluster snapshotter controller `csi.resources` for all CSI related containers. for brevity, there is only one setting for ALL CSI containers. They are all stateless go process which use the same amount of resources. `operator.resources` for operator containers `operator.controller.resources` for LINSTOR controller containers `operator.satelliteSet.resources` for LINSTOR satellite containers `operator.satelliteSet.kernelModuleInjectionResources` for kernel module injector/builder containers Components deployed by the operator can now run with multiple replicas. Components elect a leader, that will take on the actual work as long as it is active. Should one pod go down, another replica will take over. Currently these components support multiple replicas: `etcd` => set `etcd.replicas` to the desired count `stork` => set `stork.replicas` to the desired count for stork scheduler and controller `snapshot-controller` => set `csi-snapshotter.replicas` to the desired count for cluster-wide CSI snapshot controller `csi-controller` => set" }, { "data": "to the desired count for the linstor CSI controller `operator` => set `operator.replicas` to have multiple replicas of the operator running Reference docs for all helm settings. `stork.schedulerTag` can override the automatically chosen tag for the `kube-scheduler` image. Previously, the tag always matched the kubernetes release. Renamed `LinstorNodeSet` to `LinstorSatelliteSet`. This brings the operator in line with other LINSTOR resources. Existing `LinstorNodeSet` resources will automatically be migrated to `LinstorSatelliteSet`. Renamed `LinstorControllerSet` to `LinstorController`. The old name implied the existence of multiple (separate) controllers. Existing `LinstorControllerSet` resources will automatically be migrated to `LinstorController`. Helm values renamed to align with new CRD names: `operator.controllerSet` to `operator.controller` `operator.nodeSet` to `operator.satelliteSet` Node scheduling no longer relies on `linstor.linbit.com/piraeus-node` labels. 
Instead, all CRDs support setting pod [affinity] and [tolerations]. In detail: `linstorcsidrivers` gained 4 new resource keys, with no change in default behaviour: `nodeAffinity` affinity passed to the csi nodes `nodeTolerations` tolerations passed to the csi nodes `controllerAffinity` affinity passed to the csi controller `controllerTolerations` tolerations passed to the csi controller `linstorcontrollerset` gained 2 new resource keys, with no change in default behaviour: `affinity` affinity passed to the linstor controller pod `tolerations` tolerations passed to the linstor controller pod `linstornodeset` gained 2 new resource keys, with change in default behaviour*: `affinity` affinity passed to the linstor controller pod `tolerations` tolerations passed to the linstor controller pod Controller is now a Deployment instead of StatefulSet. Renamed `kernelModImage` to `kernelModuleInjectionImage` Renamed `drbdKernelModuleInjectionMode` to `KernelModuleInjectionMode` Support volume resizing with newer CSI versions. A new Helm chart `csi-snapshotter` that deploys extra components needed for volume snapshots. Add new kmod injection mode `DepsOnly`. Will try load kmods for LINSTOR layers from the host. Deprecates `None`. Automatic deployment of scheduler configured for LINSTOR. Replaced `bitnami/etcd` dependency with vendored custom version Some important keys for the `etcd` helm chart have changed: `statefulset.replicaCount` -> `replicas` `persistence.enabled` -> `persistentVolume.enabled` `persistence.size` -> `persistentVolume.storage` `uth.rbac` was removed: use `auth.peer.useAutoTLS` was removed `envVarsConfigMap` was removed When using etcd with TLS enabled: For peer communication, peers need valid certificates for `.<release-name>-etcd` (was `.<release-name>>-etcd-headless.<namespace>.svc.cluster.local`) For client communication, servers need valid certificates for `.<release-name>-etcd` (was `.<release-name>>-etcd.<namespace>.svc.cluster.local`) Automatic storage pool creation via `automaticStorageType` on `LinstorNodeSet`. If this option is set, LINSTOR will create a storage pool based on all available devices on a node. Moved storage documentation to the Helm: update default images Secured database connection for Linstor: When using the `etcd` connector, you can specify a secret containing a CA certificate to switch from HTTP to HTTPS communication. Secured connection between Linstor components: You can specify TLS keys to secure the communication between controller and satellite Secure storage with LUKS: You can specify the master passphrase used by Linstor when creating encrypted volumes when installing via Helm. Authentication with etcd using TLS client certificates. Secured connection between linstor-client and controller (HTTPS). More in the Linstor controller endpoint can now be customized for all resources. If not specified, the old default values will be filled in. NodeSet service (`piraeus-op-ns`) was replaced by the ControllerSet service (`piraeus-op-cs`) everywhere CSI storage driver setup: move setup from helm to go operator. This is mostly an internal" }, { "data": "These changes may be of note if you used a non-default CSI configuration: helm value `csi.image` was renamed to `csi.pluginImage` CSI deployment can be controlled by a new resource `linstorcsidrivers.piraeus.linbit.com` PriorityClasses are not automatically created. 
When not specified, the priority class is: \"system-node-critical\", if deployed in \"kube-system\" namespace default PriorityClass in other namespaces RBAC rules for CSI: creation moved to deployment step (Helm/OLM). ServiceAccounts should be specified in CSI resource. If no ServiceAccounts are named, the implicitly created accounts from previous deployments will be used. Helm: update default images Use single values for images in CRDs instead of specifying the version separately Helm: Use single values for images instead of specifying repo, name and version separately Helm: Replace fixed storage pool configuration with list Helm: Do not create any storage pools by default Helm: Replace `operator.nodeSet.spec` and `operator.controllerSet.spec` by just `operator.nodeSet` and `operator.controllerSet`. Fix reporting of errors in LinstorControllerSet status Helm: Update LINSTOR server dependencies to fix startup problems Helm: Allow an existing database to be used instead of always setting up a dedicated etcd instance Rename `etcdURL` parameter of LinstorControllerSet to `dbConnectionURL` to reflect the fact that it can be used for any database type Upgrade to operator-sdk v0.16.0 Helm: Create multiple volumes with a single `pv-hostchart` installation Helm: Update dependencies Helm: Add support for `hostPath` `PersistentVolume` persistence of etcd Helm: Remove vendored etcd chart from repository Rename CRDs from Piraeus to Linstor* Make priority classes configurable Fix LINSTOR Controller/Satellite arguments Helm: Make etcd persistent by default Helm: Fix deployment of permissions objects into a non-default namespace Helm: Set default etcd size to 1Gi Helm: Update dependent image versions Docker: Change base image to Debian Buster Support for kernel module injection based on shipped modules - necessary for CoreOS support. /charts contains Helm v3 chart for this operator CRDs contain additional Spec parameters that allow customizing image repo and tag/version of the image. Another Spec parameter 'drbdRepoCred' can specify the name of the k8s secret used to access the container images. LINSTOR Controller image now contains the LINSTOR client, away from the Satellite images as it was previously the case. Hence, the readiness probe is changed to use `curl` instead of `linstor` client command. examples/operator-intra.yaml file to bundle all the rbac, crds, etc to run the operator EtcdURL field to controllersetcontroller spec. default: etcd-piraeus:2379 Host networking on the LINSTOR Satellite pods with DNSClusterFirstWithHostNet DNS policy NodeSet service for the Satellite pods that also point to the Controller service for LINSTOR discovery `ControllerEndpoint` and `DefaultController` from the PiraeusNodeSet spec Controller persistence is now handled by etcd. There must be a reachable and operable etcd cluster for this operator to work. Networking is now handled by a kubernetes service with the same name as the ControllerSet. The NodeSet must have the same name as the ControllerSet for networking to work properly. Opt-in node label for nodes is now `linstor.linbit.com/piraeus-node=true` Remove requirement for `kube-system` namespace Naming convention for NodeSet and ControllerSet Pods Updated ports for LINSTOR access on NodeSet and ControllerSet pods Updated framework to work with Operator Framework 0.13.0 API Versions on PriorityClass, DaemonSet, StatefulSet, and CRD kinds to reflect K8s 1.16.0 release Initial public version with docs" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "rkt currently implements the and therefore relies on the (Application Container Image) image format internally. on the other hand defines a new following a separate specification. This new specification differs considerably from rkt's internal ACI-based image format handling. The internal rkt image handling is currently divided in three subsystems: fetching: This subsystem is responsible for downloading images of various types. Non-ACI image types (Docker and OCI) are converted to ACI images by delegating to . The logic resides in the `github.com/rkt/rkt/rkt/image` package. image store: The image store is responsible for persisting and managing downloaded images. It consists of two parts, a directory tree storing the actual image file blobs (usually residing under `/var/lib/rkt/cas/blob`) and a separate embedded SQL database storing image metadata usually residing in `/var/lib/rkt/cas/db/ql.db`. The implementation resides in the `github.com/rkt/rkt/store/imagestore` package. tree store: Since dependencies between ACI images form a directed acyclic graph according to the they are pre-rendered in a directory called the tree store cache. If the is enabled, the pre-rendered image is used as the `lowerdir` for the pod's rendered rootfs. The implementation resides in the `github.com/rkt/rkt/store/treestore` package. The actual internal lifecycle of an image is documented in the . The following table gives an overview of the relevant differences between OCI and appc regarding image handling aspects: Aspect | OCI | ACI --|--| Dependencies | Layers array in the | Hash algorithms | Potentially multiple , but SHA-256 preferred | Current ongoing work to support OCI in rkt is tracked in the following Github project: . With the deprecation of the appc Spec (https://github.com/appc/spec#-disclaimer-) the current internal rkt architecture is not favorable any more. Currently rkt does support ACI, Docker, and OCI images, but the conversion step from OCI to ACI using `docker2aci` seems unnecessary. It introduces CPU and I/O bound overhead and is bound by semantical differences between the formats. For these reasons native support of OCI images inside rkt is envisioned. The goal therefore is to support OCI images natively next to ACI. rkt will continue to support the ACI image format and distribution mechanism. There is currently no plans to remove that support. This document outlines the following necessary steps and references existing work to transition to native OCI image support in rkt: Distribution points Reference based image handling Transport handlers Tree store support for OCI A non-goal is the implementation of the covering this topic. rkt historically used the image name and heuristics around it to determine the actual image format type (appc, Docker, OCI). The concept of \"distribution points\" introduced a URI syntax that uniquely points to the different image formats including the necessary metadata (file location, origin URL, version," }, { "data": "The URI scheme \"cimd\" (Container Image Distribution) was chosen to uniquely identify different image formats. The following CIMDs are currently supported: Name | Example --|-- appc | `cimd:appc:v=0:coreos.com/etcd?version=v3.0.3&os=linux&arch=amd64` ACIArchive | `cimd:aci-archive:v=0:file%3A%2F%2Fabsolute%2Fpath%2Fto%2Ffile` Docker | `cimd:docker:v=0:busybox` The design document can be found in . The design document (https://github.com/rkt/rkt/pull/2953) is merged. The implementation (https://github.com/rkt/rkt/pull/3369) is merged. 
Introduce a dedicated remote `cimd:oci` and potentially also a local `cimd:oci-layout` (see ) CIMD scheme. The current image store implementation does not support different image formats. The blob image store only supports SHA-512. The ql backed SQL image store has a simple SQL scheme referencing only ACI images. In order to prepare native support for OCI the following changes need to be implemented: Store the CIMD URI as a primary key in the current image store. Support for multiple hash algorithms: Currently only SHA-512 is supported. OCI in addition needs SHA-256 and potentially other hash algorithms. The database schema needs to be reworked to reflect multi-image support. The design and initial implementation is proposed in https://github.com/rkt/rkt/pull/3071. The actual design document of the above PR can be found in . Note that the above design document also suggests the introduction of a new key/value based store. The current consensus is that the replacement of `ql` as the backing store can be done independently and therefore should be a non-goal for the OCI roadmap. Finalize the initial design proposal and implementation. Currently rkt directly fetches remote ACI based images or uses `docker2aci` to delegate non-ACI fetching. The current implementation makes it hard to integrate separate fetching subsystems due to the lack of any abstraction. The current proposal is to abstract fetching logic behind \"transport handlers\" allowing for independent (potentially swappable) fetching implementations for the various image formats. A first initial design is proposed in https://github.com/rkt/rkt/pull/2964. The actual design document of the above PR can be found in . A first initial implementation is proposed in https://github.com/rkt/rkt/pull/3232. Note that the initial design and implementation are in very early stage and should only be considered inspirational. Fetching images remotely and locally from disk for all formats must be supported. The current initial design proposal needs to be finalized. The current fetcher logic needs to be abstracted allowing to introduce alternative libraries like https://github.com/containers/image to delegate fetching logic for OCI or Docker images. The current tree store implementation is used for rendering ACI images only. A design document and initial implementation has to be created to prototype deflating OCI images and their dependencies. Not started yet Backwards compatibility: Currently the biggest concern identified is backwards compatibility/rollback capabilities. The proposed changes do not only imply simple schema changes in the `ql` backed database, but also intrusive schema and directory layout changes." } ]
{ "category": "Runtime", "file_name": "oci.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "English | Spiderpool provides a solution for assigning static IP addresses in underlay networks. In this page, we'll demonstrate how to build a complete underlay network solution using , , , and , which meets the following kinds of requirements: Applications can be assigned static Underlay IP addresses through simple operations. Pods with multiple Underlay NICs connect to multiple Underlay subnets. Pods can communicate in various ways, such as Pod IP, clusterIP, and nodePort. Make sure a Kubernetes cluster is ready has already been installed Check the NIC's bus-info: ```shell ~# ethtool -i enp4s0f0np0 |grep bus-info bus-info: 0000:04:00.0 ``` Check whether the NIC supports SR-IOV via bus-info. If the `Single Root I/O Virtualization (SR-IOV)` field appears, it means that SR-IOV is supported: ```shell ~# lspci -s 0000:04:00.0 -v |grep SR-IOV Capabilities: [180] Single Root I/O Virtualization (SR-IOV) ``` If your OS is such as Fedora and CentOS and uses NetworkManager to manage network configurations, you need to configure NetworkManager in the following scenarios: If you are using Underlay mode, the `coordinator` will create veth interfaces on the host. To prevent interference from NetworkManager with the veth interface. It is strongly recommended that you configure NetworkManager. If you create VLAN and Bond interfaces through Ifacer, NetworkManager may interfere with these interfaces, leading to abnormal pod access. It is strongly recommended that you configure NetworkManager. ```shell ~# IFACER_INTERFACE=\"<NAME>\" ~# cat > /etc/NetworkManager/conf.d/spidernet.conf <<EOF [keyfile] unmanaged-devices=interface-name:^veth*;interface-name:${IFACER_INTERFACE} EOF ~# systemctl restart NetworkManager ``` Install Spiderpool. ```shell helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set sriov.install=true --set multus.multusCNI.defaultCniCRName=\"sriov-test\" ``` > When using the helm option `--set sriov.install=true`, it will install the . The default value for resourcePrefix is \"spidernet.io\" which can be modified via the helm option `--set sriov.resourcePrefix`. > > For users in the Chinese mainland, it is recommended to specify the spec `--set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pull failures from Spiderpool. > > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. To enable the SR-IOV CNI on specific nodes, you need to apply the following command to label those nodes. This will allow the sriov-network-operator to install the components on the designated nodes. ```shell kubectl label node $NodeName node-role.kubernetes.io/worker=\"\" ``` Create VFs on the node Use the following command to view the available network interfaces on the node: ```shell $ kubectl get sriovnetworknodestates -n kube-system NAME SYNC STATUS AGE node-1 Succeeded 24s ... 
$ kubectl get sriovnetworknodestates -n kube-system node-1 -o yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState spec:" }, { "data": "status: interfaces: deviceID: \"1017\" driver: mlx5_core linkSpeed: 10000 Mb/s linkType: ETH mac: 04:3f:72:d0:d2:86 mtu: 1500 name: enp4s0f0np0 pciAddress: \"0000:04:00.0\" totalvfs: 8 vendor: 15b3 syncStatus: Succeeded ``` > If the status of SriovNetworkNodeState CRs is `InProgress`, it indicates that the sriov-operator is currently synchronizing the node state. Wait for the status to become `Succeeded` to confirm that the synchronization is complete. Check the CR to ensure that the sriov-network-operator has discovered the network interfaces on the node that support SR-IOV. Based on the given information, it is known that the network interface's `enp4s0f0np0` on the node `node-1` supports SR-IOV capability with a maximum of 8 VFs. Now, let's create SriovNetworkNodePolicy CRs and specify PF (Physical function, physical network interface) through `nicSelector.pfNames` to generate VFs(Virtual Function) on these network interfaces of the respective nodes: ```shell $ cat << EOF | kubectl apply -f - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy1 namespace: sriov-network-operator spec: deviceType: netdevice nodeSelector: kubernetes.io/os: \"linux\" nicSelector: pfNames: enp4s0f0np0 numVfs: 8 # desired number of VFs resourceName: sriov_netdevice EOF ``` > After executing the above command, please note that configuring nodes to enable SR-IOV functionality may require a node restart. If needed, specify worker nodes instead of master nodes for this configuration. > The resourceName should not contain special characters and is limited to [0-9], [a-zA-Z], and \"_\". After applying the SriovNetworkNodePolicy CRs, you can check the status of the SriovNetworkNodeState CRs again to verify that the VFs have been successfully configured: ```shell $ kubectl get sriovnetworknodestates -n sriov-network-operator node-1 -o yaml ... Vfs: deviceID: 1018 driver: mlx5_core pciAddress: 0000:04:00.4 vendor: \"15b3\" deviceID: 1018 driver: mlx5_core pciAddress: 0000:04:00.5 vendor: \"15b3\" deviceID: 1018 driver: mlx5_core pciAddress: 0000:04:00.6 vendor: \"15b3\" deviceID: \"1017\" driver: mlx5_core mtu: 1500 numVfs: 8 pciAddress: 0000:04:00.0 totalvfs: 8 vendor: \"8086\" ... ``` To confirm that the SR-IOV resources named `spidernet.io/sriov_netdevice` have been successfully enabled on a specific node and that the number of VFs is set to 8, you can use the following command: ```shell ~# kubectl get node node-1 -o json |jq '.status.allocatable' { \"cpu\": \"24\", \"ephemeral-storage\": \"94580335255\", \"hugepages-1Gi\": \"0\", \"hugepages-2Mi\": \"0\", \"spidernet.io/sriov_netdevice\": \"8\", \"memory\": \"16247944Ki\", \"pods\": \"110\" } ``` > The sriov-network-config-daemon Pod is responsible for configuring VF on nodes, and it will sequentially complete the work on each node. When configuring VF on each node, the SR-IOV network configuration daemon will evict all Pods on the node, configure VF, and possibly restart the node. When SR-IOV network configuration daemon fails to evict a Pod, it will cause all processes to stop, resulting in the vf number of nodes remaining at 0. 
In this case, the SR-IOV network configuration daemon Pod will see logs similar to the following: > > `error when evicting pods/calico-kube-controllers-865d498fd9-245c4 -n kube-system (will retry after 5s) ...` > > This issue can be referred to similar topics in the sriov-network-operator community > > The reason why the designated Pod cannot be expelled can be investigated, which may include the following: > >" }, { "data": "The Pod that failed the eviction may have been configured with a PodDisruptionBudget, resulting in a > shortage of available replicas. Please adjust the PodDisruptionBudget > > 2. Insufficient available nodes in the cluster, resulting in no nodes available for scheduling Create a SpiderIPPool instance. The Pod will obtain an IP address from this subnet for underlying network communication, so the subnet needs to correspond to the underlying subnet that is being accessed. Here is an example of creating a SpiderSubnet instance: ```shell cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ippool-test spec: default: true ips: \"10.20.168.190-10.20.168.199\" subnet: 10.20.0.0/16 gateway: 10.20.0.1 multusName: kube-system/sriov-test EOF ``` Create a SpiderMultusConfig instance. ```shell $ cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: sriov-test namespace: kube-system spec: cniType: sriov sriov: resourceName: spidernet.io/sriov_netdevice ``` > SpiderIPPool.Spec.multusName: 'kube-system/sriov-test' must be to match the Name and Namespace of the SpiderMultusConfig instance created. Create test Pods and Services via the command below ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: sriov-deploy spec: replicas: 2 selector: matchLabels: app: sriov-deploy template: metadata: annotations: v1.multus-cni.io/default-network: kube-system/sriov-test labels: app: sriov-deploy spec: containers: name: sriov-deploy image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP resources: requests: spidernet/sriov_netdevice: '1' limits: spidernet/sriov_netdevice: '1' apiVersion: v1 kind: Service metadata: name: sriov-deploy-svc labels: app: sriov-deploy spec: type: ClusterIP ports: port: 80 protocol: TCP targetPort: 80 selector: app: sriov-deploy EOF ``` Spec descriptions: > `spidernet/sriov_netdevice`: Sriov resources used. > >`v1.multus-cni.io/default-network`: specifies the CNI configuration for Multus. > > For more information on Multus annotations, refer to . Check the status of Pods: ```shell ~# kubectl get pod -l app=sriov-deploy -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES sriov-deploy-9b4b9f6d9-mmpsm 1/1 Running 0 6m54s 10.20.168.191 worker-12 <none> <none> sriov-deploy-9b4b9f6d9-xfsvj 1/1 Running 0 6m54s 10.20.168.190 master-11 <none> <none> ``` Spiderpool ensuring that the applications' IPs are automatically fixed within the defined ranges. 
```shell ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT DISABLE ippool-test 4 10.20.0.0/16 2 10 true false ~# kubectl get spiderendpoints NAME INTERFACE IPV4POOL IPV4 IPV6POOL IPV6 NODE sriov-deploy-9b4b9f6d9-mmpsm eth0 ippool-test 10.20.168.191/16 worker-12 sriov-deploy-9b4b9f6d9-xfsvj eth0 ippool-test 10.20.168.190/16 master-11 ``` Test the communication between Pods: ```shell ~# kubectl exec -it sriov-deploy-9b4b9f6d9-mmpsm -- ping 10.20.168.190 -c 3 PING 10.20.168.190 (10.20.168.190) 56(84) bytes of data. 64 bytes from 10.20.168.190: icmp_seq=1 ttl=64 time=0.162 ms 64 bytes from 10.20.168.190: icmp_seq=2 ttl=64 time=0.138 ms 64 bytes from 10.20.168.190: icmp_seq=3 ttl=64 time=0.191 ms 10.20.168.190 ping statistics 3 packets transmitted, 3 received, 0% packet loss, time 2051ms rtt min/avg/max/mdev = 0.138/0.163/0.191/0.021 ms ``` Test the communication between Pods and Services: Check Services' IPs ```shell ~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 23d sriov-deploy-svc ClusterIP 10.43.54.100 <none> 80/TCP 20m ``` Access its own service within the Pod: ```shell ~# kubectl exec -it sriov-deploy-9b4b9f6d9-mmpsm -- curl 10.43.54.100 -I HTTP/1.1 200 OK Server: nginx/1.23.3 Date: Mon, 27 Mar 2023 08:22:39 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 13 Dec 2022 15:53:53 GMT Connection: keep-alive ETag: \"6398a011-267\" Accept-Ranges: bytes ```" } ]
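To verify that recreated Pods keep receiving addresses from `ippool-test`, the Deployment can be restarted and the endpoints inspected again; these commands only reuse resources created earlier in this guide.

```shell
kubectl rollout restart deployment sriov-deploy
kubectl rollout status deployment sriov-deploy
kubectl get spiderendpoints
kubectl get spiderippool ippool-test
```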
{ "category": "Runtime", "file_name": "get-started-sriov.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "v1.4.0 released! This release introduces many enhancements, improvements, and bug fixes as described below about stability, performance, data integrity, troubleshooting, and so on. Please try it and feedback. Thanks for all the contributions! In the previous versions, Longhorn relies on Pod Security Policy (PSP) to authorize Longhorn components for privileged operations. From Kubernetes 1.25, PSP has been removed and replaced with Pod Security Admission (PSA). Longhorn v1.4.0 supports opt-in PSP enablement, so it can support Kubernetes versions with or without PSP. ARM64 has been experimental from Longhorn v1.1.0. After receiving more user feedback and increasing testing coverage, ARM64 distribution has been stabilized with quality as per our regular regression testing, so it is qualified for general availability. RWX has been experimental from Longhorn v1.1.0, but it lacks availability support when the Longhorn Share Manager component behind becomes unavailable. Longhorn v1.4.0 supports NFS recovery backend based on Kubernetes built-in resource, ConfigMap, for recovering NFS client connection during the fail-over period. Also, the NFS client hard mode introduction will further avoid previous potential data loss. For the detail, please check the issue and enhancement proposal. Data integrity is a continuous effort for Longhorn. In this version, Snapshot Checksum has been introduced w/ some settings to allow users to enable or disable checksum calculation with different modes. When enabling the Volume Snapshot Checksum feature, Longhorn will periodically calculate and check the checksums of volume snapshots, find corrupted snapshots, then fix them. When enabling the Volume Snapshot Checksum feature, Longhorn will use the calculated snapshot checksum to avoid needless snapshot replication between nodes for improving replica rebuilding speed and resource consumption. Longhorn engine supports UNMAP SCSI command to reclaim space from the block volume. Longhorn engine supports optional parameters to pass size expansion requests when updating the volume frontend to support online volume expansion and resize the filesystem via CSI node driver. Local volume is based on a new Data Locality setting, Strict Local. It will allow users to create one replica volume staying in a consistent location, and the data transfer between the volume frontend and engine will be through a local socket instead of the TCP stack to improve performance and reduce resource consumption. Recurring jobs binding to a volume can be backed up to the remote backup target together with the volume backup metadata. They can be restored back as well for a better operation experience. Longhorn enriches Volume metrics by providing real-time IO stats including IOPS, latency, and throughput of R/W IO. Users can set up a monotoning solution like Prometheus to monitor volume performance. Users can back up the longhorn system to the remote backup" }, { "data": "Afterward, it's able to restore back to an existing cluster in place or a new cluster for specific operational purposes. Longhorn introduces a new support bundle integration based on a general solution. This can help us collect more complete troubleshooting info and simulate the cluster environment. In the current Longhorn versions, the default timeout between the Longhorn engine and replica is fixed without any exposed user settings. This will potentially bring some challenges for users having a low-spec infra environment. 
By exporting the setting configurable, it will allow users adaptively tune the stability of volume operations. Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.0. Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.0 from v1.3.x. Only support upgrading from 1.3.x. Follow the upgrade instructions . Pod Security Policy is an opt-in setting. If installing Longhorn with PSP support, need to enable it first. The built-in CSI Snapshotter sidecar is upgraded to v5.0.1. The v1beta1 version of Volume Snapshot custom resource is deprecated but still supported. However, it will be removed after upgrading CSI Snapshotter to 6.1 or later versions in the future, so please start using v1 version instead before the deprecated version is removed. Please follow up on about any outstanding issues found after this" }, { "data": ") - @yangchiu @derekbit @smallteeths @shuo-wu ) - @c3y1huang @khushboo-rancher ) - @shuo-wu @chriscchien ) - @yangchiu @mantissahz ) - @derekbit @chriscchien ) - @derekbit @roger-ryao ) - @c3y1huang @chriscchien ) - @yangchiu @derekbit ) - @derekbit @chriscchien ) - @PhanLe1010 @chriscchien ) - @yangchiu @derekbit ) - @derekbit @roger-ryao ) - @yangchiu @PhanLe1010 ) - @PhanLe1010 @chriscchien ) - @yangchiu @derekbit ) - @derekbit @roger-ryao ) - @yangchiu @derekbit ) - @weizhe0422 @chriscchien ) - @mantissahz @chriscchien ) - @c3y1huang @chriscchien ) - @weizhe0422 @chriscchien ) - @weizhe0422 @chriscchien ) - @weizhe0422 @roger-ryao ) - @weizhe0422 @roger-ryao ) - @mantissahz @chriscchien ) - @c3y1huang @roger-ryao ) - @PhanLe1010 @chriscchien ) - @yangchiu @derekbit ) - @derekbit ) - @yangchiu @derekbit ) - @derekbit ) - @yangchiu @shuo-wu ) - @yangchiu @c3y1huang ) - @c3y1huang @chriscchien ) - @yangchiu @PhanLe1010 ) - @derekbit @chriscchien ) - @c3y1huang ) - @derekbit ) - @c3y1huang ) - @derekbit @chriscchien ) - @derekbit ) - @yangchiu @PhanLe1010 ) - @weizhe0422 @roger-ryao ) - @weizhe0422 @roger-ryao ) - @yangchiu @derekbit ) - @yangchiu @derekbit ) - @joshimoo @chriscchien ) - @derekbit ) - @smallteeths ) - @derekbit @chriscchien ) - @derekbit @PhanLe1010 @chriscchien ) - @mantissahz @chriscchien ) - @yangchiu @smallteeths ) - @smallteeths @chriscchien ) - @derekbit @chriscchien ) - @derekbit @roger-ryao ) - @derekbit ) - @smallteeths @khushboo-rancher ) - @c3y1huang @khushboo-rancher ) - @c3y1huang @chriscchien ) - @smallteeths @khushboo-rancher @roger-ryao ) - @smallteeths @roger-ryao ) - @c3y1huang @khushboo-rancher @roger-ryao ) - @derekbit @chriscchien ) - @yangchiu @smallteeths ) - @derekbit @chriscchien ) - @yangchiu @smallteeths ) - @yangchiu @PhanLe1010 ) - @derekbit ) - @yangchiu ) - @yangchiu @PhanLe1010 ) - @derekbit @chriscchien ) - @derekbit @roger-ryao ) - @olljanat @mantissahz ) - @derekbit @chriscchien ) - @yangchiu @derekbit ) - @yangchiu @c3y1huang ) - @derekbit ) - @mantissahz ) - @mantissahz @chriscchien ) - @PhanLe1010 @roger-ryao ) - @mantissahz @roger-ryao ) - @mantissahz @chriscchien ) - @derekbit @chriscchien ) - @weizhe0422 @roger-ryao ) - @mantissahz @roger-ryao ) - @weizhe0422 @roger-ryao ) - @weizhe0422 @chriscchien ) - @c3y1huang @chriscchien ) - @c3y1huang @chriscchien ) - @yangchiu @derekbit ) - @shuo-wu @chriscchien ) - @derekbit @chriscchien ) - @yangchiu @PhanLe1010 ) - @derekbit @c3y1huang @chriscchien ) - @flkdnt @chriscchien ) - 
@yangchiu @shuo-wu ) - @yangchiu @derekbit ) - @shuo-wu @chriscchien ) - @yangchiu @mantissahz ) - @yangchiu @PhanLe1010 ) - @mantissahz ) - @smallteeths ) - @derekbit @chriscchien ) - @yangchiu @shuo-wu ) - @mantissahz @w13915984028 ) - @shuo-wu @chriscchien ) - @yangchiu @derekbit @PhanLe1010 ) - @PhanLe1010 @chriscchien ) - @PhanLe1010 @roger-ryao ) - @derekbit @chriscchien ) - @yangchiu @smallteeths ) - @weizhe0422 @roger-ryao ) - @mantissahz @roger-ryao ) - @yangchiu @smallteeths ) - @weizhe0422 ) - @yangchiu @c3y1huang ) - @derekbit ) - @mantissahz @roger-ryao ) - @yangchiu @mantissahz ) - @weizhe0422 @roger-ryao ) - @weizhe0422 @roger-ryao ) - @mantissahz ) - @yangchiu @mantissahz @roger-ryao ) - @derekbit @roger-ryao ) - @derekbit @chriscchien ) - @PhanLe1010 @roger-ryao ) - @derekbit @chriscchien ) - @yangchiu @PhanLe1010 @roger-ryao ) - @mantissahz @chriscchien ) - @c3y1huang @roger-ryao ) - @mantissahz @chriscchien ) - @FrankYang0529 @chriscchien ) - @weizhe0422 @roger-ryao ) - @shuo-wu @chriscchien ) - @smallteeths @shuo-wu @chriscchien ) - @derekbit ) - @weizhe0422 ) - @yangchiu @derekbit ) - @derekbit ) - @smallteeths @khushboo-rancher ) - @c3y1huang @khushboo-rancher ) - @mantissahz @roger-ryao ) - @c3y1huang @roger-ryao ) - @c3y1huang @khushboo-rancher ) - @yangchiu @c3y1huang ) - @c3y1huang @khushboo-rancher ) - @yangchiu @mantissahz ) - @smallteeths @chriscchien ) - @derekbit @chriscchien ) - @c3y1huang @chriscchien ) - @c3y1huang @chriscchien ) - @yangchiu" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.4.0.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<p align=\"center\"> <a href=\"https://github.com/kuasar-io/kuasar/actions/workflows/ci.yml\"> <img alt=\"GitHub Workflow Status\" src=\"https://github.com/kuasar-io/kuasar/actions/workflows/ci.yml/badge.svg?branch=main\"> </a> <a href=\"https://cloud-native.slack.com/archives/C052JRURD8V\"> <img src=\"https://img.shields.io/badge/slack-join_chat-brightgreen.svg\" alt=\"chat\" /> </a> <img src=\"https://img.shields.io/badge/rustc-stable+-green.svg\" alt=\"supported rustc stable\" /> <a href=\"https://github.com/kuasar-io/kuasar/blob/main/LICENSE\"> <img alt=\"GitHub\" src=\"https://img.shields.io/github/license/kuasar-io/kuasar?color=427ece&label=License&style=flat-square\"> </a> <a href=\"https://github.com/kuasar-io/kuasar/graphs/contributors\"> <img alt=\"GitHub contributors\" src=\"https://img.shields.io/github/contributors/kuasar-io/kuasar?label=Contributors&style=flat-square\"> </a> </p> Kuasar is an efficient container runtime that provides cloud-native, all-scenario container solutions by supporting multiple sandbox techniques. Written in Rust, it offers a standard sandbox abstraction based on the sandbox API. Additionally, Kuasar provides an optimized framework to accelerate container startup and reduce unnecessary overheads. | Sandboxer | Sandbox | Status | |||--| | MicroVM | Cloud Hypervisor | Supported | | | QEMU | Supported | | | Firecracker | Planned in 2024 | | | StratoVirt | Supported | | Wasm | WasmEdge | Supported | | | Wasmtime | Supported | | | Wasmer | Planned in 2024 | | App Kernel | gVisor | Planned in 2024 | | | Quark | Supported | | runC | runC | Supported | In the container world, a sandbox is a technique used to separate container processes from each other, and from the operating system itself. After the introduction of the , sandbox has become the first-class citizen in containerd. With more and more sandbox techniques available in the container world, a management service called \"sandboxer\" is expected to be proposed. Kuasar supports various types of sandboxers, making it possible for users to select the most appropriate sandboxer for each application, according to application requirements. Compared with other container runtimes, Kuasar has the following advantages: Unified Sandbox Abstraction: The sandbox is a first-class citizen in Kuasar as the Kuasar is entirely built upon the Sandbox API, which was previewed by the containerd community in October 2022. Kuasar fully utilizes the advantages of the Sandbox API, providing a unified way for sandbox access and management, and improving sandbox O&M efficiency. Multi-Sandbox Colocation: Kuasar has built-in support for mainstream sandboxes, allowing multiple types of sandboxes to run on a single node. Kuasar is able to balance user's demands for security isolation, fast startup, and standardization, and enables a serverless node resource pool to meet various cloud-native scenario requirements. Optimized Framework: Optimization has been done in Kuasar via removing all pause containers and replacing shim processes by a single resident sandboxer process, bringing about a 1:N process management model, which has a better performance than the current 1:1 shim v2 process model. The benchmark test results showed that Kuasar's sandbox startup speed 2x, while the resource overhead for management was reduced by 99%. More details please refer to . Open and Neutral: Kuasar is committed to building an open and compatible multi-sandbox technology ecosystem. 
Thanks to the Sandbox API, it is more convenient and time-saving to integrate sandbox technologies. Kuasar keeps an open and neutral attitude towards sandbox technologies, therefore all sandbox technologies are welcome. Currently, the Kuasar project is collaborating with open-source communities and projects such as WasmEdge, openEuler and QuarkContainers. Sandboxers in Kuasar use their own isolation techniques for the containers, and they are also external plugins of containerd built on the new sandbox plugin mechanism. A discussion about the sandboxer plugin has been raised in this , with a community meeting record and slides attached in this . Now this feature has been put into 2.0" }, { "data": "Currently, Kuasar provides three types of sandboxers - MicroVM Sandboxer, App Kernel Sandboxer and Wasm Sandboxer - all of which have been proven to be secure isolation techniques in a multi-tenant environment. The general architecture of a sandboxer consists of two modules: one that implements the Sandbox API to manage the sandbox's lifecycle, and the other that implements the Task API to handle operations related to containers. Additionally, Kuasar is also a platform under active development, and we welcome more sandboxers can be built on top of it, such as Runc sandboxer. In the microVM sandbox scenario, the VM process provides complete virtual machines and Linux kernels based on open-source VMMs such as , , and . All of these vm must be running on virtualization-enabled node, otherwise, it won't work!. Hence, the `vmm-sandboxer` of MicroVM sandboxer is responsible for launching VMs and calling APIs, and the `vmm-task`, as the init process in VMs, plays the role of running container processes. The container IO can be exported via vsock or uds. The microVM sandboxer avoids the necessity of running shim process on the host, bringing about a cleaner and more manageable architecture with only one process per pod. Please note that only Cloud Hypervisor, StratoVirt and QEMU are supported currently. The app kernel sandbox launches a KVM virtual machine and a guest kernel, without any application-level hypervisor or Linux kernel. This allows for customized optimization to speed up startup procedure, reduce memory overheads, and improve IO and network performance. Examples of such app kernel sandboxes include and . Quark is an application kernel sandbox that utilizes its own hypervisor named `QVisor` and a customized kernel called `QKernel`. With customized modifications to these components, Quark can achieve significant performance. The `quark-sandboxer` of app kernel sandboxer starts `Qvisor` and an app kernel named `Qkernel`. Whenever containerd needs to start a container in the sandbox, the `quark-task` in `QVisor` will call `Qkernel` to launch a new container. All containers within the same pod will be running within the same process. Please note that only Quark is currently supported. The wasm sandbox, such as or , is incredibly lightweight, but it may have constraints for some applications at present. The `wasm-sandboxer` and `wasm-task` launch containers within a WebAssembly runtime. Whenever containerd needs to start a container in the sandbox, the `wasm-task` will fork a new process, start a new WasmEdge runtime, and run the Wasm code inside it. All containers within the same pod will share the same Namespace/Cgroup resources with the `wasm-task` process. Except secure containers, Kuasar also has provide the ability for containers. 
In order to generate a seperate namespace, a slight process is created by the `runc-sandboxer` through double folked and then becomes the PID 1. Based on this namespace, the `runc-task` can create the container process and join the namespace. If the container need a private namespace, it will unshare a new namespace for itself. The performance of Kuasar is measured by two metrics: End-to-End containers startup time. Process memory consumption to run containers. We used the Cloud Hypervisor in the benchmark test and tested the startup time of 100 PODs under both serial and parallel scenario. The result demonstrates that Kuasar outperforms open-source in terms of both startup speed and memory consumption. For detailed test scripts, test data, and results, please refer to the . The minimum versions of Linux distributions supported by Kuasar are Ubuntu 22.04 or CentOS 8 or openEuler" }, { "data": "Please also note that Quark requires a Linux kernel version >= 5.15. MicroVM: To launch a microVM-based sandbox, a hypervisor must be installed on the virtualization-enabled host. It is recommended to install Cloud Hypervisor by default. You can find Cloud Hypervisor installation instructions . If you want to run kuasar with iSulad container engine and StratoVirt hypervisor, you can refer to this guide . Quark: To use Quark, please refer to the installation instructions . WasmEdge: To start WebAssembly sandboxes, you need to install WasmEdge v0.11.2. Instructions for installing WasmEdge can be found in . Kuasar sandboxers are external plugins of containerd, so both containerd and its CRI plugin are required in order to manage the sandboxes and containers. We offer two ways to interact Kuasar with containerd: EXPERIMENTAL in containerd 2.0 milestone: If you desire the full experience of Kuasar, please install . Rest assured that our containerd is built based on the official v1.7.0, so there is no need to worry about missing any functionalities. If the compatibility is a real concern, you need to install official containerd v1.7.0 with an extra for request forwarding, see . However, it's possible that this way may be deprecated in the future as containerd 2.0 evolves. Since Kuasar is built on top of the Sandbox API, which has already been integrated into the CRI of containerd, it makes sense to experience Kuasar from the CRI level. `crictl` is a debug CLI for CRI. To install it, please see MicroVMs like Cloud Hypervisor needs a virtiofs daemon to share the directories on the host. Please refer to . Rust 1.67 or higher version is required to compile Kuasar. Build it with root user: ```shell git clone https://github.com/kuasar-io/kuasar.git cd kuasar make all make install ``` Tips: `make all` build command will download the Rust and Golang packages from the internet network, so you need to provide the `httpproxy` and `httpsproxy` environments for the `make all` command. If a self-signed certificate is used in the `make all` build command execution environment, you may encounter SSL issues with downloading resources from https URL failed. Therefore, you need to provide a CA-signed certificate and copy it into the root directory of the Kuasar project, then rename it as \"proxy.crt\". In this way, our build script will use the \"proxy.crt\" certificate to access the https URLs of Rust and Golang installation packages. 
Launch the sandboxers by the following commands: For vmm: `nohup vmm-sandboxer --listen /run/vmm-sandboxer.sock --dir /run/kuasar-vmm &` For quark: `nohup quark-sandboxer --listen /run/quark-sandboxer.sock --dir /var/lib/kuasar-quark &` For wasm: `nohup wasm-sandboxer --listen /run/wasm-sandboxer.sock --dir /run/kuasar-wasm &` For runc: `nohup runc-sandboxer --listen /run/runc-sandboxer.sock --dir /run/kuasar-runc &` Since Kuasar is a low-level container runtime, all interactions should be done via CRI in containerd, such as crictl or Kubernetes. We use crictl in the examples below: For vmm, quark or runc, run one of the following scripts: `examples/run_example_container.sh vmm`, `examples/run_example_container.sh quark` or `examples/run_example_container.sh runc` For wasm: the Wasm container needs its own container image, so our script has to build and import the container image first. `examples/run_example_wasm_container.sh` If you have questions, feel free to reach out to us in the following ways: If you're interested in being a contributor and want to get involved in developing the Kuasar code, please see for details on submitting patches and the contribution workflow. Kuasar is under the Apache 2.0 license. See the file for details. Kuasar documentation is under the ." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": "Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: import \"gopkg.in/natefinch/lumberjack.v2\" The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. Example To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts. Code: ```go log.SetOutput(&lumberjack.Logger{ Filename: \"/var/log/myapp/foo.log\", MaxSize: 500, // megabytes MaxBackups: 3, MaxAge: 28, //days Compress: true, // disabled by default }) ``` ``` go type Logger struct { // Filename is the file to write logs to. Backup log files will be retained // in the same directory. It uses <processname>-lumberjack.log in // os.TempDir() if empty. Filename string `json:\"filename\" yaml:\"filename\"` // MaxSize is the maximum size in megabytes of the log file before it gets // rotated. It defaults to 100 megabytes. MaxSize int `json:\"maxsize\" yaml:\"maxsize\"` // MaxAge is the maximum number of days to retain old log files based on the // timestamp encoded in their filename. Note that a day is defined as 24 // hours and may not exactly correspond to calendar days due to daylight // savings, leap seconds, etc. The default is not to remove old log files // based on age. MaxAge int `json:\"maxage\" yaml:\"maxage\"` // MaxBackups is the maximum number of old log files to retain. The default // is to retain all old log files (though MaxAge may still cause them to get // deleted.) MaxBackups int `json:\"maxbackups\" yaml:\"maxbackups\"` // LocalTime determines if the time used for formatting the timestamps in // backup files is the computer's local time. The default is to use UTC // time. LocalTime bool `json:\"localtime\" yaml:\"localtime\"` // Compress determines if the rotated log files should be compressed // using gzip. The default is not to perform compression. Compress bool `json:\"compress\" yaml:\"compress\"` // contains filtered or unexported fields } ``` Logger is an io.WriteCloser that writes to the specified filename. Logger opens or creates the logfile on first Write. If the file exists and is less than MaxSize megabytes, lumberjack will open and append to that file. If the file exists and its size is >= MaxSize megabytes, the file is renamed by putting the current time in a timestamp in the name immediately before the file's extension (or the end of the filename if there's no extension). A new log file is then created using original filename. Whenever a write would cause the current log file exceed MaxSize megabytes, the current file is closed, renamed, and a new log file created with the original name. Thus, the filename you give Logger is always the \"current\" log file. 
Backups use the log file name given to Logger, in the form `name-timestamp.ext` where name is the filename without the extension, timestamp is the time at which the log was rotated formatted with the time.Time format of `2006-01-02T15-04-05.000` and the extension is the original extension. For example, if your Logger.Filename is `/var/log/foo/server.log`, a backup created at 6:30pm on Nov 11 2016 would use the filename" }, { "data": "Whenever a new logfile gets created, old log files may be deleted. The most recent files according to the encoded timestamp will be retained, up to a number equal to MaxBackups (or all of them if MaxBackups is 0). Any files with an encoded timestamp older than MaxAge days are deleted, regardless of MaxBackups. Note that the time encoded in the timestamp is the rotation time, which may differ from the last time that file was written to. If MaxBackups and MaxAge are both 0, no old log files will be deleted. ``` go func (l *Logger) Close() error ``` Close implements io.Closer, and closes the current logfile. ``` go func (l *Logger) Rotate() error ``` Rotate causes Logger to close the existing log file and immediately create a new one. This is a helper function for applications that want to initiate rotations outside of the normal rotation rules, such as in response to SIGHUP. After rotating, this initiates a cleanup of old log files according to the normal rules. Example Example of how to rotate in response to SIGHUP. Code: ```go l := &lumberjack.Logger{} log.SetOutput(l) c := make(chan os.Signal, 1) signal.Notify(c, syscall.SIGHUP) go func() { for { <-c l.Rotate() } }() ``` ``` go func (l *Logger) Write(p []byte) (n int, err error) ``` Write implements io.Writer. If a write would cause the log file to be larger than MaxSize, the file is closed, renamed to include a timestamp of the current time, and a new log file is created using the original log file name. If the length of the write is greater than MaxSize, an error is returned. - - Generated by ement at the top of the file (if it is not already there) and add in a type alias line. Note that if your type is significantly different on different architectures, you may need some `#if/#elif` macros in your include statements. This script is used to generate the system's various constants. This doesn't just include the error numbers and error strings, but also the signal numbers and a wide variety of miscellaneous constants. The constants come from the list of include files in the `includes_${uname}` variable. A regex then picks out the desired `#define` statements, and generates the corresponding Go constants. The error numbers and strings are generated from `#include <errno.h>`, and the signal numbers and strings are generated from `#include <signal.h>`. All of these constants are written to `zerrors${GOOS}${GOARCH}.go` via a C program, `_errors.c`, which prints out all the constants. To add a constant, add the header that includes it to the appropriate variable. Then, edit the regex (if necessary) to match the desired constant. Avoid making the regex too broad to avoid matching unintended constants. This program is used to extract duplicate const, func, and type declarations from the generated architecture-specific files listed below, and merge these into a common file for each OS. The merge is performed in the following steps: Construct the set of common code that is idential in all architecture-specific files. Write this common code to the merged file. 
Remove the common code from all architecture-specific files. A file containing all of the system's generated error numbers, error strings, signal numbers, and constants. Generated by `mkerrors.sh` (see above). A file containing all the generated syscalls for a specific GOOS and GOARCH. Generated by `mksyscall.go` (see above). A list of numeric constants for all the syscall number of the specific GOOS and GOARCH. Generated by mksysnum (see above). A file containing Go types for passing into (or returning from) syscalls. Generated by godefs and the types file (see above)." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Generating ReST pages from a cobra command is incredibly easy. An example is as follows: ```go package main import ( \"log\" \"github.com/spf13/cobra\" \"github.com/spf13/cobra/doc\" ) func main() { cmd := &cobra.Command{ Use: \"test\", Short: \"my test program\", } err := doc.GenReSTTree(cmd, \"/tmp\") if err != nil { log.Fatal(err) } } ``` That will get you a ReST document `/tmp/test.rst` This program can actually generate docs for the kubectl command in the kubernetes project ```go package main import ( \"log\" \"io/ioutil\" \"os\" \"k8s.io/kubernetes/pkg/kubectl/cmd\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"github.com/spf13/cobra/doc\" ) func main() { kubectl := cmd.NewKubectlCommand(cmdutil.NewFactory(nil), os.Stdin, ioutil.Discard, ioutil.Discard) err := doc.GenReSTTree(kubectl, \"./\") if err != nil { log.Fatal(err) } } ``` This will generate a whole series of files, one for each command in the tree, in the directory specified (in this case \"./\") You may wish to have more control over the output, or only generate for a single command, instead of the entire command tree. If this is the case you may prefer to `GenReST` instead of `GenReSTTree` ```go out := new(bytes.Buffer) err := doc.GenReST(cmd, out) if err != nil { log.Fatal(err) } ``` This will write the ReST doc for ONLY \"cmd\" into the out, buffer. Both `GenReST` and `GenReSTTree` have alternate versions with callbacks to get some control of the output: ```go func GenReSTTreeCustom(cmd *Command, dir string, filePrepender func(string) string, linkHandler func(string, string) string) error { //... } ``` ```go func GenReSTCustom(cmd Command, out bytes.Buffer, linkHandler func(string, string) string) error { //... } ``` The `filePrepender` will prepend the return value given the full filepath to the rendered ReST file. A common use case is to add front matter to use the generated documentation with : ```go const fmTemplate = ` date: %s title: \"%s\" slug: %s url: %s ` filePrepender := func(filename string) string { now := time.Now().Format(time.RFC3339) name := filepath.Base(filename) base := strings.TrimSuffix(name, path.Ext(name)) url := \"/commands/\" + strings.ToLower(base) + \"/\" return fmt.Sprintf(fmTemplate, now, strings.Replace(base, \"_\", \" \", -1), base, url) } ``` The `linkHandler` can be used to customize the rendered links to the commands, given a command name and reference. This is useful while converting rst to html or while generating documentation with tools like Sphinx where `:ref:` is used: ```go // Sphinx cross-referencing format linkHandler := func(name, ref string) string { return fmt.Sprintf(\":ref:`%s <%s>`\", name, ref) } ```" } ]
{ "category": "Runtime", "file_name": "rest_docs.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: PVC Storage Cluster In a \"PVC-based cluster\", the Ceph persistent data is stored on volumes requested from a storage class of your choice. This type of cluster is recommended in a cloud environment where volumes can be dynamically created and also in clusters where a local PV provisioner is available. In this example, the mon and OSD volumes are provisioned from the AWS `gp2` storage class. This storage class can be replaced by any storage class that provides `file` mode (for mons) and `block` mode (for OSDs). ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false volumeClaimTemplate: spec: storageClassName: gp2 resources: requests: storage: 10Gi storage: storageClassDeviceSets: name: set1 count: 3 portable: false encrypted: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce onlyApplyOSDPlacement: false ``` In the CRD specification below, 3 OSDs (having specific placement and resource values) and 3 mons with each using a 10Gi PVC, are created by Rook using the `local-storage` storage class. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi cephVersion: image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: false dashboard: enabled: true network: hostNetwork: false storage: storageClassDeviceSets: name: set1 count: 3 portable: false resources: limits: memory: \"4Gi\" requests: cpu: \"500m\" memory: \"4Gi\" placement: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: weight: 100 podAffinityTerm: labelSelector: matchExpressions: key: \"rook.io/cluster\" operator: In values: cluster1 topologyKey: \"topology.kubernetes.io/zone\" volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: local-storage volumeMode: Block accessModes: ReadWriteOnce ``` In the CRD specification below three monitors are created each using a 10Gi PVC created by Rook using the `local-storage` storage class. Even while the mons consume PVCs, the OSDs in this example will still consume raw devices on the host. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi dashboard: enabled: true storage: useAllNodes: true useAllDevices: true ``` In the simplest case, Ceph OSD BlueStore consumes a single (primary) storage device. BlueStore is the engine used by the OSD to store data. The storage device is normally used as a whole, occupying the full device that is managed directly by BlueStore. It is also possible to deploy BlueStore across additional devices such as a DB device. This device can be used for storing BlueStores internal metadata. BlueStore (or rather, the embedded RocksDB) will put as much metadata as it can on the DB device to improve performance. 
If the DB device fills up, metadata will spill back onto the primary device (where it would have been" }, { "data": "Again, it is only helpful to provision a DB device if it is faster than the primary device. You can have multiple `volumeClaimTemplates` where each might either represent a device or a metadata device. An example of the `storage` section when specifying the metadata device is: ```yaml storage: storageClassDeviceSets: name: set1 count: 3 portable: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce metadata: name: metadata spec: resources: requests: storage: 5Gi storageClassName: io1 volumeMode: Block accessModes: ReadWriteOnce ``` !!! note Note that Rook only supports three naming convention for a given template: \"data\": represents the main OSD block device, where your data is being stored. \"metadata\": represents the metadata (including block.db and block.wal) device used to store the Ceph Bluestore database for an OSD. \"wal\": represents the block.wal device used to store the Ceph Bluestore database for an OSD. If this device is set, \"metadata\" device will refer specifically to block.db device. It is recommended to use a faster storage class for the metadata or wal device, with a slower device for the data. Otherwise, having a separate metadata device will not improve the performance. The bluestore partition has the following reference combinations supported by the ceph-volume utility: A single \"data\" device. ```yaml storage: storageClassDeviceSets: name: set1 count: 3 portable: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce ``` A \"data\" device and a \"metadata\" device. ```yaml storage: storageClassDeviceSets: name: set1 count: 3 portable: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce metadata: name: metadata spec: resources: requests: storage: 5Gi storageClassName: io1 volumeMode: Block accessModes: ReadWriteOnce ``` A \"data\" device and a \"wal\" device. A WAL device can be used for BlueStores internal journal or write-ahead log (block.wal), it is only useful to use a WAL device if the device is faster than the primary device (data device). There is no separate \"metadata\" device in this case, the data of main OSD block and block.db located in \"data\" device. ```yaml storage: storageClassDeviceSets: name: set1 count: 3 portable: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce metadata: name: wal spec: resources: requests: storage: 5Gi storageClassName: io1 volumeMode: Block accessModes: ReadWriteOnce ``` A \"data\" device, a \"metadata\" device and a \"wal\" device. ```yaml storage: storageClassDeviceSets: name: set1 count: 3 portable: false volumeClaimTemplates: metadata: name: data spec: resources: requests: storage: 10Gi storageClassName: gp2 volumeMode: Block accessModes: ReadWriteOnce metadata: name: metadata spec: resources: requests: storage: 5Gi storageClassName: io1 volumeMode: Block accessModes: ReadWriteOnce metadata: name: wal spec: resources: requests: storage: 5Gi storageClassName: io1 volumeMode: Block accessModes: ReadWriteOnce ``` To determine the size of the metadata block follow the . 
With the present configuration, each OSD will have its main block allocated a 10GB device as well a 5GB device to act as a bluestore database." } ]
{ "category": "Runtime", "file_name": "pvc-cluster.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "(devices-unix-hotplug)= ```{note} The `unix-hotplug` device type is supported for containers. It supports hotplugging. ``` Unix hotplug devices make the requested Unix device appear as a device in the instance (under `/dev`). If the device exists on the host system, you can read from it and write to it. The implementation depends on `systemd-udev` to be run on the host. `unix-hotplug` devices have the following device options: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group devices-unix-hotplug start --> :end-before: <!-- config group devices-unix-hotplug end --> ```" } ]
{ "category": "Runtime", "file_name": "devices_unix_hotplug.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "title: Using The Weave Docker API Proxy menu_order: 10 search_type: Documentation When containers are created via the Weave Net proxy, their entrypoint is modified to wait for the Weave network interface to become available. When they are started via the Weave Net proxy, containers are and connected to the Weave network. To create and start a container via the Weave Net proxy run: host1$ docker run -ti weaveworks/ubuntu or, equivalently run: host1$ docker create -ti --name=foo weaveworks/ubuntu host1$ docker start foo Specific IP addresses and networks can be supplied in the `WEAVE_CIDR` environment variable, for example: host1$ docker run -e WEAVE_CIDR=10.2.1.1/24 -ti weaveworks/ubuntu Multiple IP addresses and networks can be supplied in the `WEAVE_CIDR` variable by space-separating them, as in `WEAVE_CIDR=\"10.2.1.1/24 10.2.2.1/24\"`. The Docker NetworkSettings (including IP address, MacAddress, and IPPrefixLen), are still returned when `docker inspect` is run. If you want `docker inspect` to return the Weave NetworkSettings instead, then the proxy must be launched using the `--rewrite-inspect` flag. This command substitutes the Weave network settings when the container has a Weave Net IP. If a container has more than one Weave Net IP, then the inspect call only includes one of them. host1$ weave launch --rewrite-inspect By default, multicast traffic is routed over the Weave network. To turn this off, for example, because you want to configure your own multicast route, add the `--no-multicast-route` flag to `weave launch`. `--without-dns` -- stop telling containers to use `--log-level=debug|info|warning|error` -- controls how much information to emit for debugging `--no-restart` -- remove the default policy of `--restart=always`, if you want to control start-up of the proxy yourself If for some reason you need to disable the proxy, but still want to start other Weave Net components (router, weaveDNS), you can do so using: weave launch --proxy=false See Also *" } ]
{ "category": "Runtime", "file_name": "using-proxy.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-operator completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-operator completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Issue #2082 requested that with the command `velero backup-location delete <bsl name>` (implemented in Velero 1.6 with #3073), the following will be deleted: associated Velero backups (to be clear, these are custom Kubernetes resources called \"backups\" that are stored in the API server) associated Restic repositories (custom Kubernetes resources called \"resticrepositories\") This design doc explains how the request will be implemented. When a BSL resource is deleted from its Velero namespace, the associated custom Kubernetes resources, backups and Restic repositories, can no longer be used. It makes sense to clean those resources up when a BSL is deleted. Update the `velero backup-location delete <bsl name>` command to delete associated backup and Restic repository resources in the same Velero namespace. to fix bug #2697 alongside this issue. However, I think that should be fixed separately because although it is similar (restore objects are not being deleted), it is also quite different. One is adding a command feature update (this issue) and the other is a bug fix and each affect different parts of the code base. Update the `velero backup-location delete <bsl name>` command to do the following: find in the same Velero namespace from which the BSL was deleted the associated backup resources and Restic repositories, called \"backups.velero.io\" and \"resticrepositories.velero.io\" respectively delete the resources found The above logic will be added to . I had considered deleting the backup files (the ones in json format and tarballs) in the BSL itself. However, a standard use case is to back up a cluster and then restore into a new cluster. Deleting the backup storage location in either location is not expected to remove all of the backups in the backup storage location and should not be done." } ]
{ "category": "Runtime", "file_name": "2082-bsl-delete-associated-resources_design.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, we have several counters to globally count the used space and inodes, which can be used to show information or set quota. However, we do not have efficient ways to show used information of or set quota for each directory. This document give a proposal to efficiently and almost immediately collect used space and inodes for each directory. The \"efficiently\" means this operation cannot affect the performance of normal IO operations like `mknod`, `write` .etc. And the \"almost immediately\" means this operation cannot be lazy or scheduled, we must update the used space and inodes actively, but there may be a little latency (between several seconds and 1 minute). The counters should be stored in meta engines, in this section we introduce how to store them in three kinds of meta engines. Redis engine stores the counters in hashes. ```go func (m *redisMeta) dirUsedSpaceKey() string { return m.prefix + \"dirUsedSpace\" } func (m *redisMeta) dirUsedInodesKey() string { return m.prefix + \"dirUsedInodes\" } ``` SQL engine stores the counters in a table. ```go type dirUsage struct { Inode Ino `xorm:\"pk\"` UsedSpace uint64 `xorm:\"notnull\"` UsedInodes uint64 `xorm:\"notnull\"` } ``` TKV engine stores each counter in one key. ```go func (m *kvMeta) dirUsageKey(inode Ino) []byte { return m.fmtKey(\"U\", inode) } ``` In this section we represent how and when to update and read the counters. The are several file types among the children, we should clarify how to deal with each kinds of files first. | Type | Used space | Used inodes | | - | | -- | | Normal file | `align4K(size)` | 1 | | Directory | 4KiB | 1 | | Symlink | 4KiB | 1 | | FIFO | 4KiB | 1 | | Block device | 4KiB | 1 | | Char device | 4KiB | 1 | | Socket | 4KiB | 1 | Each meta engine should implement `doUpdateDirUsage`. ```go type engine interface { ... doUpdateDirUsage(ctx Context, ino Ino, space int64, inodes int64) } ``` Relevant IO operations should call `doUpdateDirUsage` asynchronously. ```go func (m *baseMeta) Mknod(ctx Context, parent Ino, ...) syscall.Errno { ... err := m.en.doMknod(ctx, m.checkRoot(parent), ...) ... go m.en.doUpdateDirUsage(ctx, parent, 1<<12, 1) return err } func (m *baseMeta) Unlink(ctx Context, parent Ino, name string) syscall.Errno { ... err := m.en.doUnlink(ctx, m.checkRoot(parent), name) ... go m.en.doUpdateDirUsage(ctx, parent, -align4K(attr.size), -1) return err } ``` Each meta engine should implement `doGetDirUsage`. ```go type engine interface { ... doGetDirUsage(ctx Context, ino Ino) (space, inodes uint64, err syscall.Errno) } ``` Now we can fasly recursively calculate the space and inodes usage in a directory by `doGetDirUsage`. ```go // walk all directories in root func (m *baseMeta) fastWalkDir(ctx Context, inode Ino, walkDir func(Context, Ino)) syscall.Errno { walkDir(ctx, inode) var entries []*Entry st := m.en.doReaddir(ctx, inode, 0, &entries, -1) // disable plus ... for _, entry := range entries { if ent.Attr.Typ != TypeDirectory { continue } m.fastWalkDir(ctx, entry.Inode, walkFn) ... } return 0 } func (m *baseMeta) getDirUsage(ctx Context, root Ino) (space, inodes uint64, err syscall.Errno) { m.fastWalkDir(ctx, root, func(_ Context, ino Ino) { s, i, err := m.doGetDirUsage(ctx, ino) ... space += s inodes += i }) return } ```" } ]
{ "category": "Runtime", "file_name": "1-dir-used-statistics.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<h1><center>CSPC-Operator Design Document</h1> CStor data engine was made available in OpenEBS from the 0.7.0 version. StoragePoolClaim (SPC) was used to provision cStor Pools. A user was allowed only to change SPC to add new pools on new nodes. Handling the day 2 operations like pool expansion and replacing block devices and so forth were not intuitive via the current SPC schema. This document proposes the introduction of a new schema for cStor pool provisioning and also refactors the code to make cStor a completely pluggable engine into OpenEBS. The new schema also makes it easy to perform day 2 operations on cStor pools. Users that have deployed cStor have the following feedback: The name StoragePoolClaim (SPC) is not intuitive. Single SPC CR is handling multiple provisioning modes for cStor pool i.e auto and manual provisioning. The topology information on the CR is not apparent. CStor-Operator that reconciles on SPC CR to provision, de-provision and carry the operations on cStor pools is also embedded in maya-apiserver which is against the microservice model. Additionally, SPC and other CRs used in cStor pool provisioning are at cluster namespace which stops or comes with challenges for multiple teams to use OpenEBS on the same cluster. Please refer to the Appendix section (end of the doc) to gain more background on SPC and its limitations. At a high level, the objective of this document is to introduce: Two new CRs i.e. CStorPoolcluster(CSPC) and CStorPoolInstance(CSPI) to facilitate pool provisioning which addresses the above-mentioned concerns related to naming, topology, multiple behaviors. This is analogous to StoragePoolClaim and CStorPool CRs respectively but with schema changes. CSPC and CSPI will be at namespace scope. New cStor pool provisioner known as cspc-operator which will run as deployment and manages the CSPC. Also, introduces cspi-mgmt deployment that watches for CSPI and is analogous to cstor-pool-mgmt in SPC case. NOTES: This design has also taken into account the backward compatibility so users running cStor with SPC will not be impacted. CSPC can be experimented with and the migration process to convert SPC to CSPC will be provided. The new schema also takes into account the ease with which cStor pool day 2 operations can be performed. Volume provisioning on CSPC provisioned pools will be supported only via CSI. Following is the proposed CSPC schema in go struct: ``` go // CStorPoolCluster describes a CStorPoolCluster custom resource. type CStorPoolCluster struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec CStorPoolClusterSpec `json:\"spec\"` Status CStorPoolClusterStatus `json:\"status\"` } // CStorPoolClusterSpec is the spec for a CStorPoolClusterSpec resource type CStorPoolClusterSpec struct { // Pools is the spec for pools for various nodes // where it should be created. Pools []PoolSpec `json:\"pools\"` } //PoolSpec is the spec for a pool on a node where it should be created. type PoolSpec struct { // NodeSelector is the labels that will be used to select // a node for pool provisioning. // Required field NodeSelector map[string]string `json:\"nodeSelector\"` // RaidConfig is the raid group configuration for the given pool. RaidGroups []RaidGroup `json:\"raidGroups\"` // PoolConfig is the default pool config that applies to the // pool on node. PoolConfig PoolConfig `json:\"poolConfig\"` } // PoolConfig is the default pool config that applies to the // pool on node. 
type PoolConfig struct { // Cachefile is used for faster pool imports // optional -- if not specified or left empty cache file is not //" }, { "data": "CacheFile string `json:\"cacheFile\"` // DefaultRaidGroupType is the default raid type which applies // to all the pools if raid type is not specified there // Compulsory field if any raidGroup is not given Type DefaultRaidGroupType string `json:\"defaultRaidGroupType\"` // OverProvisioning to enable over provisioning // Optional -- defaults to false OverProvisioning bool `json:\"overProvisioning\"` // Compression to enable compression // Optional -- defaults to off // Possible values : lz, off Compression string `json:\"compression\"` } // RaidGroup contains the details of a raid group for the pool type RaidGroup struct { // Type is the raid group type // Supported values are: stripe, mirror, raidz and raidz2 // stripe -- stripe is a raid group which divides data into blocks and // spreads the data blocks across multiple block devices. // mirror -- mirror is a raid group which does redundancy // across multiple block devices. // raidz -- RAID-Z is a data/parity distribution scheme like RAID-5, but uses dynamic stripe width. // radiz2 -- TODO // Optional -- defaults to `defaultRaidGroupType` present in `PoolConfig` Type string `json:\"type\"` // IsWriteCache is to enable this group as a write cache. IsWriteCache bool `json:\"isWriteCache\"` // IsSpare is to declare this group as spare which will be // part of the pool that can be used if some block devices // fail. IsSpare bool `json:\"isSpare\"` // IsReadCache is to enable this group as read cache. IsReadCache bool `json:\"isReadCache\"` // BlockDevices contains a list of block devices that // constitute this raid group. BlockDevices []CStorPoolClusterBlockDevice `json:\"blockDevices\"` } // CStorPoolClusterBlockDevice contains the details of block devices that // constitutes a raid group. type CStorPoolClusterBlockDevice struct { // BlockDeviceName is the name of the block device. BlockDeviceName string `json:\"blockDeviceName\"` // Capacity is the capacity of the block device. // It is system generated Capacity string `json:\"capacity\"` // DevLink is the dev link for block devices DevLink string `json:\"devLink\"` } // CStorPoolClusterStatus is for handling status of pool. type CStorPoolClusterStatus struct { Phase string `json:\"phase\"` Capacity CStorPoolCapacityAttr `json:\"capacity\"` } // CStorPoolClusterList is a list of CStorPoolCluster resources type CStorPoolClusterList struct { metav1.TypeMeta `json:\",inline\"` metav1.ListMeta `json:\"metadata\"` Items []CStorPoolCluster `json:\"items\"` } ``` Following is the proposed CSPI schema in go struct: ``` go // CStorPoolInstance describes a cstor pool instance resource created as a custom resource. type CStorPoolInstance struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec CStorPoolInstanceSpec `json:\"spec\"` Status CStorPoolStatus `json:\"status\"` } // CStorPoolInstanceSpec is the spec listing fields for a CStorPoolInstance resource. type CStorPoolInstanceSpec struct { // HostName is the name of Kubernetes node where the pool // should be created. HostName string `json:\"hostName\"` // NodeSelector is the labels that will be used to select // a node for pool provisioning. // Required field NodeSelector map[string]string `json:\"nodeSelector\"` // PoolConfig is the default pool config that applies to the // pool on node. 
PoolConfig PoolConfig `json:\"poolConfig\"` // RaidGroups is the group containing block devices RaidGroups []RaidGroup `json:\"raidGroup\"` } // CStorPoolInstanceList is a list of CStorPoolInstance resources type CStorPoolInstanceList struct { metav1.TypeMeta `json:\",inline\"` metav1.ListMeta `json:\"metadata\"` Items []CStorPoolInstance `json:\"items\"` } ``` A user puts the cStor pool intent in a CSPC YAML and applies it to provision cStor pools. When a CSPC is created, k number of CStorPoolInstance(CSPI) CR and CStorPool deployment(known as cspi-mgmt) is created. CStorPool deployment watches (is controller) for CSPI CR and there is one to one mapping between the CSPI CR and the cspi-mgmt" }, { "data": "This number k depends on the length of the spec.pools field on CSPC specification. For a given CSPC the number k described above is known as the desired pool count. Also, the number of existing CSPI CRs is known as the current pool count. The system will always try to converge the current pool count to the desired pool count by creating a required number of CSPI CRs. For every CSPI CR, a corresponding cspi-mgmt deployment will always exist. If due to some reason, a cspi-mgmt of a corresponding CSPI CR is deleted, a new cspi-mgmt for the same CR will come up again. Similarly, if a CSPI of a given CSPC is deleted, its corresponding cspi-mgmt will be deleted too but again a new CSPI and its corresponding cspi-mgmt will come up again. The parent to child(left to right) structure of the resources are shown as : ```ascii |> CSPI > cspi-mgmt CSPC> |> > |> CSPI > cspi-mgmt ``` The following are a few samples of CStorPoolCluster YAMLs to go through. (Pool on one node with `stripe` type) ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-stripe spec: pools: nodeSelector: kubernetes.io/hostName: gke-cstor-it-default-pool-1 raidGroups: type: stripe isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: off ``` (Pool on one node with `stripe` type) ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-stripe spec: pools: nodeSelector: kubernetes.io/hostName: gke-cstor-it-default-pool-1 raidGroups: type: stripe isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 blockDeviceName: sparse-9x6fc7491f9e4cf5005454373074fsds blockDeviceName: sparse-8c1fc7491f9sdpefgsdjk46845nssdf5 blockDeviceName: sparse-54cedzcs1f9e4cf50053730740647318 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: off ``` (Pool on one node with `stripe` and `mirror` type) ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-stripe spec: pools: nodeSelector: kubernetes.io/hostName: gke-cstor-it-default-pool-1 raidGroups: type: stripe isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 blockDeviceName: sparse-9x6fc7491f9e4cf5005454373074fsds blockDeviceName: sparse-8c1fc7491f9sdpefgsdjk46845nssdf5 blockDeviceName: sparse-54cedzcs1f9e4cf50053730740647318 type: mirror isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 blockDeviceName: sparse-5x6fc7491sdffsfs75005454373074fs poolConfig: 
cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: off ``` (Pool on two nodes with `mirror` type) ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-1 blockDeviceName: pool-1-bd-2 type: mirror name: group-2 isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-3 blockDeviceName: pool-1-bd-4 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: lz nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-2 raidGroups: type: mirror blockDevices: blockDeviceName: pool-2-bd-1 blockDeviceName: pool-2-bd-2 type: mirror name: group-2 blockDevices: blockDeviceName: pool-2-bd-3 blockDeviceName: pool-2-bd-4 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: off ``` Following operations should be supported on CSPC manifest to carry out pool day 2 operations A block device can be added on `striped` type raid group.`[Pool Expansion]` A new raid group of any type can be added inside the pool spec (Path on CSPC: spec.pools.raidGroups).`[Pool Expansion]` A new pool spec can be added on the CSPC. ( Path on CSPC: spec.pools) `[Horizontal Pool Scaling]` Node selector can be changed on CSPC to do pool migration on different node. But before doing that, all the associated block Devices should be attached to the newer nodes. [Pool Migration] // TODO: More POC on `Pool Migration` regarding block devices // management. A block device can be replaced in raid-groups of type other than striped. `[Block Device Replacement]` A pool can be deleted by removing the entire pool" }, { "data": "`[Pool Deletion]` Any other operations except those described above is invalid and will be handled gracefully via error returns. This validation is done by CSPC admission module in the openebs-admission server. Following are some invalid operations (wrt to day2 ops) on CSPC but the list may not be exhaustive: Block device Removal. Raid group removal. Pool Expansion As an OpenEBS user, I should be able to add block devices to CSPC to increase the pool capacity. Steps To Be Performed By User: `kubectl edit cspc yourcspcname` Put the block devices in the CSPC spec against the correct Kubernetes nodes. 
Consider following example for stripe pool expansion Current CSPC is following: This CSPC corresponds to a stripe pool on node `gke-cstor-it-default-pool-1` ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-stripe spec: pools: nodeSelector: kubernetes.io/hostName: gke-cstor-it-default-pool-1 raidGroups: type: stripe isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: false ``` Expanding stripe pools -- the spec will look following: ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-stripe spec: pools: nodeSelector: kubernetes.io/hostName: gke-cstor-it-default-pool-1 raidGroups: type: stripe isWriteCache:false isSpare: false isReadCache: false blockDevices: blockDeviceName: sparse-3c1fc7491f9e4cf50053730740647318 // New block device added blockDeviceName: sparse-4cf345h41f9e4cf50053730740647318 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: stripe overProvisioning: false compression: false ``` Consider following example for mirror pool expansion Current CSPC is following: This CSPC corresponds to a mirror pool on node `gke-cstor-it-default-pool-1` and on node `gke-cstor-it-default-pool-2` ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-1 blockDeviceName: pool-1-bd-2 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: false ``` Expanding mirror pools -- the spec will look following: ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: gke-cstor-it-default-pool-1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: pool-1-bd-1 blockDeviceName: pool-1-bd-2 // New group added type: mirror blockDevices: blockDeviceName: pool-1-bd-3 blockDeviceName: pool-1-bd-4 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: false ``` As an OpenEBS user, I should be able to replace existing block devices on CSPC to perform disk replacement operations. Steps To Be Performed By User: `kubectl edit cspc yourcspcname` Update the existing block devices with new block devices in the CSPC spec against the correct Kubernetes nodes. Consider following example for mirror pool disk replacement Current CSPC is following: This CSPC corresponds to a mirror pool on node `kubernetes-node1`. 
```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: kubernetes-node1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: node1-bd-1 blockDeviceName: node1-bd-2 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: false ``` Replacing block devices in mirror pool -- the spec will look following: ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: kubernetes-node1 raidGroups: type: mirror isWriteCache: false isSpare: false isReadCache: false blockDevices: blockDeviceName: node1-bd-3 blockDeviceName: node1-bd-2 poolConfig: cacheFile: /tmp/pool1.cache defaultRaidGroupType: mirror overProvisioning: false compression: false ``` In the above CSPC spec node1-bd-1 is replaced with node1-bd-3 Disk Replacement Validations: Once user modifies the CSPC spec to trigger disk replacement process below are the validations done by the admission-server to restrict invalid changes made to CSPC Below are validations on CSPC by admission server Not more than one block device should be replaced simultaneously in the same raid group. Replacing block devices should not be already in use by the same cStor" }, { "data": "Replacing another block device in the raid group is not allowed when any of the block device in the same raid group is undergoing replacement[How admission server can detect it? It can be verified by checking for `openebs.io/bd-predecessor` annotation in the block device claims of block devices present in the same raid group will not be empty]. Steps to validate whether someone(other CSPC or local PV) already claimed replacing block device 4.1 Verify is there any claim create for replacing block device. If there are no claims for replacing block device jump to step 5. 4.2 If replacing block device was already claimed and if that claim doesn't have CSPC label (or) different CSPC label(i.e block device belongs to some other CSPC) then reject replacement request. 4.3 If existing block devices in this CSPC has block device claim with annotation openebs.io/bd-predecessor as replacing block device name then reject the request else jump to step 5. Create a claim for replacing block device with annotation `openebs.io/bd-predecessor: oldblockdevice_name` only if claim doesn't exists. If claim already exists for replacing block device then update annotation with proper block device name(If some operator or admin has already created claim for block device and triggered replace). `NOTE:` Pool expansion and disk replacement can go in parallel. Across the pool parallel block device replacements are allowed only when the block devices belong to the different raid groups. Block Device replacement is supported only in RAID groups(i.e Mirror, Raidz and Raidz2). Block Device Replacement Workflow: CSPC-Operator Work done by CStor-Operator after the user has updated the existing block device name with new block device name The CSPC-Operator will detect under which CSPC pool spec block device has been replaced[How it can detect? By comparing new CSPC spec changes with corresponding node CSPI spec] after identifying the changes CSPC-Operator will update the corresponding raid group of CSPI with new block device name(replace old block device with new block device name). 
Note:There can't be more than one block device change between CSPI and CSPC in a particular raid group. Work done by CSPI-Mgmt after CSPC-Operator replaces block device name in CSPI spec The CSPI-Mgmt will reconcile for the changes made on CSPI. CSPI-Mgmt will process changes in the following manner 1.1 CSPI-Mgmt will detect are this changes for replacement[How? CSPI-Mgmt will trigger `zpool dump <pool_name>` command and it will get pool dump output from cstor-pool container. CSPI-Mgmt will verify whether any of block device links are in use by pool via pool dump output if links are not in use by pool then it might be pool expansion (or) replacement operation. If claim of current block device has annotation `openebs.io/bd-predecessor: old_bdName` then changes are identified as replacement]. If changes are for replacement CSPI-Mgmt will execute `zpool replace <poolname> <olddevice> <new_device>` this command which will trigger disk replacement operation in cstor. 1.2 For each reconciliation CSPI-Mgmt will process each block device and checks if claim of current block device has `openebs.io/bd-predecessor` annotation then CSPI-Mgmt will trigger `zpool dump` command and verifies the status of resilvering for particular vdev(How it will detect vdev? CSPI-Mgmt will get vdev based on the device link of block device). 1.3 On completion of resilvering process CSPI-Mgmt will unclaim the old block device if block device was replaced and removes the `openebs.io/bd-predecessor` annotation from new block device. Note: Representing resilvering status on CSPI CR(Not" }, { "data": "( Includes `Pool Expansion`, `Pool Deletion`, `Block Device Replacement`, and `Pool Migration`) Work done by CStor-operator after the user has edited the CSPC: The CSPC spec change as a result of the user modifying CSPC and the CStor-Operator should propagate the changes to corresponding CSPI(s). Hostname changed in the CSPC can cause following:( `Pool Migration`) Cstor-Operator will patch the corresponding CSPI and deployment with the hostname in the CSPI spec. ( // TODO: Is there anything that cspi-mgmt is expected to do ? ) // TODO: More POC on pool migration wrt -- reconnecting volumes etc. When the entire pool spec is deleted, CStor-Operator should figure out the orphaned CSP and delete it which will cause the pool to be deleted. Block device addition can cause the following: Add block device in stripe pool on a node(`Pool Expansion`): If a block device is added in a stripe pool in CSPC, corresponding CSP will be patched by Cstor-Operator to reflect the new block device. Now cspi-mgmt should handle this change by adding the device to the pool Add a raid group to pool spec (`Pool Expansion`): A raid group can be added (`e.g. mirror raid group, raidz raid group etc`) and this change will propagate operation Workflow to corresponding CSP to be finally handled by cspi-mgmt. If a pool spec is deleted the entire pool will get deleted. (`Pool Deletion`) If a pool spec is added a new pool will be formed. (`Horizontal Pool Scale`) Block device replacement can cause the following: Replacing a block device under spec of CSPC will trigger disk replacement. Note: Disk Replacement is supported only in mirror and raidz raid group. NOTE: Cstor-Operator will handle the spec change in the following order. Also, the pool operations will be carried out only when there is no pending pool creation/deletion on nodes. 
Host Name Change<br/> 1.1 Pool Migration Block Device Addition<br/> 2.1 Pool Expansion Block Device Replacement<br/> 3.1 Block Device Replacement The order of handling is 1.1 > 2.1 > 3.1 Till OpenEBS 1.1 cStor pool can be provisioned using StoragePoolClaim and cStor volumes can be provisioned on the pools provisioned via SPC. SPC provisioning is facilitated by CStor-Operator that runs as a routine in maya-apiserver. Please visit the following link to know more about the CStor-Operator which is also known as SPC-Watcher synonymously. https://docs.google.com/document/d/1dcm7wvdpUHfSOMoFTPJBUeesMej46eAIN06-1hlvw/edit#heading=h.big02sfdb5gh Limitations with SPC: StoragePoolClaim parents a collection of cStor pools on different kubernetes nodes, meaning the creation of SPC can create 'k' number of cStor pools on nodes, where this 'k' can be controlled by the user. But looks like claim type of resources ( e.g PersistentVolumeClaim) maps to only one object ( e.g. PVC maps to a single PV) and this can create confusion at the very beginning. Once you have created an SPC -- you do not get enough information from the SPC e.g. which block device is on what node, what are the collection of block devices that form a cStor pool! `StoragePoolClaim` is at cluster scope which can cause a problem for multiple instances of OpenEBS running in the same cluster in different namespaces. Idea is to enable teams to use OpenEBS in their provided namespace in the cluster Pool topology information is not visible on the SPC spec and it hides information about block devices/disks that constitute a cStor pool on a node. Additionally, this lack of information poses challenges in incorporating pool day 2 operations features such as pool expansion, block device replacement, etc." } ]
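For illustration, the predecessor bookkeeping used by the block device replacement workflow above can be pictured as a BlockDeviceClaim carrying the `openebs.io/bd-predecessor` annotation. Only that annotation name is taken from this design; the claim schema, the CSPC label key and all resource names below are assumptions made for the sketch.
```yaml
# Illustrative only -- every name and label value here is hypothetical.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: bdc-node1-bd-3
  labels:
    # Label key assumed; it ties the claim back to its CSPC.
    openebs.io/cstor-pool-cluster: cstor-pool-mirror
  annotations:
    # Marks node1-bd-3 as the replacement for node1-bd-1; CSPI-Mgmt removes
    # this annotation once resilvering for the vdev completes.
    openebs.io/bd-predecessor: node1-bd-1
spec:
  blockDeviceName: node1-bd-3
```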
{ "category": "Runtime", "file_name": "doc.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "For the metrics capability, Firecracker uses a single Metrics system. This system can be configured either by: a) sending a `PUT` API Request to the `/metrics` path: or b) using the `--metrics-path` CLI option. Note the metrics configuration is not part of the guest configuration and is not restored from a snapshot. In order to configure the Metrics, first you have to create the resource that will be used for storing the metrics: ```bash mkfifo metrics.fifo touch metrics.file ``` When launching Firecracker, use the CLI option to set the metrics file. ```bash ./firecracker --metrics-path metrics.fifo ``` You can configure the Metrics system by sending the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/metrics\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"metrics_path\\\": \\\"metrics.fifo\\\" }\" ``` Details about this configuration can be found in the . The metrics are written to the `metrics_path` in JSON format. The metrics get flushed in two ways: without user intervention every 60 seconds; upon user demand, by issuing a `FlushMetrics` request. You can find how to use this request in the . If the path provided is a named pipe, you can use the script below to read from it: ```shell metrics=metrics.fifo while true do if read line <$metrics; then echo $line fi done echo \"Reader exiting\" ``` Otherwise, if the path points to a normal file, you can simply do: ```shell script cat metrics.file ``` The metrics emitted by Firecracker are in JSON" }, { "data": "Below are the keys present in each metrics json object emitted by Firecracker: ``` \"api_server\" \"balloon\" \"block\" \"deprecated_api\" \"entropy\" \"getapirequests\" \"i8042\" \"latencies_us\" \"logger\" \"mmds\" \"net\" \"patchapirequests\" \"putapirequests\" \"rtc\" \"seccomp\" \"signals\" \"uart\" \"vcpu\" \"vhostuserblock\" \"vmm\" \"vsock\" ``` Below table explains where Firecracker metrics are defined : | Metrics key | Device | Additional comments | | -- | -- | - | | balloon | | Represent metrics for the Balloon device. | | block | | Represent aggregate metrics for Virtio Block device. | | block\\{blockdriveid} | | Represent Virtio Block device metrics for the endpoint `\"/drives/{driveid}\"` e.g. `\"block_rootfs\":` represent metrics for the endpoint `\"/drives/rootfs\"` | | i8042 | | Represent Metrics specific to the i8042 device. | | net | | Represent aggregate metrics for Virtio Net device. | | net\\{ifaceid} | | Represent Virtio Net device metrics for the endpoint `\"/network-interfaces/{ifaceid}\"` e.g. `neteth0` represent metrics for the endpoint `\"/network-interfaces/eth0\"` | | rtc | | Represent Metrics specific to the RTC device. `Note`: this is emitted only on `aarch64`. | | uart | | Represent Metrics specific to the serial device. | | vhostuser\\{dev}\\{devid} | | Represent Vhost-user device metrics for the device `dev` and device id `devid`. e.g. `\"vhostuserblockrootfs\":` represent metrics for vhost-user block device having the endpoint `\"/drives/rootfs\"` | | vsock | | Represent Metrics specific to the vsock device. | | entropy | | Represent Metrics specific to the entropy device. | | \"apiserver\"<br>\"deprecatedapi\"<br>\"getapirequests\"<br>\"latenciesus\"<br>\"logger\"<br>\"mmds\"<br>\"patchapirequests\"<br>\"putapi_requests\"<br>\"seccomp\"<br>\"signals\"<br>\"vcpu\"<br>\"vmm\" | | Rest of the metrics are defined in the same file metrics.rs. 
| Note: Firecracker emits all the above metrics regardless of the presense of that component i.e. even if `vsock` device is not attached to the Microvm, Firecracker will still emit the Vsock metrics with key as `vsock` and value of all metrics defined in `VsockDeviceMetrics` as `0`. Units for Firecracker metrics are embedded in their name.<br/> Below pseudo code should be to extract units from Firecracker metrics name:<br/> Note: An example of fullkey for below logic is `\"vcpu.exitioinagg.min_us\"` ``` if substring \"bytes\" or \"bytescount\" is present in any subkey of fullkey Unit is \"Bytes\" else substring \"ms\" is present in any subkey of fullkey Unit is \"Milliseconds\" else substring \"us\" is present in any subkey of fullkey Unit is \"Microseconds\" else Unit is \"Count\" ```" } ]
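The unit-extraction pseudo code above can be expressed as a small helper. The Go sketch below is illustrative only (it is not part of Firecracker), and the metric keys in `main` are sample inputs; only `vcpu.exitioinagg.min_us` comes from the text above.
```go
package main

import (
	"fmt"
	"strings"
)

// unitForMetric applies the documented rules to a flattened metric key:
// "bytes" anywhere in a sub-key wins over "ms", which wins over "us";
// otherwise the unit is "Count".
func unitForMetric(fullKey string) string {
	subKeys := strings.Split(fullKey, ".")
	anyContains := func(needle string) bool {
		for _, k := range subKeys {
			if strings.Contains(k, needle) {
				return true
			}
		}
		return false
	}
	switch {
	case anyContains("bytes"):
		return "Bytes"
	case anyContains("ms"):
		return "Milliseconds"
	case anyContains("us"):
		return "Microseconds"
	default:
		return "Count"
	}
}

func main() {
	for _, key := range []string{
		"vcpu.exitioinagg.min_us", // example key from the text above
		"net.rx_bytes",            // hypothetical
		"block.flush_count",       // hypothetical
	} {
		fmt.Printf("%s -> %s\n", key, unitForMetric(key))
	}
}
```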
{ "category": "Runtime", "file_name": "metrics.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "The following document is based on experience doing code development, bug troubleshooting and code review across a number of projects using Git. The document is mostly borrowed from , but made more meaningful in the context of GlusterFS project. This topic can be split into two areas of concern The structured set/split of the code changes The information provided in the commit message The points and examples that will be raised in this document ought to clearly demonstrate the value in splitting up changes into a sequence of individual commits, and the importance in writing good commit messages to go along with them. If these guidelines were widely applied it would result in a significant improvement in the quality of the GlusterFS Git history. Both a carrot & stick will be required to effect changes. This document intends to be the carrot by alerting people to the benefits, while anyone doing Gerrit code review can act as the stick ;-P In other words, when reviewing a change in Gerrit: Do not simply look at the correctness of the code. Review the commit message itself and request improvements to its content. Look out for commits which are mixing multiple logical changes and require the submitter to split them into separate commits. Ensure whitespace changes are not mixed in with functional changes. Ensure no-op code refactoring is done separately from functional changes. And so on. It might be mentioned that Gerrit's handling of patch series is not entirely perfect. Let that not become a valid reason to avoid creating patch series. The tools being used should be subservient to developers needs, and since they are open source they can be fixed / improved. Software source code is \"read mostly, write occassionally\" and thus the most important criteria is to improve the long term maintainability by the large pool of developers in the community, and not to sacrifice too much for the sake of the single author who may never touch the code again. And now the long detailed guidelines & examples of good & bad practice The cardinal rule for creating good commits is to ensure there is only one \"logical change\" per commit. There are many reasons why this is an important rule: The smaller the amount of code being changed, the quicker & easier it is to review & identify potential flaws. If a change is found to be flawed later, it may be necessary to revert the broken commit. This is much easier to do if there are not other unrelated code changes entangled with the original commit. When troubleshooting problems using Git's bisect capability, small well defined changes will aid in isolating exactly where the code problem was" }, { "data": "When browsing history using Git annotate/blame, small well defined changes also aid in isolating exactly where & why a piece of code came from. With the above points in mind, there are some commonly encountered examples of bad things to avoid Mixing whitespace changes with functional code changes. The whitespace changes will obscure the important functional changes, making it harder for a reviewer to correctly determine whether the change is correct. Solution: Create 2 commits, one with the whitespace changes, one with the functional changes. Typically the whitespace change would be done first, but that need not be a hard rule. Mixing two unrelated functional changes. Again the reviewer will find it harder to identify flaws if two unrelated changes are mixed together. 
If it becomes necessary to later revert a broken commit, the two unrelated changes will need to be untangled, with further risk of bug creation. Sending large new features in a single giant commit. It may well be the case that the code for a new feature is only useful when all of it is present. This does not, however, imply that the entire feature should be provided in a single commit. New features often entail refactoring existing code. It is highly desirable that any refactoring is done in commits which are separate from those implementing the new feature. This helps reviewers and test suites validate that the refactoring has no unintentional functional changes. Even the newly written code can often be split up into multiple pieces that can be independently reviewed. For example, changes which add new internal fops or library functions, can be in self-contained commits. Again this leads to easier code review. It also allows other developers to cherry-pick small parts of the work, if the entire new feature is not immediately ready for merge. This will encourage the author & reviewers to think about the generic library functions' design, and not simply pick a design that is easier for their currently chosen internal implementation. The basic rule to follow is If a code change can be split into a sequence of patches/commits, then it should be split. Less is not more. More is more. TODO: Pick glusterfs specific example. As important as the content of the change, is the content of the commit message describing it. When writing a commit message there are some important things to remember Do not assume the reviewer understands what the original problem was. When reading bug reports, after a number of back & forth comments, it is often as clear as mud, what the root cause problem is. The commit message should have a clear statement as to what the original problem is. The bug is merely interesting historical background on /how/ the problem was" }, { "data": "It should be possible to review a proposed patch for correctness without needing to read the bug ticket. Do not assume the reviewer has access to external web services/site. In 6 months time when someone is on a train/plane/coach/beach/pub troubleshooting a problem & browsing Git history, there is no guarantee they will have access to the online bug tracker, or online blueprint documents. The great step forward with distributed SCM is that you no longer need to be \"online\" to have access to all information about the code repository. The commit message should be totally self-contained, to maintain that benefit. Do not assume the code is self-evident/self-documenting. What is self-evident to one person, might be clear as mud to another person. Always document what the original problem was and how it is being fixed, for any change except the most obvious typos, or whitespace only commits. Describe why a change is being made. A common mistake is to just document how the code has been written, without describing /why/ the developer chose to do it that way. By all means describe the overall code structure, particularly for large changes, but more importantly describe the intent/motivation behind the changes. Read the commit message to see if it hints at improved code structure. Often when describing a large commit message, it becomes obvious that a commit should have in fact been split into 2 or more parts. Don't be afraid to go back and rebase the change to split it up into separate commits. 
Ensure sufficient information to decide whether to review. When Gerrit sends out email alerts for new patch submissions there is minimal information included, principally the commit message and the list of files changes. Given the high volume of patches, it is not reasonable to expect all reviewers to examine the patches in detail. The commit message must thus contain sufficient information to alert the potential reviewers to the fact that this is a patch they need to look at. The first commit line is the most important. In Git commits the first line of the commit message has special significance. It is used as email subject line, git annotate messages, gitk viewer annotations, merge commit messages and many more places where space is at a premium. As well as summarizing the change itself, it should take care to detail what part of the code is affected. eg if it is 'afr', 'dht' or any translator. Or in some cases, it can be touching all these components, but the commit message can be 'coverity:', 'txn-framework:', 'new-fop: ', etc. Describe any limitations of the current code. If the code being changed still has future scope for improvements, or any known limitations then mention these in the commit" }, { "data": "This demonstrates to the reviewer that the broader picture has been considered and what tradeoffs have been done in terms of short term goals vs. long term wishes. Do not include patch set-specific comments. In other words, if you rebase your change please don't add \"Patch set 2: rebased\" to your commit message. That isn't going to be relevant once your change has merged. Please do make a note of that in Gerrit as a comment on your change, however. It helps reviewers know what changed between patch sets. This also applies to comments such as \"Added unit tests\", \"Fixed localization problems\", or any other such patch set to patch set changes that don't affect the overall intent of your commit. The main rule to follow is: The commit message must contain all the information required to fully understand & review the patch for correctness. Less is not more. More is more. The commit message is primarily targeted towards human interpretation, but there is always some metadata provided for machine use. In the case of GlusterFS this includes at least the 'Change-id', \"bug\"/\"feature\" ID references and \"Signed-off-by\" tag (generated by 'git commit -s'). The 'Change-id' line is a unique hash describing the change, which is generated by a Git commit hook. This should not be changed when rebasing a commit following review feedback, since it is used by Gerrit, to track versions of a patch. The 'bug' line can reference a bug in a few ways. Gerrit creates a link to the bug when viewing the patch on review.gluster.org so that reviewers can quickly access the bug/issue on Bugzilla or Github. Fixes: bz#1601166 -- use 'Fixes: bz#NNNNN' if the commit is intended to fully fix and close the bug being referenced. Fixes: #411 -- use 'Fixes: #NNN' if the patch fixes the github issue completely. Updates: bz#1193929 -- use 'Updates: bz#NNNN' if the commit is only a partial fix and more work is needed. Updates: #175 -- use 'Updates: #NNNN' if the commit is only a partial fix and more work is needed for the feature completion. We encourage the use of `Co-Authored-By: name <name@example.com>` in commit messages to indicate people who worked on a particular patch. 
It's a convention for recognizing multiple authors, and our projects would encourage the stats tools to observe it when collecting statistics. Provide a brief description of the change in the first line. The first line should be limited to 50 characters and should not end with a period. Insert a single blank line after the first line. Provide a detailed description of the change in the following lines, breaking paragraphs where needed. Subsequent lines should be wrapped at 72 characters. Put the 'Change-id', 'Fixes bz#NNNNN' and 'Signed-off-by: <>' lines at the very end. TODO: Add good examples" } ]
{ "category": "Runtime", "file_name": "commit-guidelines.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at coc@linuxcontainers.org. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html> For answers to common questions about this code of conduct, see <https://www.contributor-covenant.org/faq>" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-agent completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-agent completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-agent_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This design includes the changes to the BackupItemAction (BIA) api design as required by the feature. The BIA v2 interface will have two new methods, and the Execute() return signature will be modified. If there are any additional BIA API changes that are needed in the same Velero release cycle as this change, those can be added here as well. This API change is needed to facilitate long-running plugin actions that may not be complete when the Execute() method returns. It is an optional feature, so plugins which don't need this feature can simply return an empty operation ID and the new methods can be no-ops. This will allow long-running plugin actions to continue in the background while Velero moves on to the next plugin, the next item, etc. Allow for BIA Execute() to optionally initiate a long-running operation and report on operation status. Allowing velero control over when the long-running operation begins. As per the design, a new BIAv2 plugin `.proto` file will be created to define the GRPC interface. v2 go files will also be created in `plugin/clientmgmt/backupitemaction` and `plugin/framework/backupitemaction`, and a new PluginKind will be created. The velero Backup process will be modified to reference v2 plugins instead of v1 plugins. An adapter will be created so that any existing BIA v1 plugin can be executed as a v2 plugin when executing a backup. The v2 BackupItemAction.proto will be like the current v1 version with the following changes: ExecuteResponse gets a new field: ``` message ExecuteResponse { bytes item = 1; repeated generated.ResourceIdentifier additionalItems = 2; string operationID = 3; repeated generated.ResourceIdentifier itemsToUpdate = 4; } ``` The BackupItemAction service gets two new rpc methods: ``` service BackupItemAction { rpc AppliesTo(BackupItemActionAppliesToRequest) returns (BackupItemActionAppliesToResponse); rpc Execute(ExecuteRequest) returns (ExecuteResponse); rpc Progress(BackupItemActionProgressRequest) returns (BackupItemActionProgressResponse); rpc Cancel(BackupItemActionCancelRequest) returns" }, { "data": "} ``` To support these new rpc methods, we define new request/response message types: ``` message BackupItemActionProgressRequest { string plugin = 1; string operationID = 2; bytes backup = 3; } message BackupItemActionProgressResponse { generated.OperationProgress progress = 1; } message BackupItemActionCancelRequest { string plugin = 1; string operationID = 2; bytes backup = 3; } ``` One new shared message type will be added, as this will also be needed for v2 RestoreItemAction and VolmeSnapshotter: ``` message OperationProgress { bool completed = 1; string err = 2; int64 nCompleted = 3; int64 nTotal = 4; string operationUnits = 5; string description = 6; google.protobuf.Timestamp started = 7; google.protobuf.Timestamp updated = 8; } ``` In addition to the two new rpc methods added to the BackupItemAction interface, there is also a new `Name()` method. This one is only actually used internally by Velero to get the name that the plugin was registered with, but it still must be defined in a plugin which implements BackupItemActionV2 in order to implement the interface. It doesn't really matter what it returns, though, as this particular method is not delegated to the plugin via RPC calls. The new (and modified) interface methods for `BackupItemAction` are as follows: ``` type BackupItemAction interface { ... Name() string ... 
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error) Cancel(operationID string, backup *api.Backup) error ... } ``` A new PluginKind, `BackupItemActionV2`, will be created, and the backup process will be modified to use this plugin kind. See for more details on implementation plans, including v1 adapters, etc. The included v1 adapter will allow any existing BackupItemAction plugin to work as expected, with an empty operation ID returned from Execute() and no-op Progress() and Cancel() methods. This will be implemented during the Velero 1.11 development cycle." } ]
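To make the shape of a v2 plugin concrete, here is a minimal sketch that starts a long-running operation and reports on it. It follows the interface shown above; the import paths, the `velerov1api` alias standing in for `api`, the `AppliesTo` selector and the `OperationProgress` field names (assumed to mirror the proto message) are illustrative assumptions, and plugin registration through the framework is omitted.
```go
package backupplugin

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// asyncSnapshotAction is a hypothetical BackupItemActionV2 implementation.
type asyncSnapshotAction struct{}

// Name returns the name the plugin was registered with; it is only used
// internally by Velero and is not delegated over RPC.
func (a *asyncSnapshotAction) Name() string { return "example.io/async-snapshot" }

func (a *asyncSnapshotAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{IncludedResources: []string{"persistentvolumeclaims"}}, nil
}

// Execute kicks off the asynchronous work and returns an operation ID that
// Velero later passes back to Progress and Cancel.
func (a *asyncSnapshotAction) Execute(item runtime.Unstructured, backup *velerov1api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) {
	operationID := backup.Name + "/" + time.Now().UTC().Format("20060102150405")
	// start the background operation keyed by operationID here ...
	return item, nil, operationID, nil, nil
}

func (a *asyncSnapshotAction) Progress(operationID string, backup *velerov1api.Backup) (velero.OperationProgress, error) {
	// Look up the background operation by operationID; the values below are
	// hard-coded purely for illustration.
	return velero.OperationProgress{
		Completed:      false,
		NCompleted:     1,
		NTotal:         4,
		OperationUnits: "chunks",
		Description:    "copying snapshot data",
	}, nil
}

func (a *asyncSnapshotAction) Cancel(operationID string, backup *velerov1api.Backup) error {
	// Signal the background operation to stop; a no-op in this sketch.
	return nil
}
```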
{ "category": "Runtime", "file_name": "biav2-design.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This enhancement will address backup issues that are the result of concurrently running backup operations, by implementing a synchronisation solution that utilizes files on the backup store as Locks. https://github.com/longhorn/longhorn/issues/612 https://github.com/longhorn/longhorn/issues/1393 https://github.com/longhorn/longhorn/issues/1392 https://github.com/longhorn/backupstore/pull/37 Identify and prevent backup issues caused as a result of concurrent backup operations. Since it should be safe to do backup creation & backup restoration at the same time, we should allow these concurrent operations. The idea is to implement a locking mechanism that utilizes the backupstore, to prevent the following dangerous cases of concurrent operations. prevent backup deletion during backup restoration prevent backup deletion while a backup is in progress prevent backup creation during backup deletion prevent backup restoration during backup deletion The locking solution shouldn't unnecessary block operations, so the following cases should be allowed. allow backup creation during restoration allow backup restoration during creation The locking solution should have a maximum wait time for lock acquisition, which will fail the backup operation so that the user does not have to wait forever. The locking solution should be self expiring, so that when a process dies unexpectedly, future processes are able to acquire the lock. The locking solution should guarantee that only a single type of lock is active at a time. The locking solution should allow a lock to be passed down into async running go routines. Before this enhancement, it is possible to delete a backup while a backup restoration is in progress. This would lead to an unhealthy restoration volume. After this enhancement, a backup deletion could only happen after the restoration has been completed. This way the backupstore continues to contain all the necessary blocks that are required for the restoration. After this enhancement, creation & restoration operations are mutually exclusive with backup deletion operations. Conceptually the lock can be thought of as a RW lock, it includes a `Type` specifier where different types are mutually exclusive. To allow the lock to be passed into async running go routines, we add a `count` field, that keeps track of the current references to this lock. ```go type FileLock struct { Name string Type LockType Acquired bool driver BackupStoreDriver volume string count int32 serverTime time.Time refreshTimer *time.Ticker } ``` To make the lock self expiring, we rely on `serverTime` updates which needs to be refreshed by a timer. We chose a `LOCKREFRESHINTERVAL` of 60 seconds, each refresh cycle a locks `serverTime` will be updated. A lock is considered expired once the current time is after a locks `serverTime` + `LOCKMAXWAIT_TIME` of 150 seconds. Once a lock is expired any currently active attempts to acquire that lock will" }, { "data": "```go const ( LOCKS_DIRECTORY = \"locks\" LOCK_PREFIX = \"lock\" LOCK_SUFFIX = \".lck\" LOCKREFRESHINTERVAL = time.Second * 60 LOCKMAXWAIT_TIME = time.Second * 150 LOCKCHECKINTERVAL = time.Second * 10 LOCKCHECKWAIT_TIME = time.Second * 2 ) ``` Lock Usage create a new lock instance via `lock := lock.New()` call `lock.Lock()` which will block till the lock has been acquired and increment the lock reference count. defer `lock.Unlock()` which will decrement the lock reference count and remove the lock once unreferenced. 
To make sure the locks are mutually exclusive, we use the following process to acquire a lock. create a lock file on the backupstore with a unique `Name`. retrieve all lock files from the backupstore order them by `Acquired` then by `serverTime` followed by `Name` check if we can acquire the lock, we can only acquire if there is no unexpired(i) lock of a different type(ii) that has priority(iii). Locks are self expiring, once the current time is after `lock.serverTime + LOCKMAXWAIT_TIME` we no longer need to consider this lock as valid. Backup & Restore Locks are mapped to compatible types while Delete Locks are mapped to a different type to be mutually exclusive with the others. Priority is based on the comparison order, where locks are compared by `lock.Acquired` then by `lock.serverTime` followed by `lock.Name`. Where acquired locks are always sorted before non acquired locks. if lock acquisition times out, return err which will fail the backup operation. once the lock is acquired, continuously refresh the lock (updates `lock.serverTime`) once the lock is acquired, it can be passed around by calling `lock.Lock()` once the lock is no longer referenced, it will be removed from the backupstore. It's very unlikely to run into lock collisions, since we use uniquely generated name for the lock filename. In cases where two locks have the same `lock.serverTime`, we can rely on the `lock.Name` as a differentiator between 2 locks. A number of integration tests will need to be added for the `longhorn-engine` in order to test the changes in this proposal: place an expired lock file into a backupstore, then verify that a new lock can be acquired. place an active lock file of Type `Delete` into a backupstore, then verify that backup/restore operations will trigger lock acquisition timeout. place an active lock file of Type `Delete` into a backupstore, then verify that a new `Delete` operation can acquire a lock. place an active lock file of Type `Backup/Restore` into a backupstore, then verify that delete operations will trigger lock acquisition timeout. place an active lock file of Type `Backup/Restore` into a backupstore, then verify that a new `Backup/Restore` operation can acquire a lock. No special upgrade strategy is necessary." } ]
{ "category": "Runtime", "file_name": "20200701-backupstore-file-locks.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Generate Purpose Issue Template about: Generate Purpose Issue Template title: '' labels: '' assignees: '' <! Provide a general summary of the issue in the Title above --> <! If you believe the issue may have security implications, please report it as a vulnerability --> <! Report a vulnerability: https://github.com/projectcalico/calico/security --> <! If you're describing a bug, tell us what should happen --> <! If you're suggesting a change/improvement, tell us how it should work --> <! If describing a bug, tell us what happens instead of the expected behavior --> <! If suggesting a change/improvement, explain the difference from current behavior --> <! Not obligatory, but suggest a fix/reason for the bug, --> <! or ideas how to implement the addition or change --> <! Provide a link to a live example, or an unambiguous set of steps to --> <! reproduce this bug. Include code to reproduce, if relevant --> <! How has this issue affected you? What are you trying to accomplish? --> <! Providing context helps us come up with a solution that is most useful in the real world --> <! Include as many relevant details about the environment you experienced the bug in --> Calico version Orchestrator version (e.g. kubernetes, mesos, rkt): Operating System and version: Link to your project (optional):" } ]
{ "category": "Runtime", "file_name": "generate-purpose-issue-template.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "(instances-create)= To create an instance, you can use either the or the command. The command only creates the instance, while the command creates and starts it. Enter the following command to create a container: incus launch|init <imageserver>:<imagename> <instance_name> [flags] Image : Images contain a basic operating system (for example, a Linux distribution) and some Incus-related information. Images for various operating systems are available on the built-in remote image servers. See {ref}`images` for more information. Unless the image is available locally, you must specify the name of the image server and the name of the image (for example, `images:ubuntu/22.04` for the official 22.04 Ubuntu image). Instance name : Instance names must be unique within an Incus deployment (also within a cluster). See {ref}`instance-properties` for additional requirements. Flags : See or for a full list of flags. The most common flags are: `--config` to specify a configuration option for the new instance `--device` to override {ref}`device options <devices>` for a device provided through a profile, or to specify an {ref}`initial configuration for the root disk device <devices-disk-initial-config>` `--profile` to specify a {ref}`profile <profiles>` to use for the new instance `--network` or `--storage` to make the new instance use a specific network or storage pool `--target` to create the instance on a specific cluster member `--vm` to create a virtual machine instead of a container Instead of specifying the instance configuration as flags, you can pass it to the command as a YAML file. For example, to launch a container with the configuration from `config.yaml`, enter the following command: incus launch images:ubuntu/22.04 ubuntu-config < config.yaml ```{tip} Check the contents of an existing instance configuration () to see the required syntax of the YAML file. ``` The following examples use , but you can use in the same way. To launch a container with an Ubuntu 22.04 image from the `images` server using the instance name `ubuntu-container`, enter the following command: incus launch images:ubuntu/22.04 ubuntu-container To launch a virtual machine with an Ubuntu 22.04 image from the `images` server using the instance name `ubuntu-vm`, enter the following command: incus launch images:ubuntu/22.04 ubuntu-vm --vm Or with a bigger disk: incus launch" }, { "data": "ubuntu-vm-big --vm --device root,size=30GiB To launch a container and limit its resources to one vCPU and 192 MiB of RAM, enter the following command: incus launch images:ubuntu/22.04 ubuntu-limited --config limits.cpu=1 --config limits.memory=192MiB To launch a virtual machine on the cluster member `server2`, enter the following command: incus launch images:ubuntu/22.04 ubuntu-container --vm --target server2 Incus supports simple instance types for clouds. Those are represented as a string that can be passed at instance creation time. The syntax allows the three following forms: `<instance type>` `<cloud>:<instance type>` `c<CPU>-m<RAM in GiB>` For example, the following three instance types are equivalent: `t2.micro` `aws:t2.micro` `c1-m1` To launch a container with this instance type, enter the following command: incus launch images:ubuntu/22.04 my-instance --type t2.micro The list of supported clouds and instance types can be found at . To launch a VM that boots from an ISO, you must first create a VM. Let's assume that we want to create a VM and install it from the ISO image. 
In this scenario, use the following command to create an empty VM: incus init iso-vm --empty --vm ```{note} Depending on the needs of the operating system being installed, you may want to allocate more CPU, memory or storage to the virtual machine. For example, for 2 CPUs, 4 GiB of memory and 50 GiB of storage, you can do: incus init iso-vm --empty --vm -c limits.cpu=2 -c limits.memory=4GiB -d root,size=50GiB ``` The second step is to import an ISO image that can later be attached to the VM as a storage volume: incus storage volume import <pool> <path-to-image.iso> iso-volume --type=iso Lastly, you need to attach the custom ISO volume to the VM using the following command: incus config device add iso-vm iso-volume disk pool=<pool> source=iso-volume boot.priority=10 The `boot.priority` configuration key ensures that the VM will boot from the ISO first. Start the VM and connect to the console as there might be a menu you need to interact with: incus start iso-vm --console Once you're done in the serial console, you need to disconnect from the console using `ctrl+a-q`, and connect to the VGA console using the following command: incus console iso-vm --type=vga You should now see the installer. After the installation is done, you need to detach the custom ISO volume: incus storage volume detach <pool> iso-volume iso-vm Now the VM can be rebooted, and it will boot from disk." } ]
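Going back to the YAML-based creation shown earlier (`incus launch images:ubuntu/22.04 ubuntu-config < config.yaml`), a minimal `config.yaml` could look roughly like the sketch below. The values are only an example; as noted above, dumping an existing instance configuration is the authoritative way to see the accepted keys.
```yaml
config:
  limits.cpu: "2"
  limits.memory: 4GiB
devices:
  root:
    type: disk
    path: /
    pool: default
    size: 20GiB
```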
{ "category": "Runtime", "file_name": "instances_create.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Dump all policy maps ``` cilium-dbg bpf policy list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage policy related BPF maps" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_policy_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Run cilium-operator-alibabacloud ``` cilium-operator-alibabacloud [flags] ``` ``` --alibaba-cloud-vpc-id string Specific VPC ID for AlibabaCloud ENI. If not set use same VPC as operator --auto-create-cilium-pod-ip-pools map Automatically create CiliumPodIPPool resources on startup. Specify pools in the form of <pool>=ipv4-cidrs:<cidr>,[<cidr>...];ipv4-mask-size:<size> (multiple pools can also be passed by repeating the CLI flag) --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cilium-endpoint-gc-interval duration GC interval for cilium endpoints (default 5m0s) --cilium-pod-labels string Cilium Pod's labels. Used to detect if a Cilium pod is running to remove the node taints where its running and set NetworkUnavailable to false (default \"k8s-app=cilium\") --cilium-pod-namespace string Name of the Kubernetes namespace in which Cilium is deployed in. Defaults to the same namespace defined in k8s-namespace --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --cluster-pool-ipv4-cidr strings IPv4 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' --cluster-pool-ipv4-mask-size int Mask size for each IPv4 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' (default 24) --cluster-pool-ipv6-cidr strings IPv6 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' --cluster-pool-ipv6-mask-size int Mask size for each IPv6 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' (default 112) --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. 
Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --cnp-status-cleanup-burst int Maximum burst of requests to clean up status nodes updates in CNPs (default 20) --cnp-status-cleanup-qps float Rate used for limiting the clean up of the status nodes updates in CNP, expressed as qps (default 10) --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium" }, { "data": "-D, --debug Enable debugging mode --enable-cilium-endpoint-slice If set to true, the CiliumEndpointSlice feature is enabled. If any CiliumEndpoints resources are created, updated, or deleted in the cluster, all those changes are broadcast as CiliumEndpointSlice updates to all of the Cilium agents. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-metrics Enable Prometheus metrics --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 
'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for cilium-operator-alibabacloud --identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. 
(default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for" }, { "data": "(default \"cilium-ingress\") --instance-tags-filter map EC2 Instance tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --ipam string Backend to use for IPAM (default \"alibabacloud\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium Operator is deployed in --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kvstore string Key-value store type --kvstore-opt map Key-value store options e.g. etcd.address=127.0.0.1:4001 --leader-election-lease-duration duration Duration that non-leader operator candidates will wait before forcing to acquire leadership (default 15s) --leader-election-renew-deadline duration Duration that current acting master will retry refreshing leadership in before giving up the lock (default 10s) --leader-election-retry-period duration Duration that LeaderElector clients should wait between retries of the actions (default 2s) --limit-ipam-api-burst int Upper burst limit when accessing external APIs (default 20) --limit-ipam-api-qps float Queries per second limit when accessing external IPAM APIs (default 4) --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-operator, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local4\"} --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. 
(default 10s) --nodes-gc-interval duration GC interval for CiliumNodes (default 5m0s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --parallel-alloc-workers int Maximum number of parallel IPAM workers (default 50) --pod-restart-selector string cilium-operator will delete/restart any pods with these labels if the pod is not managed by Cilium. If this option is empty, then all pods may be restarted (default \"k8s-app=kube-dns\") --remove-cilium-node-taints Remove node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes once Cilium is up and running (default true) --set-cilium-is-up-condition Set CiliumIsUp Node condition to mark a Kubernetes Node that a Cilium pod is up and running in that node (default true) --set-cilium-node-taints Set node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes if Cilium is scheduled but not up and running --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created --subnet-ids-filter strings Subnets IDs (separated by commas) --subnet-tags-filter map Subnets tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --synchronize-k8s-nodes Synchronize Kubernetes nodes to kvstore and perform CNP GC (default true) --synchronize-k8s-services Synchronize Kubernetes services to kvstore (default true) --unmanaged-pod-watcher-interval int Interval to check for unmanaged kube-dns pods (0 to disable) (default 15) --version Print version information ``` - Generate the autocompletion script for the specified shell - Inspect" } ]
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Name | Type | Description | Notes | - | - | - Path | string | | Iommu | Pointer to bool | | [optional] [default to false] PciSegment | Pointer to int32 | | [optional] Id | Pointer to string | | [optional] `func NewDeviceConfig(path string, ) *DeviceConfig` NewDeviceConfig instantiates a new DeviceConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewDeviceConfigWithDefaults() *DeviceConfig` NewDeviceConfigWithDefaults instantiates a new DeviceConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *DeviceConfig) GetPath() string` GetPath returns the Path field if non-nil, zero value otherwise. `func (o DeviceConfig) GetPathOk() (string, bool)` GetPathOk returns a tuple with the Path field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceConfig) SetPath(v string)` SetPath sets Path field to given value. `func (o *DeviceConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o DeviceConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *DeviceConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *DeviceConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o DeviceConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *DeviceConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *DeviceConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o DeviceConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceConfig) SetId(v string)` SetId sets Id field to given value. `func (o *DeviceConfig) HasId() bool` HasId returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "DeviceConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This document aims to provide information about packaging rkt in Linux distributions. It covers dependencies, file ownership and permissions, and tips to observe packaging policies. Please see . By default, the rkt build will download a CoreOS Container Linux PXE image from the internet and extract some binaries, such as `systemd-nspawn` and `bash`. However, some packaging environments don't allow internet access during the build. To work around this, download the Container Linux PXE image before starting the build process, and use the `--with-coreos-local-pxe-image-path` and `--with-coreos-local-pxe-image-systemd-version` parameters. For more details, see the . Most Linux distributions don't allow the use of prebuilt binaries, or reuse of code that is already otherwise packaged. systemd falls in this category, as Debian and Fedora already package systemd, and rkt needs systemd. The configure script's `--with-stage1-flavors` option can be set to `host` to avoid rkt's dependency on systemd in these environments: ``` ./configure --with-stage1-flavors=host ``` The `stage1-host.aci` archive generated by this build will not contain bash, systemd, or any other binaries from external sources. The binaries embedded in the stage1 archive are all built from the sources in the rkt git repository. The external binaries needed by this `stage1-host.aci` are copied from the host at run time. Packages using the `--with-stage1-flavors=host` option must therefore add a run-time dependency on systemd and bash. Whenever systemd and bash are upgraded on the host, rkt will use the new version at run time. It becomes the packager's responsibility to test the rkt package whenever a new version of systemd is packaged. For more details, see the . rkt uses to maintain . Please see . In general, subdirectories of `/var/lib/rkt`, and `/etc/rkt` should be created with the same ownership and permissions as described in the . Any rkt package should create a system group `rkt`, and `rkt-admin`. The directory `/var/lib/rkt` should belong to group `rkt` with the `setgid` bit set (`chmod g+s`). The directory `/etc/rkt` should belong to group `rkt-admin` with the `setgid` bit set (`chmod g+s`). When the ownership and permissions of `/var/lib/rkt` are set up correctly, members of group `rkt` should be able to fetch ACIs. Members of group `rkt-admin` should be able to trust GPG keys, and add additional configurations in `/etc/rkt`. Root privilege is still required to run pods. The motivation to have separate `rkt`, and `rkt-admin` groups is that the person who makes administrative changes would likely be different than the unprivileged user who is able to fetch. A few are included in the rkt sources. These units demonstrate systemd-managed units to run the rkt with socket-activation, the rkt , and a periodic service invoked at 12-hour intervals to purge dead pods." } ]
{ "category": "Runtime", "file_name": "packaging.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: \"Resource filtering\" layout: docs Filter objects by namespace, type, or labels. This page describes how to use the include and exclude flags with the `velero backup` and `velero restore` commands. By default Velero includes all objects in a backup or restore when no filtering options are used. Only specific resources are included, all others are excluded. Wildcard takes precedence when both a wildcard and specific resource are included. Namespaces to include. Default is `*`, all namespaces. Backup a namespace and it's objects. ```bash velero backup create <backup-name> --include-namespaces <namespace> ``` Restore two namespaces and their objects. ```bash velero restore create <restore-name> --include-namespaces <namespace1>,<namespace2> --from-backup <backup-name> ``` Kubernetes resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use `*` for all resources). Backup all deployments in the cluster. ```bash velero backup create <backup-name> --include-resources deployments ``` Restore all deployments and configmaps in the cluster. ```bash velero restore create <restore-name> --include-resources deployments,configmaps --from-backup <backup-name> ``` Backup the deployments in a namespace. ```bash velero backup create <backup-name> --include-resources deployments --include-namespaces <namespace> ``` Includes cluster-scoped resources. This option can have three possible values: `true`: all cluster-scoped resources are included. `false`: no cluster-scoped resources are included. `nil` (\"auto\" or not supplied): Cluster-scoped resources are included when backing up or restoring all namespaces. Default: `true`. Cluster-scoped resources are not included when namespace filtering is used. Default: `false`. Some related cluster-scoped resources may still be backed/restored up if triggered by a custom action (for example, PVC->PV) unless `--include-cluster-resources=false`. Backup entire cluster including cluster-scoped resources. ```bash velero backup create <backup-name> ``` Restore only namespaced resources in the cluster. ```bash velero restore create <restore-name> --include-cluster-resources=false --from-backup <backup-name> ``` Backup a namespace and include cluster-scoped resources. ```bash velero backup create <backup-name> --include-namespaces <namespace> --include-cluster-resources=true ``` Include resources matching the label selector. ```bash velero backup create <backup-name> --selector <key>=<value> ``` Include resources that are not matching the selector ```bash velero backup create <backup-name> --selector \"<key> notin (<value>)\" ``` For more information read the Exclude specific resources from the backup. Wildcard excludes are ignored. Namespaces to exclude. Exclude kube-system from the cluster backup. ```bash velero backup create <backup-name> --exclude-namespaces kube-system ``` Exclude two namespaces during a restore. ```bash velero restore create <restore-name> --exclude-namespaces <namespace1>,<namespace2> --from-backup <backup-name> ``` Kubernetes resources to exclude, formatted as resource.group, such as storageclasses.storage.k8s.io. Exclude secrets from the backup. ```bash velero backup create <backup-name> --exclude-resources secrets ``` Exclude secrets and rolebindings. ```bash velero backup create <backup-name> --exclude-resources secrets,rolebindings ``` Resources with the label `velero.io/exclude-from-backup=true` are not included in backup, even if it contains a matching selector label." } ]
{ "category": "Runtime", "file_name": "resource-filtering.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Copyright (C) 2021 Matt Layher Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ARE." } ]
{ "category": "Runtime", "file_name": "LICENSE.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage XDP CIDR filters ``` -h, --help help for prefilter ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Delete CIDR filters - List CIDR filters - Update CIDR filters" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_prefilter.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "(initialize)= Before you can create an Incus instance, you must configure and initialize Incus. Run the following command to start the interactive configuration process: incus admin init ```{note} For simple configurations, you can run this command as a normal user. However, some more advanced operations during the initialization process (for example, joining an existing cluster) require root privileges. In this case, run the command with `sudo` or as root. ``` The tool asks a series of questions to determine the required configuration. The questions are dynamically adapted to the answers that you give. They cover the following areas: Clustering (see {ref}`exp-clustering` and {ref}`cluster-form`) : A cluster combines several Incus servers. The cluster members share the same distributed database and can be managed uniformly using the Incus client () or the REST API. The default answer is `no`, which means clustering is not enabled. If you answer `yes`, you can either connect to an existing cluster or create one. Networking (see {ref}`networks` and {ref}`Network devices <devices-nic>`) : Provides network access for the instances. You can let Incus create a new bridge (recommended) or use an existing network bridge or interface. You can create additional bridges and assign them to instances later. Storage pools (see {ref}`exp-storage` and {ref}`storage-drivers`) : Instances (and other data) are stored in storage pools. For testing purposes, you can create a loop-backed storage pool. For production use, however, you should use an empty partition (or full disk) instead of loop-backed storage (because loop-backed pools are slower and their size can't be reduced). The recommended backends are `zfs` and `btrfs`. You can create additional storage pools later. Remote access (see {ref}`securityremoteaccess` and {ref}`authentication`) : Allows remote access to the server over the network. The default answer is `no`, which means remote access is not allowed. If you answer `yes`, you can connect to the server over the network. You can choose to add client certificates to the server (manually or through tokens). Automatic image update (see {ref}`about-images`) : You can download images from image servers. In this case, images can be updated automatically. The default answer is `yes`, which means that Incus will update the downloaded images regularly. YAML `incus admin init` preseed (see {ref}`initialize-preseed`) : If you answer `yes`, the command displays a summary of your chosen configuration options in the terminal. To create a minimal setup with default options, you can skip the configuration steps by adding the `--minimal` flag to the `incus admin init` command: incus admin init --minimal ```{note} The minimal setup provides a basic configuration, but the configuration is not optimized for speed or functionality. Especially the , which is used by default, is slower than other drivers and doesn't provide fast snapshots, fast copy/launch, quotas and optimized backups. If you want to use an optimized setup, go through the interactive configuration process instead. 
``` (initialize-preseed)= The `incus admin init` command supports a `--preseed` command line flag that makes it possible to fully configure the Incus daemon settings, storage pools, network devices and profiles, in a non-interactive way through a preseed YAML" }, { "data": "For example, starting from a brand new Incus installation, you could configure Incus with the following command: ```bash cat <<EOF | incus admin init --preseed config: core.https_address: 192.0.2.1:9999 images.autoupdateinterval: 15 networks: name: incusbr0 type: bridge config: ipv4.address: auto ipv6.address: none EOF ``` This preseed configuration initializes the Incus daemon to listen for HTTPS connections on port 9999 of the 192.0.2.1 address, to automatically update images every 15 hours and to create a network bridge device named `incusbr0`, which gets assigned an IPv4 address automatically. If you are configuring a new Incus installation, the preseed command applies the configuration as specified (as long as the given YAML contains valid keys and values). There is no existing state that might conflict with the specified configuration. However, if you are re-configuring an existing Incus installation using the preseed command, the provided YAML configuration might conflict with the existing configuration. To avoid such conflicts, the following rules are in place: The provided YAML configuration overwrites existing entities. This means that if you are re-configuring an existing entity, you must provide the full configuration for the entity and not just the different keys. If the provided YAML configuration contains entities that do not exist, they are created. This is the same behavior as for a `PUT` request in the {doc}`../rest-api`. If some parts of the new configuration conflict with the existing state (for example, they try to change the driver of a storage pool from `dir` to `zfs`), the preseed command fails and automatically attempts to roll back any changes that were applied so far. For example, it deletes entities that were created by the new configuration and reverts overwritten entities back to their original state. Failure modes when overwriting entities are the same as for the `PUT` requests in the {doc}`../rest-api`. ```{note} The rollback process might potentially fail, although rarely (typically due to backend bugs or limitations). You should therefore be careful when trying to reconfigure an Incus daemon via preseed. ``` Unlike the interactive initialization mode, the `incus admin init --preseed` command does not modify the default profile, unless you explicitly express that in the provided YAML payload. For instance, you will typically want to attach a root disk device and a network interface to your default profile. See the following section for an example. The supported keys and values of the various entities are the same as the ones documented in the {doc}`../rest-api`, but converted to YAML for convenience. However, you can also use JSON, since YAML is a superset of JSON. The following snippet gives an example of a preseed payload that contains most of the possible configurations. 
You can use it as a template for your own preseed file and add, change or remove what you need: ```yaml config: core.https_address: 192.0.2.1:9999 images.autoupdateinterval: 6 storage_pools: name: data driver: zfs config: source: my-zfs-pool/my-zfs-dataset networks: name: incus-my-bridge type: bridge config: ipv4.address: auto ipv6.address: none profiles: name: default devices: root: path: / pool: data type: disk name: test-profile description: \"Test profile\" config: limits.memory: 2GiB devices: test0: name: test0 nictype: bridged parent: incus-my-bridge type: nic ```" } ]
{ "category": "Runtime", "file_name": "initialize.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "What this PR does / why we need it: Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes # Please check the following list: [ ] Does the affected code have corresponding tests, e.g. unit test, E2E test? [ ] Does this change require a documentation update? [ ] Does this introduce breaking changes that would require an announcement or bumping the major version? [ ] Do all new files have an appropriate license header? <!-- If this is a security issue, please do not discuss on GitHub. Please report any suspected or confirmed security issues directly to https://github.com/oras-project/oras/blob/main/OWNERS.md. -->" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List node IDs and their IP addresses List node IDs and their IP addresses. ``` cilium-dbg bpf nodeid list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the node IDs" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_nodeid_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark client\" layout: docs Ark client related commands Ark client related commands ``` -h, --help help for client ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Get and set client configuration file values" } ]
{ "category": "Runtime", "file_name": "ark_client.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display cached content of given BPF map ``` cilium-dbg map get <name> [flags] ``` ``` cilium map get cilium_ipcache ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Access userspace cached content of BPF maps" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_map_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "As of today Velero supports filtering of resources based on single label selector per backup. It is desired that Velero support backing up of resources based on multiple labels (OR logic). Note: This solution is required because Kubernetes label selectors only allow AND logic of labels. Currently, Velero's Backup/Restore API has a spec field `LabelSelector` which helps in filtering of resources based on a single label value per backup/restore request. For instance, if the user specifies the `Backup.Spec.LabelSelector` as `data-protection-app: true`, Velero will grab all the resources that possess this label and perform the backup operation on them. The `LabelSelector` field does not accept more than one labels, and thus if the user want to take backup for resources consisting of a label from a set of labels (label1 OR label2 OR label3) then the user needs to create multiple backups per label rule. It would be really useful if Velero Backup API could respect a set of labels (OR Rule) for a single backup request. Related Issue: https://github.com/vmware-tanzu/velero/issues/1508 Enable support for backing up resources based on multiple labels (OR Logic) in a single backup config. Enable support for restoring resources based on multiple labels (OR Logic) in a single restore config. Let's say as a Velero user you want to take a backup of secrets, but all these secrets do not have one single consistent label on them. We want to take backup of secrets having any one label in `app=gdpr`, `app=wpa` and `app=ccpa`. Here we would have to create 3 instances of backup for each label rule. This can become cumbersome at scale. For Velero to back up resources if they consist of any one label from a set of labels, we would like to add a new spec field `OrLabelSelectors` which would enable user to specify them. The Velero backup would somewhat look like: ``` apiVersion: velero.io/v1 kind: Backup metadata: name: backup-101 namespace: openshift-adp spec: includedNamespaces: test storageLocation: velero-sample-1 ttl: 720h0m0s orLabelSelectors: matchLabels: app=gdpr matchLabels: app=wpa matchLabels: app=ccpa ``` Note: This approach will not be changing any current behavior related to Backup API spec `LabelSelector`. Rather we propose that the label in `LabelSelector` spec and labels in `OrLabelSelectors` should be treated as different Velero" }, { "data": "Both these fields will be treated as separate Velero Backup API specs. If `LabelSelector` (singular) is present then just match that label. And if `OrLabelSelectors` is present then match to any label in the set specified by the user. For backup case, if both the `LabelSelector` and `OrLabelSelectors` are specified (we do not anticipate this as a real world use-case) then the `OrLabelSelectors` will take precedence, `LabelSelector` will only be used to filter only when `OrLabelSelectors` is not specified by the user. This helps to keep both spec behaviour independent and not confuse the users. This way we preserve the existing Velero behaviour and implement the new functionality in a much cleaner way. 
For instance, let's take a look the following cases: Only `LabelSelector` specified: Velero will create a backup with resources matching label `app=protect-db` ``` apiVersion: velero.io/v1 kind: Backup metadata: name: backup-101 namespace: openshift-adp spec: includedNamespaces: test storageLocation: velero-sample-1 ttl: 720h0m0s labelSelector: matchLabels: app=gdpr ``` Only `OrLabelSelectors` specified: Velero will create a backup with resources matching any label from set `{app=gdpr, app=wpa, app=ccpa}` ``` apiVersion: velero.io/v1 kind: Backup metadata: name: backup-101 namespace: openshift-adp spec: includedNamespaces: test storageLocation: velero-sample-1 ttl: 720h0m0s orLabelSelectors: matchLabels: app=gdpr matchLabels: app=wpa matchLabels: app=ccpa ``` Similar implementation will be done for the Restore API as well. With the Introduction of `OrLabelSelectors` the BackupSpec and RestoreSpec will look like: BackupSpec: ``` type BackupSpec struct { [...] // OrLabelSelectors is a set of []metav1.LabelSelector to filter with // when adding individual objects to the backup. Resources matching any one // label from the set of labels will be added to the backup. If empty // or nil, all objects are included. Optional. // +optional OrLabelSelectors []\\*metav1.LabelSelector [...] } ``` RestoreSpec: ``` type RestoreSpec struct { [...] // OrLabelSelectors is a set of []metav1.LabelSelector to filter with // when restoring objects from the backup. Resources matching any one // label from the set of labels will be restored from the backup. If empty // or nil, all objects are included from the backup. Optional. // +optional OrLabelSelectors []\\*metav1.LabelSelector [...] } ``` The logic to collect resources to be backed up for a particular backup will be updated in the `backup/item_collector.go` around . And for filtering the resources to be restored, the changes will go Note: This feature will not be exposed via Velero CLI." } ]
{ "category": "Runtime", "file_name": "multiple-label-selectors_design.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "When Pods use VLAN networks, network administrators may need to manually configure various VLAN or Bond interfaces on the nodes in advance. This process can be tedious and error-prone. Spiderpool provides a CNI meta-plugin called `ifacer`. This plugin dynamically creates VLAN sub-interfaces or Bond interfaces on the nodes during Pod creation, based on the provided `ifacer` configuration, greatly simplifying the configuration workload. In the following sections, we will delve into this plugin. Support dynamic creation of VLAN sub-interfaces Support dynamic creation of Bond interfaces The VLAN/Bond interfaces created by this plugin will be lost when the node restarts, but they will be automatically recreated upon the Pod restarts. Deleting existed VLAN/Bond interfaces is not supported. Configuring the address of VLAN/Bond interfaces during creation is not supported. If your OS(such as Fedora, CentOS, etc.) uses NetworkManager, Highly recommend configuring following configuration file at `/etc/NetworkManager/conf.d/spidernet.conf` to prevent interference from NetworkManager with Vlan and Bond interfaces created by `Ifacer`: ```shell ~# INTERFACE=<yourinterfacename> ~# cat > /etc/NetworkManager/conf.d/spidernet.conf <<EOF [keyfile] unmanaged-devices=interface-name:^veth*;interface-name:${IFACER_INTERFACE} EOF ~# systemctl restart NetworkManager ``` There are no specific requirements including Kubernetes or Kernel versions for using this plugin. During the installation of Spiderpool, the plugin will be automatically installed in the `/opt/cni/bin/` directory on each host. You can verify by checking for the presence of the `ifacer` binary in that directory on each host. Examples please see" } ]
{ "category": "Runtime", "file_name": "plugin-ifacer.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Filesystem Storage Overview A filesystem storage (also named shared filesystem) can be mounted with read/write permission from multiple pods. This may be useful for applications which can be clustered using a shared filesystem. This example runs a shared filesystem for the . This guide assumes you have created a Rook cluster as explained in the main Create the filesystem by specifying the desired settings for the metadata pool, data pools, and metadata server in the `CephFilesystem` CRD. In this example we create the metadata pool with replication of three and a single data pool with replication of three. For more options, see the documentation on . Save this shared filesystem definition as `filesystem.yaml`: ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: myfs namespace: rook-ceph spec: metadataPool: replicated: size: 3 dataPools: name: replicated replicated: size: 3 preserveFilesystemOnDelete: true metadataServer: activeCount: 1 activeStandby: true ``` The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete. ```console kubectl create -f filesystem.yaml [...] ``` To confirm the filesystem is configured, wait for the mds pods to start: ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-mds NAME READY STATUS RESTARTS AGE rook-ceph-mds-myfs-7d59fdfcf4-h8kw9 1/1 Running 0 12s rook-ceph-mds-myfs-7d59fdfcf4-kgkjp 1/1 Running 0 12s ``` To see detailed status of the filesystem, start and connect to the . A new line will be shown with `ceph status` for the `mds` service. In this example, there is one active instance of MDS which is up, with one MDS instance in `standby-replay` mode in case of failover. ```console $ ceph status [...] services: mds: myfs-1/1/1 up {[myfs:0]=mzw58b=up:active}, 1 up:standby-replay ``` Before Rook can start provisioning storage, a StorageClass needs to be created based on the filesystem. This is needed for Kubernetes to interoperate with the CSI driver to create persistent volumes. Save this storage class definition as `storageclass.yaml`: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-cephfs provisioner: rook-ceph.cephfs.csi.ceph.com parameters: clusterID: rook-ceph fsName: myfs pool: myfs-replicated csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph reclaimPolicy: Delete ``` If you've deployed the Rook operator in a namespace other than \"rook-ceph\" as is common change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in \"rook-op\" the provisioner value should be \"rook-op.rbd.csi.ceph.com\". Create the storage class. ```console kubectl create -f deploy/examples/csi/cephfs/storageclass.yaml ``` !!! attention The CephFS CSI driver uses quotas to enforce the PVC size requested. Only newer kernels support CephFS quotas (kernel version of at least 4.17). If you require quotas to be enforced and the kernel driver does not support it, you can disable the kernel driver and use the FUSE client. This can be done by setting `CSIFORCECEPHFSKERNELCLIENT: false` in the operator deployment (`operator.yaml`). 
However, it is important to know that when the FUSE client is enabled, there is an issue that during upgrade the application pods will be disconnected from the mount and will need to be restarted. See the for more details. As an example, we will start the kube-registry pod with the shared filesystem as the backing store. Save the following spec as" }, { "data": "```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc namespace: kube-system spec: accessModes: ReadWriteMany resources: requests: storage: 1Gi storageClassName: rook-cephfs apiVersion: apps/v1 kind: Deployment metadata: name: kube-registry namespace: kube-system labels: k8s-app: kube-registry kubernetes.io/cluster-service: \"true\" spec: replicas: 3 selector: matchLabels: k8s-app: kube-registry template: metadata: labels: k8s-app: kube-registry kubernetes.io/cluster-service: \"true\" spec: containers: name: registry image: registry:2 imagePullPolicy: Always resources: limits: memory: 100Mi env: name: REGISTRYHTTPADDR value: :5000 name: REGISTRYHTTPSECRET value: \"Ple4seCh4ngeThisN0tAVerySecretV4lue\" name: REGISTRYSTORAGEFILESYSTEM_ROOTDIRECTORY value: /var/lib/registry volumeMounts: name: image-store mountPath: /var/lib/registry ports: containerPort: 5000 name: registry protocol: TCP livenessProbe: httpGet: path: / port: registry readinessProbe: httpGet: path: / port: registry volumes: name: image-store persistentVolumeClaim: claimName: cephfs-pvc readOnly: false ``` Create the Kube registry deployment: ```console kubectl create -f deploy/examples/csi/cephfs/kube-registry.yaml ``` You now have a docker registry which is HA with persistent storage. If the Rook cluster has more than one filesystem and the application pod is scheduled to a node with kernel version older than 4.7, inconsistent results may arise since kernels older than 4.7 do not support specifying filesystem namespaces. Once you have pushed an image to the registry (see the to expose and use the kube-registry), verify that kube-registry is using the filesystem that was configured above by mounting the shared filesystem in the toolbox pod. See the topic for more details. A PVC that you create using the `rook-cephfs` storageClass can be shared between different Pods simultaneously, either read-write or read-only, but is restricted to a single namespace (a PVC is a namespace-scoped resource, so you cannot use it in another one). However there are some use cases where you want to share the content from a CephFS-based PVC among different Pods in different namespaces, for a shared library for example, or a collaboration workspace between applications running in different namespaces. You can do that using the following recipe. In the `rook` namespace, create a copy of the secret `rook-csi-cephfs-node`, name it `rook-csi-cephfs-node-user` . 
Edit your new secret, changing the name of the keys (keep the value as it is): `adminID` -> `userID` `adminKey` -> `userKey` Create the PVC you want to share, for example: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: base-pvc namespace: first-namespace spec: accessModes: ReadWriteMany resources: requests: storage: 100Gi storageClassName: rook-cephfs volumeMode: Filesystem ``` The corresponding PV that is created will have all the necessary info to connect to the CephFS volume (all non-necessary information are removed here): ```yaml kind: PersistentVolume apiVersion: v1 metadata: name: pvc-a02dd277-cb26-4c1e-9434-478ebc321e22 annotations: pv.kubernetes.io/provisioned-by: rook.cephfs.csi.ceph.com finalizers: kubernetes.io/pv-protection spec: capacity: storage: 100Gi csi: driver: rook.cephfs.csi.ceph.com volumeHandle: >- 0001-0011-rook-0000000000000001-8a528de0-e274-11ec-b069-0a580a800213 volumeAttributes: clusterID: rook fsName: rook-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1654174264855-8081-rook.cephfs.csi.ceph.com subvolumeName: csi-vol-8a528de0-e274-11ec-b069-0a580a800213 subvolumePath: >- /volumes/csi/csi-vol-8a528de0-e274-11ec-b069-0a580a800213/da98fb83-fff3-485a-a0a9-57c227cb67ec nodeStageSecretRef: name: rook-csi-cephfs-node namespace: rook controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: rook accessModes: ReadWriteMany claimRef: kind: PersistentVolumeClaim namespace: first-namespace name: base-pvc apiVersion: v1 resourceVersion: '49728' persistentVolumeReclaimPolicy: Retain storageClassName: rook-cephfs volumeMode: Filesystem ``` On this PV, change the `persistentVolumeReclaimPolicy` parameter to `Retain` to avoid it from being deleted when you will delete PVCs. Don't forget to change it back to `Delete` when you want to remove the shared volume (see full procedure in the next section). Copy the YAML content of the PV, and create a new static PV with the same information and some modifications. From the original YAML, you must: Modify the original" }, { "data": "To keep track, the best solution is to append to the original name the namespace name where you want your new PV. In this example `newnamespace`. Modify the volumeHandle. Again append the targeted namespace. Add the `staticVolume: \"true\"` entry to the volumeAttributes. Add the rootPath entry to the volumeAttributes, with the same content as `subvolumePath`. In the `nodeStageSecretRef` section, change the name to point to the secret you created earlier, `rook-csi-cephfs-node-user`. 
Remove the unnecessary information before applying the YAML (claimRef, managedFields,...): Your YAML should look like this: ```yaml kind: PersistentVolume apiVersion: v1 metadata: name: pvc-a02dd277-cb26-4c1e-9434-478ebc321e22-newnamespace spec: capacity: storage: 100Gi csi: driver: rook.cephfs.csi.ceph.com volumeHandle: >- 0001-0011-rook-0000000000000001-8a528de0-e274-11ec-b069-0a580a800213-newnamespace volumeAttributes: clusterID: rook fsName: rook-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1654174264855-8081-rook.cephfs.csi.ceph.com subvolumeName: csi-vol-8a528de0-e274-11ec-b069-0a580a800213 subvolumePath: >- /volumes/csi/csi-vol-8a528de0-e274-11ec-b069-0a580a800213/da98fb83-fff3-485a-a0a9-57c227cb67ec rootPath: >- /volumes/csi/csi-vol-8a528de0-e274-11ec-b069-0a580a800213/da98fb83-fff3-485a-a0a9-57c227cb67ec staticVolume: \"true\" nodeStageSecretRef: name: rook-csi-cephfs-node namespace: rook accessModes: ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: rook-cephfs volumeMode: Filesystem ``` In a new or other namespace, create a new PVC that will use this new PV you created. You simply have to point to it in the `volumeName` parameter. Make sure you enter the same size as the original PVC!: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: second-pvc namespace: newnamespace finalizers: kubernetes.io/pvc-protection spec: accessModes: ReadWriteMany resources: requests: storage: 100Gi volumeName: pvc-a02dd277-cb26-4c1e-9434-478ebc321e22-newnamespace storageClassName: rook-cephfs volumeMode: Filesystem ``` You have now access to the same CephFS subvolume from different PVCs in different namespaces. Redo the previous steps (copy PV with a new name, create a PVC pointing to it) in each namespace you want to use this subvolume. Note: the new PVCs/PVs we have created are static. Therefore CephCSI does not support snapshots, clones, resizing or delete operations for them. If those operations are required, you must make them on the original PVC. As the same CephFS volume is used by different PVCs/PVs, you must proceed very orderly to remove it properly. Delete the static PVCs in the different namespaces, but keep the original one! Delete the corresponding static PVs that should now have been marked as \"Released\". Again, don't delete the original one yet! Edit the original PV, changing back the `persistentVolumeReclaimPolicy` from `Retain` to `Delete`. Delete the original PVC. It will now properly delete the original PV, as well as the subvolume in CephFS. Due to , the global mount for a Volume that is mounted multiple times on the same node will not be unmounted. This does not result in any particular problem, apart from polluting the logs with unmount error messages, or having many different mounts hanging if you create and delete many shared PVCs, or you don't really use them. Until this issue is solved, either on the Rook or Kubelet side, you can always manually unmount the unwanted hanging global mounts on the nodes: Log onto each node where the volume has been mounted. Check for hanging mounts using their `volumeHandle`. Unmount the unwanted volumes. To clean up all the artifacts created by the filesystem demo: ```console kubectl delete -f kube-registry.yaml ``` To delete the filesystem components and backing data, delete the Filesystem CRD. !!! warning Data will be deleted if preserveFilesystemOnDelete=false. 
```console kubectl -n rook-ceph delete cephfilesystem myfs ``` Note: If the \"preserveFilesystemOnDelete\" filesystem attribute is set to true, the above command won't delete the filesystem. Recreating the same CRD will reuse the existing filesystem. The Ceph filesystem example can be found here: ." } ]
{ "category": "Runtime", "file_name": "filesystem-storage.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "::: warning Note Quota management is a new feature added in v3.2.1. ::: Limits the number of files or directories within a single directory to avoid the occurrence of excessively large directories, which could exhaust the resources of MP nodes. The default limit for the number of children in each directory is 20 million, which can be configured with a minimum value of 1 million and no upper limit. If the number of children created under a single directory exceeds the limit, the creation fails. The configured limit value takes effect on the entire cluster and is persisted on the master. Full support requires upgrading the client, metadata node, and master to version 3.2.1. ```bash curl -v \"http://192.168.0.11:17010/setClusterInfo?dirQuota=20000000\" ``` ::: tip Note `192.168.0.11` is the IP address of the master, and the same applies below. ::: | Parameter | Type | Description | |--|--|-| | dirQuota | uint32 | Quota value | ```bash curl -v \"http://192.168.0.11:17010/admin/getIp\" ``` The response is as follows: ```json { \"code\": 0, \"data\": { \"Cluster\": \"test\", \"DataNodeAutoRepairLimitRate\": 0, \"DataNodeDeleteLimitRate\": 0, \"DirChildrenNumLimit\": 20000000, \"EbsAddr\": \"\", \"Ip\": \"192.168.0.1\", \"MetaNodeDeleteBatchCount\": 0, \"MetaNodeDeleteWorkerSleepMs\": 0, \"ServicePath\": \"\" }, \"msg\": \"success\" } ``` The `DirChildrenNumLimit` field is the directory quota value for the current cluster." } ]
{ "category": "Runtime", "file_name": "quota.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<! Provide a general summary of the issue in the Title above --> <! If you're describing a bug, tell us what should happen --> <! If you're suggesting a change/improvement, tell us how it should work --> <! If describing a bug, tell us what happens instead of the expected behavior --> <! If suggesting a change/improvement, explain the difference from current behavior --> <! Not obligatory, but suggest a fix/reason for the bug, --> <! or ideas how to implement the addition or change --> <! Provide a link to a live example, or an unambiguous set of steps to --> <! reproduce this bug. Include code to reproduce, if relevant --> <! How has this issue affected you? What are you trying to accomplish? --> <! Providing context helps us come up with a solution that is most useful in the real world --> <! Include as many relevant details about the environment you experienced the bug in --> Flannel version: Backend used (e.g. vxlan or udp): Etcd version: Kubernetes version (if used): Operating System and version: Link to your project (optional):" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "The license for the project is . All repositories in the project have a top level file called `LICENSE`. This file lists full details of all licences used by the repository. Where possible all files in all repositories also contain a license identifier. This provides fine-grained licensing and allows automated tooling to check the license of individual files. This SPDX licence identifier requirement is enforced by the ." } ]
{ "category": "Runtime", "file_name": "Licensing-strategy.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "oep-number: draft Resize 20190701 title: cStor Volume Resize authors: \"@mittachaitu\" owners: \"@kmova\" \"@vishnuitta\" \"@amitkumardas\" \"@payes\" \"@pawanpraka1\" editor: \"@mittachaitu\" creation-date: 2019-07-01 last-updated: 2019-07-01 status: Implemented see-also: NA replaces: NA superseded-by: NA - - - - - - - - This proposal charts out the design details to implement resize workflow on cStor CSI Volumes. Ability to resize a cStor volume by changing the capacity in the PVC spec. As an application developer I should be able to resize a volume on fly(when application consuming volume). Steps To Be Performed By User: Edit the pvc spec capacity using `kubectl edit pvc <pvc_name>`. Consider following PVC as example for CSI cStor Volume resize ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"csi-vol-claim\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"5Gi\"}},\"storageClassName\":\"openebs-csi-sc\"}} pv.kubernetes.io/bind-completed: \"yes\" pv.kubernetes.io/bound-by-controller: \"yes\" volume.beta.kubernetes.io/storage-provisioner: cstor.csi.openebs.io creationTimestamp: \"2019-12-23T13:11:33Z\" finalizers: kubernetes.io/pvc-protection name: csi-vol-claim namespace: default resourceVersion: \"1387\" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/csi-vol-claim uid: a91e400f-ff9b-44ec-b447-bc7bfcfccd80 spec: accessModes: ReadWriteOnce resources: requests: storage: 5Gi storageClassName: openebs-csi-sc volumeMode: Filesystem volumeName: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 status: accessModes: ReadWriteOnce capacity: storage: 5Gi phase: Bound ``` After updating spec.capacity in above PVC then PVC will look following: ```yaml piVersion: v1 kind: PersistentVolumeClaim metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"v1\",\"kind\":\"PersistentVolumeClaim\",\"metadata\":{\"annotations\":{},\"name\":\"csi-vol-claim\",\"namespace\":\"default\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"5Gi\"}},\"storageClassName\":\"openebs-csi-sc\"}} pv.kubernetes.io/bind-completed: \"yes\" pv.kubernetes.io/bound-by-controller: \"yes\" volume.beta.kubernetes.io/storage-provisioner: cstor.csi.openebs.io creationTimestamp: \"2019-12-23T13:11:33Z\" finalizers: kubernetes.io/pvc-protection name: csi-vol-claim namespace: default resourceVersion: \"7804\" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/csi-vol-claim uid: a91e400f-ff9b-44ec-b447-bc7bfcfccd80 spec: accessModes: ReadWriteOnce resources: requests: storage: 10Gi storageClassName: openebs-csi-sc volumeMode: Filesystem volumeName: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 status: accessModes: ReadWriteOnce capacity: storage: 5Gi conditions: lastProbeTime: null lastTransitionTime: \"2019-12-23T13:59:08Z\" message: Waiting for filesystem resize. status: \"True\" type: FileSystemResizePending phase: Bound ``` In the above PVC spec capacity is updated to 10Gi but status capacity is 5Gi(Which means resize is in progress). The sections below can be assumed to be specific to `cStor` unless mentioned otherwise. CSI controller will get gRPC resize request from kubernetes-csi external-resizer controller(when pvc storage field is updated). 
CSI controller acquire lease and checks is there any ongoing resize on requested volume if yes then CSI controller will return error. If volume size is up to date (i.e cvc.spec.capacity == cvc.status.capacity) then CSI controller updates CVC spec capacity with latest size. Example status: status of CVC when resize request is in progress. ```yaml Status: capacity: storage: 5Gi Conditions: Type: Resizing Status: InProgress LastProbeTime: date LastTransitionTime: (update only when there is a change in size) Reason: Capacity changed Message: Resize pending ``` Note: Please refer under Custom Resources used to resize cStor volume section for entier CVC. Based on the status capacity of CVC CSI will respond to the gRPC request(request is from kubernetes-csi). CVC controller present in maya-apiserver will get update event on corresponding CVC CR. Update event will process below steps Check is there any ongoing resize on cstorvolume(CV) if so return. Example status: CV status when resize is in progress ```yaml Status: capacity: 5Gi Conditions: Type: Resizing Status: InProgress LastProbeTime: date LastTransitionTime: (update value by picking it from CVC LastTransitionTime) Reason: Capacity changed Message: Resize pending ``` Note: Please refer under Custom Resources used to resize cStor volume section for entier CV. Check if there is any increased changes in capacity of CV and CVC object if so update spec capacity field of CV object with latest size. During reconciliation time Check is there any resize is pending on CVC if so check corresponding CV resize status if it is success take a lease and check is there any capacity difference in CV and CVC if there is no change then update CVC status condition as resize success and release the" }, { "data": "If capacity change is noticed after taking lease, repeat this step after releasing lease(left as is reconciliation will do rework). Example status: CVC status when resize is success ```yaml Status: capacity: 10Gi Conditions: Type: Resizing Status: Success LastProbeTime: date LastTransitionTime: (update while updating resize condition status) Reason: Capacity changed Message: Resize Success ``` Note: Please refer under Custom Resources used to resize cStor volume section for entier CVC. Cstor-volume-mgmt container as a target side car which is already having a volume-controller watching on CV object will get update event (if event is missed volume controller will process during sync event) then volume-controller will process the request with following steps Volume-controller will executes `istgtcontrol resize <size_unit>` command(rpc call) if there is any resize status is pending on that volume(based on resize status condition). If above rpc call success volume-controller will updates istgt.conf file alone but not CVR capacity[why? CVR capacity is treated initial volume capacity and maintaining initial capacity will be helpful in ephemeral case]. If volume controller succeed in updating istgt.conf file then update status capacity and success status for resize condition on CV. If it is a failure response from rpc then generate kubernetes events and do nothing just return(reconciliation will take care of doing the above process again). When cstor-istgt get a resize request it will trigger a resize request to replica and return a success response to cstor-volume-mgmt[why? cstor-istgt will not concern whether resize request is success or failure. If resize request succeed then there is no problem. 
If resize request fails and if replica gets IO greater than it's size then IO will be failed on that replica and cstor-istgt will disconnect IO failure replica. As part of replica's data connection, target will exchange the size information with replica. During data connection replica checks if it is single replica volume then it will resize main volume else it will resize the internal rebuild clone volume]. Note: Processing `istgtcontrol resize` is a blocking call. zfs will receives the resize request, it will resize the corresponding volume and sent back the response to cstor-istgt(aka target). Once resize operation is succeed at OpenEBS side CSI node plugin(/kubelet) will trigger resize operation on filesystem level as part of online resizing. Sample CVC yaml when resize is in progress ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorVolumeClaim metadata: annotations: openebs.io/volumeID: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 creationTimestamp: \"2019-12-23T13:11:33Z\" finalizers: cvc.openebs.io/finalizer generation: 5 labels: openebs.io/cstor-pool-cluster: cstor-sparse-cspc name: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 namespace: openebs resourceVersion: \"7781\" selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/cstorvolumeclaims/pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 uid: c4e3dfe5-e103-4d8a-8447-a827b710de1c publish: nodeId: 127.0.0.1 spec: capacity: storage: 10Gi cstorVolumeRef: apiVersion: openebs.io/v1alpha1 kind: CStorVolume name: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 namespace: openebs resourceVersion: \"1483\" uid: d277113c-8ded-451b-bae6-7c49aa2e346d replicaCount: 1 status: capacity: storage: 5Gi condition: lastTransitionTime: \"2019-12-23T13:59:03Z\" message: \"Resizing cStor volume\" reason: \"\" type: Resizing phase: Bound versionDetails: autoUpgrade: false desired: 1.4.0 status: current: 1.4.0 dependentsUpgraded: true lastUpdateTime: null state: \"\" ``` Sample cstorvolume yaml when resize is in progress ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorVolume metadata: creationTimestamp: \"2019-12-23T13:12:05Z\" generation: 101 labels: openebs.io/persistent-volume: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 openebs.io/version: 1.5.0 name: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 namespace: openebs ownerReferences: apiVersion: openebs.io/v1alpha1 blockOwnerDeletion: true controller: true kind: CStorVolumeClaim name: pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 uid: c4e3dfe5-e103-4d8a-8447-a827b710de1c resourceVersion: \"7838\" selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/cstorvolumes/pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 uid: d277113c-8ded-451b-bae6-7c49aa2e346d spec: capacity: 10Gi consistencyFactor: 1 desiredReplicationFactor: 1 iqn: iqn.2016-09.com.openebs.cstor:pvc-a91e400f-ff9b-44ec-b447-bc7bfcfccd80 nodeBase: iqn.2016-09.com.openebs.cstor replicaDetails: {} replicationFactor: 1 status: \"\" targetIP: 10.0.0.243 targetPort: \"3260\" targetPortal: 10.0.0.243:3260 status: phase: \"\" capacity: 5Gi Conditions: Type: Resizing Status: InProgress LastProbeTime: LastTransitionTime: Reason: Resize is in progress Message: ``` When resize is in progress spec and status capacity will vary. Resize conditions will be available to know the status of resize. NA NA" } ]
{ "category": "Runtime", "file_name": "20190107-cStor-volume-resize.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> PCAP recorder ``` -h, --help help for recorder ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List PCAP recorder entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_recorder.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Added: Connector HA is implemented; More calico modes are supported; Flannel host-gw mode is supported; Fixed: Fix the bug that nodePort service doesn't work on cloud side; Fix the bug that cloud-agent lost connections to connector after connector reboot; Fix the bug that fabedge-agent can't initialize tunnels if strongswan container reboot;" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.0.0.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "This document provides details about Firecracker network performance. The numbers presented are dependent on the hardware (CPU, networking card, etc.), OS version and settings. Scope of the measurements is to illustrate the limits for the emulation thread. | Segment size/ Direction | 1460bytes | 256bytes | 128bytes | 96bytes | | -- | | -- | -- | - | | Ingress | 25Gbps | 23Gbps | 20Gbps | 18Gbps | | Egress | 25Gbps | 23Gbps | 20Gbps | 18Gbps | | Bidirectional | 18Gbps | 18Gbps | 18Gbps | 18Gbps | Setup and test description Throughput measurements were done using . The target is to fully saturate the emulation thread and keep it at 100% utilization. No adjustments were done to socket buffer, or any other network related kernel parameters. To identify the limit of emulation thread, TCP throughput was measured between host and guest. An EC2 instance, running , was used as a host. For ingress or egress throughput measurements, a Firecracker microVM running Kernel 4.14 with 4GB of Ram, 8 vCPUs and one network interface was used. The measurements were taken using 6 iperf3 clients running on host and 6 iperf3 serves running on guest and vice versa. For bidirectional throughput measurements, a Firecracker microVM running Amazon Linux 2, Kernel 4.14 with 4GB of Ram, 12 vCPUs and one network interface was used. The measurements were taken using 4 iperf3 clients and 4 iperf3 servers running on both host and guest. The virtualization layer, Firecracker emulation thread plus host kernel stack, is responsible for adding on average 0.06ms of network latency. Setup and test description Latency measurements were done using ping round trip times. 2 x EC2 M5d.metal instances running Amazon Linux 2 within the same were used, with a security group configured so that it would allow traffic from instances using private IPs. A 10Mbps background traffic was running between instances. Round trip time between instances was measured. `rtt min/avg/max/mdev = 0.101/0.198/0.237/0.044 ms` On one of the instances, a Firecracker microVM running Kernel 4.14, with 1 GB of RAM, 2 vCPUs, one network interface running was used. Round trip between the microVM and the other instance was measured, while a 10Mbps background traffic was running. `rtt min/avg/max/mdev = 0.191/0.321/0.519/0.058 ms` From the difference between those we can conclude that ~0.06ms are the virtualization overhead." } ]
{ "category": "Runtime", "file_name": "network-performance.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "One to two sentences that describes the goal of this proposal and the problem being solved by the proposed change. The reader should be able to tell by the title, and the opening paragraph, if this document is relevant to them. Currently, Velero does not have a complete manifest of everything in the backup, aside from the backup tarball itself. This change introduces a new data structure to be stored with a backup in object storage which will allow for more efficient operations in reporting of what a backup contains. Additionally, this manifest should enable advancements in Velero's features and architecture, enabling dry-run support, concurrent backup and restore operations, and reliable restoration of complex applications. Right now, Velero backs up items one at a time, sorted by API Group and namespace. It also restores items one at a time, using the restoreResourcePriorities flag to indicate which order API Groups should have their objects restored first. While this does work currently, it presents challenges for more complex applications that have their dependencies in the form of a graph rather than strictly linear. For example, Cluster API clusters are a set of complex Kubernetes objects that require that the \"root\" objects are restored first, before their \"leaf\" objects. If a Cluster that a ClusterResourceSetBinding refers to does not exist, then a restore of the CAPI cluster will fail. Additionally, Velero does not have a reliable way to communicate what objects will be affected in a backup or restore operation without actually performing the operation. This complicates dry-run tasks, because a user must simply perform the action without knowing what will be touched. It also complicates allowing backups and restores to run in parallel, because there is currently no way to know if a single Kubernetes object is included in multiple backups or restores, which can lead to unreliability, deadlocking, and race conditions were Velero made to be more concurrent today. Introduce a manifest data structure that defines the contents of a backup. Store the manifest data into object storage alongside existing backup data. This proposal seeks to enable, but not define, the following. Implementing concurrency beyond what already exists in Velero. Implementing a dry-run feature. Implementing a new restore ordering procedure. While the data structure should take these scenarios into account, they will not be implemented alongside it. To uniquely identify a Kubernetes object within a cluster or backup, the following fields are sufficient: API Group and Version (example: backup.velero.io/v1) Namespace Name Labels This criteria covers the majority of Velero's inclusion or exclusion logic. However, some additional fields enable further use cases. Owners, which are other Kubernetes objects that have some relationship to this object. They may be strict or soft dependencies. Annotations, which provide extra metadata about the object that might be useful for other programs to consume. UUID generated by Kubernetes. This is useful in defining Owner relationships, providing a single, immutable key to find an object. This is not considered at restore time, only internally for defining links. All of this information already exists within a Velero backup's tarball of resources, but extracting such data is inefficient. The entire tarball must be downloaded and extracted, and then JSON within parsed to read labels, owners, annotations, and a UUID. 
The rest of the information is encoded in the file system structure within the Velero backup tarball. While doable, this is heavyweight in terms of time and potentially memory. Instead, this proposal suggests adding a new manifest structure that is kept alongside the backup" }, { "data": "This structure would contain the above fields only, and could be used to perform inclusion/exclusion logic on a backup, select a resource from within a backup, and do set operations over backup or restore contents to identify overlapping resources. Here are some use cases that this data structure should enable, that have been difficult to implement prior to its existence: A dry-run operation on backup, informing the user what would be selected if they were to perform the operation. A manifest could be created and saved, allowing for a user to do a dry-run, then accept it to perform the backup. Restore operations can be treated similarly. Efficient, non-overlapping parallelization of backup and restore operations. By building or reading a manifest before performing a backup or restore, Velero can determine if there are overlapping resources. If there are no overlaps, the operations can proceed in parallel. If there are overlaps, the operations can proveed serially. Graph-based restores for non-linear dependencies. Not all resources in a Kubernetes cluster can be defined in a strict, linear way. They may have multiple owners, and writing BackupItemActions or RestoreItemActions to simply return a chain of owners is not an efficient way to support the many Kubernetes operators/controllers being written. Instead, by having a manifest with enough information, Velero can build a discrete list that ensures dependencies are restored before their dependents, with less input from plugin authors. The Manifest data structure would look like this, in Go type structure: ```golang // NamespacedItems maps a given namespace to all of its contained items. type NamespacedItems map[string]*Item // APIGroupNamespaces maps an API group/version to a map of namespaces and their items. type KindNamespaces map[string]NamespacedItems type Manifest struct { // Kinds holds the top level map of all resources in a manifest. Kinds KindNamespaces // Index is used to look up an individual item quickly based on UUID. // This enables fetching owners out of the maps more efficiently at the cost of memory space. Index map[string]*Item } // Item represents a Kubernetes resource within a backup based on it's selectable criteria. // It is not the whole Kubernetes resource as retrieved from the API server, but rather a collection of important fields needed for filtering. type Item struct { // Kubernetes API group which this Item belongs to. // Could be a core resource, or a CustomResourceDefinition. APIGroup string // Version of the APIGroup that the Item belongs to. APIVersion string // Kubernetes namespace which contains this item. // Empty string for cluster-level resource. Namespace string // Item's given name. Name string // Map of labels that the Item had at backup time. Labels map[string]string // Map of annotations that the Item had at Backup time. // Useful for plugins that may decide to process only Items with specific annotations. Annotations map[string]string // Owners is a list of UUIDs to other items that own or refer to this item. Owners []string // Manifest is a pointer to the Manifest in which this object is contained. // Useful for getting access to things like the Manifest.Index map. 
Manifest *Manifest } ``` In addition to the new types, the following Go interfaces would be provided for convenience. ```golang type Itermer interface { // Returns the Item as a string, following the current Velero backup version 1.1.0 tarball structure format. // <APIGroup>/<Namespace>/<APIVersion>/<name>.json String() string // Owners returns a slice of realized Items that own or refer to the current Item. // Useful for building out a full graph of Items to" }, { "data": "// Will use the UUIDs in Item.Owners to look up the owner Items in the Manifest. Owners() []*Item // Kind returns the Kind of an object, which is a combination of the APIGroup and APIVersion. // Useful for verifying the needed CustomResourceDefinition exists before actually restoring this Item. Kind() *Item // Children returns a slice of all Items that refer to this item as an Owner. Children() []*Items } // This error type is being created in order to make reliable sentinel errors. // See https://dave.cheney.net/2019/06/10/constant-time for more details. type ManifestError string func (e ManifestError) Error() string { return string(e) } const ItemAlreadyExists = ManifestError(\"item already exists in manifest\") type Manifester interface { // Set returns the entire list of resources as a set of strings (using Itemer.String). // This is useful for comparing two manifests and determining if they have any overlapping resources. // In the future, when implementing concurrent operations, this can be used as a sanity check to ensure resources aren't being backed up or restored by two operations at once. Set() sets.String // Adds an item to the appropriate APIGroup and Namespace within a Manifest // Returns (true, nil) if the Item is successfully added to the Manifest, // Returns (false, ItemAlreadyExists) if the Item is already in the Manifest. Add(*Item) (bool, error) } ``` The entire `Manifest` should be serialized into the `manifest.json` file within the object storage for a single backup. It is possible that this file could also be compressed for space efficiency. Because the `Manifest` is holding a minimal amount of data, memory sizes should not be a concern for most clusters. TODO: Document known limits on API group name, resource name, and kind name character limits. Introducing this manifest does not increase the attack surface of Velero, as this data is already present in the existing backups. Storing the manifest.json file next to the existing backup data in the object storage does not change access patterns. The introduction of this file should trigger Velero backup version 1.2.0, but it will not interfere with Velero versions that do not support the `Manifest` as the file will be additive. In time, this file will replace the `<backupname>-resource-list.json.gz` file, but for compatibility the two will appear side by side. When first implemented, Velero should simply build the `Manifest` as it backs up items, and serialize it at the end. Any logic changes that rely on the `Manifest` file must be introduced with their own design document, with their own compatibility concerns. The `Manifest` object will not be implemented as a Kubernetes CustomResourceDefinition, but rather one of Velero's own internal constructs. Implementation for the data structure alone should be minimal - the types will need to be defined in a `manifest` package. Then, the backup process should create a `Manifest`, passing it to the various `*Backuppers` in the `backup` package. 
These methods will insert individual `Items` into the `Manifest`. Finally, logic should be added to the `persistence` package to ensure that the new `manifest.json` file is uploadable and allowed. None so far. When should compatibility with the `<backupname>-resource-list.json.gz` file be dropped? What are some good test case Kubernetes resources and controllers to try this out with? Cluster API seems like an obvious choice, but are there others? Since it is not implemented as a CustomResourceDefinition, how can a `Manifest` be retained so that users could issue a dry-run command, then perform their actual desire operation? Could it be stored in Velero's temp directories? Note that this is making Velero itself more stateful." } ]
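As a minimal usage sketch of the proposed types, assuming `*Manifest` satisfies the `Manifester` interface above and that `sets` refers to the same Kubernetes sets package used in `Set()`; the function names and error handling below are illustrative, not part of the design:

```golang
// Illustrative: decide whether two operations can safely run in parallel by
// checking that their manifests share no items, per the Set() use case above.
func canRunConcurrently(a, b Manifester) bool {
	overlap := a.Set().Intersection(b.Set())
	return overlap.Len() == 0
}

// Illustrative: populate a Manifest while iterating backup items, skipping
// duplicates reported via the ItemAlreadyExists sentinel.
func buildManifest(items []*Item) (*Manifest, error) {
	m := &Manifest{
		Kinds: KindNamespaces{},
		Index: map[string]*Item{},
	}
	for _, item := range items {
		if _, err := m.Add(item); err != nil && err != ItemAlreadyExists {
			return nil, err
		}
	}
	return m, nil
}
```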
{ "category": "Runtime", "file_name": "graph-manifest.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "(backups)= In a production setup, you should always back up the contents of your Incus server. The Incus server contains a variety of different entities, and when choosing your backup strategy, you must decide which of these entities you want to back up and how frequently you want to save them. The various contents of your Incus server are located on your file system and, in addition, recorded in the {ref}`Incus database <database>`. Therefore, only backing up the database or only backing up the files on disk does not give you a full functional backup. Your Incus server contains the following entities: Instances (database records and file systems) Images (database records, image files, and file systems) Networks (database records and state files) Profiles (database records) Storage volumes (database records and file systems) Consider which of these you need to back up. For example, if you don't use custom images, you don't need to back up your images since they are available on the image server. If you use only the `default` profile, or only the standard `incusbr0` network bridge, you might not need to worry about backing them up, because they can easily be re-created. To create a full backup of all contents of your Incus server, back up the `/var/lib/incus` directory. This directory contains your local storage, the Incus database, and your configuration. It does not contain separate storage devices, however. That means that whether the directory also contains the data of your instances depends on the storage drivers that you use. ```{important} If your Incus server uses any external storage (for example, LVM volume groups, ZFS zpools, or any other resource that isn't directly self-contained to Incus), you must back this up separately. ``` To back up your data, create a tarball of `/var/lib/incus`. If your system uses `/etc/subuid` and `/etc/subgid` file, you should also back up these files. Restoring them avoids needless shifting of instance file systems. To restore your data, complete the following steps: Stop Incus on your server (for example, with `sudo systemctl stop incus.service incus.socket`). Delete the directory (`/var/lib/incus/`). Restore the directory from the backup. Delete and restore any external storage devices. Restore the `/etc/subuid` and `/etc/subgid` files if present. Restart Incus (for example, with `sudo systemctl start incus.socket incus.service` or by restarting your machine). If you decide to only back up specific entities, you have different options for how to do" }, { "data": "You should consider doing some of these partial backups even if you are doing full backups in addition. It can be easier and safer to, for example, restore a single instance or reconfigure a profile than to restore the full Incus server. Instances and storage volumes are backed up in a very similar way (because when backing up an instance, you basically back up its instance volume, see {ref}`storage-volume-types`). See {ref}`instances-backup` and {ref}`howto-storage-backup-volume` for detailed information. The following sections give a brief summary of the options you have for backing up instances and volumes. Incus supports copying and moving instances and storage volumes between two hosts. See {ref}`move-instances` and {ref}`howto-storage-move-volume` for instructions. So if you have a spare server, you can regularly copy your instances and storage volumes to that secondary server to back them up. 
If needed, you can either switch over to the secondary server or copy your instances or storage volumes back from it. If you use the secondary server as a pure storage server, it doesn't need to be as powerful as your main Incus server. You can use the `export` command to export instances and volumes to a backup tarball. By default, those tarballs include all snapshots. You can use an optimized export option, which is usually quicker and results in a smaller size of the tarball. However, you must then use the same storage driver when restoring the backup tarball. See {ref}`instances-backup-export` and {ref}`storage-backup-export` for instructions. Snapshots save the state of an instance or volume at a specific point in time. However, they are stored in the same storage pool and are therefore likely to be lost if the original data is deleted or lost. This means that while snapshots are very quick and easy to create and restore, they don't constitute a secure backup. See {ref}`instances-snapshots` and {ref}`storage-backup-snapshots` for more information. (backup-database)= While there is no trivial method to restore the contents of the {ref}`Incus database <database>`, it can still be very convenient to keep a backup of its content. Such a backup can make it much easier to re-create, for example, networks or profiles if the need arises. Use the following command to dump the content of the local database to a file: incus admin sql local .dump > <output_file> Use the following command to dump the content of the global database to a file: incus admin sql global .dump > <output_file> You should include these two commands in your regular Incus backup." } ]
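For example, the full-server backup described earlier (a tarball of `/var/lib/incus` plus the `subuid`/`subgid` files) might be taken as follows; the archive path and name are examples only, and on restore you would stop the services first as listed above:

```
sudo tar -czpf /backup/incus-$(date +%Y%m%d).tar.gz /var/lib/incus /etc/subuid /etc/subgid
```

External storage (LVM volume groups, ZFS zpools, etc.) still needs to be backed up separately.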
{ "category": "Runtime", "file_name": "backup.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "(storage-drivers)= Incus supports the following storage drivers for storing images, instances and custom volumes: ```{toctree} :maxdepth: 1 storage_dir storage_btrfs storage_lvm storage_zfs storage_ceph storage_cephfs storage_cephobject ``` See the corresponding pages for driver-specific information and configuration options. (storage-drivers-features)= Where possible, Incus uses the advanced features of each storage system to optimize operations. Feature | Directory | Btrfs | LVM | ZFS | Ceph RBD | CephFS | Ceph Object : | : | : | : | : | : | : | : {ref}`storage-optimized-image-storage` | no | yes | yes | yes | yes | n/a | n/a Optimized instance creation | no | yes | yes | yes | yes | n/a | n/a Optimized snapshot creation | no | yes | yes | yes | yes | yes | n/a Optimized image transfer | no | yes | no | yes | yes | n/a | n/a {ref}`storage-optimized-volume-transfer` | no | yes | no | yes | yes | n/a | n/a Copy on write | no | yes | yes | yes | yes | yes | n/a Block based | no | no | yes | no | yes | no | n/a Instant cloning | no | yes | yes | yes | yes | yes | n/a Storage driver usable inside a container | yes | yes | no | yes[^1] | no | n/a | n/a Restore from older snapshots (not latest) | yes | yes | yes | no | yes | yes | n/a Storage quotas | yes[^2] | yes | yes | yes | yes | yes | yes Available on `incus admin init` | yes | yes | yes | yes | yes | no | no Object storage | yes | yes | yes | yes | no | no | yes to be enabled. ```{include} storage_dir.md :start-after: <!-- Include start dir quotas --> :end-before: <!-- Include end dir quotas --> ``` (storage-optimized-image-storage)= All storage drivers except for the directory driver have some kind of optimized image storage format. To make instance creation near instantaneous, Incus clones a pre-made image volume when creating an instance rather than unpacking the image tarball from scratch. To prevent preparing such a volume on a storage pool that might never be used with that image, the volume is generated on demand. Therefore, the first instance takes longer to create than subsequent ones. (storage-optimized-volume-transfer)= Btrfs, ZFS and Ceph RBD have an internal send/receive mechanism that allows for optimized volume transfer. Incus uses this optimized transfer when transferring instances and snapshots between storage pools that use the same storage driver, if the storage driver supports optimized transfer and the optimized transfer is actually" }, { "data": "Otherwise, Incus uses `rsync` to transfer container and file system volumes, or raw block transfer to transfer virtual machine and custom block volumes. The optimized transfer uses the underlying storage driver's native functionality for transferring data, which is usually faster than using `rsync`. However, the full potential of the optimized transfer becomes apparent when refreshing a copy of an instance or custom volume that uses periodic snapshots. With optimized transfer, Incus bases the refresh on the latest snapshot, which means: When you take a first snapshot and refresh the copy, the transfer will take roughly the same time as a full copy. Incus transfers the new snapshot and the difference between the snapshot and the main volume. For subsequent snapshots, the transfer is considerably faster. Incus does not transfer the full new snapshot, but only the difference between the new snapshot and the latest snapshot that already exists on the target. 
When refreshing without a new snapshot, Incus transfers only the differences between the main volume and the latest snapshot on the target. This transfer is usually faster than using `rsync` (as long as the latest snapshot is not too outdated). On the other hand, refreshing copies of instances without snapshots (either because the instance doesn't have any snapshots or because the refresh uses the `--instance-only` flag) would actually be slower than using `rsync`. In such cases, the optimized transfer would transfer the difference between the (non-existent) latest snapshot and the main volume, thus the full volume. Therefore, Incus uses `rsync` instead of the optimized transfer for refreshes without snapshots. The two best options for use with Incus are ZFS and Btrfs. They have similar functionalities, but ZFS is more reliable. Whenever possible, you should dedicate a full disk or partition to your Incus storage pool. Incus allows to create loop-based storage, but this isn't recommended for production use. See {ref}`storage-location` for more information. The directory backend should be considered as a last resort option. It supports all main Incus features, but is slow and inefficient because it cannot perform instant copies or snapshots. Therefore, it constantly copies the instance's full storage. Currently, the Linux kernel might silently ignore mount options and not apply them when a block-based file system (for example, `ext4`) is already mounted with different mount options. This means when dedicated disk devices are shared between different storage pools with different mount options set, the second mount might not have the expected mount options. This becomes security relevant when, for example, one storage pool is supposed to provide `acl` support and the second one is supposed to not provide `acl` support. For this reason, it is currently recommended to either have dedicated disk devices per storage pool or to ensure that all storage pools that share the same dedicated disk device use the same mount options." } ]
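As an illustration of the recommendation above, a ZFS pool on a dedicated disk and a loop-backed test pool could be created roughly as follows (the device name and size are examples):

```
incus storage create fast zfs source=/dev/sdb
incus storage create scratch btrfs size=30GiB
```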
{ "category": "Runtime", "file_name": "storage_drivers.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "Antrea supports multicast traffic in the following scenarios: Pod to Pod - a Pod that has joined a multicast group will receive the multicast traffic to that group from the Pod senders. Pod to External - external hosts can receive the multicast traffic sent from Pods, when the Node network supports multicast forwarding / routing to the external hosts. External to Pod - Pods can receive the multicast traffic from external hosts. <!-- toc --> - - - - - <!-- /toc --> Multicast support was introduced in Antrea v1.5.0 as an alpha feature, and was graduated to beta in v1.12.0. Prior to v1.12.0, a feature gate, `Multicast` must be enabled in the `antrea-controller` and `antrea-agent` configuration to use the feature. Starting from v1.12.0, the feature gate is enabled by default, you need to set the `multicast.enable` flag to true in the `antrea-agent` configuration to use the feature. There are three other configuration options -`multicastInterfaces`, `igmpQueryVersions`, and `igmpQueryInterval` for `antrea-agent`. ```yaml antrea-agent.conf: | multicast: enable: true multicastInterfaces: igmpQueryVersions: 1 2 3 igmpQueryInterval: \"125s\" ``` Antrea NetworkPolicy and Antrea ClusterNetworkPolicy are supported for the following types of multicast traffic: IGMP egress rules: applied to IGMP membership report and IGMP leave group messages. IGMP ingress rules: applied to IGMP query, which includes IGMPv1, IGMPv2, and IGMPv3. Multicast egress rules: applied to non-IGMP multicast traffic from the selected Pods to other Pods or external hosts. Note, multicast ingress rules are not supported at the moment. Examples: You can refer to the and examples in the Antrea NetworkPolicy document. Antrea provides tooling to check multicast group information and multicast traffic statistics. The `kubectl get multicastgroups` command prints multicast groups joined by Pods in the cluster. Example output of the command: ```bash $ kubectl get multicastgroups GROUP PODS 225.1.2.3 default/mcjoin, namespace/pod 224.5.6.4 default/mcjoin ``` `antctl` supports printing multicast traffic statistics of Pods. Please refer to the corresponding . The also supports multicast NetworkPolices. This section will take multicast video streaming as an example to demonstrate how multicast works with Antrea. In this example, multimedia tools are used to generate and consume multicast video streams. To start a video streaming server, we start a VLC Pod to stream a sample video to the multicast IP address" }, { "data": "with TTL 6. ```bash kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-sender -- --intf ncurses --vout dummy --aout dummy 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/26/Beesonflowers.webm/Beesonflowers.webm.120p.vp9.webm' --sout udp:239.255.12.42 --ttl 6 --repeat ``` You can verify multicast traffic is sent out from this Pod by running `antctl get podmulticaststats` in the `antrea-agent` Pod on the local Node, which indicates the VLC Pod is sending out multicast video streams. You can also check the multicast routes on the Node by running command `ip mroute`, which should print the following route for forwarding the multicast traffic from the Antrea gateway interface to the transport interface. 
```bash $ ip mroute (<POD IP>, 239.255.12.42) Iif: antrea-gw0 Oifs: <TRANSPORT INTERFACES> State: resolved ``` We also create a VLC Pod to be the receiver with the following command: ```bash kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-receiver -- --intf ncurses --vout dummy --aout dummy udp://@239.255.12.42 --repeat ``` It's expected to see inbound multicast traffic to this Pod by running `antctl get podmulticaststats` in the local `antrea-agent` Pod, which indicates the VLC Pod is receiving the video stream. Also, the `kubectl get multicastgroups` command will show that `vlc-receiver` has joined multicast group `239.255.12.42`. This feature is currently supported only for IPv4 Linux clusters. Support for Windows and IPv6 will be added in the future. The configuration option `multicastInterfaces` is not supported with encap mode. Multicast packets in encap mode are SNATed and forwarded to the transport interface only. A Linux host limits the maximum number of multicast groups it can subscribe to; the default number is 20. The limit can be changed by setting . Users are responsible for changing the limit if Pods on the Node are expected to join more than 20 groups. Multicast IPs in (224.0.0.0/24) can only work in encap mode. Multicast traffic destined for those addresses is not expected to be forwarded, therefore, no multicast route will be configured for them. External hosts are not supposed to send and receive traffic with those addresses either. If the following situations apply to your Nodes, you may observe multicast traffic is not routed correctly: Node kernel version under 5.4 Node network doesn't support IGMP snooping The configuration option `multicastInterfaces` is not supported with . When Antrea FlexibleIPAM is enabled, multicast packets are forwarded to the uplink interface only." } ]
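Recapping the verification steps from the demo above as concrete commands; the antrea-agent pod name is a placeholder and this assumes Antrea is deployed in `kube-system`:

```bash
# On the Node running vlc-sender or vlc-receiver, query per-Pod multicast stats.
kubectl -n kube-system exec -it antrea-agent-xxxxx -c antrea-agent -- \
  antctl get podmulticaststats

# Confirm that vlc-receiver has joined the group.
kubectl get multicastgroups
```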
{ "category": "Runtime", "file_name": "multicast-guide.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "This document provides the design details on how the OpenEBS Control Plane will create OpenEBS StoragePools and Volumes using OpenEBS cStor storage-engine. Container Attached Storage or storage for containers in containers. Knowledge about Kubernetes Knowledge about Kubernetes Knowledge about Kubernetes concepts like Custom Controllers, initializers and reconciliation loop that wait on objects to move from actual to desired state. Familiar with Kubernetes and OpenEBS Storage Concepts: PersistentVolume(PV) and PersistentVolumeClaim(PVC) are standard Kubernetes terms used to associate volumes to a given workload. A PVC will be watched by a dynamic provisioner, and helps with provisioning a new PV and binding to PVC. In case of OpenEBS, provisioning a PV involves launching OpenEBS Volume Containers (aka iSCSI Target Service) and creating a in-tree iSCSI PV. BlockDevices(BDs) and BlockDeviceClaims(BDCs) are Kubernetes custom resources, used to represent and identify the storage (a disk) attached to a Kubernetes Node. BlockDeviceClaims are used by the cStor Pool Operator to claim a BlockDevice before using it to create a Pool. Each BlockDevice will be represented by a cluster wide unique identifier. BlockDevice and BlockDeviceClaims are managed by StoragePoolClaim(SPC) is a custom resource, that can be used to claim creation of cStor Pools. Unlike, PVC/PV - SPC can result in more than one cStor Pool. Think of SPC as more of a Deployment Kind or Statefulset Kind with replica count. Familiar with cStor Data Engine. cStor Data Engine comprises of two components - cStor Target (aka iSCSI frontend) and cStor Pool. cStor Target receives the IO from the application and it interacts with one or more cStor Pools to serve the IO. A single cStor Volume comprises of a cStor target and the logical replicas that are provisioned on a set of cStor Pools. The replicas are called as cStor Volume Replicas. The cStor Pool and Replica functionality is containerized and is available in `openebs/cstor-pool`. The target functionality is available under `openebs/cstor-istgt`. Each of these expose a CLI that can be accessed via UXD. The dynamic provisioner will be implemented using the Kubernetes-incubator/external-provisioner, which is already used for Jiva volumes. At the time of this writing CSI was still under heavy development. In future, the external-provisioner can be replaced with CSI. Creation of cStor Pools and cStor Volume Replicas will follow the Kubernetes reconciliation pattern - where a custom resource is defined and the operator will eventually provision the objects. This approach was picked over - an API based approach - because: The cStor Pool and Target pods only support CLI that can be accessed via UXD. Write a higher level operator requires API to be exposed. The reconciliation has the advantage of removing the dependency on control plane once provisioned - for cases like node/pool not being ready to receive the requests. Recover from chaos generated within the cluster - without having to depend on a higher level operator. To make use of the cStor Volumes, users will have to select a StorageClass - that indicates the volumes should be created using a certain set of cStor Pools. As part of setting up the cluster, the administrator will have to create cStor Pools and a corresponding StorageClass. 
At a high level, the provisioning can be divided into two steps: Administrator - creating cStor Storage Pool and StorageClass (and) Users - request for cStor Volume via PVC and cStor" }, { "data": "As part of implementing the cStor, the following new CRDs are loaded: StoragePoolClaim CStorPool CStorVolume CStorVolumeReplica The following new operators/controllers are created to watch for the new CRs: cstor-pool-operator ( embedded in the maya-apiserver), will watch for SPC and helps with creating and deleting the cStor Pools. cstor-pool-mgmt ( deployed as a side-car to the cStor Pool Pod will help with watching the cStorPool - to create uZFS Pool using the provided block devices. Will also watch for the cStorVolumeReplica to create uZFS Volumes (aka zvol) to be associated with cStor Volume. cstor-volume-mgmt ( deployed as a side-car to the cStor Target Pod will help with reading the target parameters and setting them on cStor Target.) Also, this workflow assumes that BlockDevice and BlockDevice claims are available. Admin will create StoragePoolClaim(SPC) with `cas-type=cstor`. The SPC will contain information regarding the blockdevices to be used for creating cstor pools. A SPC is analogous to a Deployment that results in creating one or more pods. A sample SPC is as follows: ``` apiVersion: openebs.io/v1alpha1 kind: StoragePoolClaim metadata: name: cstor-disk-pool annotations: cas.openebs.io/config: | name: PoolResourceRequests value: |- memory: 2Gi name: PoolResourceLimits value: |- memory: 4Gi spec: name: cstor-disk-pool type: disk poolSpec: poolType: striped blockDevices: blockDeviceList: blockdevice-936911c5c9b0218ed59e64009cc83c8f blockdevice-77f834edba45b03318d9de5b79af0734 blockdevice-1c10eb1bb14c94f02a00373f2fa09b93 ``` Note: It is possible to let cstor-operator automatically select the block devices by replacing the `blockDevices` section with `maxPool: 3` as shown . The support for automatic selection is experimental and will undergo changes in the upcoming releases. To keep clear separation between the auto and manual mode of provisioning, the auto mode might be represented by a totally different CRD. maya-cstor-operator (embedded into maya-apiserver), will be watching for SPCs (type=cstor). When maya-cstor-operator detects a new SPC object, it will identify the list of nodes that satisfy the SPC constraints in terms of availability of disks/block devices. For each of the potential node where the cStor pool can be created, maya-cstor-operator will: create a CStorPool (CR), which will include the following information: unique id name actual disks paths to be used. redundancy type (stripe or mirror) ``` apiVersion: openebs.io/v1alpha1 kind: CStorPool metadata: labels: kubernetes.io/hostname: gke-kmova-helm-default-pool-20ff78e2-w4q3 openebs.io/storage-pool-claim: cstor-disk-pool name: cstor-disk-pool-2i3d ownerReferences: apiVersion: openebs.io/v1alpha1 blockOwnerDeletion: true controller: true kind: StoragePoolClaim name: cstor-disk-pool uid: 594b983b-b6c1-11e9-93aa-42010a800035 uid: 598ad94c-b6c1-11e9-93aa-42010a800035 spec: group: blockDevice: deviceID: /dev/disk/by-id/scsi-0GoogleEphemeralDisklocal-ssd-1 inUseByPool: true name: blockdevice-22f5154b7fe508f65a72fea09311d29e poolSpec: cacheFile: /tmp/pool1.cache overProvisioning: false poolType: striped status: phase: init ``` create a Deployment YAML file that contains the cStor container and its associated sidecars. The cStor sidecar is passed the unique id of the CStorPool (CR). 
The Deployment YAML will have the node selectors set to pin the containers to the node where the disks are attached. ``` apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: openebs.io/cstor-pool: cstor-disk-pool-2i3d openebs.io/storage-pool-claim: cstor-disk-pool name: cstor-disk-pool-2i3d namespace: openebs spec: replicas: 1 selector: matchLabels: app: cstor-pool strategy: type: Recreate template: metadata: labels: app: cstor-pool spec: nodeSelector: kubernetes.io/hostname: gke-kmova-helm-default-pool-20ff78e2-w4q3 containers: name: cstor-pool image: quay.io/openebs/cstor-pool:1.1.0 imagePullPolicy: IfNotPresent env: name: OPENEBSIOCSTOR_ID value: 598ad94c-b6c1-11e9-93aa-42010a800035 ports: containerPort: 12000 protocol: TCP containerPort: 3233 protocol: TCP containerPort: 3232 protocol: TCP resources: limits: memory: 4Gi requests: memory: 2Gi securityContext: privileged: true volumeMounts: mountPath: /dev name: device mountPath: /tmp name: tmp mountPath: /var/openebs/sparse name: sparse mountPath: /run/udev name: udev name: cstor-pool-mgmt image:" }, { "data": "imagePullPolicy: IfNotPresent env: name: OPENEBSIOCSTOR_ID value: 598ad94c-b6c1-11e9-93aa-42010a800035 resources: {} securityContext: privileged: true procMount: Default volumeMounts: mountPath: /dev name: device mountPath: /tmp name: tmp mountPath: /var/openebs/sparse name: sparse mountPath: /run/udev name: udev name: maya-exporter image: quay.io/openebs/m-exporter:1.1.0 imagePullPolicy: IfNotPresent command: maya-exporter args: -e=pool ports: containerPort: 9500 protocol: TCP resources: {} securityContext: privileged: true procMount: Default volumeMounts: mountPath: /dev name: device mountPath: /tmp name: tmp mountPath: /var/openebs/sparse name: sparse mountPath: /run/udev name: udev volumes: hostPath: path: /dev type: Directory name: device hostPath: path: /var/openebs/sparse/shared-cstor-disk-pool type: DirectoryOrCreate name: tmp hostPath: path: /var/openebs/sparse type: DirectoryOrCreate name: sparse hostPath: path: /run/udev type: Directory name: udev ``` Above is a snipped version to indicate the main aspects of the deployment. A key aspect is that resources{} will be filled based on the resource (cpu, mem) requests and limits given in the SPC spec. If nothing has been provided, Kubernetes will assign default values depending on the node resources. Please refer to the . Admin associates the SPC with a StorageClass ``` apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: openebs-cstor-disk annotations: openebs.io/cas-type: cstor cas.openebs.io/config: | name: StoragePoolClaim value: \"cstor-disk-pool\" name: ReplicaCount value: \"3\" provisioner: openebs.io/provisioner-iscsi ``` Admin will create a PVC that is associated to above CStor Pools (linked by the pool name). ``` apiVersion: v1 kind: PersistentVolumeClaim metadata: name: demo-vol spec: accessModes: ReadWriteOnce resources: requests: storage: 5G storageClassName: openebs-cstor-disk ``` openebs-provisioner receives the request for volume provisioning and passes on the request to maya-apsierver. maya-apiserver will create target service corresponding to the new PVC (to get the portal ip address) and a cStor Target Deployment that will contain cstor-target (iscsi target) container. This target deployment will be attached with cstor-volume-mgmt and metrics exporter side-cars. The configuration options for running the cstor-ctrl(iscsi target) will be passed via CStorVolume CR. 
The Kubernetes Service YAML for Target Service will have the following details: ``` apiVersion: v1 kind: Service metadata: name: pvc-42c47193-b6c8-11e9-93aa-42010a800035 spec: ports: name: cstor-iscsi port: 3260 protocol: TCP targetPort: 3260 << other ports for management and metrics >> selector: openebs.io/persistent-volume: pvc-42c47193-b6c8-11e9-93aa-42010a800035 openebs.io/target: cstor-target type: ClusterIP ``` The CStorVolume CR will contain the following details: ``` apiVersion: openebs.io/v1alpha1 kind: CStorVolume metadata: labels: openebs.io/persistent-volume: pvc-42c47193-b6c8-11e9-93aa-42010a800035 name: pvc-42c47193-b6c8-11e9-93aa-42010a800035 spec: capacity: 5G replicationFactor: 3 consistencyFactor: 2 targetIP: 10.47.243.9 targetPort: \"3260\" ``` The cstor-volume-mgmt will get the details from this CR and creates the required iSCSI Target configuration The cStorTarget Deployment YAML will have the following details: ``` apiVersion: extensions/v1beta1 kind: Deployment metadata: name: pvc-42c47193-b6c8-11e9-93aa-42010a800035-target spec: replicas: 1 selector: matchLabels: app: cstor-volume-manager openebs.io/persistent-volume: pvc-42c47193-b6c8-11e9-93aa-42010a800035 openebs.io/target: cstor-target strategy: type: Recreate template: metadata: labels: app: cstor-volume-manager openebs.io/persistent-volume: pvc-42c47193-b6c8-11e9-93aa-42010a800035 openebs.io/target: cstor-target spec: containers: name: cstor-istgt image: quay.io/openebs/cstor-istgt:1.1.0 imagePullPolicy: IfNotPresent ports: containerPort: 3260 protocol: TCP resources: {} securityContext: privileged: true procMount: Default volumeMounts: mountPath: /var/run name: sockfile mountPath: /usr/local/etc/istgt name: conf mountPath: /tmp mountPropagation: Bidirectional name: tmp name: maya-volume-exporter image: quay.io/openebs/m-exporter:1.1.0 imagePullPolicy: IfNotPresent args: -e=cstor command: maya-exporter ports: containerPort: 9500 protocol: TCP resources: {} volumeMounts: mountPath: /var/run name: sockfile mountPath: /usr/local/etc/istgt name: conf name: cstor-volume-mgmt image: quay.io/openebs/cstor-volume-mgmt:1.1.0 imagePullPolicy: IfNotPresent env: name: OPENEBSIOCSTORVOLUMEID value: 42e222c8-b6c8-11e9-93aa-42010a800035 ports: containerPort: 80 protocol: TCP resources: {} securityContext: privileged: true procMount: Default volumeMounts: mountPath: /var/run name: sockfile mountPath: /usr/local/etc/istgt name: conf mountPath: /tmp mountPropagation: Bidirectional name: tmp volumes: emptyDir: {} name: sockfile emptyDir: {} name: conf hostPath: path: /var/openebs/sparse/shared-pvc-42c47193-b6c8-11e9-93aa-42010a800035-target type: DirectoryOrCreate name: tmp ``` The resources{} will be filled based on the resource (cpu, mem) requests and limits given in the Storage Policies associated with PVC. If nothing has been provided, Kubernetes will assign default values depending on the node resources. Please refer to the" }, { "data": "maya-apiserver will then create the CStorVolumeReplicas as follows: Query for the cStor Storage Pools patching the SPC provided in the StorageClass. Pickup a subset of pools based on the replica-count of the PVC. For each replica, maya-apiserver will create - CStorVolumeReplica CR. 
This CStorVolumeReplica CR will contain: CStorPool on which the Replica needs to be provisioned Unique Name (same in all replicas for a given PVC) Required capacity (obtained from PVC or StorageClass or default value) cStor Target Service IP The YAML for the CStorVolumeReplica is as follows: ``` apiVersion: openebs.io/v1alpha1 kind: CStorVolumeReplica metadata: labels: cstorpool.openebs.io/name: cstor-disk-pool-2i3d cstorpool.openebs.io/uid: 598ad94c-b6c1-11e9-93aa-42010a800035 cstorvolume.openebs.io/name: pvc-42c47193-b6c8-11e9-93aa-42010a800035 openebs.io/persistent-volume: pvc-42c47193-b6c8-11e9-93aa-42010a800035 name: pvc-42c47193-b6c8-11e9-93aa-42010a800035-cstor-manual-pool-2i3d spec: capacity: 5G targetIP: 10.47.243.9 status: phase: Init ``` The cstor-sidecar (cstor-pool-mgmt) running in the CStorPool(cstor-disk-pool-2i3d), will watch on the CStorVolumeReplica CR for creating the replica and associating itself with the cStor Target. The cstor-sidecar will only be allowed to update the CStorVolumeReplica CR, it SHOULD NOT create/delete CStorVolumeReplica CR. The previous two sections have laid out the workflow for a successful pool and volume creation. As part of the workflow, several cases need to be considered like: Node hosting the CStorPool is down or not reachable. Node hosting the cStorContainer is running out of resources and cStorContainer is evicted Node hosting the CStorPool is restarted OpenEBS Volume with replica count = 1 and CStorPool is restarted. OpenEBS Volume with replica count = 3 and cases where 1 of the 3 replica nodes, 2 of 3 replica nodes and 3 or 3 replica nodes are down All nodes are down and are restarted one by one OpenEBS Volume (PVC) is accidentally deleted and user wants to get the data stored in the volume back. OpenEBS Volume data needs to be backed up or restored from a backup. CStorPool has a capacity of 100G and volumes are created adding up to more than 100G One of the disks of the CStorPool is showing high latency cStorPool has exclusive access to the disks. Can there be some kind of lock mechanisms implemented? Install/Setup the CRDs used in this design Container images for - cstor-pool, cstor-istgt, cstor-pool-mgmt, cstor-volume-mgmt cstor-pool-mgmt sidecar interfaces between observing the CStorPool CR objects and issues - pool create and delete cstor-volume-mgmt sidecar interfaces between observing the CStorVolume CR objects and generates the configuration required for cstor-ctrl Usage of NDM to manage access to the underlying block devices. Issue claims on devices before using them. Enhance the maya-exporter to export cstor volume and pool metrics Enhance openebs-provisioner and maya-apiserver to implement the workflow specified above for creation of pools and volumes. Enhance mayactl to display the details of cstor pools and volumes. Support for upgrading cstor pools and volumes with newer versions. Support for Snapshot and Clones Support for Backup and Restore Expanding the pool to add more capacity Replacing failed disks using a spare disk from the pool Editing either the Pool or Volume related parameters Marking a Pool as unavailable or failed due to slow disk or to bring it down for maintenance of the underlying disks Reassign the pool from one node to another by shifting the attached disks to a new node User should be able to specify required values for the features available on the CStorPool and CStorVolumeReplica like compression, block size, de-duplication, etc. Scale up/down the number of replicas associated with a cStor Volume. 
Performance Testing and Tunables" } ]
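To sanity-check the workflow above on a live cluster, the objects it creates can be listed roughly as follows; the short resource names (`spc`, `csp`, `cvr`) may vary by release, and the pod label selectors come from the sample Deployment yamls above:

```
kubectl get spc,csp
kubectl get cstorvolume,cvr -n openebs
kubectl get pods -n openebs -l app=cstor-pool
kubectl get pods -n openebs -l app=cstor-volume-manager
```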
{ "category": "Runtime", "file_name": "proposal-1.0-cstor-pool-volume-provisioning.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "CubeFS has developed a CSI plugin based on the interface specification to support cloud storage in Kubernetes clusters. Currently, there are two different branches of the CSI plugin: Master branch: compatible with CSI protocol versions after 1.0.0 Csi-spec-v0.3.0 branch: compatible with CSI protocol versions before 0.3.0 The reason for these two branches is that the official k8s is not compatible with version 1.0.0 and previous versions of 0.\\* when evolving the CSI protocol. According to the compatibility of the CSI protocol, if the user's k8s version is before v1.13, please use the csi-spec-v0.3.0 branch code, and k8s v1.13 and later versions can directly use the master branch. ::: warning Note The code of the csi-spec-v0.3.0 branch is basically frozen and not updated. New CSI features will be developed on the master branch in the future. ::: | CSI Protocol | Kubernetes Compatible Version | |--|--| | v0.3.0 | Versions before v1.13, such as v1.12 | | v1.0.0 | Versions v1.13 and later, such as v1.15 | Before deploying CSI, please set up the CubeFS cluster. Please refer to the section for documentation. ::: tip Note The deployment steps in this article are based on the master branch. ::: Before deploying CSI, you need to label the cluster nodes. The CSI container will be stationed based on the k8s node label. For example, if the k8s cluster node has the following labels, the CSI container will be automatically deployed on this machine. If there is no such label, the CSI container will not be stationed. Users execute the following command on the corresponding k8s node according to their own situation. ``` bash $ kubectl label node <nodename> component.cubefs.io/csi=enabled ``` There are two ways to deploy CSI, one is to use helm for automated deployment, and the other is to manually deploy according to the steps in order. Using helm can reduce some manual operations, and users can choose the deployment method according to their own situation. The required deployment files are basically in the deploy directory of the csi repository. ``` bash $ git clone https://github.com/cubefs/cubefs-csi.git $ cd cubefs-csi ``` The content of csi-rbac.yaml does not need to be modified. It mainly declares the role of the CSI component and the corresponding permissions, so execute the following command directly: ``` bash $ kubectl apply -f deploy/csi-rbac.yaml ``` The content of storageclass.yaml is as follows (you need to modify the masterAddr): ``` yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: cfs-sc provisioner: csi.cubefs.com allowVolumeExpansion: true reclaimPolicy: Delete parameters: masterAddr: \"master-service.cubefs.svc.cluster.local:17010\" owner: \"csiuser\" ``` `masterAddr`: The address of the master component in the CubeFS cluster. Here, you need to modify it to the actual deployed master IP address, in the format of `ip:port`. If there are multiple addresses, separate them with commas in the format of `ip1:port1,ip2:port2` (domain names can also be used). After modification, execute the following command: ``` bash $ kubectl apply -f deploy/storageclass.yaml ``` The CSI controller component will only deploy one pod in the entire cluster. After the deployment is completed, you can use the command `kubectl get pod -n cubefs` to find that only one controller is in the RUNING state. 
``` bash $ kubectl apply -f deploy/csi-controller-deployment.yaml ``` The CSI node will deploy multiple pods, and the number of pods is the same as the number of k8s nodes" }, { "data": "``` bash $ kubectl apply -f deploy/csi-node-daemonset.yaml ``` To use helm for deployment, you need to install helm first. . ``` bash $ git clone https://github.com/cubefs/cubefs-helm.git $ cd cubefs-helm ``` Write a helm deployment yaml file related to csi in cubefs-helm: cubefs-csi-helm.yaml ``` bash $ touch cubefs-csi-helm.yaml ``` The content of cubefs-csi-helm.yaml is as follows: ``` yaml component: master: false datanode: false metanode: false objectnode: false client: false csi: true monitor: false ingress: false image: csi_driver: ghcr.io/cubefs/cfs-csi-driver:3.2.0.150.0 csi_provisioner: registry.k8s.io/sig-storage/csi-provisioner:v2.2.2 csi_attacher: registry.k8s.io/sig-storage/csi-attacher:v3.4.0 csi_resizer: registry.k8s.io/sig-storage/csi-resizer:v1.3.0 driver_registrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0 csi: driverName: csi.cubefs.com logLevel: error kubeletPath: /var/lib/kubelet controller: tolerations: [ ] nodeSelector: \"component.cubefs.io/csi\": \"enabled\" node: tolerations: [ ] nodeSelector: \"component.cubefs.io/csi\": \"enabled\" resources: enabled: false requests: memory: \"4048Mi\" cpu: \"2000m\" limits: memory: \"4048Mi\" cpu: \"2000m\" storageClass: setToDefault: true reclaimPolicy: \"Delete\" masterAddr: \"\" otherParameters: ``` `masterAddr` is the address of the corresponding CubeFS cluster. If there are multiple addresses, separate them with commas in English in the format of `ip1:port1,ip2:port2`. The `csi.kubeletPath` parameter is the default kubelet path, which can be modified as needed. The resource limit in `csi.resources.requests/limits` is set to 2 CPUs and 4G memory by default, which can also be modified as needed. `image.csi_driver` is the csi component image. If it is a user's own compiled image, it can be modified as needed. The other csi images should be kept as close to the default as possible, unless the user understands the running principle of CSI well. Next, execute the helm installation command to install CSI in the cubefs-helm directory: ``` bash $ helm upgrade --install cubefs ./cubefs -f ./cubefs-csi-helm.yaml -n cubefs --create-namespace ``` After the above CSI deployment operations are completed, you need to check the deployment status of CSI. The command is as follows: ``` bash $ kubectl get pod -n cubefs ``` Under normal circumstances, you can see that there is only one `controller` component container, and the number of `node` component containers is the same as the number of k8s nodes labeled, and their status is `RUNNING`. If you find that the status of a pod is not `RUNNING`, first check the error situation of this problematic `pod`, the command is as follows: ``` bash $ kubectl describe pod -o wide -n cubefs pod_name ``` Troubleshoot based on the above error information. If the error information is limited, you can proceed to the next step: go to the host where the `pod` is located and execute the following command. ``` bash $ docker ps -a | grep pod_name ``` The above command is used to filter the container information related to this `pod`. Then find the problematic container and view its log output: ``` bash $ docker logs problematicdockercontainer_id ``` Troubleshoot based on the container's log output. 
If the log shows words like \"no permission\", it is likely that the `rbac` permissions have not been created properly; just apply the `CSI RBAC` manifests again. After the CSI components are deployed, you can create a `PVC` for verification. The content of pvc.yaml is as follows: ``` yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cubefs-pvc namespace: default spec: accessModes: ReadWriteOnce resources: requests: storage: 5Gi storageClassName: cfs-sc ``` `storageClassName` needs to be consistent with the `name` attribute in the `metadata` of the StorageClass created just now. This will create a storage volume based on the parameters defined in the StorageClass." }, { "data": "When writing a pvc yaml, pay attention to the following parameters: `metadata.name`: the name of the pvc, which can be modified as needed. The pvc name must be unique within a namespace; duplicate names are not allowed. `metadata.namespace`: the namespace where the pvc is located, which can be modified as needed. `spec.resources.requests.storage`: the requested capacity of the pvc. `storageClassName`: the name of the storage class. If you want to know which `storageclass` objects the current cluster has, you can use the command `kubectl get sc` to view them. With the pvc yaml, you can create the PVC with the following command: ``` bash $ kubectl create -f pvc.yaml ``` After the command is executed, you can use the command `kubectl get pvc -n namespace` to check the status of the corresponding pvc. Pending means it is still waiting, and Bound means it has been created successfully. If the status of the PVC stays Pending, you can use the following command to check the reason: ``` bash $ kubectl describe pvc -n namespace pvc_name ``` If the error message is not obvious or there is no error, you can use the `kubectl logs` command to first check the error messages of the `csi-provisioner` container in the csi controller pod. `csi-provisioner` is the intermediate bridge between k8s and the csi driver, and much information can be found in its logs. If the logs of `csi-provisioner` still do not reveal the specific problem, use the `kubectl exec` command to view the logs of the `cfs-driver` container in the csi controller pod; its logs are located under `/cfs/logs` inside the container. The reason the `kubectl logs` command cannot be used here is that `cfs-driver` does not print its logs to standard output, whereas sidecar containers such as `csi-provisioner` do, so their logs can be viewed with `kubectl logs`. With the PVC, you can mount it to the specified directory in an application. A sample yaml is as follows: ``` yaml apiVersion: apps/v1 kind: Deployment metadata: name: cfs-csi-demo namespace: default spec: replicas: 1 selector: matchLabels: app: cfs-csi-demo-pod template: metadata: labels: app: cfs-csi-demo-pod spec: nodeSelector: component.cubefs.io/csi: enabled containers: name: cfs-csi-demo image: nginx:1.17.9 imagePullPolicy: \"IfNotPresent\" ports: containerPort: 80 name: \"http-server\" volumeMounts: mountPath: \"/usr/share/nginx/html\" mountPropagation: HostToContainer name: mypvc volumes: name: mypvc persistentVolumeClaim: claimName: cubefs-pvc ``` The above yaml mounts a PVC named cubefs-pvc at /usr/share/nginx/html in the cfs-csi-demo container. For complete PVC and cfs-csi-demo usage examples, please refer to the .
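Because the manifests above are rendered with their line breaks collapsed, here is a re-indented sketch of the same PVC and Deployment for readability. The field values mirror the listing above; treat the indentation as a reconstruction rather than the canonical files from the repository (note that `accessModes` is a list in the Kubernetes API, so the entry gets a leading dash here):

``` yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cubefs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: cfs-sc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cfs-csi-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cfs-csi-demo-pod
  template:
    metadata:
      labels:
        app: cfs-csi-demo-pod
    spec:
      nodeSelector:
        component.cubefs.io/csi: enabled
      containers:
        - name: cfs-csi-demo
          image: nginx:1.17.9
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              mountPropagation: HostToContainer
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: cubefs-pvc
```

It can be applied with `kubectl create -f <file>` in the same way as the pvc.yaml shown earlier.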
Download the cubefs-csi source code and execute the following command to build the CSI driver image: ``` bash $ make image ``` If the PVC has been mounted for use by a container, you can check its contents from inside the container at the mount path; alternatively, mount the CubeFS volume that backs the PVC to a directory on the host using the cubefs client and inspect it there. Please refer to the for how to use the client. kube-apiserver startup parameters: ``` bash --feature-gates=CSIPersistentVolume=true,MountPropagation=true --runtime-config=api/all ``` kube-controller-manager startup parameters: ``` bash --feature-gates=CSIPersistentVolume=true ``` kubelet startup parameters: ``` bash --feature-gates=CSIPersistentVolume=true,MountPropagation=true,KubeletPluginsWatcher=true --enable-controller-attach-detach=true ``` The PVC is bound to a PV, and the name of the bound PV is the name of the underlying cubefs volume, usually starting with `pvc-`. The PV corresponding to the PVC can be viewed directly with the command `kubectl get pvc -n namespace`. Please refer to the ." } ]
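To make the PVC-to-volume mapping described above concrete, the underlying CubeFS volume name can be read straight from the bound PV name. The PVC name and namespace below are the ones from the earlier example:

``` bash
# The VOLUME column shows the bound PV name, which is also the underlying CubeFS volume name
$ kubectl get pvc cubefs-pvc -n default

# Or print just the PV / CubeFS volume name
$ kubectl get pvc cubefs-pvc -n default -o jsonpath='{.spec.volumeName}'
```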
{ "category": "Runtime", "file_name": "k8s.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Welcome to kuasar! The table below lists the core parts of the project: | Component | Description | |--|--| | | Source code of the vmm sandbox, including vmm-sandboxer, vmm-task and some building scripts. | | | Quark sandboxer that can run a container by quark. | | | Wasm sandboxer that can run a container by a WebAssembly runtime. | | | An optional workaround of inactive with containerd | | | Documentation of all sandboxer architectures. | | | Benchmark tests and e2e tests directory. | | | Examples of how to run a container via Kuasar sandboxers. | Please make sure to read and observe our . Kuasar is a community project driven by its community, which strives to promote a healthy, friendly and productive environment. The goal of the community is to develop a multi-sandbox ecosystem that meets the requirements of all cloud-native scenarios. Building a platform at such scale requires the support of a community with similar aspirations. See for a list of various community roles. With gradual contributions, one can move up in the chain. Fork the repository on GitHub. Read the . We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and getting your work reviewed and merged. If you have questions about the development process, feel free to jump into our or join our . We are always in need of help, be it fixing documentation, reporting bugs or writing some code. Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing. Here is how you get started. There are within the Kuasar organization. Each repository has beginner-friendly issues that provide a good first issue. For example, has . Another good way to contribute is to find a documentation improvement, such as a missing/broken link. Please see below for the workflow. When you are willing to take on an issue, you can assign it to yourself. Just reply with `/assign` or `/assign @yourself` on an issue, and the robot will assign the issue to you; your name will then appear in the `Assignees` list. While we encourage everyone to contribute code, it is also appreciated when someone reports an issue. Issues should be filed under the appropriate Kuasar sub-repository. Example: a Kuasar issue should be opened to . Please follow the prompted submission guidelines while opening an issue. Please do not ever hesitate to ask a question or send a pull request. This is a rough outline of what a contributor's workflow looks like: Create a topic branch from where to base the contribution. This is usually master. Make commits of logical units." }, { "data": "Make sure commit messages are in the proper format (see below). Push changes in a topic branch to a personal fork of the repository. Submit a pull request to . The PR must receive an approval from two maintainers. Pull requests are often called simply \"PR\". Kuasar generally follows the standard process. To submit a proposed change, please develop the code/fix and add new test cases. To make it easier for your PR to receive reviews, consider that the reviewers will need you to: follow . write . break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue. label PRs with appropriate reviewers: to do this, read the messages the bot sends you to guide you through the PR process. We follow a rough convention for commit messages that is designed to answer two questions: what changed and why.
The subject line should feature the what and the body of the commit should describe the why. ``` scripts: add test codes for metamanager this add some unit test codes to improve code coverage for metamanager Fixes #12 ``` The format can be described more formally as follows: ``` <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.
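As an illustration, a commit message following this convention could be produced like so; the subsystem name, description, and issue number below are made up for the example, and each `-m` flag becomes a separate paragraph in the resulting message:

``` bash
$ git commit \
    -m "vmm: add unit tests for metamanager" \
    -m "Improve code coverage for the metamanager module." \
    -m "Fixes #12"
```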
Note: if your pull request isn't getting enough attention, you can reach out on Slack to get help finding reviewers. To be added." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Kuasar", "subcategory": "Container Runtime" }