tag (dict) | content (list, lengths 1-171)
---|---
{
"category": "Runtime",
"file_name": "log-attach-design.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} | [
{
"data": "rkt can run multiple applications in a pod, under a supervising process and alongside a sidecar service which takes care of multiplexing its I/O toward the outside world. Historically this has been done via systemd-journald only, meaning that all logging was handled via journald and interactive applications had to re-use a parent TTY. Starting from systemd v232, it is possible to connect a service's streams to arbitrary socket units and let a custom sidecar multiplex all the I/O. This document describes the architectural design for the current logging and attaching subsystem, which allows custom logging and attaching logic. In order to be able to attach or apply custom logging logic to applications, an appropriate runtime mode must be specified when adding/preparing an application inside a pod. This is done via stage0 CLI arguments (`--stdin`, `--stdout`, and `--stderr`) which translate into per-application stage2 annotations. Interactive mode results in the application having the corresponding stream attached to the parent terminal. For historical reasons and backward compatibility, this is a special mode activated via `--interactive` and only supports single-app pods. Interactive mode does not support attaching and ties the runtime to the lifetime of the parent terminal. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"interactive\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"interactive\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"interactive\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=tty StandardOutput=tty StandardError=tty ... ``` No further sidecar dependencies are introduced in this case. TTY mode results in the application having the corresponding stream attached to a dedicated pseudo-terminal. This is different from the \"interactive\" mode because: it allocates a new pseudo-terminal accounted towards pod resources; it supports external attaching/detaching; it supports multiple applications running inside a single pod; it does not tie the pod lifetime to that of the parent terminal. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"tty\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"tty\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"tty\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] TTYPath=/rkt/iomux/<appname>/stage2-pts StandardInput=tty StandardOutput=tty StandardError=tty ... ``` A sidecar dependency on `ttymux@.service` is introduced in this case. The application has a `Wants=` and `After=` relationship to it. Streaming mode results in the application having each of the corresponding streams separately handled by a muxing"
},
{
"data": "This is different from the \"interactive\" and \"tty\" modes because: it does not allocate any terminal for the application; single streams can be handled separately; it supports multiple applications running inside a single pod. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"stream\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"stream\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"stream\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=fd Sockets=<appname>-stdin.socket StandardOutput=fd Sockets=<appname>-stdout.socket StandardError=fd Sockets=<appname>-stderr.socket ... ``` A sidecar dependency on `iomux@.service` is introduced in this case. The application has a `Wants=` and `Before=` relationship to it. Additional per-stream socket units are generated, as follows: ``` [Unit] Description=<stream> socket for <appname> DefaultDependencies=no StopWhenUnneeded=yes RefuseManualStart=yes RefuseManualStop=yes BindsTo=<appname>.service [Socket] RemoveOnStop=yes Service=<appname>.service FileDescriptorName=<stream> ListenFIFO=/rkt/iottymux/<appname>/stage2-<stream> ``` Logging mode results in the application having the corresponding stream attached to systemd-journald. This is the default mode for stdout/stderr, for historical reasons and backward compatibility. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"log\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"log\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardOutput=journal StandardError=journal ... ``` A sidecar dependency on `systemd-journald.service` is introduced in this case. The application has a `Wants=` and `After=` relationship to it. Logging is not a valid mode for stdin. Null mode results in the application having the corresponding stream closed. This is the default mode for stdin, for historical reasons and backward compatibility. Internally, this translates to an annotation at the app level: ``` { \"name\": \"coreos.com/rkt/stage2/stdin\", \"value\": \"null\" }, { \"name\": \"coreos.com/rkt/stage2/stdout\", \"value\": \"null\" }, { \"name\": \"coreos.com/rkt/stage2/stderr\", \"value\": \"null\" } ``` In this case, the corresponding service unit file gains the following properties: ``` [Service] StandardInput=null StandardOutput=null StandardError=null [...] ``` No further sidecar dependencies are introduced in this case. The following per-app annotations are defined for internal use, with the corresponding set of allowed values: `coreos.com/rkt/stage2/stdin`: `interactive`, `null`, `stream`, `tty`; `coreos.com/rkt/stage2/stdout`: `interactive`, `log`, `null`, `stream`, `tty`; `coreos.com/rkt/stage2/stderr`: `interactive`, `log`, `null`, `stream`, `tty`. All the logging and attaching logic is handled by the stage1 `iottymux` binary. Each main application may additionally have a dedicated sidecar for I/O multiplexing, which proxies I/O to external clients over sockets. Sidecar state is persisted at `/rkt/iottymux/<appname>` while the main application is running. `rkt attach` can auto-discover endpoints by reading the content of the status file located at"
},
{
"data": "This file provides a versioned JSON document, whose content varies depending on the I/O for the specific application. For example, an application with all streams available for attaching will have a status file similar to the following: ``` { \"version\": 1, \"targets\": [ { \"name\": \"stdin\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stdin\" }, { \"name\": \"stdout\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stdout\" }, { \"name\": \"stderr\", \"domain\": \"unix\", \"address\": \"/rkt/iottymux/alpine-sh/sock-stderr\" } ] } ``` Its `--mode=list` option just reads the file and prints it back to the user. `rkt attach --mode=auto` performs the auto-discovery mechanism described above, and then proceeds to attach stdin/stdout/stderr of the current process (itself) to all available corresponding endpoints. This is the default attaching mode. `rkt attach --mode=<stream>` performs the auto-discovery mechanism described above, and then proceeds to attach to the corresponding available endpoint. This is the default output multiplexer for stdout/stderr in logging mode, for historical reasons and backward compatibility. Restrictions: requires journalctl (or a similar libsystemd-based helper) to decode output entries; requires a libsystemd on the host compiled with LZ4 support; systemd-journald does not support distinguishing between entries from stdout and stderr. TODO(lucab): k8s logmode This is the standard systemd-journald service. It is the default output handler for the \"logging\" mode. iottymux is a multi-purpose stage1 binary. It currently serves the following purposes: Multiplex I/O over TTY (in TTY mode) Multiplex I/O from streams (in streaming mode) Attach to existing attachable applications (in TTY or streaming mode) This component takes care of multiplexing dedicated streams and receiving clients for attaching. It is started as an instance of the templated `iomux@.service` service by a `Before=` dependency from the application. Internally, it attaches to available FIFOs and proxies them to separate sockets for external clients. It is implemented as a sub-action of the main `iottymux` binary and runs completely in stage1 context. This component takes care of multiplexing the TTY and receiving clients for attaching. It is started as an instance of the templated `ttymux@.service` service by an `After=` dependency from the application. Internally, it creates a pseudo-tty pair (whose slave is used by the main application) and proxies the master to a socket for external clients. It is implemented as a sub-action of the main `iottymux` binary and runs completely in stage1 context. This component takes care of discovering endpoints and attaching to them, both for TTY and streaming modes. It is invoked by the \"stage1\" attach entrypoint and runs completely in stage1 context. It is implemented as a sub-action of the main `iottymux` binary."
}
] |
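The status document shown in this record is enough to drive a simple external attach client. The following Go sketch, a rough illustration rather than rkt's actual implementation, parses that versioned JSON and connects to the advertised stdout unix socket; the status file path used here is an assumption for the example.

```go
// Hypothetical sketch of the endpoint auto-discovery that `rkt attach`
// performs: parse the versioned status JSON and connect to the stdout socket.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net"
	"os"
)

// Target mirrors one entry of the documented status document.
type Target struct {
	Name    string `json:"name"`    // "stdin", "stdout" or "stderr"
	Domain  string `json:"domain"`  // e.g. "unix"
	Address string `json:"address"` // e.g. "/rkt/iottymux/alpine-sh/sock-stdout"
}

// Status mirrors the versioned JSON document written by iottymux.
type Status struct {
	Version int      `json:"version"`
	Targets []Target `json:"targets"`
}

func main() {
	// The status file path below is an assumption for illustration only.
	raw, err := os.ReadFile("/rkt/iottymux/alpine-sh/endpoints")
	if err != nil {
		panic(err)
	}
	var st Status
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	for _, t := range st.Targets {
		if t.Name != "stdout" || t.Domain != "unix" {
			continue
		}
		conn, err := net.Dial("unix", t.Address)
		if err != nil {
			panic(err)
		}
		fmt.Fprintf(os.Stderr, "attached to %s\n", t.Address)
		io.Copy(os.Stdout, conn) // stream the app's stdout to our stdout
		conn.Close()
	}
}
```

A real client would also forward its own stdin to the `stdin` target and handle terminal resizing in TTY mode.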
{
"category": "Runtime",
"file_name": "scaleworkload.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "The `ScaleWorkload` function can be used to scale a workload to the specified replicas. It automatically sets the original replica count of the workload as an output artifact, which makes using the `ScaleWorkload` function in blueprints a lot easier. Below is an example of how this function can be used ``` yaml apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: my-blueprint actions: backup: outputArtifacts: backupOutput: keyValue: origReplicas: \"{{ .Phases.shutdownPod.Output.originalReplicaCount }}\" phases: func: ScaleWorkload name: shutdownPod args: namespace: \"{{ .StatefulSet.Namespace }}\" name: \"{{ .StatefulSet.Name }}\" kind: StatefulSet replicas: 0 # this is the replica count the STS will be scaled to restore: inputArtifactNames: backupOutput phases: func: ScaleWorkload name: bringUpPod args: namespace: \"{{ .StatefulSet.Namespace }}\" name: \"{{ .StatefulSet.Name }}\" kind: StatefulSet replicas: \"{{ .ArtifactsIn.backupOutput.KeyValue.origReplicas }}\" ```"
}
] |
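For intuition, here is a minimal client-go sketch of what the `ScaleWorkload` phase does conceptually for a StatefulSet: record the current replica count (the value surfaced as `originalReplicaCount`) and then update the spec to the requested replicas. The function and variable names are illustrative assumptions, not Kanister's implementation.

```go
// A minimal client-go sketch of scaling a StatefulSet while capturing the
// original replica count, mirroring what ScaleWorkload exposes to blueprints.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) (int32, error) {
	sts, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	orig := int32(1)
	if sts.Spec.Replicas != nil {
		orig = *sts.Spec.Replicas // original count, surfaced as originalReplicaCount
	}
	sts.Spec.Replicas = &replicas
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, sts, metav1.UpdateOptions{}); err != nil {
		return 0, err
	}
	return orig, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	orig, err := scaleStatefulSet(context.Background(), cs, "default", "my-sts", 0)
	if err != nil {
		panic(err)
	}
	fmt.Println("originalReplicaCount:", orig)
}
```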
{
"category": "Runtime",
"file_name": "nebd_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "CURVE client serves as the entrance of the services provided by CURVE, providing a dynamic link library for QEMU/CURVE-NBD. As a result, restarting QEMU/CURVE-NBD is necessary when CURVE Client needs to be updated. In order to relieve the impact of updating on applications based on CURVE, we decoupled CURVE client and its applications, and introduced the hot upgrade module NEBD in between. <p align=\"center\"> <img src=\"../images/nebd-overview.jpg\" alt=\"nebd-overview\" width=\"650\" /><br> <font size=3> Figure 1: NEBD structure</font> </p> Figure 1 shows the deployment structure of NEBD. NEBD Client (part1 in source code directory): NEBD Client corresponds to applications based on CURVE, including QEMU and CURVE-NBD. An NEBD client connects to the specified NEBD server through a Unix Domain Socket. NEBD Server (part2 in source code directory): NEBD Server is responsible for receiving the requests from part1, then calling CURVE client for corresponding operations. An NEBD server can receive requests from different NEBD clients. Also, figure 1 shows that instead of CURVE client, NEBD client is now the component that serves the application above. In this case, the applications will still be influenced when NEBD client is being upgraded. So in our design, we simplified the business logic of NEBD client as much as possible, which means it will only be responsible for request forwarding and limited retries for requests if needed. There are a few steps for NEBD server/CURVE client's upgrade: install the latest version of CURVE client/NEBD server; stop the running processes of part2; restart the processes of part2. In our practice, we use a daemon to monitor the processes of part2, and start them if they do not exist. Also, notice that from the stop of the part2 process to the start of the new one, only 1 to 5 seconds are required in our test and production environments. <p align=\"center\"> <img src=\"../images/nebd-modules.png\" alt=\"nebd-modules\" width=\"500\" /><br> <font size=3> Figure 2: Structure of each module</font> </p> Figure 2 shows the components of NEBD client and NEBD server. libnebd: API interface for upper level applications, including open/close and read/write. File Client: The implementation of the libnebd interface, which sends users' requests to the NEBD server. MetaCache Manager: Records information of files already opened"
},
{
"data": "Heartbeat Client: Sends regular heartbeats carrying opened file info to the NEBD server. File Service: Receives and handles file requests from the NEBD client. Heartbeat Service: Receives and processes heartbeats from the NEBD client. File Manager: Manages opened files on the NEBD server. IO Executor: Responsible for the actual execution of file requests; it calls the interface of CURVE client and sends requests to the storage clusters. Metafile Manager: Manages metadata files; it is also responsible for metadata persistence and for loading persistent data from files. Retry policy of part1: As mentioned above, part1 only executes limited retries, and this characteristic is reflected in two aspects: There is no timeout for RPC requests from part1. Part1 only executes retries for errors of the RPC requests themselves, and forwards error codes returned by RPC to the upper level directly. Take the write request as an example; figure 3 is the flow chart of the request: Forward the write request from the upper level to part2 through an RPC request, and wait without setting a timeout. If the RPC request returns successfully, return to the upper level according to the RPC response. If a disconnection occurs or it is unable to connect, wait for a while and retry. <p align=\"center\"> <img src=\"../images/nebd-part1-write-request-en.png\" alt=\"images\\nebd-part1-write-request-en\" width=\"400\" /><br> <font size=3> Figure 3: Flow chart of write request sent by NEBD client</font> </p> Other requests follow a similar procedure as the write request does. Heartbeat management: In order to avoid an upper level application exiting without closing the files it opened, part2 checks the heartbeat status of files (opened file info reported by part1 through regular heartbeats), and closes the files whose last heartbeat time has exceeded the threshold. The difference between closing files with timed-out heartbeats and files that upper level applications request to close is that the time-out closing will not remove file info from the metafile. But there is a case in which an upper level application is suspended somehow and recovers later; this will also cause a heartbeat timeout and the corresponding files are therefore closed. Thus, when part2 receives requests from part1, it first checks whether the metafile owns the records for the current files. If it does and the corresponding files are in closed status, part2 will first open the corresponding files and then execute the following requests."
}
] |
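The retry policy described in this record (no RPC timeout, retry only transport-level failures, forward part2's error codes unchanged) can be summarized in a short Go sketch; the `rpcWrite` type and `ErrTransport` sentinel below are illustrative assumptions, not Curve's actual API.

```go
// A schematic sketch of NEBD part1's "limited retry" policy: the RPC itself
// has no timeout, transport failures are retried after a short wait, and
// error codes returned by part2 are passed straight back to the caller.
package nebdclient

import (
	"errors"
	"time"
)

// ErrTransport stands for "disconnected / unable to connect" failures.
var ErrTransport = errors.New("transport failure")

// rpcWrite forwards one write request to the NEBD server over the unix
// domain socket and blocks until a response arrives (no timeout).
type rpcWrite func(offset uint64, data []byte) (statusCode int, err error)

// ForwardWrite retries only transport-level errors, with a fixed backoff,
// and returns part2's status code to the upper layer unmodified.
func ForwardWrite(call rpcWrite, offset uint64, data []byte, backoff time.Duration) (int, error) {
	for {
		code, err := call(offset, data)
		if err == nil {
			return code, nil // non-zero codes are forwarded, not retried
		}
		if !errors.Is(err, ErrTransport) {
			return 0, err
		}
		time.Sleep(backoff) // wait a while, then retry the RPC
	}
}
```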
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} | [
{
"data": "All notable changes to this project will be documented in this file. This project adheres to . Enhancements: : Add `WithLazy` method for `SugaredLogger`. : zaptest: Add `NewTestingWriter` for customizing TestingWriter with more flexibility than `NewLogger`. : Add `Log`, `Logw`, `Logln` methods for `SugaredLogger`. : Add `WithPanicHook` option for testing panic logs. Thanks to @defval, @dimmo, @arxeiss, and @MKrupauskas for their contributions to this release. Enhancements: : Add Dict as a Field. : Add `WithLazy` method to `Logger` which lazily evaluates the structured context. : String encoding is much (~50%) faster now. Thanks to @hhk7734, @jquirke, and @cdvr1993 for their contributions to this release. This release contains several improvements including performance, API additions, and two new experimental packages whose APIs are unstable and may change in the future. Enhancements: : Add `zap/exp/zapslog` package for integration with slog. : Add `Name` to `Logger` which returns the Logger's name if one is set. : Add `zap/exp/expfield` package which contains helper methods `Str` and `Strs` for constructing String-like zap.Fields. : Reduce stack size on `Any`. Thanks to @knight42, @dzakaammar, @bcspragu, and @rexywork for their contributions to this release. Enhancements: : Add `Level` to both `Logger` and `SugaredLogger` that reports the current minimum enabled log level. : `SugaredLogger` turns errors to zap.Error automatically. Thanks to @Abirdcfly, @craigpastro, @nnnkkk7, and @sashamelentyev for their contributions to this release. Enhancements: : Add a `zapcore.LevelOf` function to determine the level of a `LevelEnabler` or `Core`. : Add `zap.Stringers` field constructor to log arrays of objects that implement `String() string`. Enhancements: : Add `zap.Objects` and `zap.ObjectValues` field constructors to log arrays of objects. With these two constructors, you don't need to implement `zapcore.ArrayMarshaler` for use with `zap.Array` if those objects implement `zapcore.ObjectMarshaler`. : Add `SugaredLogger.WithOptions` to build a copy of an existing `SugaredLogger` with the provided options applied. : Add `ln` variants to `SugaredLogger` for each log level. These functions provide a string joining behavior similar to `fmt.Println`. : Add `zap.WithFatalHook` option to control the behavior of the logger for `Fatal`-level log entries. This defaults to exiting the program. : Add a `zap.Must` function that you can use with `NewProduction` or `NewDevelopment` to panic if the system was unable to build the logger. : Add a `Logger.Log` method that allows specifying the log level for a statement dynamically. Thanks to @cardil, @craigpastro, @sashamelentyev, @shota3506, and @zhupeijun for their contributions to this release. Enhancements: : Add `zapcore.ParseLevel` to parse a `Level` from a string. : Add `zap.ParseAtomicLevel` to parse an `AtomicLevel` from a string. Bugfixes: : Fix panic in JSON encoder when `EncodeLevel` is unset. Other changes: : Improve encoding performance when the `AddCaller` and `AddStacktrace` options are used together. Thanks to @aerosol and @Techassi for their contributions to this release. Enhancements: : Add `EncoderConfig.SkipLineEnding` flag to disable adding newline characters between log statements. : Add `EncoderConfig.NewReflectedEncoder` field to customize JSON encoding of reflected log fields. Bugfixes: : Fix inaccurate precision when encoding complex64 as JSON. , : Close JSON namespaces opened in `MarshalLogObject` methods when the methods return. 
: Avoid panicking in Sampler core if `thereafter` is zero. Other changes: : Drop support for Go < 1.15. Thanks to @psrajat, @lruggieri, @sammyrnycreal for their contributions to this release. Bugfixes: : JSON: Fix complex number encoding with negative imaginary part. Thanks to @hemantjadon. : JSON: Fix inaccurate precision when encoding float32. Enhancements: : Avoid panicking in Sampler core if the level is out of bounds. : Reduce the size of BufferedWriteSyncer by aligning the fields"
},
{
"data": "Thanks to @lancoLiu and @thockin for their contributions to this release. Bugfixes: : Fix nil dereference in logger constructed by `zap.NewNop`. Enhancements: : Add `zapcore.BufferedWriteSyncer`, a new `WriteSyncer` that buffers messages in-memory and flushes them periodically. : Add `zapio.Writer` to use a Zap logger as an `io.Writer`. : Add `zap.WithClock` option to control the source of time via the new `zapcore.Clock` interface. : Avoid panicking in `zap.SugaredLogger` when arguments of `w` methods don't match expectations. : Add support for filtering by level or arbitrary matcher function to `zaptest/observer`. : Comply with `io.StringWriter` and `io.ByteWriter` in Zap's `buffer.Buffer`. Thanks to @atrn0, @ernado, @heyanfu, @hnlq715, @zchee for their contributions to this release. Bugfixes: : Encode `<nil>` for nil `error` instead of a panic. , : Update minimum version constraints to address vulnerabilities in dependencies. Enhancements: : Improve alignment of fields of the Logger struct, reducing its size from 96 to 80 bytes. : Support `grpclog.LoggerV2` in zapgrpc. : Support URL-encoded POST requests to the AtomicLevel HTTP handler with the `application/x-www-form-urlencoded` content type. : Support multi-field encoding with `zap.Inline`. : Speed up SugaredLogger for calls with a single string. : Add support for filtering by field name to `zaptest/observer`. Thanks to @ash2k, @FMLS, @jimmystewpot, @Oncilla, @tsoslow, @tylitianrui, @withshubh, and @wziww for their contributions to this release. Bugfixes: : Fix missing newline in IncreaseLevel error messages. : Fix panic in JSON encoder when encoding times or durations without specifying a time or duration encoder. : Honor CallerSkip when taking stack traces. : Fix the default file permissions to use `0666` and rely on the umask instead. : Encode `<nil>` for nil `Stringer` instead of a panic error log. Enhancements: : Added `zapcore.TimeEncoderOfLayout` to easily create time encoders for custom layouts. : Added support for a configurable delimiter in the console encoder. : Optimize console encoder by pooling the underlying JSON encoder. : Add ability to include the calling function as part of logs. : Add `StackSkip` for including truncated stacks as a field. : Add options to customize Fatal behaviour for better testability. Thanks to @SteelPhase, @tmshn, @lixingwang, @wyxloading, @moul, @segevfiner, @andy-retailnext and @jcorbin for their contributions to this release. Bugfixes: : Fix handling of `Time` values out of `UnixNano` range. : Fix `IncreaseLevel` being reset after a call to `With`. Enhancements: : Add `WithCaller` option to supersede the `AddCaller` option. This allows disabling annotation of log entries with caller information if previously enabled with `AddCaller`. : Deprecate `NewSampler` constructor in favor of `NewSamplerWithOptions` which supports a `SamplerHook` option. This option adds support for monitoring sampling decisions through a hook. Thanks to @danielbprice for their contributions to this release. Bugfixes: : Fix panic on attempting to build a logger with an invalid Config. : Vendoring Zap with `go mod vendor` no longer includes Zap's development-time dependencies. : Fix issue introduced in 1.14.0 that caused invalid JSON output to be generated for arrays of `time.Time` objects when using string-based time formats. Thanks to @YashishDua for their contributions to this release. Enhancements: : Optimize calls for disabled log levels. : Add millisecond duration encoder. 
: Add option to increase the level of a logger. : Optimize time formatters using `Time.AppendFormat` where possible. Thanks to @caibirdme for their contributions to this release. Enhancements: : Add `Intp`, `Stringp`, and other similar `p` field constructors to log pointers to primitives with support for `nil` values. Thanks to @jbizzle for their contributions to this release. Enhancements: : Migrate to Go modules. Enhancements: : Add `zapcore.OmitKey` to omit keys in an `EncoderConfig`. : Add `RFC3339` and `RFC3339Nano` time"
},
{
"data": "Thanks to @juicemia, @uhthomas for their contributions to this release. Bugfixes: : Fix `MapObjectEncoder.AppendByteString` not adding value as a string. : Fix incorrect call depth to determine caller in Go 1.12. Enhancements: : Add `zaptest.WrapOptions` to wrap `zap.Option` for creating test loggers. : Don't panic when encoding a String field. : Disable HTML escaping for JSON objects encoded using the reflect-based encoder. Thanks to @iaroslav-ciupin, @lelenanam, @joa, @NWilson for their contributions to this release. Bugfixes: : MapObjectEncoder should not ignore empty slices. Enhancements: : Reduce number of allocations when logging with reflection. , : Expose a registry for third-party logging sinks. Thanks to @nfarah86, @AlekSi, @JeanMertz, @philippgille, @etsangsplk, and @dimroc for their contributions to this release. Enhancements: : Make log level configurable when redirecting the standard library's logger. : Add a logger that writes to a `testing.TB`. : Add a top-level alias for `zapcore.Field` to clean up GoDoc. Bugfixes: : Add a missing import comment to `go.uber.org/zap/buffer`. Thanks to @DiSiqueira and @djui for their contributions to this release. Bugfixes: : Store strings when using AddByteString with the map encoder. Enhancements: : Add `NewStdLogAt`, which extends `NewStdLog` by allowing the user to specify the level of the logged messages. Enhancements: : Omit zap stack frames from stacktraces. : Add a `ContextMap` method to observer logs for simpler field validation in tests. Enhancements: and : Support errors produced by `go.uber.org/multierr`. : Support user-supplied encoders for logger names. Bugfixes: : Fix a bug that incorrectly truncated deep stacktraces. Thanks to @richard-tunein and @pavius for their contributions to this release. This release fixes two bugs. Bugfixes: : Support a variety of case conventions when unmarshaling levels. : Fix a panic in the observer. This release adds a few small features and is fully backward-compatible. Enhancements: : Add a `LineEnding` field to `EncoderConfig`, allowing users to override the Unix-style default. : Preserve time zones when logging times. : Make `zap.AtomicLevel` implement `fmt.Stringer`, which makes a variety of operations a bit simpler. This release adds an enhancement to zap's testing helpers as well as the ability to marshal an AtomicLevel. It is fully backward-compatible. Enhancements: : Add a substring-filtering helper to zap's observer. This is particularly useful when testing the `SugaredLogger`. : Make `AtomicLevel` implement `encoding.TextMarshaler`. This release adds a gRPC compatibility wrapper. It is fully backward-compatible. Enhancements: : Add a `zapgrpc` package that wraps zap's Logger and implements `grpclog.Logger`. This release fixes two bugs and adds some enhancements to zap's testing helpers. It is fully backward-compatible. Bugfixes: : Fix caller path trimming on Windows. : Fix a panic when attempting to use non-existent directories with zap's configuration struct. Enhancements: : Add filtering helpers to zaptest's observing logger. Thanks to @moitias for contributing to this release. This is zap's first stable release. All exported APIs are now final, and no further breaking changes will be made in the 1.x release series. Anyone using a semver-aware dependency manager should now pin to `^1`. Breaking changes: : Add byte-oriented APIs to encoders to log UTF-8 encoded text without casting from `[]byte` to `string`. 
: To support buffering outputs, add `Sync` methods to `zapcore.Core`, `zap.Logger`, and `zap.SugaredLogger`. : Rename the `testutils` package to `zaptest`, which is less likely to clash with other testing helpers. Bugfixes: : Make the ISO8601 time formatters fixed-width, which is friendlier for tab-separated console output. : Remove the automatic locks in `zapcore.NewCore`, which allows zap to work with concurrency-safe `WriteSyncer` implementations. : Stop reporting errors when trying to `fsync` standard out on Linux systems. : Report the correct caller from zap's standard library interoperability"
},
{
"data": "Enhancements: : Add a registry allowing third-party encodings to work with zap's built-in `Config`. : Make the representation of logger callers configurable (like times, levels, and durations). : Allow third-party encoders to use their own buffer pools, which removes the last performance advantage that zap's encoders have over plugins. : Add `CombineWriteSyncers`, a convenience function to tee multiple `WriteSyncer`s and lock the result. : Make zap's stacktraces compatible with mid-stack inlining (coming in Go 1.9). : Export zap's observing logger as `zaptest/observer`. This makes it easier for particularly punctilious users to unit test their application's logging. Thanks to @suyash, @htrendev, @flisky, @Ulexus, and @skipor for their contributions to this release. This is the third release candidate for zap's stable release. There are no breaking changes. Bugfixes: : Byte slices passed to `zap.Any` are now correctly treated as binary blobs rather than `[]uint8`. Enhancements: : Users can opt into colored output for log levels. : In addition to hijacking the output of the standard library's package-global logging functions, users can now construct a zap-backed `log.Logger` instance. : Frames from common runtime functions and some of zap's internal machinery are now omitted from stacktraces. Thanks to @ansel1 and @suyash for their contributions to this release. This is the second release candidate for zap's stable release. It includes two breaking changes. Breaking changes: : Zap's global loggers are now fully concurrency-safe (previously, users had to ensure that `ReplaceGlobals` was called before the loggers were in use). However, they must now be accessed via the `L()` and `S()` functions. Users can update their projects with ``` gofmt -r \"zap.L -> zap.L()\" -w . gofmt -r \"zap.S -> zap.S()\" -w . ``` and : RC1 was mistakenly shipped with invalid JSON and YAML struct tags on all config structs. This release fixes the tags and adds static analysis to prevent similar bugs in the future. Bugfixes: : Redirecting the standard library's `log` output now correctly reports the logger's caller. Enhancements: and : Zap now transparently supports non-standard, rich errors like those produced by `github.com/pkg/errors`. : Though `New(nil)` continues to return a no-op logger, `NewNop()` is now preferred. Users can update their projects with `gofmt -r 'zap.New(nil) -> zap.NewNop()' -w .`. : Incorrectly importing zap as `github.com/uber-go/zap` now returns a more informative error. Thanks to @skipor and @chapsuk for their contributions to this release. This is the first release candidate for zap's stable release. There are multiple breaking changes and improvements from the pre-release version. Most notably: Zap's import path is now \"go.uber.org/zap\"* — all users will need to update their code. User-facing types and functions remain in the `zap` package. Code relevant largely to extension authors is now in the `zapcore` package. The `zapcore.Core` type makes it easy for third-party packages to use zap's internals but provide a different user-facing API. `Logger` is now a concrete type instead of an interface. A less verbose (though slower) logging API is included by default. Package-global loggers `L` and `S` are included. A human-friendly console encoder is included. A declarative config struct allows common logger configurations to be managed as configuration instead of code. Sampling is more accurate, and doesn't depend on the standard library's shared timer heap. 
This is a minor version, tagged to allow users to pin to the pre-1.0 APIs and upgrade at their leisure. Since this is the first tagged release, there are no backward compatibility concerns and all functionality is new. Early zap adopters should pin to the 0.1.x minor version until they're ready to upgrade to the upcoming stable release."
}
] |
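As a quick orientation, the sketch below exercises a few of the additions listed in the changelog above (`zap.Must`, `zapcore.ParseLevel`, `Logger.Log`, and `Logger.WithLazy`); whether each is available depends on the zap version you have pinned.

```go
// Small demo of several APIs mentioned in the changelog entries above.
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// zap.Must panics if the logger cannot be built.
	logger := zap.Must(zap.NewProduction())
	defer logger.Sync()

	// zapcore.ParseLevel turns a string into a Level.
	lvl, err := zapcore.ParseLevel("warn")
	if err != nil {
		lvl = zapcore.InfoLevel
	}

	// Logger.Log chooses the log level dynamically.
	logger.Log(lvl, "dynamic level entry", zap.String("component", "demo"))

	// WithLazy defers evaluation of the structured context until first use.
	lazy := logger.WithLazy(zap.String("request_id", "abc123"))
	lazy.Info("lazily annotated entry")
}
```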
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "FD.io",
"subcategory": "Cloud Native Network"
} | [
{
"data": "This is a small experiment to have a wrapper CLI which can call both API functions and debug CLI commands. To facilitate tab completion and help, the API call names are broken up, with spaces replacing the underscores."
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
} | [
{
"data": "Microsoft Azure AWS Alibaba VMware Netflix HashiCorp Admiralty Elotl Tencent Games Since end-users are specific per provider within VK, we have many end-user customers that we don't have permission to list publicly. Please contact ribhatia@microsoft.com for more information. Are you currently using Virtual Kubelet in production? Please let us know by adding your company name and a description of your use case to this document!"
}
] |
{
"category": "Runtime",
"file_name": "devctr-image.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
} | [
{
"data": "Firecracker uses a development container to standardize the build process. This also fixes the build tools and dependencies to specific versions. Every once in a while, something needs to be updated. To do this, a new container image needs to be built locally, then published to the registry. The Firecracker CI suite must also be updated to use the new image. Access to the . The `docker` package installed locally. You should already have this if you've ever built Firecracker from source. Access to both `x86_64` and `aarch64` machines to build the container images. Ensure `aws --version` is >=1.17.10. This step is optional but recommended, to be on top of Python package changes. ```sh ./tools/devtool shell --privileged poetry update --lock --directory tools/devctr/ ``` This will change `poetry.lock`, which you can commit with your changes. Log in to the Docker organization in a shell. Make sure that your account has access to the repository: ```bash aws ecr-public get-login-password --region us-east-1 \\ | docker login --username AWS --password-stdin public.ecr.aws ``` For non-TTY devices, although not recommended, a less secure approach can be used: ```bash docker login --username AWS --password \\ $(aws ecr-public get-login-password --region us-east-1) public.ecr.aws ``` Navigate to the Firecracker directory. Verify that you have the latest container image locally. ```bash docker images REPOSITORY TAG IMAGE ID CREATED SIZE public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago 2.41GB ``` Make your necessary changes, if any, to the Dockerfile. There's one for all the architectures in the Firecracker source tree. Commit the changes, if any. Build a new container image with the updated Dockerfile. ```bash tools/devtool build_devctr ``` Verify that the new image exists. ```bash docker images REPOSITORY TAG IMAGE ID CREATED SIZE public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 2 weeks ago 2.36GB public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago 2.41GB ``` Tag the new image with the next available version `X` and the architecture you're on. Note that this will not always be \"current version in devtool + 1\", as sometimes that version might already be used on feature branches. Always check the \"Image Tags\" on to make sure you do not accidentally overwrite an existing image. As a sanity check, run: ```bash docker pull public.ecr.aws/firecracker/fcuvm:vX ``` and verify that you get an error message along the lines of ``` Error response from daemon: manifest for public.ecr.aws/firecracker/fcuvm:vX not found: manifest unknown: Requested image not found ``` This means the version you've chosen does not exist yet, and you are good to go. ```bash docker tag 1f9852368efb public.ecr.aws/firecracker/fcuvm:v27_x86_64 docker images REPOSITORY TAG IMAGE ID CREATED public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 1 week ago public.ecr.aws/firecracker/fcuvm v27_x86_64 1f9852368efb 1 week ago public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a 2 weeks ago ``` Push the image. ```bash docker push public.ecr.aws/firecracker/fcuvm:v27_x86_64 ``` Log in to the `aarch64` build machine. Steps 1-4 are identical across architectures; change `x86_64` to `aarch64`. Then continue with the above steps: Build a new container image with the updated Dockerfile. ```bash tools/devtool build_devctr ``` Verify that the new image exists. ```bash docker images REPOSITORY TAG IMAGE ID CREATED public.ecr.aws/firecracker/fcuvm latest 1f9852368efb 2 minutes ago"
},
{
"data": "v26 8d00deb17f7a 2 weeks ago ``` Tag the new image with the next available version `X` and the architecture you're on. Note that this will not always be \"current version in devtool + 1\", as sometimes that version might already be used on feature branches. Always check the \"Image Tags\" on to make sure you do not accidentally overwrite an existing image. As a sanity check, run: ```bash docker pull public.ecr.aws/firecracker/fcuvm:vX ``` and verify that you get an error message along the lines of ``` Error response from daemon: manifest for public.ecr.aws/firecracker/fcuvm:vX not found: manifest unknown: Requested image not found ``` This means the version you've chosen does not exist yet, and you are good to go. ```bash docker tag 1f9852368efb public.ecr.aws/firecracker/fcuvm:v27_aarch64 docker images REPOSITORY TAG IMAGE ID public.ecr.aws/firecracker/fcuvm latest 1f9852368efb public.ecr.aws/firecracker/fcuvm v27_aarch64 1f9852368efb public.ecr.aws/firecracker/fcuvm v26 8d00deb17f7a ``` Push the image. ```bash docker push public.ecr.aws/firecracker/fcuvm:v27_aarch64 ``` Create a manifest to point the latest container version to each specialized image, per architecture. ```bash docker manifest create public.ecr.aws/firecracker/fcuvm:v27 \\ public.ecr.aws/firecracker/fcuvm:v27x8664 public.ecr.aws/firecracker/fcuvm:v27_aarch64 docker manifest push public.ecr.aws/firecracker/fcuvm:v27 ``` Update the image tag in the . Commit and push the change. ```bash PREV_TAG=v26 CURR_TAG=v27 sed -i \"s%DEVCTRIMAGETAG=\\\"$PREVTAG\\\"%DEVCTRIMAGETAG=\\\"$CURRTAG\\\"%\" tools/devtool ``` Check out the for additional troubleshooting steps and guidelines. ```bash docker manifest is only supported when experimental cli features are enabled ``` See for explanations and fix. Either fetch and run it locally on another machine than the one you used to build it, or clean up any artifacts from the build machine and fetch. ```bash docker system prune -a docker images REPOSITORY TAG IMAGE ID CREATED SIZE tools/devtool shell [Firecracker devtool] About to pull docker image public.ecr.aws/firecracker/fcuvm:v15 [Firecracker devtool] Continue? ``` ```bash docker push public.ecr.aws/firecracker/fcuvm:v27 The push refers to repository [public.ecr.aws/firecracker/fcuvm] e2b5ee0c4e6b: Preparing 0fbb5fd5f156: Preparing ... a1aa3da2a80a: Waiting denied: requested access to the resource is denied ``` Only a Firecracker maintainer can update the container image. If you are one, ask a member of the team to add you to the AWS ECR repository and retry. Tags can be deleted from the . Also, pushing the same tag twice will overwrite the initial content. If you see unrelated `Python` errors, it's likely because the dev container pulls `Python 3` at build time. `Python 3` means different minor versions on different platforms, and is not backwards compatible. So it's entirely possible that `docker build` has pulled in unwanted `Python` dependencies. To include only your changes, an alternative to the method described above is to make the changes inside the container, instead of in the `Dockerfile`. Let's say you want to update (random example). Enter the container as `root`. ```bash tools/devtool shell -p ``` Make the changes locally. Do not exit the container. ```bash cargo install cargo-audit --force ``` Find your running container. ```bash docker ps CONTAINER ID IMAGE COMMAND CREATED e9f0487fdcb9 fcuvm:v14 \"bash\" 53 seconds ago ``` Commit the modified container to a new image. Use the `container ID`. 
```bash docker commit e9f0487fdcb9 fcuvm:v15_x86_64 ``` ```bash docker image ls REPOSITORY TAG IMAGE ID CREATED fcuvm v15_x86_64 514581e654a6 18 seconds ago fcuvm v14 c8581789ead3 2 months ago ``` Repeat for `aarch64`. Create and push the manifest."
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_sha.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
} | [
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage compiled BPF template objects ``` -h, --help help for sha ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Get datapath SHA header - List BPF template objects."
}
] |
{
"category": "Runtime",
"file_name": "velero-debug.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "To simplify the communication between velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging. Github issue: https://github.com/vmware-tanzu/velero/issues/675 Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a kubectl logs command, while information on specific backups or restores is accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and there's currently no good mechanism to locate which node a particular restic backup ran against. A dedicated subcommand can lower this effort and reduce back-and-forth between user and developer for collecting the logs. Goals: enable efficient log collection for Velero and associated components, like plugins and restic. Non-goals: collecting logs for components that do not belong to velero, such as the storage service; automated log analysis. With the introduction of the new command `velero debug`, the command would download all of the following information: velero deployment logs restic DaemonSet logs plugin logs All the resources in the group `velero.io` that are created, such as: Backup Restore BackupStorageLocation PodVolumeBackup PodVolumeRestore etc ... Logs of the backup and restore, if specified in the parameters A project called `crash-diagnostics` (or `crashd`) (https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides the Starlark scripting language to abstract details, and collects the information into a local copy. It can be used as a standalone CLI executing a Starlark script file. With the capabilities of embedding files in Go 1.16, we can define a Starlark script gathering the necessary information, embed the script at build time, then the velero debug command will invoke `crashd`, passing in the script's text contents. The Starlark script to be called by crashd: ```python def capture_backup_logs(cmd, namespace): if args.backup: log(\"Collecting log and information for backup: {}\".format(args.backup)) backupDescCmd = \"{} --namespace={} backup describe {} --details\".format(cmd, namespace, args.backup) capture_local(cmd=backupDescCmd, file_name=\"backup_describe_{}.txt\".format(args.backup)) backupLogsCmd = \"{} --namespace={} backup logs {}\".format(cmd, namespace, args.backup) capture_local(cmd=backupLogsCmd, file_name=\"backup_{}.log\".format(args.backup)) def capture_restore_logs(cmd, namespace): if args.restore: log(\"Collecting log and information for restore: {}\".format(args.restore)) restoreDescCmd = \"{} --namespace={} restore describe {} --details\".format(cmd, namespace, args.restore) capture_local(cmd=restoreDescCmd, file_name=\"restore_describe_{}.txt\".format(args.restore)) restoreLogsCmd = \"{} --namespace={} restore logs {}\".format(cmd, namespace, args.restore) capture_local(cmd=restoreLogsCmd, file_name=\"restore_{}.log\".format(args.restore)) ns = args.namespace if args.namespace else \"velero\" output = args.output if args.output else \"bundle.tar.gz\" cmd = args.cmd if args.cmd else \"velero\" crshd = crashd_config(workdir=\"./velero-bundle\") set_defaults(kube_config(path=args.kubeconfig, cluster_context=args.kubecontext)) log(\"Collecting velero resources in namespace: {}\". 
format(ns)) kube_capture(what=\"objects\", namespaces=[ns], groups=['velero.io']) capture_local(cmd=\"{} version -n {}\".format(cmd, ns), file_name=\"version.txt\") log(\"Collecting velero deployment logs in namespace: {}\". format(ns)) kube_capture(what=\"logs\", namespaces=[ns]) capture_backup_logs(cmd, ns) capture_restore_logs(cmd, ns) archive(output_file=output,"
},
{
"data": "log(\"Generated debug information bundle: {}\".format(output)) ``` The sample command to trigger the script via crashd: ```shell ./crashd run ./velero.cshd --args 'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output=' ``` To trigger the script in `velero debug`, in the package `pkg/cmd/cli/debug` a struct `option` will be introduced ```go type option struct { // currCmd the velero command currCmd string // workdir for crashd will be $baseDir/velero-debug baseDir string // the namespace where velero server is installed namespace string // the absolute path for the log bundle to be generated outputPath string // the absolute path for the kubeconfig file that will be read by crashd for calling K8S API kubeconfigPath string // the kubecontext to be used for calling K8S API kubeContext string // optional, the name of the backup resource whose log will be packaged into the debug bundle backup string // optional, the name of the restore resource whose log will be packaged into the debug bundle restore string // optional, it controls whether to print the debug log messages when calling crashd verbose bool } ``` The code will consolidate the input parameters and execution context of the `velero` CLI to form the option struct, which can be transformed into the `argsMap` that can be used when calling the func `exec.Execute` in `crashd`: https://github.com/vmware-tanzu/crash-diagnostics/blob/v0.3.4/exec/executor.go#L17 The collection could be done via the Kubernetes client-go API, but such integration is not necessarily trivial to implement; therefore, `crashd` is the preferred approach. The Starlark script will be embedded into the velero binary, and the byte slice will be passed to the `exec.Execute` func directly, so there's little risk that the script will be modified before being executed. As the `crashd` project evolves, the behavior of the internal functions used in the Starlark script may change. We'll ensure the correctness of the script via regular E2E tests. Bump up to Go v1.16 to compile velero; embed the Starlark script; implement the `velero debug` sub-command to call the script; add an E2E test case. Command dependencies: In the Starlark script, for collecting version info and backup logs, it calls `velero backup logs` and `velero version`, which makes the call stack velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings. Progress and error handling: The log collection may take a relatively long time, so log messages should be printed to indicate the progress when different items are being downloaded and packaged. Additionally, when an error happens, `crashd` may omit some errors, so before the script is executed we'll do some validation and make sure the `debug` command fails early if some parameters are incorrect."
}
] |
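As a rough sketch of the option-to-argsMap consolidation described in this design, the snippet below flattens the `option` struct into the key/value arguments the embedded Starlark script reads via `args.*`; the call into crashd's `exec.Execute` is omitted because its exact signature is not reproduced in this document, and the key names simply follow the sample `--args` string above.

```go
// A sketch (not Velero's actual code) of consolidating the CLI input into the
// argsMap handed to the embedded Starlark script.
package debug

type option struct {
	currCmd        string // the velero command to shell out to
	baseDir        string // workdir for crashd will be $baseDir/velero-debug
	namespace      string // namespace where the velero server is installed
	outputPath     string // absolute path of the generated bundle
	kubeconfigPath string // kubeconfig read by crashd for K8s API calls
	kubeContext    string // kubecontext used for K8s API calls
	backup         string // optional backup name
	restore        string // optional restore name
	verbose        bool   // controls crashd's own debug output, not a script arg
}

// asArgsMap flattens the options into the string map that the Starlark script
// consumes via args.<name>.
func (o *option) asArgsMap() map[string]string {
	return map[string]string{
		"cmd":         o.currCmd,
		"basedir":     o.baseDir,
		"namespace":   o.namespace,
		"output":      o.outputPath,
		"kubeconfig":  o.kubeconfigPath,
		"kubecontext": o.kubeContext,
		"backup":      o.backup,
		"restore":     o.restore,
	}
}
```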
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "OpenEBS is the most widely deployed open source example of a category of storage solutions sometimes called Container Attached Storage. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes. This document describes the high level architecture of OpenEBS and the links to the Source Code and its Dependencies. Some key aspects that make OpenEBS different compared to other traditional storage solutions: Built using the micro-services architecture like the applications it serves. Use Kubernetes itself to orchestrate and manage the OpenEBS components. Built completely in userspace, making it highly portable to run across any OS / platform. Completely intent driven, inheriting the same principles that drive the ease of use with Kubernetes. The architecture of OpenEBS is container native and horizontally scalable. OpenEBS is a collection of different microservices that can be grouped into 3 major areas (or planes): The data engines are the containers responsible for interfacing with the underlying storage devices such as the host filesystem, rotational drives, SSDs and NVMe devices. The data engines provide volumes with required capabilities like high availability, snapshots, clones, etc. Volume capabilities can be optimized based on the workload they serve. Depending on the capabilities requested, OpenEBS selects different data engines like cStor (a CoW-based engine), Jiva, or even Local PVs for a given volume. The high availability is achieved by abstracting the access to the volume into the target container - which in turn does the synchronous replication to multiple different replica containers. The replica containers save the data to the underlying storage devices. If a node serving the application container and the target container fails, the application and target are rescheduled to a new node. The target connects with the other available replicas and will start serving the IO. The Storage Management or Control Plane is responsible for interfacing between Kubernetes (Volume/CSI interface) and managing the volumes created using the OpenEBS Data Engines. The Storage Management Plane is implemented using a set of containers that are either running at the cluster level or the node level. Some of the storage management options are also provided by containers running as side-cars to the data engine containers. The storage management containers are responsible for providing APIs for gathering details about the volumes. The APIs can be used by Kubernetes Provisioners for managing volumes, snapshots, backups, and so forth; by Prometheus to collect metrics of volumes; and by custom programs like a CLI or UI to provide insights into the OpenEBS Storage status or management. While this plane is an integral part of the OpenEBS Storage Management Plane, the containers and custom resources under this plane can be used by other projects that require a Kubernetes native way of managing the Storage Devices (rotational drives, SSDs and NVMe, etc.) attached to Kubernetes nodes. The Storage Device Management Plane can be viewed as an Inventory Management tool that discovers devices and keeps track of their usage via device claims (akin to PV/PVC"
},
{
"data": "All the operations like device listing, identifying the topology or details of a specific device can be accessed via kubectl and Kubernetes Custom Resources. OpenEBS source code is spread across multiple repositories, organized either by the storage engine or management layer. This section describes the various actively maintained repositories. is the OpenEBS meta repository that contains design documents, project management, community and contributor documents, deployment and workload examples. contains the source code for the OpenEBS Documentation portal (https://docs.openebs.io) implemented using the Docusaurus framework and other libraries listed in . contains the source code for the OpenEBS portal (https://openebs.io) implemented using the Gatsby framework and other libraries listed in . contains the Helm chart source code for OpenEBS and also hosts a gh-pages website for install artifacts and Helm packages. contains OpenEBS Storage Management components that help with managing cStor, Jiva and Local Volumes. This repository contains the non-CSI drivers. The code is being moved from this repository to engine-specific CSI drivers. A detailed dependency list can be found in: . OpenEBS also maintains a forked copy of the Kubernetes external-storage repository to support the external provisioners for cStor and Jiva volumes. contains OpenEBS extensions for Kubernetes External Dynamic Provisioners. These provisioners will be deprecated in the near term in favor of the CSI drivers, which are in beta and alpha stages at the moment. This is a forked repository from . has the plugin code to perform cStor and ZFS Local PV based Backup and Restore using Velero. is a general-purpose Alpine container used to launch some management jobs by OpenEBS Operators. contains the OpenEBS-related Kubernetes custom resource specifications and the related Go-client API to manage those resources. This functionality is being split from the mono-repo into its own repository. contains management tools for upgrading and migrating OpenEBS volumes and pools. This functionality is being split from the mono-repo into its own repository. Go dependencies are listed . contains the Litmus-based e2e tests that are executed on GitLab pipelines. Contains tests for Jiva, cStor and Local PV. contains the OpenEBS CLI that can be run as a kubectl plugin. This functionality is being split from the mono-repo into its own repository. (Currently in Alpha). contains tools/scripts for running performance benchmarks on Kubernetes volumes. This is an experimental repo. The work in this repo can move to other repos like e2e-tests. is a Prometheus exporter for sending capacity usage statistics using `du` from hostpath volumes. is a wrapper around OpenEBS Helm to allow installation of the . contains Kubernetes native Device Inventory Management functionality. A detailed dependency list can be found in . Along with being dependent on Kubernetes and Operator SDK for managing the Kubernetes custom resources, NDM also optionally depends on the following: (License: MPL 2.0) for discovering device attributes. OpenEBS maintains forked repositories of openSeaChest to fix/upstream the issues found in this"
},
{
"data": "is one of the data engines supported by OpenEBS; it was forked from the Rancher Longhorn engine and has diverged in the way Jiva volumes are managed within Kubernetes. At the time of the fork, Longhorn was focused on Docker and OpenEBS was focused on supporting Kubernetes. The Jiva engine depends on the following: A fork of the Longhorn engine is maintained in OpenEBS to upstream the common changes from Jiva to Longhorn. for providing user space iSCSI Target support implemented in Go. A fork of the project is maintained in OpenEBS to keep the dependencies in sync and upstream the changes. fork is also maintained by OpenEBS to manage the differences in Jiva's way of writing into the sparse files. provides a complete list of dependencies used by the Jiva project. contains the CSI Driver for Jiva Volumes. Currently in alpha. Dependencies are in: . contains Kubernetes custom resources and operators to manage Jiva volumes. Currently in alpha and used by the Jiva CSI Driver. This will replace the volume management functionality offered by OpenEBS API Server. Dependencies are in: . contains the cStor Replica functionality that makes use of uZFS (userspace ZFS) to store the data on devices. is a fork of (License: CDDL). This fork contains the code that modifies ZFS to run in user space. contains the iSCSI Target functionality used by cStor volumes. This work is derived from earlier work available as a FreeBSD port at http://www.peach.ne.jp/archives/istgt/ (archive link: https://web.archive.org/web/20190622064711/peach.ne.jp/archives/istgt/). The original work was licensed under the BSD license. is the CSI Driver for cStor Volumes. This will deprecate the external provisioners. Currently in beta. Dependencies are in: . contains the Kubernetes custom resources and operators to manage cStor Pools and Volumes. Currently in beta and used by the cStor CSI Driver. This will replace the volume management functionality offered by OpenEBS API Server. Dependencies are in: . contains the Mayastor data engine, CSI driver and management utilities. is a forked repository of (License: BSD) for managing the upstream changes. (License: MIT) contains Rust bindings for SPDK. is forked from (License: MIT) for managing the upstream changes. is forked from (License: MIT) for managing the upstream changes. is forked from (License: MIT) for managing upstream changes. is forked from (License: MIT) for managing upstream changes. is forked from (License: MIT) for managing upstream changes. contains the CSI driver for provisioning Kubernetes Local Volumes on ZFS installed on the nodes. contains the CSI driver for provisioning Kubernetes Local Volumes on Hostpath by creating sparse files. (Currently in Alpha). An architectural overview of how each of the data engines operates is provided in this . Design documents for various components and features are listed . There is always something more that is required to make it easier to suit your use-cases. Feel free to join the discussion on new features or raise a PR with your proposed change. - Already signed up? Head to our discussions at . Pick an issue of your choice to work on from any of the repositories listed above. Here are some contribution ideas to start looking at: . . Help with backlogs from the by discussing requirements and design."
}
] |
{
"category": "Runtime",
"file_name": "interconnect_with_libvirt.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
} | [
{
"data": "Libvirt is one of manager for StratoVirt, it manages StratoVirt by creating cmdlines to launch StratoVirt and giving commands via QMP. Currently, five virsh commands are supported to manage StratoVirt: `virsh create`, `virsh destroy`, `virsh suspend`, `virsh resume` and `virsh console`. StratoVirt can be configured by following ways: memory: ``` <memory unit='GiB'>8</memory> or <memory unit='MiB'>8192</memory> ``` CPU: CPU topology is not supported, please configure the number of VCPUs only. ``` <vcpu>4</vcpu> ``` Architecture: Optional value of `arch` are: `aarch64` and `x86_64`. On X86 platform, supported machine is `q35`; on aarch64 platform, supported machine is `virt`. ``` <os> <type arch='x86_64' machine='q35'>hvm</type> </os> ``` Kernel and cmdline: `/path/to/standardvmkernel` is the path of standard vm kernel. ``` <kernel>/path/to/standardvmkernel</kernel> <cmdline>console=ttyS0 root=/dev/vda reboot=k panic=1 rw</cmdline> ``` feature: As the acpi is used in Standard VM, therefore the acpi feature must be configured. ``` <features> <acpi/> </features> ``` For aarch64 platform, as gicv3 is used the `gic` should also be added to feature. ``` <features> <acpi/> <gic version='3'/> </features> ``` emulator: Set emulator for libvirt, `/path/to/StratoVirtbinaryfile` is the path to StratoVirt binary file. ``` <devices> <emulator>/path/to/StratoVirtbinaryfile</emulator> </devices> ``` balloon ``` <controller type='pci' index='4' model='pcie-root-port' /> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x000' bus='0x04' slot='0x00' function='0x00'/> </memballoon> ``` pflash Pflash can be added by the following config. `/path/to/pflash` is the path of pflash file. ``` <loader readonly='yes' type='pflash'>/path/to/pflash</loader> <nvram template='/path/to/OVMFVARS'>/path/to/OVMFVARS</nvram> ``` iothread ``` <iothreads>1</iothreads> ``` block: ``` <controller type='pci' index='1' model='pcie-root-port' /> <disk type='file' device='disk'> <driver name='qemu' type='raw' iothread='1'/> <source file='/path/to/rootfs'/> <target dev='hda' bus='virtio'/> <iotune> <totaliopssec>1000</totaliopssec> </iotune> <address type='pci' domain='0x000' bus='0x01' slot='0x00' function='0x00'/> </disk> ``` net ``` <controller type='pci' index='2' model='pcie-root-port' /> <interface type='ethernet'> <mac address='de:ad:be:ef:00:01'/> <source bridge='qbr0'/> <target dev='tap0'/> <model type='virtio'/> <address type='pci' domain='0x000' bus='0x02' slot='0x00' function='0x00'/> </interface> ``` console To use `virsh console` command, the virtio console with redirect `pty` should be configured. 
``` <controller type='pci' index='3' model='pcie-root-port' /> <controller type='virtio-serial' index='0'> <alias name='virt-serial0'/> <address type='pci' domain='0x000' bus='0x03' slot='0x00' function='0x00'/> </controller> <console type='pty'> <target type='virtio' port='0'/> <alias name='console0'/> </console> ``` vhost-vsock ``` <controller type='pci' index='6' model='pcie-root-port' /> <vsock model='virtio'> <cid auto='no' address='3'/> <address type='pci' domain='0x000' bus='0x00' slot='0x06' function='0x00'/> </vsock> ``` rng ``` <controller type='pci' index='5' model='pcie-root-port' /> <rng model='virtio'> <rate period='1000' bytes='1234'/> <backend model='random'>/path/to/random_file</backend> <address type='pci' domain='0x000' bus='0x05' slot='0x00' function='0x00'/> </rng> ``` vfio ``` <controller type='pci' index='7' model='pcie-root-port' /> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </source> </hostdev> ```"
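As a usage sketch for the five virsh commands listed above, assuming the domain XML assembled from these snippets is saved as `stratovirt-vm.xml` and the resulting domain is named `stratovirt-vm` (both names are hypothetical):

```bash
# Create and start the guest from the domain XML (transient domain).
virsh create stratovirt-vm.xml

# Attach to the guest's virtio console (requires the pty console device configured above).
virsh console stratovirt-vm

# Pause and resume the guest.
virsh suspend stratovirt-vm
virsh resume stratovirt-vm

# Force-stop the guest.
virsh destroy stratovirt-vm
```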
}
] |
{
"category": "Runtime",
"file_name": "prerequisites.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "title: Prerequisites Rook can be installed on any existing Kubernetes cluster as long as it meets the minimum version and Rook is granted the required privileges (see below for more information). Kubernetes versions v1.25 through v1.30 are supported. Architectures supported are `amd64 / x86_64` and `arm64`. To configure the Ceph storage cluster, at least one of these local storage types is required: Raw devices (no partitions or formatted filesystems) Raw partitions (no formatted filesystem) LVM Logical Volumes (no formatted filesystem) Persistent Volumes available from a storage class in `block` mode Confirm whether the partitions or devices are formatted with filesystems with the following command: ```console $ lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT vda vda1 LVM2_member >eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB ubuntu--vg-root ext4 c2366f76-6e21-4f10-a8f3-6776212e2fe4 / ubuntu--vg-swap_1 swap 9492a3dc-ad75-47cd-9596-678e8cf17ff9 [SWAP] vdb ``` If the `FSTYPE` field is not empty, there is a filesystem on top of the corresponding device. In this example, `vdb` is available to Rook, while `vda` and its partitions have a filesystem and are not available. Ceph OSDs have a dependency on LVM in the following scenarios: If encryption is enabled (`encryptedDevice: \"true\"` in the cluster CR) A `metadata` device is specified `osdsPerDevice` is greater than 1 LVM is not required for OSDs in these scenarios: OSDs are created on raw devices or partitions OSDs are created on PVCs using the `storageClassDeviceSets` If LVM is required, LVM needs to be available on the hosts where OSDs will be running. Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in the k8s cluster to run Ceph OSDs. Without this package even though Rook will be able to successfully create the Ceph OSDs, when a node is rebooted the OSD pods running on the restarted node will fail to start. Please install LVM using your Linux distribution's package manager. For example: CentOS: ```console sudo yum install -y lvm2 ``` Ubuntu: ```console sudo apt-get install -y lvm2 ``` RancherOS: Since version LVM is supported Logical volumes during the boot process. You need to add an for that. ```yaml runcmd: [ \"vgchange\", \"-ay\" ] ``` Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimised OS (COS) does not have RBD. Test your Kubernetes nodes by running `modprobe rbd`. If the rbd module is 'not found', rebuild the kernel to include the `rbd` module, install a newer kernel, or choose a different Linux distribution. Rook's default RBD configuration specifies only the `layering` feature, for broad compatibility with older kernels. If your Kubernetes nodes run a 5.4 or later kernel, additional feature flags can be enabled in the storage class. The `fast-diff` and `object-map` features are especially useful. ```yaml imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock ``` If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. If the kernel version is less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels. Specific configurations for some distributions. For NixOS, the kernel modules will be found in the non-standard path `/run/current-system/kernel-modules/lib/modules/`, and they'll be symlinked inside the also non-standard path `/nix`. 
Rook containers require read access to those locations to be able to load the required modules. They have to be bind-mounted as volumes in the CephFS and RBD plugin pods. If installing Rook with Helm, uncomment these example settings in `values.yaml`: `csi.csiCephFSPluginVolume` `csi.csiCephFSPluginVolumeMount` `csi.csiRBDPluginVolume` `csi.csiRBDPluginVolumeMount` If deploying without Helm, add those same values to the settings in the `rook-ceph-operator-config` ConfigMap found in operator.yaml: `CSICEPHFSPLUGIN_VOLUME` `CSICEPHFSPLUGINVOLUMEMOUNT` `CSIRBDPLUGIN_VOLUME` `CSIRBDPLUGINVOLUMEMOUNT`"
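A minimal node-side check of the prerequisites described above might look like the following sketch; the exact devices and kernel version in your environment will of course differ.

```bash
# Confirm candidate devices have no filesystem (FSTYPE must be empty).
lsblk -f

# Confirm the kernel ships the RBD module.
sudo modprobe rbd && echo "rbd module available"

# Confirm lvm2 is present on storage nodes (needed for encrypted OSDs,
# metadata devices, or osdsPerDevice > 1).
which lvm || echo "lvm2 not installed"

# Check the kernel version (4.17+ recommended for CephFS quota enforcement).
uname -r
```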
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "kube-vip",
"subcategory": "Cloud Native Network"
} | [
{
"data": "Prerequisites: Tests must be run on a Linux OS Docker installed with IPv6 enabled You will need to restart your Docker engine after updating the config Target kube-vip Docker image exists locally. Either build the image locally with `make dockerx86Local` or `docker pull` the image from a registry. Run the tests from the repo root: ``` make e2e-tests ``` Note: To preserve the test cluster after a test run, run the following: ``` make E2EPRESERVECLUSTER=true e2e-tests ``` The E2E tests: Start a local kind cluster Load the local docker image into kind Test connectivity to the control plane using the VIP Kills the current leader This causes leader election to occur Attempts to connect to the control plane using the VIP The new leader will need send ndp advertisements before this can succeed within a timeout"
}
] |
{
"category": "Runtime",
"file_name": "ARCHITECTURE.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
} | [
{
"data": "Architecture of the library === ELF -> Specifications -> Objects -> Links ELF BPF is usually produced by using Clang to compile a subset of C. Clang outputs an ELF file which contains program byte code (aka BPF), but also metadata for maps used by the program. The metadata follows the conventions set by libbpf shipped with the kernel. Certain ELF sections have special meaning and contain structures defined by libbpf. Newer versions of clang emit additional metadata in BPF Type Format (aka BTF). The library aims to be compatible with libbpf so that moving from a C toolchain to a Go one creates little friction. To that end, the is tested against the Linux selftests and avoids introducing custom behaviour if possible. The output of the ELF reader is a `CollectionSpec` which encodes all of the information contained in the ELF in a form that is easy to work with in Go. The BPF Type Format describes more than just the types used by a BPF program. It includes debug aids like which source line corresponds to which instructions and what global variables are used. lives in a separate internal package since exposing it would mean an additional maintenance burden, and because the API still has sharp corners. The most important concept is the `btf.Type` interface, which also describes things that aren't really types like `.rodata` or `.bss` sections. `btf.Type`s can form cyclical graphs, which can easily lead to infinite loops if one is not careful. Hopefully a safe pattern to work with `btf.Type` emerges as we write more code that deals with it. Specifications `CollectionSpec`, `ProgramSpec` and `MapSpec` are blueprints for in-kernel objects and contain everything necessary to execute the relevant `bpf(2)` syscalls. Since the ELF reader outputs a `CollectionSpec` it's possible to modify clang-compiled BPF code, for example to rewrite constants. At the same time the package provides an assembler that can be used to generate `ProgramSpec` on the fly. Creating a spec should never require any privileges or be restricted in any way, for example by only allowing programs in native endianness. This ensures that the library stays flexible. Objects `Program` and `Map` are the result of loading specs into the kernel. Sometimes loading a spec will fail because the kernel is too old, or a feature is not enabled. There are multiple ways the library deals with that: Fallback: older kernels don't allowing naming programs and maps. The library automatically detects support for names, and omits them during load if necessary. This works since name is primarily a debug aid. Sentinel error: sometimes it's possible to detect that a feature isn't available. In that case the library will return an error wrapping `ErrNotSupported`. This is also useful to skip tests that can't run on the current kernel. Once program and map objects are loaded they expose the kernel's low-level API, e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer wrappers on top of the low-level API, like `MapIterator`. The low-level API is useful as an out when our higher-level API doesn't support a particular use case. Links BPF can be attached to many different points in the kernel and newer BPF hooks tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope."
}
] |
{
"category": "Runtime",
"file_name": "migration-case.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "title: \"Cluster migration\" layout: docs Velero's backup and restore capabilities make it a valuable tool for migrating your data between clusters. Cluster migration with Velero is based on Velero's functionality, which is responsible for syncing Velero resources from your designated object storage to your cluster. This means that to perform cluster migration with Velero you must point each Velero instance running on clusters involved with the migration to the same cloud object storage location. This page outlines a cluster migration scenario and some common configurations you will need to start using Velero to begin migrating data. Before migrating you should consider the following, Velero does not natively support the migration of persistent volumes snapshots across cloud providers. If you would like to migrate volume data between cloud platforms, enable , which will backup volume contents at the filesystem level. Velero doesn't support restoring into a cluster with a lower Kubernetes version than where the backup was taken. Migrating workloads across clusters that are not running the same version of Kubernetes might be possible, but some factors need to be considered before migration, including the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core/native API groups, migrating with Velero will not be possible without first updating the impacted custom resources. For more information about API group versions, please see . The Velero plugin for AWS and Azure does not support migrating data between regions. If you need to do this, you must use . This scenario steps through the migration of resources from Cluster 1 to Cluster 2. In this scenario, both clusters are using the same cloud provider, AWS, and Velero's . On Cluster 1, make sure Velero is installed and points to an object storage location using the `--bucket` flag. ``` velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster1 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 ``` During installation, Velero creates a Backup Storage Location called `default` inside the `--bucket` your provided in the install command, in this case `velero-migration-demo`. This is the location that Velero will use to store backups. Running `velero backup-location get` will show the backup location of Cluster 1. ``` velero backup-location get NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT default aws velero-migration-demo Available 2022-05-13 13:41:30 +0800 CST ReadWrite true ``` Still on Cluster 1, make sure you have a backup of your cluster. Replace `<BACKUP-NAME>` with a name for your backup. ``` velero backup create <BACKUP-NAME> ``` Alternatively, you can create a of your data with the Velero `schedule`"
},
{
"data": "This is the recommended way to make sure your data is automatically backed up according to the schedule you define. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can use the `--ttl <DURATION>` flag to change this as necessary. See for more information about backup expiry. On Cluster 2, make sure that Velero is installed. Note that the install command below has the same `region` and `--bucket` location as the install command for Cluster 1. The Velero plugin for AWS does not support migrating data between regions. ``` velero install --provider aws --image velero/velero:v1.8.0 --plugins velero/velero-plugin-for-aws:v1.4.0 --bucket velero-migration-demo --secret-file xxxx/aws-credentials-cluster2 --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 ``` Alternatively you could configure `BackupStorageLocations` and `VolumeSnapshotLocations` after installing Velero on Cluster 2, pointing to the `--bucket` location and `region` used by Cluster 1. To do this you can use to `velero backup-location create` and `velero snapshot-location create` commands. ``` velero backup-location create bsl --provider aws --bucket velero-migration-demo --config region=us-east-2 --access-mode=ReadOnly ``` Its recommended that you configure the `BackupStorageLocations` as read-only by using the `--access-mode=ReadOnly` flag for `velero backup-location create`. This will make sure that the backup is not deleted from the object store by mistake during the restore. See `velero backup-location help` for more information about the available flags for this command. ``` velero snapshot-location create vsl --provider aws --config region=us-east-2 ``` See `velero snapshot-location help` for more information about the available flags for this command. Continuing on Cluster 2, make sure that the Velero Backup object created on Cluster 1 is available. `<BACKUP-NAME>` should be the same name used to create your backup of Cluster 1. ``` velero backup describe <BACKUP-NAME> ``` Velero resources are with the backup files in object storage. This means that the Velero resources created by Cluster 1's backup will be synced to Cluster 2 through the shared Backup Storage Location. Once the sync occurs, you will be able to access the backup from Cluster 1 on Cluster 2 using Velero commands. The default sync interval is 1 minute, so you may need to wait before checking for the backup's availability on Cluster 2. You can configure this interval with the `--backup-sync-period` flag to the Velero server on Cluster 2. On Cluster 2, once you have confirmed that the right backup is available, you can restore everything to Cluster 2. ``` velero restore create --from-backup <BACKUP-NAME> ``` Make sure `<BACKUP-NAME>` is the same backup name from Cluster 1. Check that the Cluster 2 is behaving as expected: On Cluster 2, run: ``` velero restore get ``` Then run: ``` velero restore describe <RESTORE-NAME-FROM-GET-COMMAND> ``` Your data that was backed up from Cluster 1 should now be available on Cluster 2. If you encounter issues, make sure that Velero is running in the same namespace in both clusters."
}
] |
{
"category": "Runtime",
"file_name": "user-guide.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
} | [
{
"data": "<!-- toc --> - - - - - - - - <!-- /toc --> Antrea Multi-cluster implements , which allows users to create multi-cluster Services that can be accessed cross clusters in a ClusterSet. Antrea Multi-cluster also extends Antrea-native NetworkPolicy to support Multi-cluster NetworkPolicy rules that apply to cross-cluster traffic, and ClusterNetworkPolicy replication that allows a ClusterSet admin to create ClusterNetworkPolicies which are replicated across the entire ClusterSet and enforced in all member clusters. Antrea Multi-cluster was first introduced in Antrea v1.5.0. In Antrea v1.7.0, the Multi-cluster Gateway feature was added that supports routing multi-cluster Service traffic through tunnels among clusters. The ClusterNetworkPolicy replication feature is supported since Antrea v1.6.0, and Multi-cluster NetworkPolicy rules are supported since Antrea v1.10.0. Antrea v1.13 promoted the ClusterSet CRD version from v1alpha1 to v1alpha2. If you plan to upgrade from a previous version to v1.13 or later, please check the . Please refer to the to learn how to build a ClusterSet with two clusters quickly. In this guide, all Multi-cluster installation and ClusterSet configuration are done by applying Antrea Multi-cluster YAML manifests. Actually, all operations can also be done with `antctl` Multi-cluster commands, which may be more convenient in many cases. You can refer to the and to learn how to use the Multi-cluster commands. We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea version is set to an environment variable `TAG`. For example, the following command sets the Antrea version to `v1.8.0`. ```bash export TAG=v1.8.0 ``` To use the latest version of Antrea Multi-cluster from the Antrea main branch, you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/` when applying or downloading an Antrea YAML manifest. and , in particular configuration (please check the corresponding sections to learn more information), requires an Antrea Multi-cluster Gateway to be set up in each member cluster by default to route Service and Pod traffic across clusters. To support Multi-cluster Gateways, `antrea-agent` must be deployed with the `Multicluster` feature enabled in a member cluster. You can set the following configuration parameters in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster` feature: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true namespace: \"\" # Change to the Namespace where antrea-mc-controller is deployed. ``` In order for Multi-cluster features to work, it is necessary for `enableGateway` to be set to true by the user, except when Pod-to-Pod direct connectivity already exists (e.g., provided by the cloud provider) and `endpointIPType` is configured as `PodIP`. Details can be found in . Please note that always requires Gateway. Prior to Antrea v1.11.0, Multi-cluster Gateway only works with Antrea `encap` traffic mode, and all member clusters in a ClusterSet must use the same tunnel type. Since Antrea v1.11.0, Multi-cluster Gateway also works with the Antrea `noEncap`, `hybrid` and `networkPolicyOnly` modes. For `noEncap` and `hybrid` modes, Antrea Multi-cluster deployment is the same as `encap` mode. For `networkPolicyOnly` mode, we need extra Antrea configuration changes to support Multi-cluster Gateway. 
Please check for more information. When using Multi-cluster Gateway, it is not possible to enable WireGuard for inter-Node traffic within the same member cluster. It is however possible to [enable WireGuard for"
},
{
"data": "traffic](#multi-cluster-wireguard-encryption) between member clusters. A Multi-cluster ClusterSet is comprised of a single leader cluster and at least two member clusters. Antrea Multi-cluster Controller needs to be deployed in the leader and all member clusters. A cluster can serve as the leader, and meanwhile also be a member cluster of the ClusterSet. To deploy Multi-cluster Controller in a dedicated leader cluster, please refer to [Deploy in a Dedicated Leader cluster](#deploy-in-a-dedicated-leader-cluster). To deploy Multi-cluster Controller in a member cluster, please refer to . To deploy Multi-cluster Controller in a dual-role cluster, please refer to . Since Antrea v1.14.0, you can run the following command to install Multi-cluster Controller in the leader cluster. Multi-cluster Controller is deployed into a Namespace. You must create the Namespace first, and then apply the deployment manifest in the Namespace. For a version older than v1.14, please check the user guide document of the version: `https://github.com/antrea-io/antrea/blob/release-$version/docs/multicluster/user-guide.md`, where `$version` can be `1.12`, `1.13` etc. ```bash kubectl create ns antrea-multicluster kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml ``` The Multi-cluster Controller in the leader cluster will be deployed in Namespace `antrea-multicluster` by default. If you'd like to use another Namespace, you can change `antrea-multicluster` to the desired Namespace in `antrea-multicluster-leader-namespaced.yml`, for example: ```bash kubectl create ns '<desired-namespace>' curl -L https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml > antrea-multicluster-leader-namespaced.yml sed 's/antrea-multicluster/<desired-namespace>/g' antrea-multicluster-leader-namespaced.yml | kubectl apply -f - ``` You can run the following command to install Multi-cluster Controller in a member cluster. The command will run the controller in the \"member\" mode in the `kube-system` Namespace. If you want to use a different Namespace other than `kube-system`, you can edit `antrea-multicluster-member.yml` and change `kube-system` to the desired Namespace. ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` We need to run two instances of Multi-cluster Controller in the dual-role cluster, one in leader mode and another in member mode. Follow the steps in section to deploy the leader controller and import the Multi-cluster CRDs. Follow the steps in section to deploy the member controller. An Antrea Multi-cluster ClusterSet should include at least one leader cluster and two member clusters. As an example, in the following sections we will create a ClusterSet `test-clusterset` which has two member clusters with cluster ID `test-cluster-east` and `test-cluster-west` respectively, and one leader cluster with ID `test-cluster-north`. Please note that the name of a ClusterSet CR must match the ClusterSet ID. In all the member and leader clusters of a ClusterSet, the ClusterSet CR must use the ClusterSet ID as the name, e.g. `test-clusterset` in the example of this guide. We first need to set up access to the leader cluster's API server for all member clusters. We recommend creating one ServiceAccount for each member for fine-grained access control. The Multi-cluster Controller deployment manifest for a leader cluster also creates a default member cluster token. 
If you prefer to use the default token, you can skip step 1 and replace the Secret name `member-east-token` to the default token Secret `antrea-mc-member-access-token` in step 2. Apply the following YAML manifest in the leader cluster to set up access for `test-cluster-east`: ```yml apiVersion: v1 kind: ServiceAccount metadata: name: member-east namespace: antrea-multicluster apiVersion: v1 kind: Secret metadata: name: member-east-token namespace: antrea-multicluster annotations: kubernetes.io/service-account.name: member-east type: kubernetes.io/service-account-token apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: member-east namespace: antrea-multicluster roleRef: apiGroup:"
},
{
"data": "kind: Role name: antrea-mc-member-cluster-role subjects: kind: ServiceAccount name: member-east namespace: antrea-multicluster ``` Generate the token Secret manifest from the leader cluster, and create a Secret with the manifest in member cluster `test-cluster-east`, e.g.: ```bash kubectl get secret member-east-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > member-east-token.yml kubectl apply -f member-east-token.yml --kubeconfig=/path/to/kubeconfig-of-member-test-cluster-east ``` Replace all `east` to `west` and repeat step 1/2 for the other member cluster `test-cluster-west`. In all clusters, a `ClusterSet` CR must be created to define the ClusterSet and claim the cluster is a member of the ClusterSet. Create `ClusterSet` in the leader cluster `test-cluster-north` with the following YAML manifest (you can also refer to ): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: antrea-multicluster spec: clusterID: test-cluster-north leaders: clusterID: test-cluster-north ``` Create `ClusterSet` in member cluster `test-cluster-east` with the following YAML manifest (you can also refer to ): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-east leaders: clusterID: test-cluster-north secret: \"member-east-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` Note: update `server: \"https://172.18.0.1:6443\"` in the `ClusterSet` spec to the correct leader cluster API server address. Create `ClusterSet` in member cluster `test-cluster-west`: ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-west leaders: clusterID: test-cluster-north secret: \"member-west-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` If you want to make the leader cluster `test-cluster-north` also a member cluster of the ClusterSet, make sure you follow the steps in [Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster) and repeat the steps in as well (don't forget replace all `east` to `north` when you repeat the steps). Then create the `ClusterSet` CR in cluster `test-cluster-north` in the `kube-system` Namespace (where the member Multi-cluster Controller runs): ```yaml apiVersion: multicluster.crd.antrea.io/v1alpha2 kind: ClusterSet metadata: name: test-clusterset namespace: kube-system spec: clusterID: test-cluster-north leaders: clusterID: test-cluster-north secret: \"member-north-token\" server: \"https://172.18.0.1:6443\" namespace: antrea-multicluster ``` Multi-cluster Gateways are responsible for establishing tunnels between clusters. Each member cluster should have one Node serving as its Multi-cluster Gateway. Multi-cluster Service traffic is routed among clusters through the tunnels between Gateways. Below is a table about communication support for different configurations. 
| Pod-to-Pod connectivity provided by underlay | Gateway Enabled | MC EndpointTypes | Cross-cluster Service/Pod communications | | -- | | -- | - | | No | No | N/A | No | | Yes | No | PodIP | Yes | | No | Yes | PodIP/ClusterIP | Yes | | Yes | Yes | PodIP/ClusterIP | Yes | After a member cluster joins a ClusterSet, and the `Multicluster` feature is enabled on `antrea-agent`, you can select a Node of the cluster to serve as the Multi-cluster Gateway by adding an annotation: `multicluster.antrea.io/gateway=true` to the K8s Node. For example, you can run the following command to annotate Node `node-1` as the Multi-cluster Gateway: ```bash kubectl annotate node node-1 multicluster.antrea.io/gateway=true ``` You can annotate multiple Nodes in a member cluster as the candidates for Multi-cluster Gateway, but only one Node will be selected as the active Gateway. Before Antrea"
},
{
"data": "the Gateway Node is just randomly selected and will never change unless the Node or its `gateway` annotation is deleted. Starting with Antrea v1.9.0, Antrea Multi-cluster Controller will guarantee a \"ready\" Node is selected as the Gateway, and when the current Gateway Node's status changes to not \"ready\", Antrea will try selecting another \"ready\" Node from the candidate Nodes to be the Gateway. Once a Gateway Node is decided, Multi-cluster Controller in the member cluster will create a `Gateway` CR with the same name as the Node. You can check it with command: ```bash $ kubectl get gateway -n kube-system NAME GATEWAY IP INTERNAL IP AGE node-1 10.17.27.55 10.17.27.55 10s ``` `internalIP` of the Gateway is used for the tunnels between the Gateway Node and other Nodes in the local cluster, while `gatewayIP` is used for the tunnels to remote Gateways of other member clusters. Multi-cluster Controller discovers the IP addresses from the K8s Node resource of the Gateway Node. It will always use `InternalIP` of the K8s Node as the Gateway's `internalIP`. For `gatewayIP`, there are several possibilities: By default, the K8s Node's `InternalIP` is used as `gatewayIP` too. You can choose to use the K8s Node's `ExternalIP` as `gatewayIP`, by changing the configuration option `gatewayIPPrecedence` to value: `external`, when deploying the member Multi-cluster Controller. The configration option is defined in ConfigMap `antrea-mc-controller-config` in `antrea-multicluster-member.yml`. When the Gateway Node has a separate IP for external communication or is associated with a public IP (e.g. an Elastic IP on AWS), but the IP is not added to the K8s Node, you can still choose to use the IP as `gatewayIP`, by adding an annotation: `multicluster.antrea.io/gateway-ip=<ip-address>` to the K8s Node. When choosing a candidate Node for Multi-cluster Gateway, you need to make sure the resulted `gatewayIP` can be reached from the remote Gateways. You may need to properly to allow the tunnels between Gateway Nodes. As of now, only IPv4 Gateway IPs are supported. After the Gateway is created, Multi-cluster Controller will be responsible for exporting the cluster's network information to other member clusters through the leader cluster, including the cluster's Gateway IP and Service CIDR. Multi-cluster Controller will try to discover the cluster's Service CIDR automatically, but you can also manually specify the `serviceCIDR` option in ConfigMap `antrea-mc-controller-config`. In other member clusters, a ClusterInfoImport CR will be created for the cluster which includes the exported network information. For example, in cluster `test-cluster-west`, you you can see a ClusterInfoImport CR with name `test-cluster-east-clusterinfo` is created for cluster `test-cluster-east`: ```bash $ kubectl get clusterinfoimport -n kube-system NAME CLUSTER ID SERVICE CIDR AGE test-cluster-east-clusterinfo test-cluster-east 110.96.0.0/20 10s ``` Make sure you repeat the same step to assign a Gateway Node in all member clusters. Once you confirm that all `Gateway` and `ClusterInfoImport` are created correctly, you can follow the section to create multi-cluster Services and verify cross-cluster Service access. Since Antrea v1.12.0, Antrea Multi-cluster supports WireGuard tunnel between member clusters. If WireGuard is enabled, the WireGuard interface and routes will be created by Antrea Agent on the Gateway Node, and all cross-cluster traffic will be encrypted and forwarded to the WireGuard tunnel. 
Please note that WireGuard encryption requires the `wireguard` kernel module be present on the Kubernetes Nodes. `wireguard` module is part of mainline kernel since Linux"
},
{
"data": "Or, you can compile the module from source code with a kernel version >= 3.10. documents how to install WireGuard together with the kernel module on various operating systems. To enable the WireGuard encryption, the `TrafficEncryptMode` in Multi-cluster configuration should be set to `wireGuard` and the `enableGateway` field should be set to `true` as follows: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true trafficEncryptionMode: \"wireGuard\" wireGuard: port: 51821 ``` When WireGuard encryption is enabled for cross-cluster traffic as part of the Multi-cluster feature, in-cluster encryption (for traffic within a given member cluster) is no longer supported, not even with IPsec. After you set up a ClusterSet properly, you can create a `ServiceExport` CR to export a Service from one cluster to other clusters in the Clusterset, like the example below: ```yaml apiVersion: multicluster.x-k8s.io/v1alpha1 kind: ServiceExport metadata: name: nginx namespace: default ``` For example, once you export the `default/nginx` Service in member cluster `test-cluster-west`, it will be automatically imported in member cluster `test-cluster-east`. A Service and an Endpoints with name `default/antrea-mc-nginx` will be created in `test-cluster-east`, as well as a ServcieImport CR with name `default/nginx`. Now, Pods in `test-cluster-east` can access the imported Service using its ClusterIP, and the requests will be routed to the backend `nginx` Pods in `test-cluster-west`. You can check the imported Service and ServiceImport with commands: ```bash $ kubectl get serviceimport antrea-mc-nginx -n default NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE antrea-mc-nginx ClusterIP 10.107.57.62 <none> 443/TCP 10s $ kubectl get serviceimport nginx -n default NAME TYPE IP AGE nginx ClusterSetIP [\"10.19.57.62\"] 10s ``` As part of the Service export/import process, in the leader cluster, two ResourceExport CRs will be created in the Multi-cluster Controller Namespace, for the exported Service and Endpoints respectively, as well as two ResourceImport CRs. You can check them in the leader cluster with commands: ```bash $ kubectl get resourceexport -n antrea-multicluster NAME CLUSTER ID KIND NAMESPACE NAME AGE test-cluster-west-default-nginx-endpoints test-cluster-west Endpoints default nginx 30s test-cluster-west-default-nginx-service test-cluster-west Service default nginx 30s $ kubectl get resourceimport -n antrea-multicluster NAME KIND NAMESPACE NAME AGE default-nginx-endpoints Endpoints default nginx 99s default-nginx-service ServiceImport default nginx 99s ``` When there is any new change on the exported Service, the imported multi-cluster Service resources will be updated accordingly. Multiple member clusters can export the same Service (with the same name and Namespace). In this case, the imported Service in a member cluster will include endpoints from all the export clusters, and the Service requests will be load-balanced to all these clusters. Even when the client Pod's cluster also exported the Service, the Service requests may be routed to other clusters, and the endpoints from the local cluster do not take precedence. A Service cannot have conflicted definitions in different export clusters, otherwise only the first export will be replicated to other clusters; other exports as well as new updates to the Service will be ingored, until user fixes the conflicts. 
For example, after a member cluster exported a Service: `default/nginx` with TCP Port `80`, other clusters can only export the same Service with the same Ports definition including Port names. At the moment, Antrea Multi-cluster supports only IPv4 multi-cluster"
},
{
"data": "By default, a multi-cluster Service will use the exported Services' ClusterIPs (the original Service ClusterIPs in the export clusters) as Endpoints. Since Antrea v1.9.0, Antrea Multi-cluster also supports using the backend Pod IPs as the multi-cluster Service endpoints. You can change the value of configuration option `endpointIPType` in ConfigMap `antrea-mc-controller-config` from `ClusterIP` to `PodIP` to use Pod IPs as endpoints. All member clusters in a ClusterSet should use the same endpoint type. Existing ServiceExports should be re-exported after changing `endpointIPType`. `ClusterIP` type requires that Service CIDRs (ClusterIP ranges) must not overlap among member clusters, and always requires Multi-cluster Gateways to be configured. `PodIP` type requires Pod CIDRs not to overlap among clusters, and it also requires Multi-cluster Gateways when there is no direct Pod-to-Pod connectivity across clusters. Also refer to for more information. Since Antrea v1.9.0, Multi-cluster supports routing Pod traffic across clusters through Multi-cluster Gateways. Pod IPs can be reached in all member clusters within a ClusterSet. To enable this feature, the cluster's Pod CIDRs must be set in ConfigMap `antrea-mc-controller-config` of each member cluster and `multicluster.enablePodToPodConnectivity` must be set to `true` in the `antrea-agent` configuration. Note, Pod CIDRs must not overlap among clusters to enable cross-cluster Pod-to-Pod connectivity. ```yaml apiVersion: v1 kind: ConfigMap metadata: labels: app: antrea name: antrea-mc-controller-config namespace: kube-system data: controllermanagerconfig.yaml: | apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: MultiClusterConfig podCIDRs: \"10.10.1.1/16\" ``` ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enablePodToPodConnectivity: true ``` You can edit , or use `kubectl edit` to change the ConfigMap: ```bash kubectl edit configmap -n kube-system antrea-mc-controller-config ``` Normally, `podCIDRs` should be the value of `kube-controller-manager`'s `cluster-cidr` option. If it's left empty, the Pod-to-Pod connectivity feature will not be enabled. If you use `kubectl edit` to edit the ConfigMap, then you need to restart the `antrea-mc-controller` Pod to load the latest configuration. Antrea-native policies can be enforced on cross-cluster traffic in a ClusterSet. To enable Multi-cluster NetworkPolicy features, check the Antrea Controller and Agent ConfigMaps and make sure that `enableStretchedNetworkPolicy` is set to `true` in addition to enabling the `multicluster` feature gate: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-controller.conf: | featureGates: Multicluster: true multicluster: enableStretchedNetworkPolicy: true # required by both egress and ingres rules antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true enableStretchedNetworkPolicy: true # required by only ingress rules namespace: \"\" ``` Restricting Pod egress traffic to backends of a Multi-cluster Service (which can be on the same cluster of the source Pod or on a different cluster) is supported by Antrea-native policy's `toServices` feature in egress rules. 
To define such a policy, simply put the exported Service name and Namespace in the `toServices` field of an Antrea-native policy, and set `scope` of the `toServices` peer to `ClusterSet`: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-drop-tenant-to-secured-mc-service spec: priority: 1 tier: securityops appliedTo: podSelector: matchLabels: role: tenant egress: action: Drop toServices: name: secured-service # an exported Multi-cluster Service namespace: svcNamespace scope: ClusterSet ``` The `scope` field of `toServices` rules is supported since Antrea v1.10. For earlier versions of Antrea, an equivalent rule can be written by not specifying `scope` and providing the imported Service name instead (i.e. `antrea-mc-[svcName]`). Note that the scope of policy's `appliedTo` field will still be restricted to the cluster where the policy is created"
},
{
"data": "To enforce such a policy for all `role=tenant` Pods in the entire ClusterSet, use the feature described in the later section, and set the `clusterNetworkPolicy` field of the ResourceExport to the `acnp-drop-tenant-to-secured-mc-service` spec above. Such replication should only be performed by ClusterSet admins, who have clearance of creating ClusterNetworkPolicies in all clusters of a ClusterSet. Antrea-native policies now support selecting ingress peers in the ClusterSet scope (since v1.10.0). Policy rules can be created to enforce security postures on ingress traffic from all member clusters in a ClusterSet: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: drop-tenant-access-to-admin-namespace spec: appliedTo: namespaceSelector: matchLabels: role: admin priority: 1 tier: securityops ingress: action: Deny from: scope: ClusterSet namespaceSelector: matchLabels: role: tenant ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: db-svc-allow-ingress-from-client-only namespace: prod-us-west spec: appliedTo: podSelector: matchLabels: app: db priority: 1 tier: application ingress: action: Allow from: scope: ClusterSet podSelector: matchLabels: app: client action: Deny ``` As shown in the examples above, setting `scope` to `ClusterSet` expands the scope of the `podSelector` or `namespaceSelector` of an ingress peer to the entire ClusterSet that the policy is created in. Similar to egress rules, the scope of an ingress rule's `appliedTo` is still restricted to the local cluster. To use the ingress cross-cluster NetworkPolicy feature, the `enableStretchedNetworkPolicy` option needs to be set to `true` in `antrea-mc-controller-config`, for each `antrea-mc-controller` running in the ClusterSet. Refer to the on how to change the ConfigMap: ```yaml controllermanagerconfig.yaml: | apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: MultiClusterConfig enableStretchedNetworkPolicy: true ``` Note that currently ingress stretched NetworkPolicy only works with the Antrea `encap` traffic mode. Since Antrea v1.6.0, Multi-cluster admins can specify certain ClusterNetworkPolicies to be replicated and enforced across the entire ClusterSet. This is especially useful for ClusterSet admins who want all clusters in the ClusterSet to be applied with a consistent security posture (for example, all Namespaces in all clusters can only communicate with Pods in their own Namespaces). For more information regarding Antrea ClusterNetworkPolicy (ACNP), please refer to . To achieve such ACNP replication across clusters, admins can, in the leader cluster of a ClusterSet, create a `ResourceExport` CR of kind `AntreaClusterNetworkPolicy` which contains the ClusterNetworkPolicy spec they wish to be replicated. The `ResourceExport` should be created in the Namespace where the ClusterSet's leader Multi-cluster Controller runs. 
```yaml apiVersion: multicluster.crd.antrea.io/v1alpha1 kind: ResourceExport metadata: name: strict-namespace-isolation-for-test-clusterset namespace: antrea-multicluster # Namespace that Multi-cluster Controller is deployed spec: kind: AntreaClusterNetworkPolicy name: strict-namespace-isolation # In each importing cluster, an ACNP of name antrea-mc-strict-namespace-isolation will be created with the spec below clusterNetworkPolicy: priority: 1 tier: securityops appliedTo: namespaceSelector: {} # Selects all Namespaces in the member cluster ingress: action: Pass from: namespaces: match: Self # Skip drop rule for traffic from Pods in the same Namespace podSelector: matchLabels: k8s-app: kube-dns # Skip drop rule for traffic from the core-dns components action: Drop from: namespaceSelector: {} # Drop from Pods from all other Namespaces ``` The above sample spec will create an ACNP in each member cluster which implements strict Namespace isolation for that cluster. Note that because the Tier that an ACNP refers to must exist before the ACNP is applied, an importing cluster may fail to create the ACNP to be replicated, if the Tier in the ResourceExport spec cannot be found in that particular"
},
{
"data": "If there are such failures, the ACNP creation status of failed member clusters will be reported back to the leader cluster as K8s Events, and can be checked by describing the `ResourceImport` of the original `ResourceExport`: ```bash $ kubectl describe resourceimport -A Name: strict-namespace-isolation-antreaclusternetworkpolicy Namespace: antrea-multicluster API Version: multicluster.crd.antrea.io/v1alpha1 Kind: ResourceImport Spec: Clusternetworkpolicy: Applied To: Namespace Selector: Ingress: Action: Pass Enable Logging: false From: Namespaces: Match: Self Pod Selector: Match Labels: k8s-app: kube-dns Action: Drop Enable Logging: false From: Namespace Selector: Priority: 1 Tier: random Kind: AntreaClusterNetworkPolicy Name: strict-namespace-isolation ... Events: Type Reason Age From Message - - - Warning ACNPImportFailed 2m11s resourceimport-controller ACNP Tier random does not exist in the importing cluster test-cluster-west ``` In future releases, some additional tooling may become available to automate the creation of ResourceExports for ACNPs, and provide a user-friendly way to define Multi-cluster NetworkPolicies to be enforced in the ClusterSet. If you'd like to build Multi-cluster Controller Docker image locally, you can follow the following steps: Go to your local `antrea` source tree, run `make build-antrea-mc-controller`, and you will get a new image named `antrea/antrea-mc-controller:latest` locally. Run `docker save antrea/antrea-mc-controller:latest > antrea-mcs.tar` to save the image. Copy the image file `antrea-mcs.tar` to the Nodes of your local cluster. Run `docker load < antrea-mcs.tar` in each Node of your local cluster. If you want to remove a member cluster from a ClusterSet and uninstall Antrea Multi-cluster, please follow the following steps. Note: please replace `kube-system` with the right Namespace in the example commands and manifest if Antrea Multi-cluster is not deployed in the default Namespace. Delete all ServiceExports and the Multi-cluster Gateway annotation on the Gateway Nodes. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be responsible for cleaning up all resources created by itself automatically. Delete the Antrea Multi-cluster Deployment: ```bash kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` If you want to delete a ClusterSet and uninstall Antrea Multi-cluster in a leader cluster, please follow the following steps. You should first before removing a leader cluster from a ClusterSet. Note: please replace `antrea-multicluster` with the right Namespace in the following example commands and manifest if Antrea Multi-cluster is not deployed in the default Namespace. Delete AntreaClusterNetworkPolicy ResourceExports in the leader cluster. Verify that there is no remaining MemberClusterAnnounces. ```bash kubectl get memberclusterannounce -n antrea-multicluster ``` Delete the ClusterSet CR. Antrea Multi-cluster Controller will be responsible for cleaning up all resources created by itself automatically. Check there is no remaining ResourceExports and ResourceImports: ```bash kubectl get resourceexports -n antrea-multicluster kubectl get resourceimports -n antrea-multicluster ``` Note: you can follow the to delete the left-over ResourceExports. 
Delete the Antrea Multi-cluster Deployment: ```bash kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml ``` We recommend user to redeploy or update Antrea Multi-cluster Controller through `kubectl apply`. If you are using `kubectl delete -f ` and `kubectl create -f ` to redeploy Controller in the leader cluster, you might encounter in `ResourceExport` CRD cleanup. To avoid this issue, please delete any `ResourceExport` CRs in the leader cluster first, and make sure `kubectl get resourceexport -A` returns empty result before you can redeploy Multi-cluster Controller. All `ResourceExports` can be deleted with the following command: ```bash kubectl get resourceexport -A -o json | jq -r '.items[]|[.metadata.namespace,.metadata.name]|join(\" \")' | xargs -n2 bash -c 'kubectl delete -n $0 resourceexport/$1' ```"
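As an additional sanity check before redeploying, the resources referenced earlier in this guide can be listed in the leader cluster; this is only a sketch and assumes the default Namespace layout used above.

```bash
# No ResourceExports or ResourceImports should remain in the leader cluster.
kubectl get resourceexports -n antrea-multicluster
kubectl get resourceimports -n antrea-multicluster

# The ClusterSet and member announcements can be inspected before and after redeploy.
kubectl get clustersets -A
kubectl get memberclusterannounce -n antrea-multicluster
```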
}
] |
{
"category": "Runtime",
"file_name": "release.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "name: Release task about: Create a release task title: \"[RELEASE]\" labels: release/task assignees: '' What's the task? Please describe. Action items for releasing v<x.y.z> Roles Release captain: <!--responsible for RD efforts of release development and coordinating with QA captain--> QA captain: <!--responsible for coordinating QA efforts of release testing tasks--> Describe the sub-tasks. Pre-Release (QA captain needs to coordinate the following efforts and finish these items) [ ] Regression test plan (manual) - QA captain [ ] Run e2e regression for pre-GA milestones (`install`, `upgrade`) - @yangchiu [ ] Run security testing of container images for pre-GA milestones - @roger-ryao [ ] Verify longhorn chart PR to ensure all artifacts are ready for GA (`install`, `upgrade`) @chriscchien [ ] Run core testing (install, upgrade) for the GA build from the previous patch (1.5.4) and the last patch of the previous feature release (1.4.4). - @yangchiu Release (Release captain needs to finish the following items) [ ] Release longhorn/chart from the release branch to publish to ArtifactHub [ ] Release note [ ] Deprecation note [ ] Upgrade notes including highlighted notes, deprecation, compatible changes, and others impacting the current users Post-Release (Release captain needs to coordinate the following items) [ ] Create a new release branch of manager/ui/tests/engine/longhorn instance-manager/share-manager/backing-image-manager when creating the RC1 (only for new feature release) [ ] Update https://github.com/longhorn/longhorn/blob/master/deploy/upgraderesponderserver/chart-values.yaml @PhanLe1010 [ ] Add another request for the rancher charts for the next patch release (`1.5.6`) @rebeccazzzz Rancher charts: verify the chart can be installed & upgraded - @khushboo-rancher [ ] rancher/image-mirrors update @PhanLe1010 [ ] rancher/charts active branches (2.7 & 2.8) for rancher marketplace @mantissahz @PhanLe1010 cc @longhorn/qa @longhorn/dev"
}
] |
{
"category": "Runtime",
"file_name": "pull_request_template.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "Describe the PR e.g. add cool parser. Relation issue e.g. https://github.com/swaggo/gin-swagger/pull/123/files Additional context Add any other context about the problem here."
}
] |
{
"category": "Runtime",
"file_name": "ROADMAP.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
} | [
{
"data": "This document defines a high level roadmap for rkt development. The dates below should not be considered authoritative, but rather indicative of the projected timeline of the project. The represent the most up-to-date state of affairs. rkt's version 1.0 release marks the command line user interface and on-disk data structures as stable and reliable for external development. Adapting rkt to offer first-class implementation of the Kubernetes Container Runtime Interface. Supporting OCI specs natively in rkt. Following OCI evolution and stabilization, it will become the preferred way over appc. However, rkt will continue to support the ACI image format and distribution mechanism. There is currently no plans to remove that support from rkt. Future tasks without a specific timeline are tracked at https://github.com/rkt/rkt/milestone/30."
}
] |
{
"category": "Runtime",
"file_name": "VERSIONING.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "We follow the , and use the corresponding tooling. For the purposes of the aforementioned guidelines, controller-runtime counts as a \"library project\", but otherwise follows the guidelines exactly. For release branches, we generally tend to support backporting one (1) major release (`release-{X-1}` or `release-0.{Y-1}`), but may go back further if the need arises and is very pressing (e.g. security updates). Note the . Particularly: We DO guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. We DO NOT guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned."
}
] |
{
"category": "Runtime",
"file_name": "roadmap.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
} | [
{
"data": "The initial roadmap of CRI-O was lightweight and followed the main Kubernetes Container Runtime Interface (CRI) development lifecycle. This is partially because additional features on top of that are either integrated into a CRI-O release as soon as theyre ready, or are tracked through the Milestone mechanism in GitHub. Another reason is that feature availability is mostly tied to Kubernetes releases, and thus most of its long-term goals are already tracked in through the Kubernetes Enhancement Proposal (KEP) process. Finally, CRI-Os long-term roadmap outside of features being added by SIG-Node is in part described by its mission: to be a secure, performant and stable implementation of the Container Runtime Interface. However, all of these together do construct a roadmap, and this document will describe how. CRI-Os primary internal planning mechanism is the Milestone feature in GitHub, along with Issues. Since CRI-O releases in lock-step with Kubernetes minor releases, where the CRI-O community aims to have a x.y.0 release released within three days after the corresponding Kubernetes x.y.0 release, there is a well established deadline that must be met. For PRs and Issues that are targeted at a particular x.y.0 release can be added to the x.y Milestone and they will be considered for priority in leading up to the actual release. However, since CRI-Os releases are time bound and partially out of the CRI-O communities control, tagging a PR or issue with a Milestone does not guarantee it will be included. Users or contributors who dont have permissions to add the Milestone can request an Approver to do so. If there is disagreement, the standard mechanism will be used. Pull requests targeting patch release-x.y branches are not part of any milestone. The release branches are decoupled from the and fixes can be merged ad-hoc. The support for patch release branches follows the yearly Kubernetes period and can be longer based on contributor bandwidth. CRI-Os primary purpose is to be a CRI compliant runtime for Kubernetes, and thus most of the features that CRI-O adds are added to remain conformant to the"
},
{
"data": "Often, though not always, CRI-O will attempt to support new features in Kubernetes while theyre in the Alpha stage, though sometimes this target is missed and support is added while the feature is in Beta. To track the features that may be added to CRI-O from upstream, one can watch for a given release. If a particular feature interests you, the CRI-O community recommends you open an issue in CRI-O so it can be included in the Milestone for that given release. CRI-O maintainers are involved in SIG-Node in driving various upstream initiatives that can be tracked in the . There still exist features that CRI-O will add that exist outside of the purview of SIG-Node, and span multiple releases. These features are usually implemented to fulfill CRI-Os purpose of being secure, performant, and stable. As all of these are aspirational and seek to improve CRI-O structurally, as opposed to fixing a bug or clearly adding a feature, its less appropriate to have an issue for them. As such, updates to this document will be made once per release cycle. Finally, it is worth noting that the integration of these features will be deliberate, slow, and strictly opted into. CRI-O does aim to constantly improve, but also aims to never compromise its stability in the process. Some of these features can be seen below: Improve upstream documentation Automate release process Improved seccomp notification support Increase pod density on nodes: Reduce overhead of relisting pods and containers (alongside ) Reduce overhead of metrics collection (alongside ) Reduce process overhead of multiple conmons per pod (through ) Improve maintainability by ensuring the code is easy to understand and follow Improve observability and tracing internally Evaluate rust reimplementation of different pieces of the codebase. Relying on different SIGs for CRI-O features: We have a need to discuss our enhancements with different SIGs to get all required information and drive the change. This can lead into helpful, but maybe not expected input and delay the deliverable. Some features require initial research: We're not completely aware of all technical aspects for the changes. This means that there is a risk of delaying because of investing more time in pre-research."
}
] |
{
"category": "Runtime",
"file_name": "pull_request_template.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
} | [
{
"data": "<!-- Thank you for contributing to curve! --> Issue Number: #xxx <!-- replace xxx with issue number --> Problem Summary: What's Changed: How it Works: Side effects(Breaking backward compatibility? Performance regression?): [ ] Relevant documentation/comments is changed or added [ ] I acknowledge that all my contributions will be made under the project's license"
}
] |
{
"category": "Runtime",
"file_name": "fuzzy_mode_convert_table.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
} | [
{
"data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|"
}
] |
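The conversion rules in the record above can be illustrated with a small, self-contained sketch. This is not Spiderpool's actual decoder; the function name and structure are hypothetical, chosen only to show how the "string => int" row (e.g. `"123.23xxxw" => 123`, `"abcde12" => 0`) could behave under those rules.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fuzzyStringToInt mimics the string-to-int column of the table:
// it reads an optional sign plus the leading numeric prefix, truncates
// any fractional part, and returns 0 when the string has no leading digits.
// Illustrative sketch only; not the project's real implementation.
func fuzzyStringToInt(s string) int {
	s = strings.TrimSpace(s)
	if s == "" {
		return 0
	}
	i := 0
	// Optional leading sign.
	if s[0] == '+' || s[0] == '-' {
		i++
	}
	// Consume digits and at most one decimal point.
	sawDigit, sawDot := false, false
	for ; i < len(s); i++ {
		c := s[i]
		if c >= '0' && c <= '9' {
			sawDigit = true
			continue
		}
		if c == '.' && !sawDot {
			sawDot = true
			continue
		}
		break // stop at the first non-numeric character, e.g. the "xxxw" suffix
	}
	if !sawDigit {
		return 0 // "abcde12" => 0: the digits are not a leading prefix
	}
	f, err := strconv.ParseFloat(s[:i], 64)
	if err != nil {
		return 0
	}
	return int(f) // truncation toward zero: "123.23xxxw" => 123, "-123.4" => -123
}

func main() {
	for _, in := range []string{"123.32", "-123.4", "123.23xxxw", "abcde12", "-32.1"} {
		fmt.Printf("%q => %d\n", in, fuzzyStringToInt(in))
	}
}
```

Running the sketch reproduces the values listed in the table's string/int cell, which is the easiest way to sanity-check a fuzzy-decoding rule set like this one.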