rsync
rsync(1) - Linux manual page

rsync(1)                      User Commands                      rsync(1)

NAME
rsync - a fast, versatile, remote (and local) file-copying tool

SYNOPSIS
Local:
    rsync [OPTION...] SRC... [DEST]

Access via remote shell:
    Pull:
        rsync [OPTION...] [USER@]HOST:SRC... [DEST]
    Push:
        rsync [OPTION...] SRC... [USER@]HOST:DEST

Access via rsync daemon:
    Pull:
        rsync [OPTION...] [USER@]HOST::SRC... [DEST]
        rsync [OPTION...] rsync://[USER@]HOST[:PORT]/SRC... [DEST]
    Push:
        rsync [OPTION...] SRC... [USER@]HOST::DEST
        rsync [OPTION...] SRC... rsync://[USER@]HOST[:PORT]/DEST

Usages with just one SRC arg and no DEST arg will list the source files instead of copying.

The online version of this manpage (that includes cross-linking of topics) is available at https://download.samba.org/pub/rsync/rsync.1.

DESCRIPTION
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.
Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.

Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.

Some of the additional features of rsync are:

o  support for copying links, devices, owners, groups, and permissions
o  exclude and exclude-from options similar to GNU tar
o  a CVS exclude mode for ignoring the same files that CVS would ignore
o  can use any transparent remote shell, including ssh or rsh
o  does not require super-user privileges
o  pipelining of file transfers to minimize latency costs
o  support for anonymous or authenticated rsync daemons (ideal for mirroring)

GENERAL
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).

There are two different ways for rsync to contact a remote system: using a remote-shell program as the transport (such as ssh or rsh) or contacting an rsync daemon directly via TCP. The remote-shell transport is used whenever the source or destination path contains a single colon (:) separator after a host specification. Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified (see also the USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION section for an exception to this latter rule).

As a special case, if a single source arg is specified without a destination, the files are listed in an output format similar to "ls -l".

As expected, if neither the source nor the destination path specifies a remote host, the copy occurs locally (see also the --list-only option).
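The single-source listing behavior described above is easy to try locally. This is a minimal sketch using hypothetical /tmp paths (any writable directory would do):

```shell
# Set up a throwaway source directory (hypothetical path).
mkdir -p /tmp/rsync_list_demo/src
echo data > /tmp/rsync_list_demo/src/notes.txt

# One SRC arg and no DEST: rsync lists the files instead of copying,
# in an "ls -l"-like format.
rsync /tmp/rsync_list_demo/src/ > /tmp/rsync_list_demo/listing.txt
cat /tmp/rsync_list_demo/listing.txt
```

The listing shows the directory itself (as ".") plus its immediate contents, including notes.txt.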
Rsync refers to the local side as the client and the remote side as the server. Don't confuse server with an rsync daemon. A daemon is always a server, but a server can be either a daemon or a remote-shell spawned process.

SETUP
See the file README.md for installation instructions.

Once installed, you can use rsync to any machine that you can access via a remote shell (as well as some that you can access using the rsync daemon-mode protocol). For remote transfers, a modern rsync uses ssh for its communications, but it may have been configured to use a different remote shell by default, such as rsh or remsh.

You can also specify any remote shell you like, either by using the -e command line option, or by setting the RSYNC_RSH environment variable.

Note that rsync must be installed on both the source and destination machines.

USAGE
You use rsync in the same way you use rcp. You must specify a source and a destination, one of which may be remote.

Perhaps the best way to explain the syntax is with some examples:

    rsync -t *.c foo:src/

This would transfer all files matching the pattern *.c from the current directory to the directory src on the machine foo. If any of the files already exist on the remote system then the rsync remote-update protocol is used to update the file by sending only the differences in the data. Note that the expansion of wildcards on the command-line (*.c) into a list of files is handled by the shell before it runs rsync and not by rsync itself (exactly the same as all other POSIX-style programs).

    rsync -avz foo:src/bar /data/tmp

This would recursively transfer all files from the directory src/bar on the machine foo into the /data/tmp/bar directory on the local machine. The files are transferred in "archive mode", which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer. Additionally, compression will be used to reduce the size of data portions of the transfer.
    rsync -avz foo:src/bar/ /data/tmp

A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name", but in both cases the attributes of the containing directory are transferred to the containing directory on the destination. In other words, each of the following commands copies the files in the same way, including their setting of the attributes of /dest/foo:

    rsync -av /src/foo /dest
    rsync -av /src/foo/ /dest/foo

Note also that host and module references don't require a trailing slash to copy the contents of the default directory. For example, both of these copy the remote directory's contents into "/dest":

    rsync -av host: /dest
    rsync -av host::module /dest

You can also use rsync in local-only mode, where both the source and destination don't have a ':' in the name. In this case it behaves like an improved copy command.

Finally, you can list all the (listable) modules available from a particular rsync daemon by leaving off the module name:

    rsync somehost.mydomain.com::

COPYING TO A DIFFERENT NAME
When you want to copy a directory to a different name, use a trailing slash on the source directory to put the contents of the directory into any destination directory you like:

    rsync -ai foo/ bar/

Rsync also has the ability to customize a destination file's name when copying a single item. The rules for this are:

o  The transfer list must consist of a single item (either a file or an empty directory)
o  The final element of the destination path must not exist as a directory
o  The destination path must not have been specified with a trailing slash

Under those circumstances, rsync will set the name of the destination's single item to the last element of the destination path.
Keep in mind that it is best to only use this idiom when copying a file, and to use the above trailing-slash idiom when copying a directory. The following example copies the foo.c file as bar.c in the save dir (assuming that bar.c isn't a directory):

    rsync -ai src/foo.c save/bar.c

The single-item copy rule might accidentally bite you if you unknowingly copy a single item and specify a destination dir that doesn't exist (without using a trailing slash). For example, if src/*.c matches one file and save/dir doesn't exist, this will confuse you by naming the destination file save/dir:

    rsync -ai src/*.c save/dir

To prevent such an accident, either make sure the destination dir exists or specify the destination path with a trailing slash:

    rsync -ai src/*.c save/dir/

SORTED TRANSFER ORDER
Rsync always sorts the specified filenames into its internal transfer list. This handles the merging together of the contents of identically named directories and makes it easy to remove duplicate filenames. It can, however, confuse someone when the files are transferred in a different order than what was given on the command-line.

If you need a particular file to be transferred prior to another, either separate the files into different rsync calls, or consider using --delay-updates (which doesn't affect the sorted transfer order, but does make the final file-updating phase happen much more rapidly).

MULTI-HOST SECURITY
Rsync takes steps to ensure that the file requests that are shared in a transfer are protected against various security issues. Most of the potential problems arise on the receiving side, where rsync takes steps to ensure that the list of files being transferred remains within the bounds of what was requested. Toward this end, rsync 3.1.2 and later have aborted when a file list contains an absolute or relative path that tries to escape out of the top of the transfer.
Also, beginning with version 3.2.5, rsync does two more safety checks of the file list to (1) ensure that no extra source arguments were added into the transfer other than those that the client requested and (2) ensure that the file list obeys the exclude rules that were sent to the sender.

For those that don't yet have a 3.2.5 client rsync (or those that want to be extra careful), it is safest to do a copy into a dedicated destination directory for the remote files when you don't trust the remote host. For example, instead of doing an rsync copy into your home directory:

    rsync -aiv host1:dir1 ~

Dedicate a "host1-files" dir to the remote content:

    rsync -aiv host1:dir1 ~/host1-files

See the --trust-sender option for additional details.

CAUTION: it is not particularly safe to use rsync to copy files from a case-preserving filesystem to a case-ignoring filesystem. If you must perform such a copy, you should either disable symlinks via --no-links or enable the munging of symlinks via --munge-links (and make sure you use the right local or remote option). This will prevent rsync from doing potentially dangerous things if a symlink name overlaps with a file or directory. It does not, however, ensure that you get a full copy of all the files (since that may not be possible when the names overlap).

A potentially better solution is to list all the source files and create a safe list of filenames that you pass to the --files-from option. Any files that conflict in name would need to be copied to different destination directories using more than one copy.

While a copy of a case-ignoring filesystem to a case-ignoring filesystem can work out fairly well, if no --delete-during or --delete-before option is active, rsync can potentially update an existing file on the receiving side without noticing that the upper-/lower-case of the filename should be changed to match the sender.
ADVANCED USAGE
The syntax for requesting multiple files from a remote host is done by specifying additional remote-host args in the same style as the first, or with the hostname omitted. For instance, all these work:

    rsync -aiv host:file1 :file2 host:file{3,4} /dest/
    rsync -aiv host::modname/file{1,2} host::modname/extra /dest/
    rsync -aiv host::modname/first ::extra-file{1,2} /dest/

Note that a daemon connection only supports accessing one module per copy command, so if the start of a follow-up path doesn't begin with the modname of the first path, it is assumed to be a path in the module (such as the extra-file1 & extra-file2 that are grabbed above).

Really old versions of rsync (2.6.9 and before) only allowed specifying one remote-source arg, so some people have instead relied on the remote-shell performing space splitting to break up an arg into multiple paths. Such unintuitive behavior is no longer supported by default (though you can request it, as described below).

Starting in 3.2.4, filenames are passed to a remote shell in such a way as to preserve the characters you give it. Thus, if you ask for a file with spaces in the name, that's what the remote rsync looks for:

    rsync -aiv host:'a simple file.pdf' /dest/

If you use scripts that have been written to manually apply extra quoting to the remote rsync args (or to require remote arg splitting), you can ask rsync to let your script handle the extra escaping. This is done by either adding the --old-args option to the rsync runs in the script (which requires a new rsync) or exporting RSYNC_OLD_ARGS=1 and RSYNC_PROTECT_ARGS=0 (which works with old or new rsync versions).

CONNECTING TO AN RSYNC DAEMON
It is also possible to use rsync without a remote shell as the transport. In this case you will directly connect to a remote rsync daemon, typically using TCP port 873.
(This obviously requires the daemon to be running on the remote system, so refer to the STARTING AN RSYNC DAEMON TO ACCEPT CONNECTIONS section below for information on that.)

Using rsync in this way is the same as using it with a remote shell except that:

o  Use either double-colon syntax or rsync:// URL syntax instead of the single-colon (remote shell) syntax.
o  The first element of the "path" is actually a module name.
o  Additional remote source args can use an abbreviated syntax that omits the hostname and/or the module name, as discussed in ADVANCED USAGE.
o  The remote daemon may print a "message of the day" when you connect.
o  If you specify only the host (with no module or path) then a list of accessible modules on the daemon is output.
o  If you specify a remote source path but no destination, a listing of the matching files on the remote daemon is output.
o  The --rsh (-e) option must be omitted to avoid changing the connection style from using a socket connection to USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION.

An example that copies all the files in a remote module named "src":

    rsync -av host::src /dest

Some modules on the remote daemon may require authentication. If so, you will receive a password prompt when you connect. You can avoid the password prompt by setting the environment variable RSYNC_PASSWORD to the password you want to use or using the --password-file option. This may be useful when scripting rsync.

WARNING: On some systems environment variables are visible to all users. On those systems using --password-file is recommended.

You may establish the connection via a web proxy by setting the environment variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy. Note that your web proxy's configuration must support proxy connections to port 873.
You may also establish a daemon connection using a program as a proxy by setting the environment variable RSYNC_CONNECT_PROG to the commands you wish to run in place of making a direct socket connection. The string may contain the escape "%H" to represent the hostname specified in the rsync command (so use "%%" if you need a single "%" in your string). For example:

    export RSYNC_CONNECT_PROG='ssh proxyhost nc %H 873'
    rsync -av targethost1::module/src/ /dest/
    rsync -av rsync://targethost2/module/src/ /dest/

The command specified above uses ssh to run nc (netcat) on a proxyhost, which forwards all data to port 873 (the rsync daemon) on the targethost (%H).

Note also that if the RSYNC_SHELL environment variable is set, that program will be used to run the RSYNC_CONNECT_PROG command instead of using the default shell of the system() call.

USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION
It is sometimes useful to use various features of an rsync daemon (such as named modules) without actually allowing any new socket connections into a system (other than what is already required to allow remote-shell access).

Rsync supports connecting to a host using a remote shell and then spawning a single-use "daemon" server that expects to read its config file in the home dir of the remote user. This can be useful if you want to encrypt a daemon-style transfer's data, but since the daemon is started up fresh by the remote user, you may not be able to use features such as chroot or change the uid used by the daemon. (For another way to encrypt a daemon transfer, consider using ssh to tunnel a local port to a remote machine and configure a normal rsync daemon on that remote host to only allow connections from "localhost".)
From the user's perspective, a daemon transfer via a remote-shell connection uses nearly the same command-line syntax as a normal rsync-daemon transfer, with the only exception being that you must explicitly set the remote shell program on the command-line with the --rsh=COMMAND option. (Setting the RSYNC_RSH in the environment will not turn on this functionality.) For example:

    rsync -av --rsh=ssh host::module /dest

If you need to specify a different remote-shell user, keep in mind that the user@ prefix in front of the host is specifying the rsync-user value (for a module that requires user-based authentication). This means that you must give the '-l user' option to ssh when specifying the remote-shell, as in this example that uses the short version of the --rsh option:

    rsync -av -e "ssh -l ssh-user" rsync-user@host::module /dest

The "ssh-user" will be used at the ssh level; the "rsync-user" will be used to log-in to the "module".

In this setup, the daemon is started by the ssh command that is accessing the system (which can be forced via the ~/.ssh/authorized_keys file, if desired). However, when accessing a daemon directly, it needs to be started beforehand.

STARTING AN RSYNC DAEMON TO ACCEPT CONNECTIONS
In order to connect to an rsync daemon, the remote system needs to have a daemon already running (or it needs to have configured something like inetd to spawn an rsync daemon for incoming connections on a particular port). For full information on how to start a daemon that will handle incoming socket connections, see the rsyncd.conf(5) manpage -- that is the config file for the daemon, and it contains the full details for how to run the daemon (including stand-alone and inetd configurations).

If you're using one of the remote-shell transports for the transfer, there is no need to manually start an rsync daemon.

EXAMPLES
Here are some examples of how rsync can be used.
To back up a home directory, which consists of large MS Word files and mail folders, a per-user cron job can be used that runs this each day:

    rsync -aiz . bkhost:backup/joe/

To move some files from a remote host to the local host, you could run:

    rsync -aiv --remove-source-files rhost:/tmp/{file1,file2}.c ~/src/

OPTION SUMMARY
Here is a short summary of the options available in rsync. Each option also has its own detailed description later in this manpage.

    --verbose, -v            increase verbosity
    --info=FLAGS             fine-grained informational verbosity
    --debug=FLAGS            fine-grained debug verbosity
    --stderr=e|a|c           change stderr output mode (default: errors)
    --quiet, -q              suppress non-error messages
    --no-motd                suppress daemon-mode MOTD
    --checksum, -c           skip based on checksum, not mod-time & size
    --archive, -a            archive mode is -rlptgoD (no -A,-X,-U,-N,-H)
    --no-OPTION              turn off an implied OPTION (e.g. --no-D)
    --recursive, -r          recurse into directories
    --relative, -R           use relative path names
    --no-implied-dirs        don't send implied dirs with --relative
    --backup, -b             make backups (see --suffix & --backup-dir)
    --backup-dir=DIR         make backups into hierarchy based in DIR
    --suffix=SUFFIX          backup suffix (default ~ w/o --backup-dir)
    --update, -u             skip files that are newer on the receiver
    --inplace                update destination files in-place
    --append                 append data onto shorter files
    --append-verify          --append w/old data in file checksum
    --dirs, -d               transfer directories without recursing
    --old-dirs, --old-d      works like --dirs when talking to old rsync
    --mkpath                 create destination's missing path components
    --links, -l              copy symlinks as symlinks
    --copy-links, -L         transform symlink into referent file/dir
    --copy-unsafe-links      only "unsafe" symlinks are transformed
    --safe-links             ignore symlinks that point outside the tree
    --munge-links            munge symlinks to make them safe & unusable
    --copy-dirlinks, -k      transform symlink to dir into referent dir
    --keep-dirlinks, -K      treat symlinked dir on receiver as dir
    --hard-links, -H         preserve hard links
    --perms, -p              preserve permissions
    --executability, -E      preserve executability
    --chmod=CHMOD            affect file and/or directory permissions
    --acls, -A               preserve ACLs (implies --perms)
    --xattrs, -X             preserve extended attributes
    --owner, -o              preserve owner (super-user only)
    --group, -g              preserve group
    --devices                preserve device files (super-user only)
    --copy-devices           copy device contents as a regular file
    --write-devices          write to devices as files (implies --inplace)
    --specials               preserve special files
    -D                       same as --devices --specials
    --times, -t              preserve modification times
    --atimes, -U             preserve access (use) times
    --open-noatime           avoid changing the atime on opened files
    --crtimes, -N            preserve create times (newness)
    --omit-dir-times, -O     omit directories from --times
    --omit-link-times, -J    omit symlinks from --times
    --super                  receiver attempts super-user activities
    --fake-super             store/recover privileged attrs using xattrs
    --sparse, -S             turn sequences of nulls into sparse blocks
    --preallocate            allocate dest files before writing them
    --dry-run, -n            perform a trial run with no changes made
    --whole-file, -W         copy files whole (w/o delta-xfer algorithm)
    --checksum-choice=STR    choose the checksum algorithm (aka --cc)
    --one-file-system, -x    don't cross filesystem boundaries
    --block-size=SIZE, -B    force a fixed checksum block-size
    --rsh=COMMAND, -e        specify the remote shell to use
    --rsync-path=PROGRAM     specify the rsync to run on remote machine
    --existing               skip creating new files on receiver
    --ignore-existing        skip updating files that exist on receiver
    --remove-source-files    sender removes synchronized files (non-dir)
    --del                    an alias for --delete-during
    --delete                 delete extraneous files from dest dirs
    --delete-before          receiver deletes before xfer, not during
    --delete-during          receiver deletes during the transfer
    --delete-delay           find deletions during, delete after
    --delete-after           receiver deletes after transfer, not during
    --delete-excluded        also delete excluded files from dest dirs
    --ignore-missing-args    ignore missing source args without error
    --delete-missing-args    delete missing source args from destination
    --ignore-errors          delete even if there are I/O errors
    --force                  force deletion of dirs even if not empty
    --max-delete=NUM         don't delete more than NUM files
    --max-size=SIZE          don't transfer any file larger than SIZE
    --min-size=SIZE          don't transfer any file smaller than SIZE
    --max-alloc=SIZE         change a limit relating to memory alloc
    --partial                keep partially transferred files
    --partial-dir=DIR        put a partially transferred file into DIR
    --delay-updates          put all updated files into place at end
    --prune-empty-dirs, -m   prune empty directory chains from file-list
    --numeric-ids            don't map uid/gid values by user/group name
    --usermap=STRING         custom username mapping
    --groupmap=STRING        custom groupname mapping
    --chown=USER:GROUP       simple username/groupname mapping
    --timeout=SECONDS        set I/O timeout in seconds
    --contimeout=SECONDS     set daemon connection timeout in seconds
    --ignore-times, -I       don't skip files that match size and time
    --size-only              skip files that match in size
    --modify-window=NUM, -@  set the accuracy for mod-time comparisons
    --temp-dir=DIR, -T       create temporary files in directory DIR
    --fuzzy, -y              find similar file for basis if no dest file
    --compare-dest=DIR       also compare destination files relative to DIR
    --copy-dest=DIR          ... and include copies of unchanged files
    --link-dest=DIR          hardlink to files in DIR when unchanged
    --compress, -z           compress file data during the transfer
    --compress-choice=STR    choose the compression algorithm (aka --zc)
    --compress-level=NUM     explicitly set compression level (aka --zl)
    --skip-compress=LIST     skip compressing files with suffix in LIST
    --cvs-exclude, -C        auto-ignore files in the same way CVS does
    --filter=RULE, -f        add a file-filtering RULE
    -F                       same as --filter='dir-merge /.rsync-filter'
                             repeated: --filter='- .rsync-filter'
    --exclude=PATTERN        exclude files matching PATTERN
    --exclude-from=FILE      read exclude patterns from FILE
    --include=PATTERN        don't exclude files matching PATTERN
    --include-from=FILE      read include patterns from FILE
    --files-from=FILE        read list of source-file names from FILE
    --from0, -0              all *-from/filter files are delimited by 0s
    --old-args               disable the modern arg-protection idiom
    --secluded-args, -s      use the protocol to safely send the args
    --trust-sender           trust the remote sender's file list
    --copy-as=USER[:GROUP]   specify user & optional group for the copy
    --address=ADDRESS        bind address for outgoing socket to daemon
    --port=PORT              specify double-colon alternate port number
    --sockopts=OPTIONS       specify custom TCP options
    --blocking-io            use blocking I/O for the remote shell
    --outbuf=N|L|B           set out buffering to None, Line, or Block
    --stats                  give some file-transfer stats
    --8-bit-output, -8       leave high-bit chars unescaped in output
    --human-readable, -h     output numbers in a human-readable format
    --progress               show progress during transfer
    -P                       same as --partial --progress
    --itemize-changes, -i    output a change-summary for all updates
    --remote-option=OPT, -M  send OPTION to the remote side only
    --out-format=FORMAT      output updates using the specified FORMAT
    --log-file=FILE          log what we're doing to the specified FILE
    --log-file-format=FMT    log updates using the specified FMT
    --password-file=FILE     read daemon-access password from FILE
    --early-input=FILE       use FILE for daemon's early exec input
    --list-only              list the files instead of copying them
    --bwlimit=RATE           limit socket I/O bandwidth
    --stop-after=MINS        Stop rsync after MINS minutes have elapsed
    --stop-at=y-m-dTh:m      Stop rsync at the specified point in time
    --fsync                  fsync every written file
    --write-batch=FILE       write a batched update to FILE
    --only-write-batch=FILE  like --write-batch but w/o updating dest
    --read-batch=FILE        read a batched update from FILE
    --protocol=NUM           force an older protocol version to be used
    --iconv=CONVERT_SPEC     request charset conversion of filenames
    --checksum-seed=NUM      set block/file checksum seed (advanced)
    --ipv4, -4               prefer IPv4
    --ipv6, -6               prefer IPv6
    --version, -V            print the version + other info and exit
    --help, -h (*)           show this help (* -h is help only on its own)

Rsync can also be run as a daemon, in which case the following options are accepted:

    --daemon                 run as an rsync daemon
    --address=ADDRESS        bind to the specified address
    --bwlimit=RATE           limit socket I/O bandwidth
    --config=FILE            specify alternate rsyncd.conf file
    --dparam=OVERRIDE, -M    override global daemon config parameter
    --no-detach              do not detach from the parent
    --port=PORT              listen on alternate port number
    --log-file=FILE          override the "log file" setting
    --log-file-format=FMT    override the "log format" setting
    --sockopts=OPTIONS       specify custom TCP options
    --verbose, -v            increase verbosity
    --ipv4, -4               prefer IPv4
    --ipv6, -6               prefer IPv6
    --help, -h               show this help (when used with --daemon)

OPTIONS
Rsync accepts both long (double-dash + word) and short (single-dash + letter) options. The full list of the available options is described below. If an option can be specified in more than one way, the choices are comma-separated. Some options only have a long variant, not a short. If the option takes a parameter, the parameter is only listed after the long variant, even though it must also be specified for the short.
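As a quick local check (hypothetical /tmp paths) that long and short spellings are interchangeable, and that -n/--dry-run really makes no changes:

```shell
# Hypothetical scratch tree for the demonstration.
mkdir -p /tmp/rsync_forms_demo/src
echo x > /tmp/rsync_forms_demo/src/f.txt

# These two invocations are equivalent: --archive == -a, --dry-run == -n.
rsync --archive --dry-run /tmp/rsync_forms_demo/src/ /tmp/rsync_forms_demo/dest/
rsync -an /tmp/rsync_forms_demo/src/ /tmp/rsync_forms_demo/dest/

# Because of the dry run, no destination directory was created:
test ! -e /tmp/rsync_forms_demo/dest
```

Dropping -n from either command would perform the copy for real.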
When specifying a parameter, you can either use the form --option=param, --option param, -o=param, -o param, or -oparam (the latter choices assume that your option has a short variant). The parameter may need to be quoted in some manner for it to survive the shell's command-line parsing. Also keep in mind that a leading tilde (~) in a pathname is substituted by your shell, so make sure that you separate the option name from the pathname using a space if you want the local shell to expand it.

--help
    Print a short help page describing the options available in rsync and exit. You can also use -h for --help when it is used without any other options (since it normally means --human-readable).

--version, -V
    Print the rsync version plus other info and exit. When repeated, the information is output in a JSON format that is still fairly readable (client side only). The output includes a list of compiled-in capabilities, a list of optimizations, the default list of checksum algorithms, the default list of compression algorithms, the default list of daemon auth digests, a link to the rsync web site, and a few other items.

--verbose, -v
    This option increases the amount of information you are given during the transfer. By default, rsync works silently. A single -v will give you information about what files are being transferred and a brief summary at the end. Two -v options will give you information on what files are being skipped and slightly more information at the end. More than two -v options should only be used if you are debugging rsync.

    The end-of-run summary tells you the number of bytes sent to the remote rsync (which is the receiving side on a local copy), the number of bytes received from the remote host, and the average bytes per second of the transferred data computed over the entire length of the rsync run. The second line shows the total size (in bytes), which is the sum of all the file sizes that rsync considered transferring.
    It also shows a "speedup" value, which is a ratio of the total file size divided by the sum of the sent and received bytes (which is really just a feel-good bigger-is-better number). Note that these byte values can be made more (or less) human-readable by using the --human-readable (or --no-human-readable) options.

    In a modern rsync, the -v option is equivalent to the setting of groups of --info and --debug options. You can choose to use these newer options in addition to, or in place of using --verbose, as any fine-grained settings override the implied settings of -v. Both --info and --debug have a way to ask for help that tells you exactly what flags are set for each increase in verbosity.

    However, do keep in mind that a daemon's "max verbosity" setting will limit how high of a level the various individual flags can be set on the daemon side. For instance, if the max is 2, then any info and/or debug flag that is set to a higher value than what would be set by -vv will be downgraded to the -vv level in the daemon's logging.

--info=FLAGS
    This option lets you have fine-grained control over the information output you want to see. An individual flag name may be followed by a level number, with 0 meaning to silence that output, 1 being the default output level, and higher numbers increasing the output of that flag (for those that support higher levels). Use --info=help to see all the available flag names, what they output, and what flag names are added for each increase in the verbose level. Some examples:

        rsync -a --info=progress2 src/ dest/
        rsync -avv --info=stats2,misc1,flist0 src/ dest/

    Note that --info=name's output is affected by the --out-format and --itemize-changes (-i) options. See those options for more information on what is output and when.
This option was added in 3.1.0, so an older rsync on the server side might reject your attempts at fine-grained control (if one or more flags needed to be sent to the server and the server was too old to understand them). See also the "max verbosity" caveat above when dealing with a daemon. --debug=FLAGS This option lets you have fine-grained control over the debug output you want to see. An individual flag name may be followed by a level number, with 0 meaning to silence that output, 1 being the default output level, and higher numbers increasing the output of that flag (for those that support higher levels). Use --debug=help to see all the available flag names, what they output, and what flag names are added for each increase in the verbose level. Some examples: rsync -avvv --debug=none src/ dest/ rsync -avA --del --debug=del2,acl src/ dest/ Note that some debug messages will only be output when the --stderr=all option is specified, especially those pertaining to I/O and buffer debugging. Beginning in 3.2.0, this option is no longer auto- forwarded to the server side in order to allow you to specify different debug values for each side of the transfer, as well as to specify a new debug option that is only present in one of the rsync versions. If you want to duplicate the same option on both sides, using brace expansion is an easy way to save you some typing. This works in zsh and bash: rsync -aiv {-M,}--debug=del2 src/ dest/ --stderr=errors|all|client This option controls which processes output to stderr and if info messages are also changed to stderr. The mode strings can be abbreviated, so feel free to use a single letter value. The 3 possible choices are: o errors - (the default) causes all the rsync processes to send an error directly to stderr, even if the process is on the remote side of the transfer. Info messages are sent to the client side via the protocol stream. If stderr is not available (i.e. 
when directly connecting with a daemon via a socket) errors fall back to being sent via the protocol stream. o all - causes all rsync messages (info and error) to get written directly to stderr from all (possible) processes. This causes stderr to become line-buffered (instead of raw) and eliminates the ability to divide up the info and error messages by file handle. For those doing debugging or using several levels of verbosity, this option can help to avoid clogging up the transfer stream (which should prevent any chance of a deadlock bug hanging things up). It also allows --debug to enable some extra I/O related messages. o client - causes all rsync messages to be sent to the client side via the protocol stream. One client process outputs all messages, with errors on stderr and info messages on stdout. This was the default in older rsync versions, but can cause error delays when a lot of transfer data is ahead of the messages. If you're pushing files to an older rsync, you may want to use --stderr=all since that idiom has been around for several releases. This option was added in rsync 3.2.3. This version also began the forwarding of a non-default setting to the remote side, though rsync uses the backward-compatible options --msgs2stderr and --no-msgs2stderr to represent the all and client settings, respectively. A newer rsync will continue to accept these older option names to maintain compatibility. --quiet, -q This option decreases the amount of information you are given during the transfer, notably suppressing information messages from the remote server. This option is useful when invoking rsync from cron. --no-motd This option affects the information that is output by the client at the start of a daemon transfer. 
This suppresses the message-of-the-day (MOTD) text, but it also affects the list of modules that the daemon sends in response to the "rsync host::" request (due to a limitation in the rsync protocol), so omit this option if you want to request the list of modules from the daemon. --ignore-times, -I Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated. This option can be confusing compared to --ignore-existing and --ignore-non-existing in that they cause rsync to transfer fewer files, while this option causes rsync to transfer more files. --size-only This modifies rsync's "quick check" algorithm for finding files that need to be transferred, changing it from the default of transferring files with either a changed size or a changed last-modified time to just looking for files that have changed in size. This is useful when starting to use rsync after using another mirroring system which may not preserve timestamps exactly. --modify-window=NUM, -@ When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify- window value. The default is 0, which matches just integer seconds. If you specify a negative value (and the receiver is at least version 3.1.3) then nanoseconds will also be taken into account. Specifying 1 is useful for copies to/from MS Windows FAT filesystems, because FAT represents times with a 2-second resolution (allowing times to differ from the original by up to 1 second). If you want all your transfers to default to comparing nanoseconds, you can create a ~/.popt file and put these lines in it: rsync alias -a -a@-1 rsync alias -t -t@-1 With that as the default, you'd need to specify --modify-window=0 (aka -@0) to override it and ignore nanoseconds, e.g. if you're copying between ext3 and ext4, or if the receiving rsync is older than 3.1.3. 
--checksum, -c This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option, rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size. Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer, so this can slow things down significantly (and this is prior to any reading that will be done to transfer changed files). The sending side generates its checksums while it is doing the file-system scan that builds the list of the available files. The receiver generates its checksums when it is scanning for changed files, and will checksum any file that has the same size as the corresponding sender's file: files with either a changed size or a changed checksum are selected for transfer. Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check. The checksum used is auto-negotiated between the client and the server, but can be overridden using either the --checksum-choice (--cc) option or an environment variable that is discussed in that option's section. --archive, -a This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything. Be aware that it does not include preserving ACLs (-A), xattrs (-X), atimes (-U), crtimes (-N), nor the finding and preserving of hardlinks (-H). The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied. 
--no-OPTION You may turn off one or more implied options by prefixing the option name with "no-". Not all positive options have a negated opposite, but a lot do, including those that can be used to disable an implied option (e.g. --no-D, --no-perms) or have different defaults in various circumstances (e.g. --no-whole-file, --no-blocking-io, --no-dirs). Every valid negated option accepts both the short and the long option name after the "no-" prefix (e.g. --no-R is the same as --no-relative). As an example, if you want to use --archive (-a) but don't want --owner (-o), instead of converting -a into -rlptgD, you can specify -a --no-o (aka --archive --no-owner). The order of the options is important: if you specify --no-r -a, the -r option would end up being turned on, the opposite of -a --no-r. Note also that the side-effects of the --files-from option are NOT positional, as it affects the default state of several options and slightly changes the meaning of -a (see the --files-from option for more details). --recursive, -r This tells rsync to copy directories recursively. See also --dirs (-d) for an option that allows the scanning of a single directory. See the --inc-recursive option for a discussion of the incremental recursion for creating the list of files to transfer. --inc-recursive, --i-r This option explicitly enables incremental recursion when scanning for files, which is enabled by default when using the --recursive option and both sides of the transfer are running rsync 3.0.0 or newer. Incremental recursion uses much less memory than non- incremental, while also beginning the transfer more quickly (since it doesn't need to scan the entire transfer hierarchy before it starts transferring files). If no recursion is enabled in the source files, this option has no effect. Some options require rsync to know the full file list, so these options disable the incremental recursion mode. 
These include: o --delete-before (the old default of --delete) o --delete-after o --prune-empty-dirs o --delay-updates In order to make --delete compatible with incremental recursion, rsync 3.0.0 made --delete-during the default delete mode (which was first added in 2.6.4). One side-effect of incremental recursion is that any missing sub-directories inside a recursively-scanned directory are (by default) created prior to recursing into the sub-dirs. This earlier creation point (compared to a non-incremental recursion) allows rsync to then set the modify time of the finished directory right away (without having to delay that until a bunch of recursive copying has finished). However, these early directories don't yet have their completed mode, mtime, or ownership set -- they have more restrictive rights until the subdirectory's copying actually begins. This early-creation idiom can be avoided by using the --omit-dir-times option. Incremental recursion can be disabled using the --no-inc-recursive (--no-i-r) option. --no-inc-recursive, --no-i-r Disables the new incremental recursion algorithm of the --recursive option. This makes rsync scan the full file list before it begins to transfer files. See --inc-recursive for more info. --relative, -R Use relative paths. This means that the full path names specified on the command line are sent to the server rather than just the last parts of the filenames. This is particularly useful when you want to send several different directories at the same time. For example, if you used this command: rsync -av /foo/bar/baz.c remote:/tmp/ would create a file named baz.c in /tmp/ on the remote machine. If instead you used rsync -avR /foo/bar/baz.c remote:/tmp/ then a file named /tmp/foo/bar/baz.c would be created on the remote machine, preserving its full path. These extra path elements are called "implied directories" (i.e. the "foo" and the "foo/bar" directories in the above example). 
Beginning with rsync 3.0.0, rsync always sends these implied directories as real directories in the file list, even if a path element is really a symlink on the sending side. This prevents some really unexpected behaviors when copying the full path of a file that you didn't realize had a symlink in its path. If you want to duplicate a server-side symlink, include both the symlink via its path, and referent directory via its real path. If you're dealing with an older rsync on the sending side, you may need to use the --no-implied-dirs option. It is also possible to limit the amount of path information that is sent as implied directories for each path you specify. With a modern rsync on the sending side (beginning with 2.6.7), you can insert a dot and a slash into the source path, like this: rsync -avR /foo/./bar/baz.c remote:/tmp/ That would create /tmp/bar/baz.c on the remote machine. (Note that the dot must be followed by a slash, so "/foo/." would not be abbreviated.) For older rsync versions, you would need to use a chdir to limit the source path. For example, when pushing files: (cd /foo; rsync -avR bar/baz.c remote:/tmp/) (Note that the parens put the two commands into a sub- shell, so that the "cd" command doesn't remain in effect for future commands.) If you're pulling files from an older rsync, use this idiom (but only for a non-daemon transfer): rsync -avR --rsync-path="cd /foo; rsync" \ remote:bar/baz.c /tmp/ --no-implied-dirs This option affects the default behavior of the --relative option. When it is specified, the attributes of the implied directories from the source names are not included in the transfer. This means that the corresponding path elements on the destination system are left unchanged if they exist, and any missing implied directories are created with default attributes. This even allows these implied path elements to have big differences, such as being a symlink to a directory on the receiving side. 
For instance, if a command-line arg or a files-from entry told rsync to transfer the file "path/foo/file", the directories "path" and "path/foo" are implied when --relative is used. If "path/foo" is a symlink to "bar" on the destination system, the receiving rsync would ordinarily delete "path/foo", recreate it as a directory, and receive the file into the new directory. With --no-implied-dirs, the receiving rsync updates "path/foo/file" using the existing path elements, which means that the file ends up being created in "path/bar". Another way to accomplish this link preservation is to use the --keep-dirlinks option (which will also affect symlinks to directories in the rest of the transfer). When pulling files from an rsync older than 3.0.0, you may need to use this option if the sending side has a symlink in the path you request and you wish the implied directories to be transferred as normal directories. --backup, -b With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. If you don't specify --backup-dir: 1. the --omit-dir-times option will be forced on 2. the use of --delete (without --delete-excluded), causes rsync to add a "protect" filter-rule for the backup suffix to the end of all your existing filters that looks like this: -f "P *~". This rule prevents previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g. if your rules specify a trailing inclusion/exclusion of *, the auto- added rule would never be reached). --backup-dir=DIR This implies the --backup option, and tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. 
You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module's path hierarchy, so take extra care not to delete it or copy into it. --suffix=SUFFIX This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string. --update, -u This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file's, it will be updated if the sizes are different.) Note that this does not affect the copying of dirs, symlinks, or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a TRANSFER RULE, so don't expect any exclude side effects. A caution for those that choose to combine --inplace with --update: an interrupted transfer will leave behind a partial file on the receiving side that has a very recent modified time, so re-running the transfer will probably not continue the interrupted file. As such, it is usually best to avoid combining this with --inplace unless you have implemented manual steps to handle any interrupted in-progress files. 
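The -u skip rule above is easy to demonstrate locally. A minimal sketch (temp paths, GNU touch, and a local rsync assumed):

```shell
# -u refuses to overwrite a destination file whose mtime is newer.
src=$(mktemp -d); dst=$(mktemp -d)
echo old > "$src/f"
echo new > "$dst/f"
touch -d '2001-01-01' "$src/f"    # make the source file older than the dest
rsync -ru "$src/" "$dst/"         # dest is newer: the file is skipped
cat "$dst/f"                      # still "new"
```

Without -u, the default quick check would see the differing mtimes and overwrite the destination with "old".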
--inplace This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file. This has several effects: o Hard links are not broken. This means the new data will be visible through other hard links to the destination file. Moreover, attempts to copy differing source files onto a multiply-linked destination file will result in a "tug of war" with the destination data changing back and forth. o In-use binaries cannot be updated (either the OS will prevent this from happening, or binaries that attempt to swap-in their data will misbehave or crash). o The file's data will be in an inconsistent state during the transfer and will be left that way if the transfer is interrupted or if an update fails. o A file that rsync cannot write to cannot be updated. While a super user can update any file, a normal user needs to be granted write permission for the open of the file for writing to be successful. o The efficiency of rsync's delta-transfer algorithm may be reduced if some data in the destination file is overwritten before it can be copied to a position later in the file. This does not apply if you use --backup, since rsync is smart enough to use the backup file as the basis file for the transfer. WARNING: you should not use this option to update files that are being accessed by others, so be careful when choosing to use this for a copy. This option is useful for transferring large files with block-based changes or appended data, and also on systems that are disk bound, not network bound. It can also help keep a copy-on-write filesystem snapshot from diverging the entire contents of a file that only has minor changes. The option implies --partial (since an interrupted transfer does not delete the file), but conflicts with --partial-dir and --delay-updates. 
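The "hard links are not broken" effect of --inplace can be checked locally. This is a sketch under assumed conditions (temp dirs, GNU touch, a local rsync); the mtime tweak just ensures the quick check sees a change:

```shell
# --inplace writes into the existing dest file, so other hard links see the update.
src=$(mktemp -d); dst=$(mktemp -d)
echo v2 > "$src/f"
echo v1 > "$dst/f"
ln "$dst/f" "$dst/f.link"          # a second hard link to the dest file
touch -d '2000-01-01' "$dst/f"     # force the quick check to transfer the file
rsync -r --inplace "$src/" "$dst/"
cat "$dst/f.link"                  # "v2": the existing link saw the update
```

With the default temp-file-and-rename method, "$dst/f" would get a new inode and "f.link" would still read "v1".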
Prior to rsync 2.6.4 --inplace was also incompatible with --compare-dest and --link-dest. --append This special copy mode only works to efficiently update files that are known to be growing larger where any existing content on the receiving side is also known to be the same as the content on the sender. The use of --append can be dangerous if you aren't 100% sure that all the files in the transfer are shared, growing files. You should thus use filter rules to ensure that you weed out any files that do not fit these criteria. Rsync updates these growing files in-place without verifying any of the existing content in the file (it only verifies the content that it is appending). Rsync skips any files that exist on the receiving side that are not shorter than the associated file on the sending side (which means that new files are transferred). It also skips any files whose size on the sending side gets shorter during the send negotiations (rsync warns about a "diminished" file when this happens). This does not interfere with the updating of a file's non- content attributes (e.g. permissions, ownership, etc.) when the file does not need to be transferred, nor does it affect the updating of any directories or non-regular files. --append-verify This special copy mode works like --append except that all the data in the file is included in the checksum verification (making it less efficient but also potentially safer). This option can be dangerous if you aren't 100% sure that all the files in the transfer are shared, growing files. See the --append option for more details. Note: prior to rsync 3.0.0, the --append option worked like --append-verify, so if you are interacting with an older rsync (or the transfer is using a protocol prior to 30), specifying either append option will initiate an --append-verify transfer. --dirs, -d Tell the sending side to include any directories that are encountered. 
Unlike --recursive, a directory's contents are not copied unless the directory name specified is "." or ends with a trailing slash (e.g. ".", "dir/.", "dir/", etc.). Without this option or the --recursive option, rsync will skip all directories it encounters (and output a message to that effect for each one). If you specify both --dirs and --recursive, --recursive takes precedence. The --dirs option is implied by the --files-from option or the --list-only option (including an implied --list-only usage) if --recursive wasn't specified (so that directories are seen in the listing). Specify --no-dirs (or --no-d) if you want to turn this off. There is also a backward-compatibility helper option, --old-dirs (--old-d) that tells rsync to use a hack of -r --exclude='/*/*' to get an older rsync to list a single directory without recursing. --mkpath Create all missing path components of the destination path. By default, rsync allows only the final component of the destination path to not exist, which is an attempt to help you to validate your destination path. With this option, rsync creates all the missing destination-path components, just as if mkdir -p $DEST_PATH had been run on the receiving side. When specifying a destination path, including a trailing slash ensures that the whole path is treated as directory names to be created, even when the file list has a single item. See the COPYING TO A DIFFERENT NAME section for full details on how rsync decides if a final destination-path component should be created as a directory or not. If you would like the newly-created destination dirs to match the dirs on the sending side, you should be using --relative (-R) instead of --mkpath. 
For instance, the following two commands result in the same destination tree, but only the second command ensures that the "some/extra/path" components match the dirs on the sending side: rsync -ai --mkpath host:some/extra/path/*.c some/extra/path/ rsync -aiR host:some/extra/path/*.c ./ --links, -l Add symlinks to the transferred files instead of noisily ignoring them with a "non-regular file" warning for each symlink encountered. You can alternately silence the warning by specifying --info=nonreg0. The default handling of symlinks is to recreate each symlink's unchanged value on the receiving side. See the SYMBOLIC LINKS section for multi-option info. --copy-links, -L The sender transforms each symlink encountered in the transfer into the referent item, following the symlink chain to the file or directory that it references. If a symlink chain is broken, an error is output and the file is dropped from the transfer. This option supersedes any other options that affect symlinks in the transfer, since there are no symlinks left in the transfer. This option does not change the handling of existing symlinks on the receiving side, unlike versions of rsync prior to 2.6.3 which had the side-effect of telling the receiving side to also follow symlinks. A modern rsync won't forward this option to a remote receiver (since only the sender needs to know about it), so this caveat should only affect someone using an rsync client older than 2.6.7 (which is when -L stopped being forwarded to the receiver). See the --keep-dirlinks (-K) option if you need a symlink to a directory to be treated as a real directory on the receiving side. See the SYMBOLIC LINKS section for multi-option info. --copy-unsafe-links This tells rsync to copy the referent of symbolic links that point outside the copied tree. Absolute symlinks are also treated like ordinary files, and so are any symlinks in the source path itself when --relative is used. 
Note that the cut-off point is the top of the transfer, which is the part of the path that rsync isn't mentioning in the verbose output. If you copy "/src/subdir" to "/dest/" then the "subdir" directory is a name inside the transfer tree, not the top of the transfer (which is /src) so it is legal for created relative symlinks to refer to other names inside the /src and /dest directories. If you instead copy "/src/subdir/" (with a trailing slash) to "/dest/subdir" that would not allow symlinks to any files outside of "subdir". Note that safe symlinks are only copied if --links was also specified or implied. The --copy-unsafe-links option has no extra effect when combined with --copy-links. See the SYMBOLIC LINKS section for multi-option info. --safe-links This tells the receiving rsync to ignore any symbolic links in the transfer which point outside the copied tree. All absolute symlinks are also ignored. Since this ignoring is happening on the receiving side, it will still be effective even when the sending side has munged symlinks (when it is using --munge-links). It also affects deletions, since the file being present in the transfer prevents any matching file on the receiver from being deleted when the symlink is deemed to be unsafe and is skipped. This option must be combined with --links (or --archive) to have any symlinks in the transfer to conditionally ignore. Its effect is superseded by --copy-unsafe-links. Using this option in conjunction with --relative may give unexpected results. See the SYMBOLIC LINKS section for multi-option info. --munge-links This option affects just one side of the transfer and tells rsync to munge symlink values when it is receiving files or unmunge symlink values when it is sending files. The munged values make the symlinks unusable on disk but allows the original contents of the symlinks to be recovered. 
The server-side rsync often enables this option without the client's knowledge, such as in an rsync daemon's configuration file or by an option given to the rrsync (restricted rsync) script. When specified on the client side, specify the option normally if it is the client side that has/needs the munged symlinks, or use -M--munge-links to give the option to the server when it has/needs the munged symlinks. Note that on a local transfer, the client is the sender, so specifying the option directly unmunges symlinks while specifying it as a remote option munges symlinks. This option has no effect when sent to a daemon via --remote-option because the daemon configures whether it wants munged symlinks via its "munge symlinks" parameter. The symlink value is munged/unmunged once it is in the transfer, so any option that transforms symlinks into non-symlinks occurs prior to the munging/unmunging except for --safe-links, which is a choice that the receiver makes, so it bases its decision on the munged/unmunged value. This does mean that if a receiver has munging enabled, that using --safe-links will cause all symlinks to be ignored (since they are all absolute). The method that rsync uses to munge the symlinks is to prefix each one's value with the string "/rsyncd-munged/". This prevents the links from being used as long as the directory does not exist. When this option is enabled, rsync will refuse to run if that path is a directory or a symlink to a directory (though it only checks at startup). See also the "munge-symlinks" python script in the support directory of the source code for a way to munge/unmunge one or more symlinks in-place. --copy-dirlinks, -k This option causes the sending side to treat a symlink to a directory as though it were a real directory. This is useful if you don't want symlinks to non-directories to be affected, as they would be using --copy-links. 
Without this option, if the sending side has replaced a directory with a symlink to a directory, the receiving side will delete anything that is in the way of the new symlink, including a directory hierarchy (as long as --force or --delete is in effect). See also --keep-dirlinks for an analogous option for the receiving side. --copy-dirlinks applies to all symlinks to directories in the source. If you want to follow only a few specified symlinks, a trick you can use is to pass them as additional source args with a trailing slash, using --relative to make the paths match up right. For example: rsync -r --relative src/./ src/./follow-me/ dest/ This works because rsync calls lstat(2) on the source arg as given, and the trailing slash makes lstat(2) follow the symlink, giving rise to a directory in the file-list which overrides the symlink found during the scan of "src/./". See the SYMBOLIC LINKS section for multi-option info. --keep-dirlinks, -K This option causes the receiving side to treat a symlink to a directory as though it were a real directory, but only if it matches a real directory from the sender. Without this option, the receiver's symlink would be deleted and replaced with a real directory. For example, suppose you transfer a directory "foo" that contains a file "file", but "foo" is a symlink to directory "bar" on the receiver. Without --keep-dirlinks, the receiver deletes symlink "foo", recreates it as a directory, and receives the file into the new directory. With --keep-dirlinks, the receiver keeps the symlink and "file" ends up in "bar". One note of caution: if you use --keep-dirlinks, you must trust all the symlinks in the copy or enable the --munge-links option on the receiving side! If it is possible for an untrusted user to create their own symlink to any real directory, the user could then (on a subsequent copy) replace the symlink with a real directory and affect the content of whatever directory the symlink references. 
For backup copies, you are better off using something like a bind mount instead of a symlink to modify your receiving hierarchy. See also --copy-dirlinks for an analogous option for the sending side. See the SYMBOLIC LINKS section for multi-option info. --hard-links, -H This tells rsync to look for hard-linked files in the source and link together the corresponding files on the destination. Without this option, hard-linked files in the source are treated as though they were separate files. This option does NOT necessarily ensure that the pattern of hard links on the destination exactly matches that on the source. Cases in which the destination may end up with extra hard links include the following: o If the destination contains extraneous hard-links (more linking than what is present in the source file list), the copying algorithm will not break them explicitly. However, if one or more of the paths have content differences, the normal file- update process will break those extra links (unless you are using the --inplace option). o If you specify a --link-dest directory that contains hard links, the linking of the destination files against the --link-dest files can cause some paths in the destination to become linked together due to the --link-dest associations. Note that rsync can only detect hard links between files that are inside the transfer set. If rsync updates a file that has extra hard-link connections to files outside the transfer, that linkage will be broken. If you are tempted to use the --inplace option to avoid this breakage, be very careful that you know how your files are being updated so that you are certain that no unintended changes happen due to lingering hard links (and see the --inplace option for more caveats). If incremental recursion is active (see --inc-recursive), rsync may transfer a missing hard-linked file before it finds that another link for that contents exists elsewhere in the hierarchy. 
This does not affect the accuracy of the transfer (i.e. which files are hard-linked together), just its efficiency (i.e. copying the data for a new, early copy of a hard-linked file that could have been found later in the transfer in another member of the hard-linked set of files). One way to avoid this inefficiency is to disable incremental recursion using the --no-inc-recursive option. --perms, -p This option causes the receiving rsync to set the destination permissions to be the same as the source permissions. (See also the --chmod option for a way to modify what rsync considers to be the source permissions.) When this option is off, permissions are set as follows: o Existing files (including updated files) retain their existing permissions, though the --executability option might change just the execute permission for the file. o New files get their "normal" permission bits set to the source file's permissions masked with the receiving directory's default permissions (either the receiving process's umask, or the permissions specified via the destination directory's default ACL), and their special permission bits disabled except in the case where a new directory inherits a setgid bit from its parent directory. Thus, when --perms and --executability are both disabled, rsync's behavior is the same as that of other file-copy utilities, such as cp(1) and tar(1). In summary: to give destination files (both old and new) the source permissions, use --perms. To give new files the destination-default permissions (while leaving existing files unchanged), make sure that the --perms option is off and use --chmod=ugo=rwX (which ensures that all non-masked bits get enabled).
If you'd care to make this latter behavior easier to type, you could define a popt alias for it, such as putting this line in the file ~/.popt (the following defines the -Z option, and includes --no-g to use the default group of the destination dir): rsync alias -Z --no-p --no-g --chmod=ugo=rwX You could then use this new option in a command such as this one: rsync -avZ src/ dest/ (Caveat: make sure that -a does not follow -Z, or it will re-enable the two --no-* options mentioned above.) The preservation of the destination's setgid bit on newly-created directories when --perms is off was added in rsync 2.6.7. Older rsync versions erroneously preserved the three special permission bits for newly-created files when --perms was off, while overriding the destination's setgid bit setting on a newly-created directory. Default ACL observance was added to the ACL patch for rsync 2.6.7, so older (or non-ACL-enabled) rsyncs use the umask even if default ACLs are present. (Keep in mind that it is the version of the receiving rsync that affects these behaviors.) --executability, -E This option causes rsync to preserve the executability (or non-executability) of regular files when --perms is not enabled. A regular file is considered to be executable if at least one 'x' is turned on in its permissions. When an existing destination file's executability differs from that of the corresponding source file, rsync modifies the destination file's permissions as follows: o To make a file non-executable, rsync turns off all its 'x' permissions. o To make a file executable, rsync turns on each 'x' permission that has a corresponding 'r' permission enabled. If --perms is enabled, this option is ignored. --acls, -A This option causes rsync to update the destination ACLs to be the same as the source ACLs. The option also implies --perms. The source and destination systems must have compatible ACL entries for this option to work properly.
See the --fake-super option for a way to backup and restore ACLs that are not compatible. --xattrs, -X This option causes rsync to update the destination extended attributes to be the same as the source ones. For systems that support extended-attribute namespaces, a copy being done by a super-user copies all namespaces except system.*. A normal user only copies the user.* namespace. To be able to backup and restore non-user namespaces as a normal user, see the --fake-super option. The above name filtering can be overridden by using one or more filter options with the x modifier. When you specify an xattr-affecting filter rule, rsync requires that you do your own system/user filtering, as well as any additional filtering for what xattr names are copied and what names are allowed to be deleted. For example, to skip the system namespace, you could specify: --filter='-x system.*' To skip all namespaces except the user namespace, you could specify a negated-user match: --filter='-x! user.*' To prevent any attributes from being deleted, you could specify a receiver-only rule that excludes all names: --filter='-xr *' Note that the -X option does not copy rsync's special xattr values (e.g. those used by --fake-super) unless you repeat the option (e.g. -XX). This "copy all xattrs" mode cannot be used with --fake-super. --chmod=CHMOD This option tells rsync to apply one or more comma-separated "chmod" modes to the permission of the files in the transfer. The resulting value is treated as though it were the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled. In addition to the normal parsing rules specified in the chmod(1) manpage, you can specify an item that should only apply to a directory by prefixing it with a 'D', or specify an item that should only apply to a file by prefixing it with an 'F'.
For example, the following will ensure that all directories get marked set-gid, that no files are other-writable, that both are user-writable and group-writable, and that both have consistent executability across all bits: --chmod=Dg+s,ug+w,Fo-w,+X Using octal mode numbers is also allowed: --chmod=D2775,F664 It is also legal to specify multiple --chmod options, as each additional option is just appended to the list of changes to make. See the --perms and --executability options for how the resulting permission value can be applied to the files in the transfer. --owner, -o This option causes rsync to set the owner of the destination file to be the same as the source file, but only if the receiving rsync is being run as the super-user (see also the --super and --fake-super options). Without this option, the owner of new and/or transferred files is set to the invoking user on the receiving side. The preservation of ownership will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion). --group, -g This option causes rsync to set the group of the destination file to be the same as the source file. If the receiving program is not running as the super-user (or if --no-super was specified), only groups that the invoking user on the receiving side is a member of will be preserved. Without this option, the group is set to the default group of the invoking user on the receiving side. The preservation of group information will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion). --devices This option causes rsync to transfer character and block device files to the remote system to recreate these devices. If the receiving rsync is not being run as the super-user, rsync silently skips creating the device files (see also the --super and --fake-super options).
By default, rsync generates a "non-regular file" warning for each device file encountered when this option is not set. You can silence the warning by specifying --info=nonreg0. --specials This option causes rsync to transfer special files, such as named sockets and fifos. If the receiving rsync is not being run as the super-user, rsync silently skips creating the special files (see also the --super and --fake-super options). By default, rsync generates a "non-regular file" warning for each special file encountered when this option is not set. You can silence the warning by specifying --info=nonreg0. -D The -D option is equivalent to "--devices --specials". --copy-devices This tells rsync to treat a device on the sending side as a regular file, allowing it to be copied to a normal destination file (or another device if --write-devices was also specified). This option is refused by default by an rsync daemon. --write-devices This tells rsync to treat a device on the receiving side as a regular file, allowing the writing of file data into a device. This option implies the --inplace option. Be careful using this, as you should know what devices are present on the receiving side of the transfer, especially when running rsync as root. This option is refused by default by an rsync daemon. --times, -t This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t (or -a) will cause the next transfer to behave as if it used --ignore-times (-I), causing all files to be updated (though rsync's delta-transfer algorithm will make the update fairly efficient if the files haven't actually changed, you're still much better off using -t). A modern rsync that is using transfer protocol 30 or 31 conveys a modify time using up to 8 bytes.
If rsync is forced to speak an older protocol (perhaps due to the remote rsync being older than 3.0.0) a modify time is conveyed using 4 bytes. Prior to 3.2.7, these shorter values could convey a date range of 13-Dec-1901 to 19-Jan-2038. Beginning with 3.2.7, these 4-byte values now convey a date range of 1-Jan-1970 to 7-Feb-2106. If you have files dated older than 1970, make sure your rsync executables are upgraded so that the full range of dates can be conveyed. --atimes, -U This tells rsync to set the access (use) times of the destination files to the same value as the source files. If repeated, it also sets the --open-noatime option, which can help you to make the sending and receiving systems have the same access times on the transferred files without needing to run rsync an extra time after a file is transferred. Note that some older rsync versions (prior to 3.2.0) may have been built with a pre-release --atimes patch that does not imply --open-noatime when this option is repeated. --open-noatime This tells rsync to open files with the O_NOATIME flag (on systems that support it) to avoid changing the access time of the files that are being transferred. If your OS does not support the O_NOATIME flag then rsync will silently ignore this option. Note also that some filesystems are mounted to avoid updating the atime on read access even without the O_NOATIME flag being set. --crtimes, -N This tells rsync to set the create times (newness) of the destination files to the same value as the source files. --omit-dir-times, -O This tells rsync to omit directories when it is preserving modification, access, and create times. If NFS is sharing the directories on the receiving side, it is a good idea to use -O. This option is inferred if you use --backup without --backup-dir. This option also has the side-effect of avoiding early creation of missing sub-directories when incremental recursion is enabled, as discussed in the --inc-recursive section.
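The time-preservation behavior of --times can be checked with a quick local copy. This is an illustrative sketch (the temp paths and file name are made up):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/src"
echo data > "$tmp/src/file"

# -t carries the modify time along with the file, so the destination
# copy ends up with the same mtime as the source...
rsync -rt "$tmp/src/" "$tmp/dest/"

# ...and a second run finds size+mtime unchanged and skips the file.
rsync -rt "$tmp/src/" "$tmp/dest/"
```

Without -t, the destination file would get the current time as its mtime and the quick check could not match on the next run.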
--omit-link-times, -J This tells rsync to omit symlinks when it is preserving modification, access, and create times. --super This tells the receiving side to attempt super-user activities even if the receiving rsync wasn't run by the super-user. These activities include: preserving users via the --owner option, preserving all groups (not just the current user's groups) via the --group option, and copying devices via the --devices option. This is useful for systems that allow such activities without being the super-user, and also for ensuring that you will get errors if the receiving side isn't being run as the super-user. To turn off super-user activities, the super-user can use --no-super. --fake-super When this option is enabled, rsync simulates super-user activities by saving/restoring the privileged attributes via special extended attributes that are attached to each file (as needed). This includes the file's owner and group (if it is not the default), the file's device info (device & special files are created as empty text files), and any permission bits that we won't allow to be set on the real file (e.g. the real file gets u-s,g-s,o-t for safety) or that would limit the owner's access (since the real super-user can always access/change a file, the files we create can always be accessed/changed by the creating user). This option also handles ACLs (if --acls was specified) and non-user extended attributes (if --xattrs was specified). This is a good way to backup data without using a super-user, and to store ACLs from incompatible systems. The --fake-super option only affects the side where the option is used. To affect the remote side of a remote-shell connection, use the --remote-option (-M) option: rsync -av -M--fake-super /src/ host:/dest/ For a local copy, this option affects both the source and the destination. If you wish a local copy to enable this option just for the destination files, specify -M--fake-super.
If you wish a local copy to enable this option just for the source files, combine --fake-super with -M--super. This option is overridden by both --super and --no-super. See also the fake super setting in the daemon's rsyncd.conf file. --sparse, -S Try to handle sparse files efficiently so they take up less space on the destination. If combined with --inplace the file created might not end up with sparse blocks with some combinations of kernel version and/or filesystem type. If --whole-file is in effect (e.g. for a local copy) then it will always work because rsync truncates the file prior to writing out the updated version. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace. --preallocate This tells the receiver to allocate each destination file to its eventual size before writing data to the file. Rsync will only use the real filesystem-level preallocation support provided by Linux's fallocate(2) system call or Cygwin's posix_fallocate(3), not the slow glibc implementation that writes a null byte into each block. Without this option, larger files may not be entirely contiguous on the filesystem, but with this option rsync will probably copy more slowly. If the destination is not an extent-supporting filesystem (such as ext4, xfs, NTFS, etc.), this option may have no positive effect at all. If combined with --sparse, the file will only have sparse blocks (as opposed to allocated sequences of null bytes) if the kernel version and filesystem type support creating holes in the allocated data. --dry-run, -n This makes rsync perform a trial run that doesn't make any changes (and produces mostly the same output as a real run). It is most commonly used in combination with the --verbose (-v) and/or --itemize-changes (-i) options to see what an rsync command is going to do before one actually runs it. 
The output of --itemize-changes is supposed to be exactly the same on a dry run and a subsequent real run (barring intentional trickery and system call failures); if it isn't, that's a bug. Other output should be mostly unchanged, but may differ in some areas. Notably, a dry run does not send the actual data for file transfers, so --progress has no effect, the "bytes sent", "bytes received", "literal data", and "matched data" statistics are too small, and the "speedup" value is equivalent to a run where no file transfers were needed. --whole-file, -W This option disables rsync's delta-transfer algorithm, which causes all transferred files to be sent whole. The transfer may be faster if this option is used when the bandwidth between the source and destination machines is higher than the bandwidth to disk (especially when the "disk" is actually a networked filesystem). This is the default when both the source and destination are specified as local paths, but only if no batch-writing option is in effect. --no-whole-file, --no-W Disable whole-file updating when it is enabled by default for a local transfer. This usually slows rsync down, but it can be useful if you are trying to minimize the writes to the destination file (if combined with --inplace) or for testing the checksum-based update algorithm. See also the --whole-file option. --checksum-choice=STR, --cc=STR This option overrides the checksum algorithms. If one algorithm name is specified, it is used for both the transfer checksums and (assuming --checksum is specified) the pre-transfer checksums. If two comma-separated names are supplied, the first name affects the transfer checksums, and the second name affects the pre-transfer checksums (-c). 
The checksum options that you may be able to use are: o auto (the default automatic choice) o xxh128 o xxh3 o xxh64 (aka xxhash) o md5 o md4 o sha1 o none Run rsync --version to see the default checksum list compiled into your version (which may differ from the list above). If "none" is specified for the first (or only) name, the --whole-file option is forced on and no checksum verification is performed on the transferred data. If "none" is specified for the second (or only) name, the --checksum option cannot be used. The "auto" option is the default, where rsync bases its algorithm choice on a negotiation between the client and the server as follows: When both sides of the transfer are at least 3.2.0, rsync chooses the first algorithm in the client's list of choices that is also in the server's list of choices. If no common checksum choice is found, rsync exits with an error. If the remote rsync is too old to support checksum negotiation, a value is chosen based on the protocol version (which chooses between MD5 and various flavors of MD4 based on protocol age). The default order can be customized by setting the environment variable RSYNC_CHECKSUM_LIST to a space-separated list of acceptable checksum names. If the string contains a "&" character, it is separated into the "client string & server string", otherwise the same string applies to both. If the string (or string portion) contains no non-whitespace characters, the default checksum list is used. This method does not allow you to specify the transfer checksum separately from the pre-transfer checksum, and it discards "auto" and all unknown checksum names. A list with only invalid names results in a failed negotiation. The use of the --checksum-choice option overrides this environment list. --one-file-system, -x This tells rsync to avoid crossing a filesystem boundary when recursing.
This does not limit the user's ability to specify items to copy from multiple filesystems, just rsync's recursion through the hierarchy of each directory that the user specified, and also the analogous recursion on the receiving side during deletion. Also keep in mind that rsync treats a "bind" mount to the same device as being on the same filesystem. If this option is repeated, rsync omits all mount-point directories from the copy. Otherwise, it includes an empty directory at each mount-point it encounters (using the attributes of the mounted directory because those of the underlying mount-point directory are inaccessible). If rsync has been told to collapse symlinks (via --copy-links or --copy-unsafe-links), a symlink to a directory on another device is treated like a mount-point. Symlinks to non-directories are unaffected by this option. --existing, --ignore-non-existing This tells rsync to skip creating files (including directories) that do not exist yet on the destination. If this option is combined with the --ignore-existing option, no files will be updated (which can be useful if all you want to do is delete extraneous files). This option is a TRANSFER RULE, so don't expect any exclude side effects. --ignore-existing This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directories, or nothing would get done). See also --ignore-non-existing. This option is a TRANSFER RULE, so don't expect any exclude side effects. This option can be useful for those doing backups using the --link-dest option when they need to continue a backup run that got interrupted. Since a --link-dest run is copied into a new directory hierarchy (when it is used properly), using --ignore-existing will ensure that the already-handled files don't get tweaked (which avoids a change in permissions on the hard-linked files). This does mean that this option is only looking at the existing files in the destination hierarchy itself.
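The skip behavior of --ignore-existing can be seen with a small local copy. This is an illustrative sketch (temp paths and contents are made up):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dest"
echo new > "$tmp/src/file"
echo old > "$tmp/dest/file"

# "file" already exists on the destination, so it is skipped
# and keeps its old content.
rsync -r --ignore-existing "$tmp/src/" "$tmp/dest/"
```

Running the same command without --ignore-existing would overwrite the destination file with the newer source content.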
When --info=skip2 is used rsync will output "FILENAME exists (INFO)" messages where the INFO indicates one of "type change", "sum change" (requires -c), "file change" (based on the quick check), "attr change", or "uptodate". Using --info=skip1 (which is also implied by 2 -v options) outputs the exists message without the INFO suffix. --remove-source-files This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side. Note that you should only use this option on source files that are quiescent. If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written. If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer). Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size or modify time has changed. Starting with 3.2.6, a local rsync copy will ensure that the sender does not remove a file the receiver just verified, such as when the user accidentally makes the source and destination directory the same path. --delete This tells rsync to delete extraneous files from the receiving side (ones that aren't on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory's contents (e.g.
"dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files' parent directory. Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section). Prior to rsync 2.6.7, this option would have no effect unless --recursive was enabled. Beginning with 2.6.7, deletions will also occur when --dirs (-d) is enabled, but only for directories whose contents are being copied. This option can be dangerous if used incorrectly! It is a very good idea to first try a run using the --dry-run (-n) option to see what files are going to be deleted. If the sending side detects any I/O errors, then the deletion of any files at the destination will be automatically disabled. This is to prevent temporary filesystem failures (such as NFS errors) on the sending side from causing a massive deletion of files on the destination. You can override this with the --ignore- errors option. The --delete option may be combined with one of the --delete-WHEN options without conflict, as well as --delete-excluded. However, if none of the --delete-WHEN options are specified, rsync will choose the --delete- during algorithm when talking to rsync 3.0.0 or newer, or the --delete-before algorithm when talking to an older rsync. See also --delete-delay and --delete-after. --delete-before Request that the file-deletions on the receiving side be done before the transfer starts. See --delete (which is implied) for more details on file-deletion. Deleting before the transfer is helpful if the filesystem is tight for space and removing extraneous files would help to make the transfer possible. However, it does introduce a delay before the start of the transfer, and this delay might cause the transfer to timeout (if --timeout was specified). 
It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive). --delete-during, --del Request that the file-deletions on the receiving side be done incrementally as the transfer happens. The per-directory delete scan is done right before each directory is checked for updates, so it behaves like a more efficient --delete-before, including doing the deletions prior to any per-directory filter files being updated. This option was first added in rsync version 2.6.4. See --delete (which is implied) for more details on file-deletion. --delete-delay Request that the file-deletions on the receiving side be computed during the transfer (like --delete-during), and then removed after the transfer completes. This is useful when combined with --delay-updates and/or --fuzzy, and is more efficient than using --delete-after (but can behave differently, since --delete-after computes the deletions in a separate pass after all updates are done). If the number of removed files overflows an internal buffer, a temporary file will be created on the receiving side to hold the names (it is removed while open, so you shouldn't see it during the transfer). If the creation of the temporary file fails, rsync will try to fall back to using --delete-after (which it cannot do if --recursive is doing an incremental scan). See --delete (which is implied) for more details on file-deletion. --delete-after Request that the file-deletions on the receiving side be done after the transfer has completed. This is useful if you are sending new per-directory merge files as a part of the transfer and you want their exclusions to take effect for the delete phase of the current transfer. It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive).
See --delete (which is implied) for more details on file-deletion. See also the --delete-delay option that might be a faster choice for those that just want the deletions to occur at the end of the transfer. --delete-excluded This option turns any unqualified exclude/include rules into server-side rules that do not affect the receiver's deletions. By default, an exclude or include has both a server-side effect (to "hide" and "show" files when building the server's file list) and a receiver-side effect (to "protect" and "risk" files when deletions are occurring). Any rule that has no modifier to specify what sides it is executed on will be instead treated as if it were a server-side rule only, avoiding any "protect" effects of the rules. A rule can still apply to both sides even with this option specified if the rule is given both the sender & receiver modifier letters (e.g., -f'-sr foo'). Receiver-side protect/risk rules can also be explicitly specified to limit the deletions. This saves you from having to edit a bunch of -f'- foo' rules into -f'-s foo' (aka -f'H foo') rules (not to mention the corresponding includes). See the FILTER RULES section for more information. See --delete (which is implied) for more details on deletion. --ignore-missing-args When rsync is first processing the explicitly requested source files (e.g. command-line arguments or --files-from entries), it is normally an error if the file cannot be found. This option suppresses that error, and does not try to transfer the file. This does not affect subsequent vanished-file errors if a file was initially found to be present and later is no longer there. --delete-missing-args This option takes the behavior of the (implied) --ignore-missing-args option a step farther: each missing arg will become a deletion request of the corresponding destination file on the receiving side (should it exist).
If the destination file is a non-empty directory, it will only be successfully deleted if --force or --delete are in effect. Other than that, this option is independent of any other type of delete processing. The missing source files are represented by special file-list entries which display as a "*missing" entry in the --list-only output. --ignore-errors Tells --delete to go ahead and delete files even when there are I/O errors. --force This option tells rsync to delete a non-empty directory when it is to be replaced by a non-directory. This is only relevant if deletions are not active (see --delete for details). Note for older rsync versions: --force used to still be required when using --delete-after, and it used to be non-functional unless the --recursive option was also enabled. --max-delete=NUM This tells rsync not to delete more than NUM files or directories. If that limit is exceeded, all further deletions are skipped through the end of the transfer. At the end, rsync outputs a warning (including a count of the skipped deletions) and exits with an error code of 25 (unless some more important error condition also occurred). Beginning with version 3.0.0, you may specify --max-delete=0 to be warned about any extraneous files in the destination without removing any of them. Older clients interpreted this as "unlimited", so if you don't know what version the client is, you can use the less obvious --max-delete=-1 as a backward-compatible way to specify that no deletions be allowed (though really old versions didn't warn when the limit was exceeded). --max-size=SIZE This tells rsync to avoid transferring any file that is larger than the specified SIZE. A numeric value can be suffixed with a string to indicate the numeric units or left unqualified to specify bytes. Feel free to use a fractional value along with the units, such as --max-size=1.5m. This option is a TRANSFER RULE, so don't expect any exclude side effects.
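An unqualified SIZE is a byte count, which makes --max-size easy to check locally. This is an illustrative sketch (temp paths and file sizes are made up):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/src"
head -c 1024 /dev/zero > "$tmp/src/big"   # 1024-byte file
printf x > "$tmp/src/small"               # 1-byte file

# Files larger than 512 bytes are skipped; only "small" is transferred.
rsync -r --max-size=512 "$tmp/src/" "$tmp/dest/"
```

Swapping in --min-size=512 would invert the selection, transferring only "big".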
The first letter of a units string can be B (bytes), K (kilo), M (mega), G (giga), T (tera), or P (peta). If the string is a single char or has "ib" added to it (e.g. "G" or "GiB") then the units are multiples of 1024. If you use a two-letter suffix that ends with a "B" (e.g. "kb") then you get units that are multiples of 1000. The string's letters can be any mix of upper and lower-case that you want to use. Finally, if the string ends with either "+1" or "-1", it is offset by one byte in the indicated direction. The largest possible value is usually 8192P-1. Examples: --max-size=1.5mb-1 is 1499999 bytes, and --max-size=2g+1 is 2147483649 bytes. Note that rsync versions prior to 3.1.0 did not allow --max-size=0. --min-size=SIZE This tells rsync to avoid transferring any file that is smaller than the specified SIZE, which can help in not transferring small, junk files. See the --max-size option for a description of SIZE and other info. Note that rsync versions prior to 3.1.0 did not allow --min-size=0. --max-alloc=SIZE By default rsync limits an individual malloc/realloc to about 1GB in size. For most people this limit works just fine and prevents a protocol error causing rsync to request massive amounts of memory. However, if you have many millions of files in a transfer, a large amount of server memory, and you don't want to split up your transfer into multiple parts, you can increase the per-allocation limit to something larger and rsync will consume more memory. Keep in mind that this is not a limit on the total size of allocated memory. It is a sanity-check value for each individual allocation. See the --max-size option for a description of how SIZE can be specified. The default suffix if none is given is bytes. Beginning in 3.2.3, a value of 0 specifies no limit. You can set a default value using the environment variable RSYNC_MAX_ALLOC using the same SIZE values as supported by this option.
If the remote rsync doesn't understand the --max-alloc option, you can override an environmental value by specifying --max-alloc=1g, which will make rsync avoid sending the option to the remote side (because "1G" is the default). --block-size=SIZE, -B This forces the block size used in rsync's delta-transfer algorithm to a fixed value. It is normally selected based on the size of each file being updated. See the technical report for details. Beginning in 3.2.3 the SIZE can be specified with a suffix as detailed in the --max-size option. Older versions only accepted a byte count. --rsh=COMMAND, -e This option allows you to choose an alternative remote shell program to use for communication between the local and remote copies of rsync. Typically, rsync is configured to use ssh by default, but you may prefer to use rsh on a local network. If this option is used with [user@]host::module/path, then the remote shell COMMAND will be used to run an rsync daemon on the remote host, and all data will be transmitted through that remote shell connection, rather than through a direct socket connection to a running rsync daemon on the remote host. See the USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION section above. Beginning with rsync 3.2.0, the RSYNC_PORT environment variable will be set when a daemon connection is being made via a remote-shell connection. It is set to 0 if the default daemon port is being assumed, or it is set to the value of the rsync port that was specified via either the --port option or a non-empty port value in an rsync:// URL. This allows the script to discern if a non-default port is being requested, allowing for things such as an SSL or stunnel helper script to connect to a default or alternate port. Command-line arguments are permitted in COMMAND provided that COMMAND is presented to rsync as a single argument. 
You must use spaces (not tabs or other whitespace) to separate the command and args from each other, and you can use single- and/or double-quotes to preserve spaces in an argument (but not backslashes). Note that doubling a single-quote inside a single-quoted string gives you a single-quote; likewise for double-quotes (though you need to pay attention to which quotes your shell is parsing and which quotes rsync is parsing). Some examples:

-e 'ssh -p 2234'
-e 'ssh -o "ProxyCommand nohup ssh firewall nc -w1 %h %p"'

(Note that ssh users can alternately customize site-specific connect options in their .ssh/config file.)

You can also choose the remote shell program using the RSYNC_RSH environment variable, which accepts the same range of values as -e.

See also the --blocking-io option which is affected by this option.

--rsync-path=PROGRAM
Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate.

One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance:

rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

--remote-option=OPTION, -M
This option is used for more advanced situations where you want certain effects to be limited to one side of the transfer only. For instance, if you want to pass --log-file=FILE and --fake-super to the remote system, specify it like this:

rsync -av -M --log-file=foo -M--fake-super src/ dest/

If you want to have an option affect only the local side of a transfer when it normally affects both sides, send its negation to the remote side.
Like this: rsync -av -x -M--no-x src/ dest/ Be cautious using this, as it is possible to toggle an option that will cause rsync to have a different idea about what data to expect next over the socket, and that will make it fail in a cryptic fashion. Note that you should use a separate -M option for each remote option you want to pass. On older rsync versions, the presence of any spaces in the remote-option arg could cause it to be split into separate remote args, but this requires the use of --old-args in a modern rsync. When performing a local transfer, the "local" side is the sender and the "remote" side is the receiver. Note some versions of the popt option-parsing library have a bug in them that prevents you from using an adjacent arg with an equal in it next to a short option letter (e.g. -M--log-file=/tmp/foo). If this bug affects your version of popt, you can use the version of popt that is included with rsync. --cvs-exclude, -C This is a useful shorthand for excluding a broad range of files that you often don't want to transfer between systems. It uses a similar algorithm to CVS to determine if a file should be ignored. The exclude list is initialized to exclude the following items (these initial items are marked as perishable -- see the FILTER RULES section): RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS .make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak *.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe *.Z *.elc *.ln core .svn/ .git/ .hg/ .bzr/ then, files listed in a $HOME/.cvsignore are added to the list and any files listed in the CVSIGNORE environment variable (all cvsignore names are delimited by whitespace). Finally, any file is ignored if it is in the same directory as a .cvsignore file and matches one of the patterns listed therein. Unlike rsync's filter/exclude files, these patterns are split on whitespace. See the cvs(1) manual for more information. 
If you're combining -C with your own --filter rules, you should note that these CVS excludes are appended at the end of your own rules, regardless of where the -C was placed on the command-line. This makes them a lower priority than any rules you specified explicitly. If you want to control where these CVS excludes get inserted into your filter rules, you should omit the -C as a command-line option and use a combination of --filter=:C and --filter=-C (either on your command-line or by putting the ":C" and "-C" rules into a filter file with your other rules). The first option turns on the per-directory scanning for the .cvsignore file. The second option does a one-time import of the CVS excludes mentioned above.

--filter=RULE, -f
This option allows you to add rules to selectively exclude certain files from the list of files to be transferred. This is most useful in combination with a recursive transfer.

You may use as many --filter options on the command line as you like to build up the list of files to exclude. If the filter contains whitespace, be sure to quote it so that the shell gives the rule to rsync as a single argument. The text below also mentions that you can use an underscore to replace the space that separates a rule from its arg.

See the FILTER RULES section for detailed information on this option.

-F
The -F option is a shorthand for adding two --filter rules to your command. The first time it is used is a shorthand for this rule:

--filter='dir-merge /.rsync-filter'

This tells rsync to look for per-directory .rsync-filter files that have been sprinkled through the hierarchy and use their rules to filter the files in the transfer. If -F is repeated, it is a shorthand for this rule:

--filter='exclude .rsync-filter'

This filters out the .rsync-filter files themselves from the transfer.

See the FILTER RULES section for detailed information on how these options work.
--exclude=PATTERN
This option is a simplified form of the --filter option that specifies an exclude rule and does not allow the full rule-parsing syntax of normal filter rules. This is equivalent to specifying -f'- PATTERN'.

See the FILTER RULES section for detailed information on this option.

--exclude-from=FILE
This option is related to the --exclude option, but it specifies a FILE that contains exclude patterns (one per line). Blank lines in the file are ignored, as are whole-line comments that start with ';' or '#' (filename rules that contain those characters are unaffected).

If a line begins with "- " (dash, space) or "+ " (plus, space), then the type of rule is being explicitly specified as an exclude or an include (respectively). Any rules without such a prefix are taken to be an exclude.

If a line consists of just "!", then the current filter rules are cleared before adding any further rules.

If FILE is '-', the list will be read from standard input.

--include=PATTERN
This option is a simplified form of the --filter option that specifies an include rule and does not allow the full rule-parsing syntax of normal filter rules. This is equivalent to specifying -f'+ PATTERN'.

See the FILTER RULES section for detailed information on this option.

--include-from=FILE
This option is related to the --include option, but it specifies a FILE that contains include patterns (one per line). Blank lines in the file are ignored, as are whole-line comments that start with ';' or '#' (filename rules that contain those characters are unaffected).

If a line begins with "- " (dash, space) or "+ " (plus, space), then the type of rule is being explicitly specified as an exclude or an include (respectively). Any rules without such a prefix are taken to be an include.

If a line consists of just "!", then the current filter rules are cleared before adding any further rules.

If FILE is '-', the list will be read from standard input.
--files-from=FILE
Using this option allows you to specify the exact list of files to transfer (as read from the specified FILE or '-' for standard input). It also tweaks the default behavior of rsync to make transferring just the specified files and directories easier:

o The --relative (-R) option is implied, which preserves the path information that is specified for each item in the file (use --no-relative or --no-R if you want to turn that off).

o The --dirs (-d) option is implied, which will create directories specified in the list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you want to turn that off).

o The --archive (-a) option's behavior does not imply --recursive (-r), so specify it explicitly, if you want it.

o These side-effects change the default state of rsync, so the position of the --files-from option on the command-line has no bearing on how other options are parsed (e.g. -a works the same before or after --files-from, as does --no-R and all other options).

The filenames that are read from the FILE are all relative to the source dir -- any leading slashes are removed and no ".." references are allowed to go higher than the source dir. For example, take this command:

rsync -a --files-from=/tmp/foo /usr remote:/backup

If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash), the immediate contents of the directory would also be sent (without needing to be explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r option was enabled, that dir's entire hierarchy would also be transferred (keep in mind that -r needs to be specified explicitly with --files-from, since it is not implied by -a).
Also note that the effect of the (enabled by default) --relative option is to duplicate only the path info that is read from the file -- it does not force the duplication of the source-spec path (/usr in this case).

In addition, the --files-from file can be read from the remote host instead of the local host if you specify a "host:" in front of the file (the host must match one end of the transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end of the transfer". For example:

rsync -a --files-from=:/path/file-list src:/ /tmp/copy

This would copy all the files specified in the /path/file-list file that was located on the remote "src" host.

If the --iconv and --secluded-args options are specified and the --files-from filenames are being sent from one host to another, the filenames will be translated from the sending host's charset to the receiving host's charset.

NOTE: sorting the list of files in the --files-from input helps rsync to be more efficient, as it will avoid re-visiting the path elements that are shared between adjacent entries. If the input is not sorted, some path elements (implied directories) may end up being scanned multiple times, and rsync will eventually unduplicate them after they get turned into file-list elements.

--from0, -0
This tells rsync that the rules/filenames it reads from a file are terminated by a null ('\0') character, not a NL, CR, or CR+LF. This affects --exclude-from, --include-from, --files-from, and any merged files specified in a --filter rule. It does not affect --cvs-exclude (since all names read from a .cvsignore file are split on whitespace).

--old-args
This option tells rsync to stop trying to protect the arg values on the remote side from unintended word-splitting or other misinterpretation. It also allows the client to treat an empty arg as a "." instead of generating an error.
The default in a modern rsync is for "shell-active" characters (including spaces) to be backslash-escaped in the args that are sent to the remote shell. The wildcard characters *, ?, [, & ] are not escaped in filename args (allowing them to expand into multiple filenames) while being protected in option args, such as --usermap.

If you have a script that wants to use old-style arg splitting in its filenames, specify this option once. If the remote shell has a problem with any backslash escapes at all, specify this option twice.

You may also control this setting via the RSYNC_OLD_ARGS environment variable. If it has the value "1", rsync will default to a single-option setting. If it has the value "2" (or more), rsync will default to a repeated-option setting. If it is "0", you'll get the default escaping behavior. The environment is always overridden by manually specified positive or negative options (the negative is --no-old-args).

Note that this option also disables the extra safety check added in 3.2.5 that ensures that a remote sender isn't including extra top-level items in the file-list that you didn't request. This side-effect is necessary because we can't know for sure what names to expect when the remote shell is interpreting the args.

This option conflicts with the --secluded-args option.

--secluded-args, -s
This option sends all filenames and most options to the remote rsync via the protocol (not the remote shell command line) which avoids letting the remote shell modify them. Wildcards are expanded on the remote host by rsync instead of a shell.

This is similar to the default backslash-escaping of args that was added in 3.2.4 (see --old-args) in that it prevents things like space splitting and unwanted special-character side-effects. However, it has the drawbacks of being incompatible with older rsync versions (prior to 3.0.0) and of being refused by restricted shells that want to be able to inspect all the option values for safety.
This option is useful for those times that you need the argument's character set to be converted for the remote host, if the remote shell is incompatible with the default backslash-escaping method, or there is some other reason that you want the majority of the options and arguments to bypass the command-line of the remote shell.

If you combine this option with --iconv, the args related to the remote side will be translated from the local to the remote character-set. The translation happens before wild-cards are expanded. See also the --files-from option.

You may also control this setting via the RSYNC_PROTECT_ARGS environment variable. If it has a non-zero value, this setting will be enabled by default, otherwise it will be disabled by default. Either state is overridden by a manually specified positive or negative version of this option (note that --no-s and --no-secluded-args are the negative versions). This environment variable is also superseded by a non-zero RSYNC_OLD_ARGS export.

This option conflicts with the --old-args option.

This option used to be called --protect-args (before 3.2.6) and that older name can still be used (though specifying it as -s is always the easiest and most compatible choice).

--trust-sender
This option disables two extra validation checks that a local client performs on the file list generated by a remote sender. This option should only be used if you trust the sender to not put something malicious in the file list (something that could possibly be done via a modified rsync, a modified shell, or some other similar manipulation).

Normally, the rsync client (as of version 3.2.5) runs two extra validation checks when pulling files from a remote rsync:

o It verifies that additional arg items didn't get added at the top of the transfer.

o It verifies that none of the items in the file list are names that should have been excluded (if filter rules were specified).
Note that various options can turn off one or both of these checks if the option interferes with the validation. For instance:

o Using a per-directory filter file reads filter rules that only the server knows about, so the filter checking is disabled.

o Using the --old-args option allows the sender to manipulate the requested args, so the arg checking is disabled.

o Reading the files-from list from the server side means that the client doesn't know the arg list, so the arg checking is disabled.

o Using --read-batch disables both checks since the batch file's contents will have been verified when it was created.

This option may help an under-powered client server if the extra pattern matching is slowing things down on a huge transfer. It can also be used to work around a currently-unknown bug in the verification logic for a transfer from a trusted sender.

When using this option it is a good idea to specify a dedicated destination directory, as discussed in the MULTI-HOST SECURITY section.

--copy-as=USER[:GROUP]
This option instructs rsync to use the USER and (if specified after a colon) the GROUP for the copy operations. This only works if the user that is running rsync has the ability to change users. If the group is not specified then the user's default groups are used.

This option can help to reduce the risk of an rsync being run as root into or out of a directory that might have live changes happening to it and you want to make sure that root-level read or write actions of system files are not possible. While you could alternatively run all of rsync as the specified user, sometimes you need the root-level host-access credentials to be used, so this allows rsync to drop root for the copying part of the operation after the remote-shell or daemon connection is established.

The option only affects one side of the transfer unless the transfer is local, in which case it affects both sides. Use the --remote-option to affect the remote side, such as -M--copy-as=joe.
For a local transfer, the lsh (or lsh.sh) support file provides a local-shell helper script that can be used to allow a "localhost:" or "lh:" host-spec to be specified without needing to set up any remote shells, allowing you to specify remote options that affect the side of the transfer that is using the host-spec (and using hostname "lh" avoids the overriding of the remote directory to the user's home dir).

For example, the following rsync writes the local files as user "joe":

sudo rsync -aiv --copy-as=joe host1:backups/joe/ /home/joe/

This makes all files owned by user "joe", limits the groups to those that are available to that user, and makes it impossible for the joe user to do a timed exploit of the path to induce a change to a file that the joe user has no permissions to change.

The following command does a local copy into the "dest/" dir as user "joe" (assuming you've installed support/lsh into a dir on your $PATH):

sudo rsync -aive lsh -M--copy-as=joe src/ lh:dest/

--temp-dir=DIR, -T
This option instructs rsync to use DIR as a scratch directory when creating temporary copies of the files transferred on the receiving side. The default behavior is to create each temporary file in the same directory as the associated destination file. Beginning with rsync 3.1.1, the temp-file names inside the specified DIR will not be prefixed with an extra dot (though they will still have a random suffix added).

This option is most often used when the receiving disk partition does not have enough free space to hold a copy of the largest file in the transfer. In this case (i.e. when the scratch directory is on a different disk partition), rsync will not be able to rename each received temporary file over the top of the associated destination file, but instead must copy it into place. Rsync does this by copying the file over the top of the destination file, which means that the destination file will contain truncated data during this copy.
If this were not done this way (even if the destination file were first removed, the data locally copied to a temporary file in the destination directory, and then renamed into place) it would be possible for the old file to continue taking up disk space (if someone had it open), and thus there might not be enough room to fit the new version on the disk at the same time.

If you are using this option for reasons other than a shortage of disk space, you may wish to combine it with the --delay-updates option, which will ensure that all copied files get put into subdirectories in the destination hierarchy, awaiting the end of the transfer.

If you don't have enough room to duplicate all the arriving files on the destination partition, another way to tell rsync that you aren't overly concerned about disk space is to use the --partial-dir option with a relative path; because this tells rsync that it is OK to stash off a copy of a single file in a subdir in the destination hierarchy, rsync will use the partial-dir as a staging area to bring over the copied file, and then rename it into place from there. (Specifying a --partial-dir with an absolute path does not have this side-effect.)

--fuzzy, -y
This option tells rsync that it should look for a basis file for any destination file that is missing. The current algorithm looks in the same directory as the destination file for either a file that has an identical size and modified-time, or a similarly-named file. If found, rsync uses the fuzzy basis file to try to speed up the transfer.

If the option is repeated, the fuzzy scan will also be done in any matching alternate destination directories that are specified via --compare-dest, --copy-dest, or --link-dest.

Note that the use of the --delete option might get rid of any potential fuzzy-match files, so either use --delete-after or specify some filename exclusions if you need to prevent this.
--compare-dest=DIR
This option instructs rsync to use DIR on the destination machine as an additional hierarchy to compare destination files against when doing transfers (if the files are missing in the destination directory). If a file is found in DIR that is identical to the sender's file, the file will NOT be transferred to the destination directory. This is useful for creating a sparse backup of just files that have changed from an earlier backup. This option is typically used to copy into an empty (or newly created) directory.

Beginning in version 2.6.4, multiple --compare-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match. If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer.

If DIR is a relative path, it is relative to the destination directory. See also --copy-dest and --link-dest.

NOTE: beginning with version 3.1.0, rsync will remove a file from a non-empty destination hierarchy if an exact match is found in one of the compare-dest hierarchies (making the end result more closely match a fresh copy).

--copy-dest=DIR
This option behaves like --compare-dest, but rsync will also copy unchanged files found in DIR to the destination directory using a local copy. This is useful for doing transfers to a new destination while leaving existing files intact, and then doing a flash-cutover when all files have been successfully transferred.

Multiple --copy-dest directories may be provided, which will cause rsync to search the list in the order specified for an unchanged file. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer.

If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --link-dest.
--link-dest=DIR
This option behaves like --copy-dest, but unchanged files are hard linked from DIR to the destination directory. The files must be identical in all preserved attributes (e.g. permissions, possibly ownership) in order for the files to be linked together. An example:

rsync -av --link-dest=$PWD/prior_dir host:src_dir/ new_dir/

If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).

Beginning in version 2.6.4, multiple --link-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match (there is a limit of 20 such directories). If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer.

This option works best when copying into an empty destination hierarchy, as existing files may get their attributes tweaked, and that can affect alternate destination files via hard-links. Also, itemizing of changes can get a bit muddled. Note that prior to version 3.1.0, an alternate-directory exact match would never be found (nor linked into the destination) when a destination file already exists.

Note that if you combine this option with --ignore-times, rsync will not link any files together because it only links identical files together as a substitute for transferring the file, never as an additional check after the file is updated.

If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --copy-dest.

Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when --owner (-o) was specified (or implied).
You can work around this bug by avoiding the -o option (or using --no-o) when sending to an old rsync.

--compress, -z
With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection.

Rsync supports multiple compression methods and will choose one for you unless you force the choice using the --compress-choice (--zc) option. Run rsync --version to see the default compress list compiled into your version.

When both sides of the transfer are at least 3.2.0, rsync chooses the first algorithm in the client's list of choices that is also in the server's list of choices. If no common compress choice is found, rsync exits with an error. If the remote rsync is too old to support compression negotiation, its list is assumed to be "zlib".

The default order can be customized by setting the environment variable RSYNC_COMPRESS_LIST to a space-separated list of acceptable compression names. If the string contains a "&" character, it is separated into the "client string & server string", otherwise the same string applies to both. If the string (or string portion) contains no non-whitespace characters, the default compress list is used. Any unknown compression names are discarded from the list, but a list with only invalid names results in a failed negotiation.

There are some older rsync versions that were configured to reject a -z option and require the use of -zz because their compression library was not compatible with the default zlib compression method. You can usually ignore this weirdness unless the rsync server complains and tells you to specify -zz.

--compress-choice=STR, --zc=STR
This option can be used to override the automatic negotiation of the compression algorithm that occurs when --compress is used. The option implies --compress unless "none" was specified, which instead implies --no-compress.
The compression options that you may be able to use are:

o zstd
o lz4
o zlibx
o zlib
o none

Run rsync --version to see the default compress list compiled into your version (which may differ from the list above).

Note that if you see an error about an option named --old-compress or --new-compress, this is rsync trying to send the --compress-choice=zlib or --compress-choice=zlibx option in a backward-compatible manner that more rsync versions understand. This error indicates that the older rsync version on the server will not allow you to force the compression type.

Note that the "zlibx" compression algorithm is just the "zlib" algorithm with matched data excluded from the compression stream (to try to make it more compatible with an external zlib implementation).

--compress-level=NUM, --zl=NUM
Explicitly set the compression level to use (see --compress, -z) instead of letting it default. The --compress option is implied as long as the level chosen is not a "don't compress" level for the compression algorithm that is in effect (e.g. zlib compression treats level 0 as "off").

The level values vary depending on the compression algorithm in effect. Because rsync will negotiate a compression choice by default (when the remote rsync is new enough), it can be good to combine this option with a --compress-choice (--zc) option unless you're sure of the choice in effect. For example:

rsync -aiv --zc=zstd --zl=22 host:src/ dest/

For zlib & zlibx compression the valid values are from 1 to 9 with 6 being the default. Specifying --zl=0 turns compression off, and specifying --zl=-1 chooses the default level of 6.

For zstd compression the valid values are from -131072 to 22 with 3 being the default. Specifying 0 chooses the default of 3.

For lz4 compression there are no levels, so the value is always 0.

If you specify a too-large or too-small value, the number is silently limited to a valid value.
This allows you to specify something like --zl=999999999 and be assured that you'll end up with the maximum compression level no matter what algorithm was chosen.

If you want to know the compression level that is in effect, specify --debug=nstr to see the "negotiated string" results. This will report something like "Client compress: zstd (level 3)" (along with the checksum choice in effect).

--skip-compress=LIST
NOTE: no compression method currently supports per-file compression changes, so this option has no effect.

Override the list of file suffixes that will be compressed as little as possible. Rsync sets the compression level on a per-file basis based on the file's suffix. If the compression algorithm has an "off" level, then no compression occurs for those files. Other algorithms that support changing the streaming level on-the-fly will have the level minimized to reduce the CPU usage as much as possible for a matching file.

The LIST should be one or more file suffixes (without the dot) separated by slashes (/). You may specify an empty string to indicate that no files should be skipped.

Simple character-class matching is supported: each must consist of a list of letters inside the square brackets (e.g. no special classes, such as "[:alpha:]", are supported, and '-' has no special meaning).

The characters asterisk (*) and question-mark (?) have no special meaning.
Here's an example that specifies 6 suffixes to skip (since 1 of the 5 rules matches 2 suffixes): --skip-compress=gz/jpg/mp[34]/7z/bz2 The default file suffixes in the skip-compress list in this version of rsync are: 3g2 3gp 7z aac ace apk avi bz2 deb dmg ear f4v flac flv gpg gz iso jar jpeg jpg lrz lz lz4 lzma lzo m1a m1v m2a m2ts m2v m4a m4b m4p m4r m4v mka mkv mov mp1 mp2 mp3 mp4 mpa mpeg mpg mpv mts odb odf odg odi odm odp ods odt oga ogg ogm ogv ogx opus otg oth otp ots ott oxt png qt rar rpm rz rzip spx squashfs sxc sxd sxg sxm sxw sz tbz tbz2 tgz tlz ts txz tzo vob war webm webp xz z zip zst This list will be replaced by your --skip-compress list in all but one situation: a copy from a daemon rsync will add your skipped suffixes to its list of non-compressing files (and its list may be configured to a different default). --numeric-ids With this option rsync will transfer numeric group and user IDs rather than using user and group names and mapping them at both ends. By default rsync will use the username and groupname to determine what ownership to give files. The special uid 0 and the special group 0 are never mapped via user/group names even if the --numeric-ids option is not specified. If a user or group has no name on the source system or it has no match on the destination system, then the numeric ID from the source system is used instead. See also the use chroot setting in the rsyncd.conf manpage for some comments on how the chroot setting affects rsync's ability to look up the names of the users and groups and what you can do about it. --usermap=STRING, --groupmap=STRING These options allow you to specify users and groups that should be mapped to other values by the receiving side. The STRING is one or more FROM:TO pairs of values separated by commas. Any matching FROM value from the sender is replaced with a TO value from the receiver. 
You may specify usernames or user IDs for the FROM and TO values, and the FROM value may also be a wild-card string, which will be matched against the sender's names (wildcards do NOT match against ID numbers, though see below for why a '*' matches everything). You may instead specify a range of ID numbers via an inclusive range: LOW-HIGH. For example:

--usermap=0-99:nobody,wayne:admin,*:normal
--groupmap=usr:1,1:usr

The first match in the list is the one that is used. You should specify all your user mappings using a single --usermap option, and/or all your group mappings using a single --groupmap option.

Note that the sender's name for the 0 user and group are not transmitted to the receiver, so you should either match these values using a 0, or use the names in effect on the receiving side (typically "root"). All other FROM names match those in use on the sending side. All TO names match those in use on the receiving side.

Any IDs that do not have a name on the sending side are treated as having an empty name for the purpose of matching. This allows them to be matched via a "*" or using an empty name. For instance:

--usermap=:nobody --groupmap=*:nobody

When the --numeric-ids option is used, the sender does not send any names, so all the IDs are treated as having an empty name. This means that you will need to specify numeric FROM values if you want to map these nameless IDs to different values.

For the --usermap option to work, the receiver will need to be running as a super-user (see also the --super and --fake-super options). For the --groupmap option to work, the receiver will need to have permissions to set that group.

Starting with rsync 3.2.4, the --usermap option implies the --owner (-o) option while the --groupmap option implies the --group (-g) option (since rsync needs to have those options enabled for the mapping options to work).
An older rsync client may need to use -s to avoid a complaint about wildcard characters, but a modern rsync handles this automatically. --chown=USER:GROUP This option forces all files to be owned by USER with group GROUP. This is a simpler interface than using --usermap & --groupmap directly, but it is implemented using those options internally so they cannot be mixed. If either the USER or GROUP is empty, no mapping for the omitted user/group will occur. If GROUP is empty, the trailing colon may be omitted, but if USER is empty, a leading colon must be supplied. If you specify "--chown=foo:bar", this is exactly the same as specifying "--usermap=*:foo --groupmap=*:bar", only easier (and with the same implied --owner and/or --group options). An older rsync client may need to use -s to avoid a complaint about wildcard characters, but a modern rsync handles this automatically. --timeout=SECONDS This option allows you to set a maximum I/O timeout in seconds. If no data is transferred for the specified time then rsync will exit. The default is 0, which means no timeout. --contimeout=SECONDS This option allows you to set the amount of time that rsync will wait for its connection to an rsync daemon to succeed. If the timeout is reached, rsync exits with an error. --address=ADDRESS By default rsync will bind to the wildcard address when connecting to an rsync daemon. The --address option allows you to specify a specific IP address (or hostname) to bind to. See also the daemon version of the --address option. --port=PORT This specifies an alternate TCP port number to use rather than the default of 873. This is only needed if you are using the double-colon (::) syntax to connect with an rsync daemon (since the URL syntax has a way to specify the port as a part of the URL). See also the daemon version of the --port option. --sockopts=OPTIONS This option can provide endless fun for people who like to tune their systems to the utmost degree. 
You can set all sorts of socket options which may make transfers faster (or slower!). Read the manpage for the setsockopt() system call for details on some of the options you may be able to set. By default no special socket options are set. This only affects direct socket connections to a remote rsync daemon.

See also the daemon version of the --sockopts option.

--blocking-io
This tells rsync to use blocking I/O when launching a remote shell transport. If the remote shell is either rsh or remsh, rsync defaults to using blocking I/O, otherwise it defaults to using non-blocking I/O. (Note that ssh prefers non-blocking I/O.)

--outbuf=MODE
This sets the output buffering mode. The mode can be None (aka Unbuffered), Line, or Block (aka Full). You may specify as little as a single letter for the mode, and use upper or lower case.

The main use of this option is to change Full buffering to Line buffering when rsync's output is going to a file or pipe.

--itemize-changes, -i
Requests a simple itemized list of the changes that are being made to each file, including attribute changes. This is exactly the same as specifying --out-format='%i %n%L'. If you repeat the option, unchanged files will also be output, but only if the receiving rsync is at least version 2.6.7 (you can use -vv with older versions of rsync, but that also turns on the output of other verbose messages).

The "%i" escape has a cryptic output that is 11 letters long. The general format is like the string YXcstpoguax, where Y is replaced by the type of update being done, X is replaced by the file-type, and the other letters represent attributes that may be output if they are being modified.

The update types that replace the Y are as follows:

o A < means that a file is being transferred to the remote host (sent).
o A > means that a file is being transferred to the local host (received).
o A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink, etc.).
o A h means that the item is a hard link to another item (requires --hard-links).
o A . means that the item is not being updated (though it might have attributes that are being modified).
o A * means that the rest of the itemized-output area contains a message (e.g. "deleting").

The file-types that replace the X are: f for a file, a d for a directory, an L for a symlink, a D for a device, and an S for a special file (e.g. named sockets and fifos).

The other letters in the string indicate if some attributes of the file have changed, as follows:

o "." - the attribute is unchanged.
o "+" - the file is newly created.
o " " - all the attributes are unchanged (all dots turn to spaces).
o "?" - the change is unknown (when the remote rsync is old).
o A letter indicates an attribute is being updated.

The attribute that is associated with each letter is as follows:

o A c means either that a regular file has a different checksum (requires --checksum) or that a symlink, device, or special file has a changed value. Note that if you are sending files to an rsync prior to 3.0.1, this change flag will be present only for checksum-differing regular files.
o A s means the size of a regular file is different and will be updated by the file transfer.
o A t means the modification time is different and is being updated to the sender's value (requires --times). An alternate value of T means that the modification time will be set to the transfer time, which happens when a file/symlink/device is updated without --times and when a symlink is changed and the receiver can't set its time. (Note: when using an rsync 3.0.0 client, you might see the s flag combined with t instead of the proper T flag for this time-setting failure.)
o A p means the permissions are different and are being updated to the sender's value (requires --perms).
o An o means the owner is different and is being updated to the sender's value (requires --owner and super-user privileges).
o A g means the group is different and is being updated to the sender's value (requires --group and the authority to set the group).
o A u|n|b indicates the following information:
  o u means the access (use) time is different and is being updated to the sender's value (requires --atimes)
  o n means the create time (newness) is different and is being updated to the sender's value (requires --crtimes)
  o b means that both the access and create times are being updated
o The a means that the ACL information is being changed.
o The x means that the extended attribute information is being changed.

One other output is possible: when deleting files, the "%i" will output the string "*deleting" for each item that is being removed (assuming that you are talking to a recent enough rsync that it logs deletions instead of outputting them as a verbose message).

--out-format=FORMAT
This allows you to specify exactly what the rsync client outputs to the user on a per-update basis. The format is a text string containing embedded single-character escape sequences prefixed with a percent (%) character. A default format of "%n%L" is assumed if either --info=name or -v is specified (this tells you just the name of the file and, if the item is a link, where it points). For a full list of the possible escape characters, see the log format setting in the rsyncd.conf manpage.

Specifying the --out-format option implies the --info=name option, which will mention each file, dir, etc. that gets updated in a significant way (a transferred file, a recreated symlink/device, or a touched directory). In addition, if the itemize-changes escape (%i) is included in the string (e.g. if the --itemize-changes option was used), the logging of names increases to mention any item that is changed in any way (as long as the receiving side is at least 2.6.4).
See the --itemize-changes option for a description of the output of "%i".

Rsync will output the out-format string prior to a file's transfer unless one of the transfer-statistic escapes is requested, in which case the logging is done at the end of the file's transfer. When this late logging is in effect and --progress is also specified, rsync will also output the name of the file being transferred prior to its progress information (followed, of course, by the out-format output).

--log-file=FILE
This option causes rsync to log what it is doing to a file. This is similar to the logging that a daemon does, but can be requested for the client side and/or the server side of a non-daemon transfer. If specified as a client option, transfer logging will be enabled with a default format of "%i %n%L". See the --log-file-format option if you wish to override this.

Here's an example command that requests the remote side to log what is happening:

rsync -av --remote-option=--log-file=/tmp/rlog src/ dest/

This is very useful if you need to debug why a connection is closing unexpectedly.

See also the daemon version of the --log-file option.

--log-file-format=FORMAT
This allows you to specify exactly what per-update logging is put into the file specified by the --log-file option (which must also be specified for this option to have any effect). If you specify an empty string, updated files will not be mentioned in the log file. For a list of the possible escape characters, see the log format setting in the rsyncd.conf manpage.

The default FORMAT used if --log-file is specified and this option is not is '%i %n%L'.

See also the daemon version of the --log-file-format option.

--stats
This tells rsync to print a verbose set of statistics on the file transfer, allowing you to tell how effective rsync's delta-transfer algorithm is for your data.

This option is equivalent to --info=stats2 if combined with 0 or 1 -v options, or --info=stats3 if combined with 2 or more -v options.
The current statistics are as follows:

o Number of files is the count of all "files" (in the generic sense), which includes directories, symlinks, etc. The total count will be followed by a list of counts by filetype (if the total is non-zero). For example: "(reg: 5, dir: 3, link: 2, dev: 1, special: 1)" lists the totals for regular files, directories, symlinks, devices, and special files. If any value is 0, it is completely omitted from the list.
o Number of created files is the count of how many "files" (generic sense) were created (as opposed to updated). The total count will be followed by a list of counts by filetype (if the total is non-zero).
o Number of deleted files is the count of how many "files" (generic sense) were deleted. The total count will be followed by a list of counts by filetype (if the total is non-zero). Note that this line is only output if deletions are in effect, and only if protocol 31 is being used (the default for rsync 3.1.x).
o Number of regular files transferred is the count of normal files that were updated via rsync's delta-transfer algorithm, which does not include dirs, symlinks, etc. Note that rsync 3.1.0 added the word "regular" into this heading.
o Total file size is the total sum of all file sizes in the transfer. This does not count any size for directories or special files, but does include the size of symlinks.
o Total transferred file size is the total sum of all file sizes for just the transferred files.
o Literal data is how much unmatched file-update data we had to send to the receiver for it to recreate the updated files.
o Matched data is how much data the receiver got locally when recreating the updated files.
o File list size is how big the file-list data was when the sender sent it to the receiver. This is smaller than the in-memory size for the file list due to some compressing of duplicated data when rsync sends the list.
o File list generation time is the number of seconds that the sender spent creating the file list. This requires a modern rsync on the sending side for this to be present.
o File list transfer time is the number of seconds that the sender spent sending the file list to the receiver.
o Total bytes sent is the count of all the bytes that rsync sent from the client side to the server side.
o Total bytes received is the count of all non-message bytes that rsync received by the client side from the server side. "Non-message" bytes means that we don't count the bytes for a verbose message that the server sent to us, which makes the stats more consistent.

--8-bit-output, -8
This tells rsync to leave all high-bit characters unescaped in the output instead of trying to test them to see if they're valid in the current locale and escaping the invalid ones.

All control characters (but never tabs) are always escaped, regardless of this option's setting.

The escape idiom that started in 2.6.7 is to output a literal backslash (\) and a hash (#), followed by exactly 3 octal digits. For example, a newline would output as "\#012". A literal backslash that is in a filename is not escaped unless it is followed by a hash and 3 digits (0-9).

--human-readable, -h
Output numbers in a more human-readable format. There are 3 possible levels:

1. output numbers with a separator between each set of 3 digits (either a comma or a period, depending on if the decimal point is represented by a period or a comma).
2. output numbers in units of 1000 (with a character suffix for larger units -- see below).
3. output numbers in units of 1024.

The default is human-readable level 1. Each -h option increases the level by one. You can take the level down to 0 (to output numbers as pure digits) by specifying the --no-human-readable (--no-h) option.

The unit letters that are appended in levels 2 and 3 are: K (kilo), M (mega), G (giga), T (tera), or P (peta).
For example, a 1234567-byte file would output as 1.23M in level-2 (assuming that a period is your local decimal point). Backward compatibility note: versions of rsync prior to 3.1.0 do not support human-readable level 1, and they default to level 0. Thus, specifying one or two -h options will behave in a comparable manner in old and new versions as long as you didn't specify a --no-h option prior to one or more -h options. See the --list-only option for one difference. --partial By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster. --partial-dir=DIR This option modifies the behavior of the --partial option while also implying that it be enabled. This enhanced partial-file method puts any partially transferred files into the specified DIR instead of writing the partial file out to the destination file. On the next transfer, rsync will use a file found in this dir as data to speed up the resumption of the transfer and then delete it after it has served its purpose. Note that if --whole-file is specified (or implied), any partial-dir files that are found for a file that is being updated will simply be removed (since rsync is sending files without using rsync's delta-transfer algorithm). Rsync will create the DIR if it is missing, but just the last dir -- not the whole path. This makes it easy to use a relative path (such as "--partial-dir=.rsync-partial") to have rsync create the partial-directory in the destination file's directory when it is needed, and then remove it again when the partial file is deleted. Note that this directory removal is only done for a relative pathname, as it is expected that an absolute path is to a directory that is reserved for partial-dir work. 
If the partial-dir value is not an absolute path, rsync will add an exclude rule at the end of all your existing excludes. This will prevent the sending of any partial-dir files that may exist on the sending side, and will also prevent the untimely deletion of partial-dir items on the receiving side. An example: the above --partial-dir option would add the equivalent of this "perishable" exclude at the end of any other filter rules:

-f '-p .rsync-partial/'

If you are supplying your own exclude rules, you may need to add your own exclude/hide/protect rule for the partial-dir because:

1. the auto-added rule may be ineffective at the end of your other rules, or
2. you may wish to override rsync's exclude choice.

For instance, if you want to make rsync clean-up any left-over partial-dirs that may be lying around, you should specify --delete-after and add a "risk" filter rule, e.g. -f 'R .rsync-partial/'. Avoid using --delete-before or --delete-during unless you don't need rsync to use any of the left-over partial-dir data during the current run.

IMPORTANT: the --partial-dir should not be writable by other users or it is a security risk! E.g. AVOID "/tmp"!

You can also set the partial-dir value with the RSYNC_PARTIAL_DIR environment variable. Setting this in the environment does not force --partial to be enabled, but rather it affects where partial files go when --partial is specified. For instance, instead of using --partial-dir=.rsync-tmp along with --progress, you could set RSYNC_PARTIAL_DIR=.rsync-tmp in your environment and then use the -P option to turn on the use of the .rsync-tmp dir for partial transfers. The only times that the --partial option does not look for this environment value are:

1. when --inplace was specified (since --inplace conflicts with --partial-dir), and
2. when --delay-updates was specified (see below).
When a modern rsync resumes the transfer of a file in the partial-dir, that partial file is now updated in-place instead of creating yet another tmp-file copy (so it maxes out at dest + tmp instead of dest + partial + tmp). This requires both ends of the transfer to be at least version 3.2.0. For the purposes of the daemon-config's "refuse options" setting, --partial-dir does not imply --partial. This is so that a refusal of the --partial option can be used to disallow the overwriting of destination files with a partial transfer, while still allowing the safer idiom provided by --partial-dir. --delay-updates This option puts the temporary file from each updated file into a holding directory until the end of the transfer, at which time all the files are renamed into place in rapid succession. This attempts to make the updating of the files a little more atomic. By default the files are placed into a directory named .~tmp~ in each file's destination directory, but if you've specified the --partial-dir option, that directory will be used instead. See the comments in the --partial-dir section for a discussion of how this .~tmp~ dir will be excluded from the transfer, and what you can do if you want rsync to cleanup old .~tmp~ dirs that might be lying around. Conflicts with --inplace and --append. This option implies --no-inc-recursive since it needs the full file list in memory in order to be able to iterate over it at the end. This option uses more memory on the receiving side (one bit per file transferred) and also requires enough free disk space on the receiving side to hold an additional copy of all the updated files. Note also that you should not use an absolute path to --partial-dir unless: 1. there is no chance of any of the files in the transfer having the same name (since all the updated files will be put into a single directory if the path is absolute), and 2. 
there are no mount points in the hierarchy (since the delayed updates will fail if they can't be renamed into place). See also the "atomic-rsync" python script in the "support" subdir for an update algorithm that is even more atomic (it uses --link-dest and a parallel hierarchy of files). --prune-empty-dirs, -m This option tells the receiving rsync to get rid of empty directories from the file-list, including nested directories that have no non-directory children. This is useful for avoiding the creation of a bunch of useless directories when the sending rsync is recursively scanning a hierarchy of files using include/exclude/filter rules. This option can still leave empty directories on the receiving side if you make use of TRANSFER_RULES. Because the file-list is actually being pruned, this option also affects what directories get deleted when a delete is active. However, keep in mind that excluded files and directories can prevent existing items from being deleted due to an exclude both hiding source files and protecting destination files. See the perishable filter-rule option for how to avoid this. You can prevent the pruning of certain empty directories from the file-list by using a global "protect" filter. For instance, this option would ensure that the directory "emptydir" was kept in the file-list: --filter 'protect emptydir/' Here's an example that copies all .pdf files in a hierarchy, only creating the necessary destination directories to hold the .pdf files, and ensures that any superfluous files and directories in the destination are removed (note the hide filter of non-directories being used instead of an exclude): rsync -avm --del --include='*.pdf' -f 'hide,! */' src/ dest If you didn't want to remove superfluous destination files, the more time-honored options of --include='*/' --exclude='*' would work fine in place of the hide-filter (if that is more natural to you). 
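The pruning behavior can be sketched with a small local run (this assumes a local rsync binary; the directory names are arbitrary):

```shell
# Sketch: -m removes directories that would end up empty from the file-list.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/docs" "$src/empty"
echo x > "$src/docs/file.pdf"
# The classic include/exclude idiom: keep dirs and .pdf files, skip the rest.
rsync -am --include='*.pdf' --include='*/' --exclude='*' "$src/" "$dst/"
# "$dst/docs/file.pdf" exists; "$dst/empty" was pruned and never created.
```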
--progress
This option tells rsync to print information showing the progress of the transfer. This gives a bored user something to watch. With a modern rsync this is the same as specifying --info=flist2,name,progress, but any user-supplied settings for those info flags take precedence (e.g. --info=flist0 --progress).

While rsync is transferring a regular file, it updates a progress line that looks like this:

782448 63% 110.64kB/s 0:00:04

In this example, the receiver has reconstructed 782448 bytes or 63% of the sender's file, which is being reconstructed at a rate of 110.64 kilobytes per second, and the transfer will finish in 4 seconds if the current rate is maintained until the end.

These statistics can be misleading if rsync's delta-transfer algorithm is in use. For example, if the sender's file consists of the basis file followed by additional data, the reported rate will probably drop dramatically when the receiver gets to the literal data, and the transfer will probably take much longer to finish than the receiver estimated as it was finishing the matched part of the file.

When the file transfer finishes, rsync replaces the progress line with a summary line that looks like this:

1,238,099 100% 146.38kB/s 0:00:08 (xfr#5, to-chk=169/396)

In this example, the file was 1,238,099 bytes long in total, the average rate of transfer for the whole file was 146.38 kilobytes per second over the 8 seconds that it took to complete, it was the 5th transfer of a regular file during the current rsync session, and there are 169 more files for the receiver to check (to see if they are up-to-date or not) remaining out of the 396 total files in the file-list.
In an incremental recursion scan, rsync won't know the total number of files in the file-list until it reaches the end of the scan, but since it starts to transfer files during the scan, it will display a line with the text "ir-chk" (for incremental recursion check) instead of "to-chk" until the point that it knows the full size of the list, at which point it will switch to using "to-chk". Thus, seeing "ir-chk" lets you know that the total count of files in the file list is still going to increase (and each time it does, the count of files left to check will increase by the number of files added to the list).

-P
The -P option is equivalent to "--partial --progress". Its purpose is to make it much easier to specify these two options for a long transfer that may be interrupted.

There is also a --info=progress2 option that outputs statistics based on the whole transfer, rather than individual files. Use this flag without outputting a filename (e.g. avoid -v or specify --info=name0) if you want to see how the transfer is doing without scrolling the screen with a lot of names. (You don't need to specify the --progress option in order to use --info=progress2.)

Finally, you can get an instant progress report by sending rsync a signal of either SIGINFO or SIGVTALRM. On BSD systems, a SIGINFO is generated by typing a Ctrl+T (Linux doesn't currently support a SIGINFO signal). When the client-side process receives one of those signals, it sets a flag to output a single progress report which is output when the current file transfer finishes (so it may take a little time if a big file is being handled when the signal arrives). A filename is output (if needed) followed by the --info=progress2 format of progress info. If you don't know which of the 3 rsync processes is the client process, it's OK to signal all of them (since the non-client processes ignore the signal).

CAUTION: sending SIGVTALRM to an older rsync (pre-3.2.0) will kill it.
--password-file=FILE
This option allows you to provide a password for accessing an rsync daemon via a file or via standard input if FILE is -. The file should contain just the password on the first line (all other lines are ignored). Rsync will exit with an error if FILE is world readable or if a root-run rsync command finds a non-root-owned file.

This option does not supply a password to a remote shell transport such as ssh; to learn how to do that, consult the remote shell's documentation. When accessing an rsync daemon using a remote shell as the transport, this option only comes into effect after the remote shell finishes its authentication (i.e. if you have also specified a password in the daemon's config file).

--early-input=FILE
This option allows rsync to send up to 5K of data to the "early exec" script on its stdin. One possible use of this data is to give the script a secret that can be used to mount an encrypted filesystem (which you should unmount in the "post-xfer exec" script).

The daemon must be at least version 3.2.1.

--list-only
This option will cause the source files to be listed instead of transferred. This option is inferred if there is a single source arg and no destination specified, so its main uses are:

1. to turn a copy command that includes a destination arg into a file-listing command, or
2. to be able to specify more than one source arg. Note: be sure to include the destination.

CAUTION: keep in mind that a source arg with a wild-card is expanded by the shell into multiple args, so it is never safe to try to specify a single wild-card arg to try to infer this option. A safe example is:

rsync -av --list-only foo* dest/

This option always uses an output format that looks similar to this:

drwxrwxr-x 4,096 2022/09/30 12:53:11 support
-rw-rw-r-- 80 2005/01/11 10:37:37 support/Makefile

The only option that affects this output style is (as of 3.1.0) the --human-readable (-h) option.
The default is to output sizes as byte counts with digit separators (in a 14-character-width column). Specifying at least one -h option makes the sizes output with unit suffixes. If you want old-style bytecount sizes without digit separators (and an 11-character-width column) use --no-h. Compatibility note: when requesting a remote listing of files from an rsync that is version 2.6.3 or older, you may encounter an error if you ask for a non-recursive listing. This is because a file listing implies the --dirs option w/o --recursive, and older rsyncs don't have that option. To avoid this problem, either specify the --no-dirs option (if you don't need to expand a directory's content), or turn on recursion and exclude the content of subdirectories: -r --exclude='/*/*'. --bwlimit=RATE This option allows you to specify the maximum transfer rate for the data sent over the socket, specified in units per second. The RATE value can be suffixed with a string to indicate a size multiplier, and may be a fractional value (e.g. --bwlimit=1.5m). If no suffix is specified, the value will be assumed to be in units of 1024 bytes (as if "K" or "KiB" had been appended). See the --max-size option for a description of all the available suffixes. A value of 0 specifies no limit. For backward-compatibility reasons, the rate limit will be rounded to the nearest KiB unit, so no rate smaller than 1024 bytes per second is possible. Rsync writes data over the socket in blocks, and this option both limits the size of the blocks that rsync writes, and tries to keep the average transfer rate at the requested limit. Some burstiness may be seen where rsync writes out a block of data and then sleeps to bring the average rate into compliance. Due to the internal buffering of data, the --progress option may not be an accurate reflection on how fast the data is being sent. 
This is because some files can show up as being rapidly sent when the data is quickly buffered, while others can show up as very slow when the flushing of the output buffer occurs. This may be fixed in a future version. See also the daemon version of the --bwlimit option. --stop-after=MINS, (--time-limit=MINS) This option tells rsync to stop copying when the specified number of minutes has elapsed. For maximal flexibility, rsync does not communicate this option to the remote rsync since it is usually enough that one side of the connection quits as specified. This allows the option's use even when only one side of the connection supports it. You can tell the remote side about the time limit using --remote-option (-M), should the need arise. The --time-limit version of this option is deprecated. --stop-at=y-m-dTh:m This option tells rsync to stop copying when the specified point in time has been reached. The date & time can be fully specified in a numeric format of year-month-dayThour:minute (e.g. 2000-12-31T23:59) in the local timezone. You may choose to separate the date numbers using slashes instead of dashes. The value can also be abbreviated in a variety of ways, such as specifying a 2-digit year and/or leaving off various values. In all cases, the value will be taken to be the next possible point in time where the supplied information matches. If the value specifies the current time or a past time, rsync exits with an error. For example, "1-30" specifies the next January 30th (at midnight local time), "14:00" specifies the next 2 P.M., "1" specifies the next 1st of the month at midnight, "31" specifies the next month where we can stop on its 31st day, and ":59" specifies the next 59th minute after the hour. For maximal flexibility, rsync does not communicate this option to the remote rsync since it is usually enough that one side of the connection quits as specified. This allows the option's use even when only one side of the connection supports it.
You can tell the remote side about the time limit using --remote-option (-M), should the need arise. Do keep in mind that the remote host may have a different default timezone than your local host. --fsync Cause the receiving side to fsync each finished file. This may slow down the transfer, but can help to provide peace of mind when updating critical files. --write-batch=FILE Record a file that can later be applied to another identical destination with --read-batch. See the "BATCH MODE" section for details, and also the --only-write-batch option. This option overrides the negotiated checksum & compress lists and always negotiates a choice based on old-school md5/md4/zlib choices. If you want a more modern choice, use the --checksum-choice (--cc) and/or --compress-choice (--zc) options. --only-write-batch=FILE Works like --write-batch, except that no updates are made on the destination system when creating the batch. This lets you transport the changes to the destination system via some other means and then apply the changes via --read-batch. Note that you can feel free to write the batch directly to some portable media: if this media fills to capacity before the end of the transfer, you can just apply that partial transfer to the destination and repeat the whole process to get the rest of the changes (as long as you don't mind a partially updated destination system while the multi-update cycle is happening). Also note that you only save bandwidth when pushing changes to a remote system because this allows the batched data to be diverted from the sender into the batch file without having to flow over the wire to the receiver (when pulling, the sender is remote, and thus can't write the batch). --read-batch=FILE Apply all of the changes stored in FILE, a file previously generated by --write-batch. If FILE is -, the batch data will be read from standard input. See the "BATCH MODE" section for details. --protocol=NUM Force an older protocol version to be used. 
This is useful for creating a batch file that is compatible with an older version of rsync. For instance, if rsync 2.6.4 is being used with the --write-batch option, but rsync 2.6.3 is what will be used to run the --read-batch option, you should use "--protocol=28" when creating the batch file to force the older protocol version to be used in the batch file (assuming you can't upgrade the rsync on the reading system). --iconv=CONVERT_SPEC Rsync can convert filenames between character sets using this option. Using a CONVERT_SPEC of "." tells rsync to look up the default character-set via the locale setting. Alternately, you can fully specify what conversion to do by giving a local and a remote charset separated by a comma in the order --iconv=LOCAL,REMOTE, e.g. --iconv=utf8,iso88591. This order ensures that the option will stay the same whether you're pushing or pulling files. Finally, you can specify either --no-iconv or a CONVERT_SPEC of "-" to turn off any conversion. The default setting of this option is site-specific, and can also be affected via the RSYNC_ICONV environment variable. For a list of what charset names your local iconv library supports, you can run "iconv --list". If you specify the --secluded-args (-s) option, rsync will translate the filenames you specify on the command-line that are being sent to the remote host. See also the --files-from option. Note that rsync does not do any conversion of names in filter files (including include/exclude files). It is up to you to ensure that you're specifying matching rules that can match on both sides of the transfer. For instance, you can specify extra include/exclude rules if there are filename differences on the two sides that need to be accounted for. When you pass an --iconv option to an rsync daemon that allows it, the daemon uses the charset specified in its "charset" configuration parameter regardless of the remote charset you actually pass. 
Thus, you may feel free to specify just the local charset for a daemon transfer (e.g. --iconv=utf8). --ipv4, -4 or --ipv6, -6 Tells rsync to prefer IPv4/IPv6 when creating sockets or running ssh. This affects sockets that rsync has direct control over, such as the outgoing socket when directly contacting an rsync daemon, as well as the forwarding of the -4 or -6 option to ssh when rsync can deduce that ssh is being used as the remote shell. For other remote shells you'll need to specify the "--rsh SHELL -4" option directly (or whatever IPv4/IPv6 hint options it uses). See also the daemon version of these options. If rsync was compiled without support for IPv6, the --ipv6 option will have no effect. The rsync --version output will contain "no IPv6" if that is the case. --checksum-seed=NUM Set the checksum seed to the integer NUM. This 4 byte checksum seed is included in each block and MD4 file checksum calculation (the more modern MD5 file checksums don't use a seed). By default the checksum seed is generated by the server and defaults to the current time(). This option is used to set a specific checksum seed, which is useful for applications that want repeatable block checksums, or in the case where the user wants a more random checksum seed. Setting NUM to 0 causes rsync to use the default of time() for checksum seed. DAEMON OPTIONS The options allowed when starting an rsync daemon are as follows: --daemon This tells rsync that it is to run as a daemon. The daemon you start running may be accessed using an rsync client using the host::module or rsync://host/module/ syntax. If standard input is a socket then rsync will assume that it is being run via inetd, otherwise it will detach from the current terminal and become a background daemon. The daemon will read the config file (rsyncd.conf) on each connect made by a client and respond to requests accordingly. See the rsyncd.conf(5) manpage for more details.
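A minimal daemon configuration might look like the following sketch. The module name "pub", the port, and all paths are illustrative, not prescribed:

```shell
# Write an illustrative rsyncd.conf with one read-only module.
# The daemon re-reads this file on every client connect.
cat > /tmp/rsyncd.conf <<'EOF'
pid file = /tmp/rsyncd.pid
port = 8730
use chroot = false
[pub]
    path = /tmp/pub
    read only = true
    comment = example read-only module
EOF
mkdir -p /tmp/pub
# Start in the foreground (drop --no-detach to let it daemonize):
# rsync --daemon --config=/tmp/rsyncd.conf --no-detach
```

A client would then address the module as host::pub or rsync://host:8730/pub/.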
--address=ADDRESS By default rsync will bind to the wildcard address when run as a daemon with the --daemon option. The --address option allows you to specify a specific IP address (or hostname) to bind to. This makes virtual hosting possible in conjunction with the --config option. See also the address global option in the rsyncd.conf manpage and the client version of the --address option. --bwlimit=RATE This option allows you to specify the maximum transfer rate for the data the daemon sends over the socket. The client can still specify a smaller --bwlimit value, but no larger value will be allowed. See the client version of the --bwlimit option for some extra details. --config=FILE This specifies an alternate config file to use instead of the default. This is only relevant when --daemon is specified. The default is /etc/rsyncd.conf unless the daemon is running over a remote shell program and the remote user is not the super-user; in that case the default is rsyncd.conf in the current directory (typically $HOME). --dparam=OVERRIDE, -M This option can be used to set a daemon-config parameter when starting up rsync in daemon mode. It is equivalent to adding the parameter at the end of the global settings prior to the first module's definition. The parameter names can be specified without spaces, if you so desire. For instance:
rsync --daemon -M pidfile=/path/rsync.pid
--no-detach When running as a daemon, this option instructs rsync to not detach itself and become a background process. This option is required when running as a service on Cygwin, and may also be useful when rsync is supervised by a program such as daemontools or AIX's System Resource Controller. --no-detach is also recommended when rsync is run under a debugger. This option has no effect if rsync is run from inetd or sshd. --port=PORT This specifies an alternate TCP port number for the daemon to listen on rather than the default of 873.
See also the client version of the --port option and the port global setting in the rsyncd.conf manpage. --log-file=FILE This option tells the rsync daemon to use the given log-file name instead of using the "log file" setting in the config file. See also the client version of the --log-file option. --log-file-format=FORMAT This option tells the rsync daemon to use the given FORMAT string instead of using the "log format" setting in the config file. It also enables "transfer logging" unless the string is empty, in which case transfer logging is turned off. See also the client version of the --log-file-format option. --sockopts This overrides the socket options setting in the rsyncd.conf file and has the same syntax. See also the client version of the --sockopts option. --verbose, -v This option increases the amount of information the daemon logs during its startup phase. After the client connects, the daemon's verbosity level will be controlled by the options that the client used and the "max verbosity" setting in the module's config section. See also the client version of the --verbose option. --ipv4, -4 or --ipv6, -6 Tells rsync to prefer IPv4/IPv6 when creating the incoming sockets that the rsync daemon will use to listen for connections. One of these options may be required in older versions of Linux to work around an IPv6 bug in the kernel (if you see an "address already in use" error when nothing else is using the port, try specifying --ipv6 or --ipv4 when starting the daemon). See also the client version of these options. If rsync was compiled without support for IPv6, the --ipv6 option will have no effect. The rsync --version output will contain "no IPv6" if that is the case. --help, -h When specified after --daemon, print a short help page describing the options available for starting an rsync daemon.
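The daemon options above can be exercised end to end on the loopback interface; a self-contained sketch, assuming rsync is installed (the port, module name, and paths are arbitrary):

```shell
# Start a throwaway daemon with its own config, then list and pull a module.
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo hello > "$tmp/data/file.txt"
cat > "$tmp/rsyncd.conf" <<EOF
pid file = $tmp/rsyncd.pid
port = 8731
use chroot = false
[data]
    path = $tmp/data
    read only = true
EOF
rsync --daemon --no-detach --config="$tmp/rsyncd.conf" &
daemon_pid=$!
sleep 1
# List the module's contents, then copy it locally:
rsync rsync://localhost:8731/data/
rsync -a rsync://localhost:8731/data/ "$tmp/copy/"
kill "$daemon_pid"
```

Because "use chroot" is off and the port is unprivileged, this runs as an ordinary user.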
FILTER RULES The filter rules allow for custom control of several aspects of how files are handled: o Control which files the sending side puts into the file list that describes the transfer hierarchy o Control which files the receiving side protects from deletion when the file is not in the sender's file list o Control which extended attribute names are skipped when copying xattrs The rules are either directly specified via option arguments or they can be read in from one or more files. The filter-rule files can even be a part of the hierarchy of files being copied, affecting different parts of the tree in different ways. SIMPLE INCLUDE/EXCLUDE RULES We will first cover the basics of how include & exclude rules affect what files are transferred, ignoring any deletion side-effects. Filter rules mainly affect the contents of directories that rsync is "recursing" into, but they can also affect a top-level item in the transfer that was specified as an argument. The default for any unmatched file/dir is for it to be included in the transfer, which puts the file/dir into the sender's file list. The use of an exclude rule causes one or more matching files/dirs to be left out of the sender's file list. An include rule can be used to limit the effect of an exclude rule that is matching too many files. The order of the rules is important because the first rule that matches is the one that takes effect. Thus, if an early rule excludes a file, no include rule that comes after it can have any effect. This means that you must place any include overrides somewhere prior to the exclude that it is intended to limit. When a directory is excluded, all its contents and sub-contents are also excluded. The sender doesn't scan through any of it at all, which can save a lot of time when skipping large unneeded sub-trees. It is also important to understand that the include/exclude rules are applied to every file and directory that the sender is recursing into.
Thus, if you want a particular deep file to be included, you have to make sure that none of the directories that must be traversed on the way down to that file are excluded or else the file will never be discovered to be included. As an example, if the directory "a/path" was given as a transfer argument and you want to ensure that the file "a/path/down/deep/wanted.txt" is a part of the transfer, then the sender must not exclude the directories "a/path", "a/path/down", or "a/path/down/deep" as it makes its way scanning through the file tree. When you are working on the rules, it can be helpful to ask rsync to tell you what is being excluded/included and why. Specifying --debug=FILTER or (when pulling files) -M--debug=FILTER turns on level 1 of the FILTER debug information that will output a message any time that a file or directory is included or excluded and which rule it matched. Beginning in 3.2.4 it will also warn if a filter rule has trailing whitespace, since an exclude of "foo " (with a trailing space) will not exclude a file named "foo". Exclude and include rules can specify wildcard PATTERN MATCHING RULES (similar to shell wildcards) that allow you to match things like a file suffix or a portion of a filename. A rule can be limited to only affecting a directory by putting a trailing slash onto the filename.
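A dry run plus --debug=FILTER makes it easy to watch the rules at work; a minimal sketch, assuming rsync 3.1 or later is installed (the tree and the rule are illustrative):

```shell
# Build a small tree, then dry-run a transfer that excludes one directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/keep" "$tmp/src/skip"
touch "$tmp/src/keep/a.txt" "$tmp/src/skip/b.txt"
# -n (dry run) avoids copying; --debug=FILTER reports each include/exclude
# decision and the rule that caused it.
rsync -ain --debug=FILTER -f'- skip/' "$tmp/src/" "$tmp/dest/"
```

The itemized output should list keep/a.txt but not skip/b.txt, since the excluded directory is never scanned.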
SIMPLE INCLUDE/EXCLUDE EXAMPLE With the following file tree created on the sending side:
mkdir x/
touch x/file.txt
mkdir x/y/
touch x/y/file.txt
touch x/y/zzz.txt
mkdir x/z/
touch x/z/file.txt
Then the following rsync command will transfer the file "x/y/file.txt" and the directories needed to hold it, resulting in the path "/tmp/x/y/file.txt" existing on the remote host: rsync -ai -f'+ x/' -f'+ x/y/' -f'+ x/y/file.txt' -f'- *' x host:/tmp/ Aside: this copy could also have been accomplished using the -R option (though the 2 commands behave differently if deletions are enabled): rsync -aiR x/y/file.txt host:/tmp/ The following command does not need an include of the "x" directory because it is not a part of the transfer (note the trailing slash). Running this command would copy just "/tmp/x/file.txt" because the "y" and "z" dirs get excluded: rsync -ai -f'+ file.txt' -f'- *' x/ host:/tmp/x/ This command would omit the zzz.txt file while copying "x" and everything else it contains: rsync -ai -f'- zzz.txt' x host:/tmp/ FILTER RULES WHEN DELETING By default the include & exclude filter rules affect both the sender (as it creates its file list) and the receiver (as it creates its file lists for calculating deletions). If no delete option is in effect, the receiver skips creating the delete-related file lists. This two-sided default can be manually overridden so that you are only specifying sender rules or receiver rules, as described in the FILTER RULES IN DEPTH section. When deleting, an exclude protects a file from being removed on the receiving side while an include overrides that protection (putting the file at risk of deletion). The default is for a file to be at risk -- its safety depends on it matching a corresponding file from the sender. An example of the two-sided exclude effect can be illustrated by the copying of a C development directory between 2 systems.
When doing a touch-up copy, you might want to skip copying the built executable and the .o files (sender hide) so that the receiving side can build their own and not lose any object files that are already correct (receiver protect). For instance: rsync -ai --del -f'- *.o' -f'- cmd' src host:/dest/ Note that using -f'-p *.o' is even better than -f'- *.o' if there is a chance that the directory structure may have changed. The "p" modifier is discussed in FILTER RULE MODIFIERS. One final note, if your shell doesn't mind unexpanded wildcards, you could simplify the typing of the filter options by using an underscore in place of the space and leaving off the quotes. For instance, -f -_*.o -f -_cmd (and similar) could be used instead of the filter options above. FILTER RULES IN DEPTH Rsync supports old-style include/exclude rules and new-style filter rules. The older rules are specified using --include and --exclude as well as the --include-from and --exclude-from. These are limited in behavior but they don't require a "-" or "+" prefix. An old-style exclude rule is turned into a "- name" filter rule (with no modifiers) and an old-style include rule is turned into a "+ name" filter rule (with no modifiers). Rsync builds an ordered list of filter rules as specified on the command-line and/or read-in from files. New style filter rules have the following syntax: RULE [PATTERN_OR_FILENAME] RULE,MODIFIERS [PATTERN_OR_FILENAME] You have your choice of using either short or long RULE names, as described below. If you use a short-named rule, the ',' separating the RULE from the MODIFIERS is optional. The PATTERN or FILENAME that follows (when present) must come after either a single space or an underscore (_). Any additional spaces and/or underscores are considered to be a part of the pattern name. Here are the available rule prefixes: exclude, '-' specifies an exclude pattern that (by default) is both a hide and a protect. 
include, '+' specifies an include pattern that (by default) is both a show and a risk. merge, '.' specifies a merge-file on the client side to read for more rules. dir-merge, ':' specifies a per-directory merge-file. Using this kind of filter rule requires that you trust the sending side's filter checking, so it has the side-effect mentioned under the --trust-sender option. hide, 'H' specifies a pattern for hiding files from the transfer. Equivalent to a sender-only exclude, so -f'H foo' could also be specified as -f'-s foo'. show, 'S' files that match the pattern are not hidden. Equivalent to a sender-only include, so -f'S foo' could also be specified as -f'+s foo'. protect, 'P' specifies a pattern for protecting files from deletion. Equivalent to a receiver-only exclude, so -f'P foo' could also be specified as -f'-r foo'. risk, 'R' files that match the pattern are not protected. Equivalent to a receiver-only include, so -f'R foo' could also be specified as -f'+r foo'. clear, '!' clears the current include/exclude list (takes no arg) When rules are being read from a file (using merge or dir-merge), empty lines are ignored, as are whole-line comments that start with a '#' (filename rules that contain a hash character are unaffected). Note also that the --filter, --include, and --exclude options take one rule/pattern each. To add multiple ones, you can repeat the options on the command-line, use the merge-file syntax of the --filter option, or the --include-from / --exclude-from options. PATTERN MATCHING RULES Most of the rules mentioned above take an argument that specifies what the rule should match. If rsync is recursing through a directory hierarchy, keep in mind that each pattern is matched against the name of every directory in the descent path as rsync finds the filenames to send. 
The matching rules for the pattern argument take several forms: o If a pattern contains a / (not counting a trailing slash) or a "**" (which can match a slash), then the pattern is matched against the full pathname, including any leading directories within the transfer. If the pattern doesn't contain a (non-trailing) / or a "**", then it is matched only against the final component of the filename or pathname. For example, foo means that the final path component must be "foo" while foo/bar would match the last 2 elements of the path (as long as both elements are within the transfer). o A pattern that ends with a / only matches a directory, not a regular file, symlink, or device. o A pattern that starts with a / is anchored to the start of the transfer path instead of the end. For example, /foo/** or /foo/bar/** match only leading elements in the path. If the rule is read from a per-directory filter file, the transfer path being matched will begin at the level of the filter file instead of the top of the transfer. See the section on ANCHORING INCLUDE/EXCLUDE PATTERNS for a full discussion of how to specify a pattern that matches at the root of the transfer. Rsync chooses between doing a simple string match and wildcard matching by checking if the pattern contains one of these three wildcard characters: '*', '?', and '[' : o a '?' matches any single character except a slash (/). o a '*' matches zero or more non-slash characters. o a '**' matches zero or more characters, including slashes. o a '[' introduces a character class, such as [a-z] or [[:alpha:]], that must match one character. o a trailing *** in the pattern is a shorthand that allows you to match a directory and all its contents using a single rule. For example, specifying "dir_name/***" will match both the "dir_name" directory (as if "dir_name/" had been specified) and everything in the directory (as if "dir_name/**" had been specified). 
o a backslash can be used to escape a wildcard character, but it is only interpreted as an escape character if at least one wildcard character is present in the match pattern. For instance, the pattern "foo\bar" matches that single backslash literally, while the pattern "foo\bar*" would need to be changed to "foo\\bar*" to avoid the "\b" becoming just "b". Here are some examples of exclude/include matching: o Option -f'- *.o' would exclude all filenames ending with .o o Option -f'- /foo' would exclude a file (or directory) named foo in the transfer-root directory o Option -f'- foo/' would exclude any directory named foo o Option -f'- foo/*/bar' would exclude any file/dir named bar which is at two levels below a directory named foo (if foo is in the transfer) o Option -f'- /foo/**/bar' would exclude any file/dir named bar that was two or more levels below a top-level directory named foo (note that /foo/bar is not excluded by this) o Options -f'+ */' -f'+ *.c' -f'- *' would include all directories and .c source files but nothing else o Options -f'+ foo/' -f'+ foo/bar.c' -f'- *' would include only the foo directory and foo/bar.c (the foo directory must be explicitly included or it would be excluded by the "- *") FILTER RULE MODIFIERS The following modifiers are accepted after an include (+) or exclude (-) rule: o A / specifies that the include/exclude rule should be matched against the absolute pathname of the current item. For example, -f'-/ /etc/passwd' would exclude the passwd file any time the transfer was sending files from the "/etc" directory, and "-/ subdir/foo" would always exclude "foo" when it is in a dir named "subdir", even if "foo" is at the root of the current transfer. o A ! specifies that the include/exclude should take effect if the pattern fails to match. For instance, -f'-! */' would exclude all non-directories. o A C is used to indicate that all the global CVS-exclude rules should be inserted as excludes in place of the "-C". No arg should follow. 
o An s is used to indicate that the rule applies to the sending side. When a rule affects the sending side, it affects what files are put into the sender's file list. The default is for a rule to affect both sides unless --delete-excluded was specified, in which case default rules become sender-side only. See also the hide (H) and show (S) rules, which are an alternate way to specify sending-side includes/excludes. o An r is used to indicate that the rule applies to the receiving side. When a rule affects the receiving side, it prevents files from being deleted. See the s modifier for more info. See also the protect (P) and risk (R) rules, which are an alternate way to specify receiver-side includes/excludes. o A p indicates that a rule is perishable, meaning that it is ignored in directories that are being deleted. For instance, the --cvs-exclude (-C) option's default rules that exclude things like "CVS" and "*.o" are marked as perishable, and will not prevent a directory that was removed on the source from being deleted on the destination. o An x indicates that a rule affects xattr names in xattr copy/delete operations (and is thus ignored when matching file/dir names). If no xattr-matching rules are specified, a default xattr filtering rule is used (see the --xattrs option). MERGE-FILE FILTER RULES You can merge whole files into your filter rules by specifying either a merge (.) or a dir-merge (:) filter rule (as introduced in the FILTER RULES section above). There are two kinds of merged files -- single-instance ('.') and per-directory (':'). A single-instance merge file is read one time, and its rules are incorporated into the filter list in the place of the "." rule. For per-directory merge files, rsync will scan every directory that it traverses for the named file, merging its contents when the file exists into the current list of inherited rules. 
These per-directory rule files must be created on the sending side because it is the sending side that is being scanned for the available files to transfer. These rule files may also need to be transferred to the receiving side if you want them to affect what files don't get deleted (see PER-DIRECTORY RULES AND DELETE below). Some examples:
merge /etc/rsync/default.rules
. /etc/rsync/default.rules
dir-merge .per-dir-filter
dir-merge,n- .non-inherited-per-dir-excludes
:n- .non-inherited-per-dir-excludes
The following modifiers are accepted after a merge or dir-merge rule: o A - specifies that the file should consist of only exclude patterns, with no other rule-parsing except for in-file comments. o A + specifies that the file should consist of only include patterns, with no other rule-parsing except for in-file comments. o A C is a way to specify that the file should be read in a CVS-compatible manner. This turns on 'n', 'w', and '-', but also allows the list-clearing token (!) to be specified. If no filename is provided, ".cvsignore" is assumed. o An e will exclude the merge-file name from the transfer; e.g. "dir-merge,e .rules" is like "dir-merge .rules" and "- .rules". o An n specifies that the rules are not inherited by subdirectories. o A w specifies that the rules are word-split on whitespace instead of the normal line-splitting. This also turns off comments. Note: the space that separates the prefix from the rule is treated specially, so "- foo + bar" is parsed as two rules (assuming that prefix-parsing wasn't also disabled). o You may also specify any of the modifiers for the "+" or "-" rules (above) in order to have the rules that are read in from the file default to having that modifier set (except for the ! modifier, which would not be useful). For instance, "merge,-/ .excl" would treat the contents of .excl as absolute-path excludes, while "dir-merge,s .filt" and ":sC" would each make all their per-directory rules apply only on the sending side.
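The per-directory behavior can be demonstrated locally; a runnable sketch, assuming rsync is installed (the tree, the ".rules" filename, and the pattern are illustrative):

```shell
# A ".rules" merge file placed in one subtree only affects that subtree.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/a" "$tmp/src/b"
touch "$tmp/src/a/x.tmp" "$tmp/src/a/x.txt" "$tmp/src/b/y.tmp"
printf -- '- *.tmp\n' > "$tmp/src/a/.rules"   # read when rsync enters "a"
rsync -a -f'dir-merge .rules' "$tmp/src/" "$tmp/dest/"
# a/x.tmp is excluded by a/.rules; b/y.tmp is outside its scope and copies.
```

Adding the 'e' modifier (-f'dir-merge,e .rules') would also keep the .rules file itself out of the transfer.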
If the merge rule specifies sides to affect (via the s or r modifier or both), then the rules in the file must not specify sides (via a modifier or a rule prefix such as hide). Per-directory rules are inherited in all subdirectories of the directory where the merge-file was found unless the 'n' modifier was used. Each subdirectory's rules are prefixed to the inherited per-directory rules from its parents, which gives the newest rules a higher priority than the inherited rules. The entire set of dir-merge rules are grouped together in the spot where the merge-file was specified, so it is possible to override dir-merge rules via a rule that got specified earlier in the list of global rules. When the list-clearing rule ("!") is read from a per-directory file, it only clears the inherited rules for the current merge file. Another way to prevent a single rule from a dir-merge file from being inherited is to anchor it with a leading slash. Anchored rules in a per-directory merge-file are relative to the merge- file's directory, so a pattern "/foo" would only match the file "foo" in the directory where the dir-merge filter file was found. Here's an example filter file which you'd specify via --filter=". file": merge /home/user/.global-filter - *.gz dir-merge .rules + *.[ch] - *.o - foo* This will merge the contents of the /home/user/.global-filter file at the start of the list and also turns the ".rules" filename into a per-directory filter file. All rules read in prior to the start of the directory scan follow the global anchoring rules (i.e. a leading slash matches at the root of the transfer). If a per-directory merge-file is specified with a path that is a parent directory of the first transfer directory, rsync will scan all the parent dirs from that starting point to the transfer directory for the indicated per-directory file. 
For instance, here is a common filter (see -F): --filter=': /.rsync-filter' That rule tells rsync to scan for the file .rsync-filter in all directories from the root down through the parent directory of the transfer prior to the start of the normal directory scan of the files in the directories that are sent as a part of the transfer. (Note: for an rsync daemon, the root is always the same as the module's "path".) Some examples of this pre-scanning for per-directory files:
rsync -avF /src/path/ /dest/dir
rsync -av --filter=': ../../.rsync-filter' /src/path/ /dest/dir
rsync -av --filter=': .rsync-filter' /src/path/ /dest/dir
The first two commands above will look for ".rsync-filter" in "/" and "/src" before the normal scan begins looking for the file in "/src/path" and its subdirectories. The last command avoids the parent-dir scan and only looks for the ".rsync-filter" files in each directory that is a part of the transfer. If you want to include the contents of a ".cvsignore" in your patterns, you should use the rule ":C", which creates a dir-merge of the .cvsignore file, but parsed in a CVS-compatible manner. You can use this to affect where the --cvs-exclude (-C) option's inclusion of the per-directory .cvsignore file gets placed into your rules by putting the ":C" wherever you like in your filter rules. Without this, rsync would add the dir-merge rule for the .cvsignore file at the end of all your other rules (giving it a lower priority than your command-line rules). For example:
cat <<EOT | rsync -avC --filter='. -' a/ b
+ foo.o
:C
- *.old
EOT
rsync -avC --include=foo.o -f :C --exclude='*.old' a/ b
Both of the above rsync commands are identical. Each one will merge all the per-directory .cvsignore rules in the middle of the list rather than at the end. This allows their dir-specific rules to supersede the rules that follow the :C instead of being subservient to all your rules. To affect the other CVS exclude rules (i.e.
the default list of exclusions, the contents of $HOME/.cvsignore, and the value of $CVSIGNORE) you should omit the -C command-line option and instead insert a "-C" rule into your filter rules; e.g. "--filter=-C". LIST-CLEARING FILTER RULE You can clear the current include/exclude list by using the "!" filter rule (as introduced in the FILTER RULES section above). The "current" list is either the global list of rules (if the rule is encountered while parsing the filter options) or a set of per-directory rules (which are inherited in their own sub-list, so a subdirectory can use this to clear out the parent's rules). ANCHORING INCLUDE/EXCLUDE PATTERNS As mentioned earlier, global include/exclude patterns are anchored at the "root of the transfer" (as opposed to per-directory patterns, which are anchored at the merge-file's directory). If you think of the transfer as a subtree of names that are being sent from sender to receiver, the transfer-root is where the tree starts to be duplicated in the destination directory. This root governs where patterns that start with a / match. Because the matching is relative to the transfer-root, changing the trailing slash on a source path or changing your use of the --relative option affects the path you need to use in your matching (in addition to changing how much of the file tree is duplicated on the destination host). The following examples demonstrate this. Let's say that we want to match two source files, one with an absolute path of "/home/me/foo/bar", and one with a path of "/home/you/bar/baz". 
Here is how the various command choices differ for a 2-source transfer: Example cmd: rsync -a /home/me /home/you /dest +/- pattern: /me/foo/bar +/- pattern: /you/bar/baz Target file: /dest/me/foo/bar Target file: /dest/you/bar/baz Example cmd: rsync -a /home/me/ /home/you/ /dest +/- pattern: /foo/bar (note missing "me") +/- pattern: /bar/baz (note missing "you") Target file: /dest/foo/bar Target file: /dest/bar/baz Example cmd: rsync -a --relative /home/me/ /home/you /dest +/- pattern: /home/me/foo/bar (note full path) +/- pattern: /home/you/bar/baz (ditto) Target file: /dest/home/me/foo/bar Target file: /dest/home/you/bar/baz Example cmd: cd /home; rsync -a --relative me/foo you/ /dest +/- pattern: /me/foo/bar (starts at specified path) +/- pattern: /you/bar/baz (ditto) Target file: /dest/me/foo/bar Target file: /dest/you/bar/baz The easiest way to see what name you should filter is to just look at the output when using --verbose and put a / in front of the name (use the --dry-run option if you're not yet ready to copy any files). PER-DIRECTORY RULES AND DELETE Without a delete option, per-directory rules are only relevant on the sending side, so you can feel free to exclude the merge files themselves without affecting the transfer. To make this easy, the 'e' modifier adds this exclude for you, as seen in these two equivalent commands: rsync -av --filter=': .excl' --exclude=.excl host:src/dir /dest rsync -av --filter=':e .excl' host:src/dir /dest However, if you want to do a delete on the receiving side AND you want some files to be excluded from being deleted, you'll need to be sure that the receiving side knows what files to exclude. 
The easiest way is to include the per-directory merge files in the transfer and use --delete-after, because this ensures that the receiving side gets all the same exclude rules as the sending side before it tries to delete anything: rsync -avF --delete-after host:src/dir /dest However, if the merge files are not a part of the transfer, you'll need to either specify some global exclude rules (i.e. specified on the command line), or you'll need to maintain your own per-directory merge files on the receiving side. An example of the first is this (assume that the remote .rules files exclude themselves): rsync -av --filter=': .rules' --filter='. /my/extra.rules' --delete host:src/dir /dest In the above example the extra.rules file can affect both sides of the transfer, but (on the sending side) the rules are subservient to the rules merged from the .rules files because they were specified after the per-directory merge rule. In one final example, the remote side is excluding the .rsync-filter files from the transfer, but we want to use our own .rsync-filter files to control what gets deleted on the receiving side. To do this we must specifically exclude the per-directory merge files (so that they don't get deleted) and then put rules into the local files to control what else should not get deleted. Like one of these commands: rsync -av --filter=':e /.rsync-filter' --delete \ host:src/dir /dest rsync -avFF --delete host:src/dir /dest TRANSFER RULES top In addition to the FILTER RULES that affect the recursive file scans that generate the file list on the sending and (when deleting) receiving sides, there are transfer rules. These rules affect which files the generator decides need to be transferred without the side effects of an exclude filter rule. Transfer rules affect only files and never directories. 
Because a transfer rule does not affect what goes into the sender's (and receiver's) file list, it cannot have any effect on which files get deleted on the receiving side. For example, if the file "foo" is present in the sender's list but its size is such that it is omitted due to a transfer rule, the receiving side does not request the file. However, its presence in the file list means that a delete pass will not remove a matching file named "foo" on the receiving side. On the other hand, a server-side exclude (hide) of the file "foo" leaves the file out of the server's file list, and absent a receiver-side exclude (protect) the receiver will remove a matching file named "foo" if deletions are requested. Given that the files are still in the sender's file list, the --prune-empty-dirs option will not judge a directory as being empty even if it contains only files that the transfer rules omitted. Similarly, a transfer rule does not have any extra effect on which files are deleted on the receiving side, so setting a maximum file size for the transfer does not prevent big files from being deleted. Examples of transfer rules include the default "quick check" algorithm (which compares size & modify time), the --update option, the --max-size option, the --ignore-non-existing option, and a few others. BATCH MODE top Batch mode can be used to apply the same set of updates to many identical systems. Suppose one has a tree which is replicated on a number of hosts. Now suppose some changes have been made to this source tree and those changes need to be propagated to the other hosts. In order to do this using batch mode, rsync is run with the write-batch option to apply the changes made to the source tree to one of the destination trees. The write-batch option causes the rsync client to store in a "batch file" all the information needed to repeat this operation against other, identical destination trees. 
Generating the batch file once saves having to perform the file status, checksum, and data block generation more than once when updating multiple destination trees. Multicast transport protocols can be used to transfer the batch update files in parallel to many hosts at once, instead of sending the same data to every host individually. To apply the recorded changes to another destination tree, run rsync with the read-batch option, specifying the name of the same batch file, and the destination tree. Rsync updates the destination tree using the information stored in the batch file. For your convenience, a script file is also created when the write-batch option is used: it will be named the same as the batch file with ".sh" appended. This script file contains a command-line suitable for updating a destination tree using the associated batch file. It can be executed using a Bourne (or Bourne-like) shell, optionally passing in an alternate destination tree pathname which is then used instead of the original destination path. This is useful when the destination tree path on the current host differs from the one used to create the batch file. Examples: $ rsync --write-batch=foo -a host:/source/dir/ /adest/dir/ $ scp foo* remote: $ ssh remote ./foo.sh /bdest/dir/ $ rsync --write-batch=foo -a /source/dir/ /adest/dir/ $ ssh remote rsync --read-batch=- -a /bdest/dir/ <foo In these examples, rsync is used to update /adest/dir/ from /source/dir/ and the information to repeat this operation is stored in "foo" and "foo.sh". The host "remote" is then updated with the batched data going into the directory /bdest/dir. The differences between the two examples reveal some of the flexibility you have in how you deal with batches: o The first example shows that the initial copy doesn't have to be local -- you can push or pull data to/from a remote host using either the remote-shell syntax or rsync daemon syntax, as desired. 
o The first example uses the created "foo.sh" file to get the right rsync options when running the read-batch command on the remote host. o The second example reads the batch data via standard input so that the batch file doesn't need to be copied to the remote machine first. This example avoids the foo.sh script because it needed to use a modified --read-batch option, but you could edit the script file if you wished to make use of it (just be sure that no other option is trying to use standard input, such as the --exclude-from=- option). Caveats: The read-batch option expects the destination tree that it is updating to be identical to the destination tree that was used to create the batch update fileset. When a difference between the destination trees is encountered the update might be discarded with a warning (if the file appears to be up-to-date already) or the file-update may be attempted and then, if the file fails to verify, the update discarded with an error. This means that it should be safe to re-run a read-batch operation if the command got interrupted. If you wish to force the batched-update to always be attempted regardless of the file's size and date, use the -I option (when reading the batch). If an error occurs, the destination tree will probably be in a partially updated state. In that case, rsync can be used in its regular (non-batch) mode of operation to fix up the destination tree. The rsync version used on all destinations must be at least as new as the one used to generate the batch file. Rsync will die with an error if the protocol version in the batch file is too new for the batch-reading rsync to handle. See also the --protocol option for a way to have the creating rsync generate a batch file that an older rsync can understand. (Note that batch files changed format in version 2.6.3, so mixing versions older than that with newer versions will not work.) 
When reading a batch file, rsync will force the value of certain options to match the data in the batch file if you didn't set them to the same as the batch-writing command. Other options can (and should) be changed. For instance --write-batch changes to --read-batch, --files-from is dropped, and the --filter / --include / --exclude options are not needed unless one of the --delete options is specified. The code that creates the BATCH.sh file transforms any filter/include/exclude options into a single list that is appended as a "here" document to the shell script file. An advanced user can use this to modify the exclude list if a change in what gets deleted by --delete is desired. A normal user can ignore this detail and just use the shell script as an easy way to run the appropriate --read-batch command for the batched data. The original batch mode in rsync was based on "rsync+", but the latest version uses a new implementation. SYMBOLIC LINKS top Three basic behaviors are possible when rsync encounters a symbolic link in the source directory. By default, symbolic links are not transferred at all. A message "skipping non-regular" file is emitted for any symlinks that exist. If --links is specified, then symlinks are added to the transfer (instead of being noisily ignored), and the default handling is to recreate them with the same target on the destination. Note that --archive implies --links. If --copy-links is specified, then symlinks are "collapsed" by copying their referent, rather than the symlink. Rsync can also distinguish "safe" and "unsafe" symbolic links. An example where this might be used is a web site mirror that wishes to ensure that the rsync module that is copied does not include symbolic links to /etc/passwd in the public section of the site. Using --copy-unsafe-links will cause any links to be copied as the file they point to on the destination. Using --safe-links will cause unsafe links to be omitted by the receiver. 
(Note that you must specify or imply --links for --safe-links to have any effect.) Symbolic links are considered unsafe if they are absolute symlinks (start with /), empty, or if they contain enough ".." components to ascend from the top of the transfer. Here's a summary of how the symlink options are interpreted. The list is in order of precedence, so if your combination of options isn't mentioned, use the first line that is a complete subset of your options: --copy-links Turn all symlinks into normal files and directories (leaving no symlinks in the transfer for any other options to affect). --copy-dirlinks Turn just symlinks to directories into real directories, leaving all other symlinks to be handled as described below. --links --copy-unsafe-links Turn all unsafe symlinks into files and create all safe symlinks. --copy-unsafe-links Turn all unsafe symlinks into files, noisily skip all safe symlinks. --links --safe-links The receiver skips creating unsafe symlinks found in the transfer and creates the safe ones. --links Create all symlinks. For the effect of --munge-links, see the discussion in that option's section. Note that the --keep-dirlinks option does not affect symlinks in the transfer but instead affects how rsync treats a symlink to a directory that already exists on the receiving side. See that option's section for a warning. DIAGNOSTICS top Rsync occasionally produces error messages that may seem a little cryptic. The one that seems to cause the most confusion is "protocol version mismatch -- is your shell clean?". This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this: ssh remotehost /bin/true > out.dat then look at out.dat. If everything is working correctly then out.dat should be a zero length file. 
If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins. If you are having trouble debugging filter patterns, then try specifying the -vv option. At this level of verbosity rsync will show why each individual file is included or excluded. EXIT VALUES top o 0 - Success o 1 - Syntax or usage error o 2 - Protocol incompatibility o 3 - Errors selecting input/output files, dirs o 4 - Requested action not supported. Either: o an attempt was made to manipulate 64-bit files on a platform that cannot support them o an option was specified that is supported by the client and not by the server o 5 - Error starting client-server protocol o 6 - Daemon unable to append to log-file o 10 - Error in socket I/O o 11 - Error in file I/O o 12 - Error in rsync protocol data stream o 13 - Errors with program diagnostics o 14 - Error in IPC code o 20 - Received SIGUSR1 or SIGINT o 21 - Some error returned by waitpid() o 22 - Error allocating core memory buffers o 23 - Partial transfer due to error o 24 - Partial transfer due to vanished source files o 25 - The --max-delete limit stopped deletions o 30 - Timeout in data send/receive o 35 - Timeout waiting for daemon connection ENVIRONMENT VARIABLES top CVSIGNORE The CVSIGNORE environment variable supplements any ignore patterns in .cvsignore files. See the --cvs-exclude option for more details. RSYNC_ICONV Specify a default --iconv setting using this environment variable. First supported in 3.0.0. RSYNC_OLD_ARGS Specify a "1" if you want the --old-args option to be enabled by default, a "2" (or more) if you want it to be enabled in the repeated-option state, or a "0" to make sure that it is disabled by default. 
When this environment variable is set to a non-zero value, it supersedes the RSYNC_PROTECT_ARGS variable. This variable is ignored if --old-args, --no-old-args, or --secluded-args is specified on the command line. First supported in 3.2.4. RSYNC_PROTECT_ARGS Specify a non-zero numeric value if you want the --secluded-args option to be enabled by default, or a zero value to make sure that it is disabled by default. This variable is ignored if --secluded-args, --no-secluded-args, or --old-args is specified on the command line. First supported in 3.1.0. Starting in 3.2.4, this variable is ignored if RSYNC_OLD_ARGS is set to a non-zero value. RSYNC_RSH This environment variable allows you to override the default shell used as the transport for rsync. Command line options are permitted after the command name, just as in the --rsh (-e) option. RSYNC_PROXY This environment variable allows you to redirect your rsync client to use a web proxy when connecting to an rsync daemon. You should set RSYNC_PROXY to a hostname:port pair. RSYNC_PASSWORD This environment variable allows you to set the password for an rsync daemon connection, which avoids the password prompt. Note that this does not supply a password to a remote shell transport such as ssh (consult its documentation for how to do that). USER or LOGNAME The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to "nobody". If both are set, USER takes precedence. RSYNC_PARTIAL_DIR This environment variable specifies the directory to use for a --partial transfer without implying that partial transfers be enabled. See the --partial-dir option for full details. RSYNC_COMPRESS_LIST This environment variable allows you to customize the negotiation of the compression algorithm by specifying an alternate order or a reduced list of names. Use the command rsync --version to see the available compression names. 
See the --compress option for full details. RSYNC_CHECKSUM_LIST This environment variable allows you to customize the negotiation of the checksum algorithm by specifying an alternate order or a reduced list of names. Use the command rsync --version to see the available checksum names. See the --checksum-choice option for full details. RSYNC_MAX_ALLOC This environment variable sets an allocation maximum as if you had used the --max-alloc option. RSYNC_PORT This environment variable is not read by rsync, but is instead set in its sub-environment when rsync is running the remote shell in combination with a daemon connection. This allows a script such as rsync-ssl to be able to know the port number that the user specified on the command line. HOME This environment variable is used to find the user's default .cvsignore file. RSYNC_CONNECT_PROG This environment variable is mainly used in debug setups to set the program to use when making a daemon connection. See CONNECTING TO AN RSYNC DAEMON for full details. RSYNC_SHELL This environment variable is mainly used in debug setups to set the program to use to run the program specified by RSYNC_CONNECT_PROG. See CONNECTING TO AN RSYNC DAEMON for full details. FILES top /etc/rsyncd.conf or rsyncd.conf SEE ALSO top rsync-ssl(1), rsyncd.conf(5), rrsync(1) BUGS top o Times are transferred as *nix time_t values. o When transferring to FAT filesystems rsync may re-sync unmodified files. See the comments on the --modify-window option. o File permissions, devices, etc. are transferred as native numerical values. o See also the comments on the --delete option. Please report bugs! See the web site at https://rsync.samba.org/. VERSION top This manpage is current for version 3.2.7 of rsync. INTERNAL OPTIONS top The options --server and --sender are used internally by rsync, and should never be typed by a user under normal circumstances. 
Some awareness of these options may be needed in certain scenarios, such as when setting up a login that can only run an rsync command. For instance, the support directory of the rsync distribution has an example script named rrsync (for restricted rsync) that can be used with a restricted ssh login. CREDITS top Rsync is distributed under the GNU General Public License. See the file COPYING for details. An rsync web site is available at https://rsync.samba.org/. The site includes an FAQ-O-Matic which may cover questions unanswered by this manual page. The rsync github project is https://github.com/WayneD/rsync. We would be delighted to hear from you if you like this program. Please contact the mailing-list at rsync@lists.samba.org. This program uses the excellent zlib compression library written by Jean-loup Gailly and Mark Adler. THANKS top Special thanks go out to: John Van Essen, Matt McCutchen, Wesley W. Terpstra, David Dykstra, Jos Backus, Sebastian Krahmer, Martin Pool, and our gone-but-not-forgotten compadre, J.W. Schultz. Thanks also to Richard Brent, Brendan Mackay, Bill Waite, Stephen Rothwell and David Bell. I've probably missed some people, my apologies if I have. AUTHOR top Rsync was originally written by Andrew Tridgell and Paul Mackerras. Many people have later contributed to it. It is currently maintained by Wayne Davison. Mailing lists for support and development are available at https://lists.samba.org/. COLOPHON top This page is part of the rsync (a fast, versatile, remote (and local) file-copying tool) project. Information about the project can be found at https://rsync.samba.org/. If you have a bug report for this manual page, see https://rsync.samba.org/bugzilla.html. This page was obtained from the tarball fetched from https://download.samba.org/pub/rsync/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org rsync 3.2.7 20 Oct 2022 rsync(1) Pages that refer to this page: pmlogger_daily(1), rrsync(1), rsync-ssl(1), rsyncd.conf(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# rsync\n\n> Transfer files either to or from a remote host (but not between two remote hosts), by default using SSH.\n> To specify a remote path, use `user@host:path/to/file_or_directory`.\n> More information: <https://download.samba.org/pub/rsync/rsync.1>.\n\n- Transfer a file:\n\n`rsync {{path/to/source}} {{path/to/destination}}`\n\n- Use archive mode (recursively copy directories, copy symlinks without resolving, and preserve permissions, ownership and modification times):\n\n`rsync --archive {{path/to/source}} {{path/to/destination}}`\n\n- Compress the data as it is sent to the destination, display verbose and human-readable progress, and keep partially transferred files if interrupted:\n\n`rsync --compress --verbose --human-readable --partial --progress {{path/to/source}} {{path/to/destination}}`\n\n- Recursively copy directories:\n\n`rsync --recursive {{path/to/source}} {{path/to/destination}}`\n\n- Transfer directory contents, but not the directory itself:\n\n`rsync --recursive {{path/to/source}}/ {{path/to/destination}}`\n\n- Use archive mode, resolve symlinks and skip files that are newer on the destination:\n\n`rsync --archive --update --copy-links {{path/to/source}} {{path/to/destination}}`\n\n- Transfer a directory to a remote host running `rsyncd` and delete files on the destination that do not exist on the source:\n\n`rsync --recursive --delete rsync://{{host}}/{{path/to/source}} {{path/to/destination}}`\n\n- Transfer a file over SSH using a different port than the default (22) and show global progress:\n\n`rsync --rsh 'ssh -p {{port}}' --info=progress2 {{host}}:{{path/to/source}} {{path/to/destination}}`\n
rtcwake
rtcwake(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | NOTES | FILES | HISTORY | AUTHORS | COPYRIGHT | SEE ALSO | REPORTING BUGS | AVAILABILITY RTCWAKE(8) System Administration RTCWAKE(8) NAME top rtcwake - enter a system sleep state until specified wakeup time SYNOPSIS top rtcwake [options] [-d device] [-m standby_mode] {-s seconds|-t time_t} DESCRIPTION top This program is used to enter a system sleep state and to automatically wake from it at a specified time. This uses cross-platform Linux interfaces to enter a system sleep state, and leave it no later than a specified time. It uses any RTC framework driver that supports standard driver model wakeup flags. This is normally used like the old apmsleep utility, to wake from a suspend state like ACPI S1 (standby) or S3 (suspend-to-RAM). Most platforms can implement those without analogues of BIOS, APM, or ACPI. On some systems, this can also be used like nvram-wakeup, waking from states like ACPI S4 (suspend to disk). Not all systems have persistent media that are appropriate for such suspend modes. Note that alarm functionality depends on hardware; not every RTC is able to set up an alarm up to 24 hours in the future. The suspend setup may be interrupted by active hardware; for example wireless USB input devices that continue to send events for some fraction of a second after the return key is pressed. rtcwake tries to avoid this problem and it waits for the terminal to settle down before entering a system sleep. OPTIONS top -A, --adjfile file Specify an alternative path to the adjust file. -a, --auto Read the clock mode (whether the hardware clock is set to UTC or local time) from the adjtime file, where hwclock(8) stores that information. This is the default. --date timestamp Set the wakeup time to the value of the timestamp. 
Format of the timestamp can be any of the following: YYYYMMDDhhmmss YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm (seconds will be set to 00) YYYY-MM-DD (time will be set to 00:00:00) hh:mm:ss (date will be set to today) hh:mm (date will be set to today, seconds to 00) tomorrow (time is set to 00:00:00) +5min -d, --device device Use the specified device instead of rtc0 as realtime clock. This option is only relevant if your system has more than one RTC. You may specify rtc1, rtc2, ... here. -l, --local Assume that the hardware clock is set to local time, regardless of the contents of the adjtime file. --list-modes List available --mode option arguments. -m, --mode mode Go into the given standby state. Valid values for mode are: standby ACPI state S1. This state offers minimal, though real, power savings, while providing a very low-latency transition back to a working system. This is the default mode. freeze The processes are frozen, all the devices are suspended and all the processors idled. This state is a general state that does not need any platform-specific support, but it saves less power than Suspend-to-RAM, because the system is still in a running state. (Available since Linux 3.9.) mem ACPI state S3 (Suspend-to-RAM). This state offers significant power savings as everything in the system is put into a low-power state, except for memory, which is placed in self-refresh mode to retain its contents. disk ACPI state S4 (Suspend-to-disk). This state offers the greatest power savings, and can be used even in the absence of low-level platform support for power management. This state operates similarly to Suspend-to-RAM, but includes a final step of writing memory contents to disk. off ACPI state S5 (Poweroff). This is done by calling '/sbin/shutdown'. Not officially supported by ACPI, but it usually works. no Don't suspend, only set the RTC wakeup time. on Don't suspend, but read the RTC device until an alarm time appears. This mode is useful for debugging. 
disable Disable a previously set alarm. show Print alarm information in format: "alarm: off|on <time>". The time is in ctime() output format, e.g., "alarm: on Tue Nov 16 04:48:45 2010". -n, --dry-run This option does everything apart from actually setting up the alarm, suspending the system, or waiting for the alarm. -s, --seconds seconds Set the wakeup time to seconds in the future from now. -t, --time time_t Set the wakeup time to the absolute time time_t. time_t is the time in seconds since 1970-01-01, 00:00 UTC. Use the date(1) tool to convert between human-readable time and time_t. -u, --utc Assume that the hardware clock is set to UTC (Universal Time Coordinated), regardless of the contents of the adjtime file. -v, --verbose Be verbose. -h, --help Display help text and exit. -V, --version Print version and exit. NOTES top Some PC systems can't currently exit sleep states such as mem using only the kernel code accessed by this driver. They need help from userspace code to make the framebuffer work again. FILES top /etc/adjtime HISTORY top The program was posted several times on LKML and other lists before appearing in kernel commit message for Linux 2.6 in the GIT commit 87ac84f42a7a580d0dd72ae31d6a5eb4bfe04c6d. AUTHORS top The program was written by David Brownell <dbrownell@users.sourceforge.net> and improved by Bernhard Walle <bwalle@suse.de>. COPYRIGHT top This is free software. You may redistribute copies of it under the terms of the GNU General Public License <http://www.gnu.org/licenses/gpl.html>. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top adjtime_config(5), hwclock(8), date(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The rtcwake command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. 
This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 RTCWAKE(8) Pages that refer to this page: adjtime_config(5), hwclock(8)
# rtcwake\n\n> Enter a system sleep state until specified wakeup time relative to your BIOS clock.\n> More information: <https://manned.org/rtcwake>.\n\n- Show whether an alarm is set or not:\n\n`sudo rtcwake -m show -v`\n\n- Suspend to RAM and wake up after 10 seconds:\n\n`sudo rtcwake -m mem -s {{10}}`\n\n- Suspend to disk (higher power saving) and wake up 15 minutes later:\n\n`sudo rtcwake -m disk --date +{{15}}min`\n\n- Freeze the system (more efficient than suspend-to-RAM but version 3.9 or newer of the Linux kernel is required) and wake up at a given date and time:\n\n`sudo rtcwake -m freeze --date {{YYYYMMDDhhmmss}}`\n\n- Disable a previously set alarm:\n\n`sudo rtcwake -m disable`\n\n- Perform a dry run to wake up the computer at a given time. (Press Ctrl + C to abort):\n\n`sudo rtcwake -m on --date {{hh:mm}}`\n
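As the manual's -t description suggests, GNU date converts a human-readable time into the time_t value that -t expects. A sketch (the 07:00 target is invented, and the rtcwake call itself needs root and RTC hardware, so it is only printed here rather than run):

```shell
# Convert a wall-clock time to seconds-since-epoch for rtcwake -t.
wake=$(date -d 'tomorrow 07:00' +%s)
# -m no would program the RTC alarm without suspending; run as root.
echo "rtcwake -m no -t $wake"
```

Pairing -m no with a converted timestamp is a safe way to test alarm programming before trusting a real suspend.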
runcon
runcon(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training runcon(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON RUNCON(1) User Commands RUNCON(1) NAME top runcon - run command with specified security context SYNOPSIS top runcon CONTEXT COMMAND [args] runcon [ -c ] [-u USER] [-r ROLE] [-t TYPE] [-l RANGE] COMMAND [args] DESCRIPTION top Run COMMAND with completely-specified CONTEXT, or with current or transitioned security context modified by one or more of LEVEL, ROLE, TYPE, and USER. If none of -c, -t, -u, -r, or -l is specified, the first argument is used as the complete context. Any additional arguments after COMMAND are interpreted as arguments to the command. Note that only carefully-chosen contexts are likely to successfully run. Run a program in a different SELinux security context. With neither CONTEXT nor COMMAND, print the current security context. Mandatory arguments to long options are mandatory for short options too. CONTEXT Complete security context -c, --compute compute process transition context before modifying -t, --type=TYPE type (for same role as parent) -u, --user=USER user identity -r, --role=ROLE role -l, --range=RANGE level range --help display this help and exit --version output version information and exit Exit status: 125 if the runcon command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise AUTHOR top Written by Russell Coker. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/runcon> or available locally via: info '(coreutils) runcon invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 RUNCON(1) Pages that refer to this page: newrole(1), setpriv(1), run_init(8), sandbox(8), seunshare(8)
# runcon\n\n> Run a program in a different SELinux security context.\n> With neither context nor command, print the current security context.\n> More information: <https://www.gnu.org/software/coreutils/runcon>.\n\n- Determine the current domain:\n\n`runcon`\n\n- Specify the domain to run a command in:\n\n`runcon -t {{domain}}_t {{command}}`\n\n- Specify the context role to run a command with:\n\n`runcon -r {{role}}_r {{command}}`\n\n- Specify the full context to run a command with:\n\n`runcon {{user}}_u:{{role}}_r:{{domain}}_t {{command}}`\n
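A full SELinux context string, as in the last example above, is a set of colon-separated fields (user:role:type, plus an optional range) that map one-to-one onto runcon's -u, -r, -t, and -l options. A small sketch pulling the fields apart with cut(1); the context value itself is only an illustration:

```shell
# Split an SELinux context into the fields that runcon's
# -u/-r/-t/-l options would set individually.
ctx='system_u:system_r:httpd_t:s0'
echo "user:  $(echo "$ctx" | cut -d: -f1)"
echo "role:  $(echo "$ctx" | cut -d: -f2)"
echo "type:  $(echo "$ctx" | cut -d: -f3)"
echo "range: $(echo "$ctx" | cut -d: -f4)"
```

This is why `runcon CONTEXT COMMAND` and `runcon -u … -r … -t … COMMAND` are interchangeable forms: the single CONTEXT argument is just the joined version of the per-field options.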
runuser
runuser(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training runuser(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CONFIG FILES | EXIT STATUS | FILES | HISTORY | SEE ALSO | REPORTING BUGS | AVAILABILITY RUNUSER(1) User Commands RUNUSER(1) NAME top runuser - run a command with substitute user and group ID SYNOPSIS top runuser [options] -u user [[--] command [argument...]] runuser [options] [-] [user [argument...]] DESCRIPTION top runuser can be used to run commands with a substitute user and group ID. If the option -u is not given, runuser falls back to su-compatible semantics and a shell is executed. The difference between the commands runuser and su is that runuser does not ask for a password (because it may be executed by the root user only) and it uses a different PAM configuration. The command runuser does not have to be installed with set-user-ID permissions. If the PAM session is not required, then the recommended solution is to use the setpriv(1) command. When called without arguments, runuser defaults to running an interactive shell as root. For backward compatibility, runuser defaults to not changing the current directory and to setting only the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). This version of runuser uses PAM for session management. Note that runuser in all cases uses PAM (pam_getenvlist()) to do the final environment modification. Command-line options such as --login and --preserve-environment affect the environment before it is modified by PAM. Since version 2.38, runuser resets the process resource limits RLIMIT_NICE, RLIMIT_RTPRIO, RLIMIT_FSIZE, RLIMIT_AS and RLIMIT_NOFILE. OPTIONS top -c, --command=command Pass command to the shell with the -c option. -f, --fast Pass -f to the shell, which may or may not be useful, depending on the shell. -g, --group=group The primary group to be used. This option is allowed for the root user only. 
-G, --supp-group=group Specify a supplementary group. This option is available to the root user only. The first specified supplementary group is also used as a primary group if the option --group is not specified. -, -l, --login Start the shell as a login shell with an environment similar to a real login: clears all the environment variables except for TERM and variables specified by --whitelist-environment; initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH; changes to the target user's home directory; and sets argv[0] of the shell to '-' in order to make the shell a login shell. -P, --pty Create a pseudo-terminal for the session. The independent terminal provides better security as the user does not share a terminal with the original session. This can be used to avoid TIOCSTI ioctl terminal injection and other security attacks against terminal file descriptors. The entire session can also be moved to the background (e.g., runuser --pty -u username -- command &). If the pseudo-terminal is enabled, then runuser works as a proxy between the sessions (sync stdin and stdout). This feature is mostly designed for interactive sessions. If the standard input is not a terminal, but for example a pipe (e.g., echo "date" | runuser --pty -u user), then the ECHO flag for the pseudo-terminal is disabled to avoid messy output. -m, -p, --preserve-environment Preserve the entire environment, i.e., do not set HOME, SHELL, USER or LOGNAME. The option is ignored if the option --login is specified. -s, --shell=shell Run the specified shell instead of the default. 
The shell to run is selected according to the following rules, in order: the shell specified with --shell; the shell specified in the environment variable SHELL, if the --preserve-environment option is used; the shell listed in the passwd entry of the target user; and finally /bin/sh. If the target user has a restricted shell (i.e., not listed in /etc/shells), then the --shell option and the SHELL environment variable are ignored unless the calling user is root. --session-command=command Same as -c, but do not create a new session. (Discouraged.) -w, --whitelist-environment=list Don't reset the environment variables specified in the comma-separated list when clearing the environment for --login. The whitelist is ignored for the environment variables HOME, SHELL, USER, LOGNAME, and PATH. -h, --help Display help text and exit. -V, --version Print version and exit. CONFIG FILES top runuser reads the /etc/default/runuser and /etc/login.defs configuration files. The following configuration items are relevant for runuser: ENV_PATH (string) Defines the PATH environment variable for a regular user. The default value is /usr/local/bin:/bin:/usr/bin. ENV_ROOTPATH (string), ENV_SUPATH (string) Defines the PATH environment variable for root. ENV_SUPATH takes precedence. The default value is /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin. ALWAYS_SET_PATH (boolean) If set to yes and --login and --preserve-environment were not specified, runuser initializes PATH. The environment variable PATH may be different on systems where /bin and /sbin are merged into /usr; this variable is also affected by the --login command-line option and the PAM system setting (e.g., pam_env(8)). 
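The shell-selection rules above amount to a simple fallback chain. A sketch of that chain; the pick_shell helper, its argument order, and the getent lookup are illustrative stand-ins, not part of runuser itself (and the restricted-shell check from /etc/shells is omitted):

```shell
# Sketch of runuser's shell-selection order:
#   1. the --shell argument, if given
#   2. $SHELL, but only when --preserve-environment is in effect
#   3. the shell in the target user's passwd entry
#   4. /bin/sh as the last resort
pick_shell() {
  opt_shell=$1 preserve=$2 target=$3
  if [ -n "$opt_shell" ]; then echo "$opt_shell"; return; fi
  if [ -n "$preserve" ] && [ -n "$SHELL" ]; then echo "$SHELL"; return; fi
  pw_shell=$(getent passwd "$target" | cut -d: -f7)
  if [ -n "$pw_shell" ]; then echo "$pw_shell"; return; fi
  echo /bin/sh
}
pick_shell /bin/dash '' root
```

The order matters: an explicit --shell always wins, and /bin/sh is reached only when the passwd entry provides nothing.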
Exit status generated by runuser itself: 1 Generic error before executing the requested command 126 The requested command could not be executed 127 The requested command was not found FILES top /etc/pam.d/runuser default PAM configuration file /etc/pam.d/runuser-l PAM configuration file if --login is specified /etc/default/runuser runuser specific logindef config file /etc/login.defs global logindef config file HISTORY top This runuser command was derived from coreutils' su, which was based on an implementation by David MacKenzie, and the Fedora runuser command by Dan Walsh. SEE ALSO top setpriv(1), su(1), login.defs(5), shells(5), pam(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The runuser command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 RUNUSER(1) Pages that refer to this page: setpriv(1), su(1), credentials(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. 
# runuser\n\n> Run commands as a user and group without asking for password (needs root privileges).\n> More information: <https://manned.org/runuser>.\n\n- Run command as a different user:\n\n`runuser {{user}} -c '{{command}}'`\n\n- Run command as a different user and group:\n\n`runuser {{user}} -g {{group}} -c '{{command}}'`\n\n- Start a login shell as a specific user:\n\n`runuser {{user}} -l`\n\n- Specify a shell for running instead of the default shell (also works for login):\n\n`runuser {{user}} -s {{/bin/sh}}`\n\n- Preserve the entire environment of root (only if `--login` is not specified):\n\n`runuser {{user}} --preserve-environment -c '{{command}}'`\n
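The EXIT STATUS rule above (signal number plus 128 when the command is killed by a signal) is the common shell convention, and it can be observed with a plain sh, no root needed:

```shell
# runuser reports 128 + N when the executed command dies from
# signal N. SIGTERM is signal 15, so a command that terminates
# itself with SIGTERM yields exit status 128 + 15 = 143.
sh -c 'kill -TERM $$'
echo "exit status: $?"
```

The same arithmetic explains why 126 and 127 are reserved for "found but not executable" and "not found": they sit just below the 128+signal range.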
sa
sa(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sa(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | FILES | BUGS | AUTHOR | SEE ALSO | COLOPHON SA(8) System Manager's Manual SA(8) NAME top sa - summarizes accounting information SYNOPSIS top sa [ -a | --list-all-names ] [ -b | --sort-sys-user-div-calls ] [ -c | --percentages ] [ -d | --sort-avio ] [ -D | --sort-tio ] [ -f | --not-interactive ] [ -i | --dont-read-summary-files ] [ -j | --print-seconds ] [ -k | --sort-cpu-avmem ] [ -K | --sort-ksec ] [ -l | --separate-times ] [ -m | --user-summary ] [ -n | --sort-num-calls ] [ -p | --show-paging ] [ -P | --show-paging-avg ] [ -r | --reverse-sort ] [ -s | --merge ] [ -t | --print-ratio ] [ -u | --print-users ] [ -v num | --threshold num ] [ --sort-real-time ] [ --debug ] [ -V | --version ] [ -h | --help ] [ --other-usracct-file filename ] [ --ahz hz ] [ --other-savacct-file filename ] [ [ --other-acct-file ] filename ] DESCRIPTION top sa summarizes information about previously executed commands as recorded in the acct file. In addition, it condenses this data into a summary file named savacct which contains the number of times the command was called and the system resources used. The information can also be summarized on a per-user basis; sa will save this information into a file named usracct. If no arguments are specified, sa will print information about all of the commands in the acct file. If called with a file name as the last argument, sa will use that file instead of the system's default acct file. By default, sa will sort the output by sum of user and system time. If command names have unprintable characters, or are only called once, sa will sort them into a group called `***other'. If more than one sorting option is specified, the list will be sorted by the one specified last on the command line. 
The output fields are labeled as follows: cpu sum of system and user time in cpu minutes re "elapsed time" in minutes k cpu-time averaged core usage, in 1k units avio average number of I/O operations per execution tio total number of I/O operations k*sec cpu storage integral (kilo-core seconds) u user cpu time in cpu seconds s system time in cpu seconds Note that these column titles do not appear in the first row of the table, but after each numeric entry (as units of measurement) in every row. For example, you might see `79.29re', meaning 79.29 cpu seconds of "real time". An asterisk will appear after the name of commands that forked but didn't call exec. GNU sa takes care to implement a number of features not found in other versions. For example, most versions of sa don't pay attention to flags like `--print-seconds' and `--sort-num-calls' when printing out commands when combined with the `--user- summary' or `--print-users' flags. GNU sa pays attention to these flags if they are applicable. Also, MIPS' sa stores the average memory use as a short rather than a double, resulting in some round-off errors. GNU sa uses double the whole way through. OPTIONS top The availability of these program options depends on your operating system. In specific, the members that appear in the struct acct of your system's process accounting header file (usually acct.h ) determine which flags will be present. For example, if your system's struct acct doesn't have the `ac_mem' field, the installed version of sa will not support the `--sort- cpu-avmem', `--sort-ksec', `-k', or `-K' options. In short, all of these flags may not be available on your machine. -a, --list-all-names Force sa not to sort those command names with unprintable characters and those used only once into the ***other group. -b, --sort-sys-user-div-calls Sort the output by the sum of user and system time divided by the number of calls. 
-c, --percentages Print percentages of total time for the command's user, system, and real time values. -d, --sort-avio Sort the output by the average number of disk I/O operations. -D, --sort-tio Print and sort the output by the total number of disk I/O operations. -f, --not-interactive When using the `--threshold' option, assume that all answers to interactive queries will be affirmative. -i, --dont-read-summary-files Don't read the information in the system's default savacct file. -j, --print-seconds Instead of printing total minutes for each category, print seconds per call. -k, --sort-cpu-avmem Sort the output by cpu time average memory usage. -K, --sort-ksec Print and sort the output by the cpu-storage integral. -l, --separate-times Print separate columns for system and user time; usually the two are added together and listed as `cpu'. -m, --user-summary Print the number of processes and number of CPU minutes on a per-user basis. -n, --sort-num-calls Sort the output by the number of calls. This is the default sorting method. -p, --show-paging Print the number of minor and major pagefaults and swaps. -P, --show-paging-avg Print the number of minor and major pagefaults and swaps divided by the number of calls. -r, --reverse-sort Sort output items in reverse order. -s, --merge Merge the summarized accounting data into the summary files savacct and usracct. -t, --print-ratio For each entry, print the ratio of real time to the sum of system and user times. If the sum of system and user times is too small to report--the sum is zero--`*ignore*' will appear in this field. -u, --print-users For each command in the accounting file, print the userid and command name. After printing all entries, quit. *Note*: this flag supersedes all others. -v num --threshold num Print commands which were executed num times or fewer and await a reply from the terminal. If the response begins with `y', add the command to the `**junk**' group. 
--separate-forks It really doesn't make any sense to me that the stock version of sa separates statistics for a particular executable depending on whether or not that command forked. Therefore, GNU sa lumps this information together unless this option is specified. --ahz hz Use this flag to tell the program what AHZ should be (in hertz). This option is useful if you are trying to view an acct file created on another machine which has the same byte order and file format as your current machine, but has a different value for AHZ. --debug Print verbose internal information. -V, --version Print the version number of sa. -h, --help Prints the usage string and default locations of system files to standard output and exits. --sort-real-time Sort the output by the "real time" field. --other-usracct-file filename Write summaries by user ID to filename rather than the system's default usracct file. --other-savacct-file filename Write summaries by command name to filename rather than the system's default SAVACCT file. --other-acct-file filename Read from the file filename instead of the system's default ACCT file. FILES top acct The raw system wide process accounting file. See acct(5) for further details. savacct A summary of system process accounting sorted by command. usracct A summary of system process accounting sorted by user ID. BUGS top There is not yet a wide experience base for comparing the output of GNU sa with versions of sa in many other systems. The problem is that the data files grow big in a short time and therefore require a lot of disk space. AUTHOR top The GNU accounting utilities were written by Noel Cragg <noel@gnu.ai.mit.edu>. The man page was adapted from the accounting texinfo page by Susan Kleinmann <sgk@sgk.tiac.net>. SEE ALSO top acct(5), ac(1) COLOPHON top This page is part of the psacct (process accounting utilities) project. Information about the project can be found at http://www.gnu.org/software/acct/. 
If you have a bug report for this manual page, see http://www.gnu.org/software/acct/. This page was obtained from the tarball acct-6.6.4.tar.gz fetched from http://ftp.gnu.org/gnu/acct/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 1997 August 19 SA(8) Pages that refer to this page: ac(1), acct(5)
# sa\n\n> Summarizes accounting information. Part of the acct package.\n> Shows commands called by users, including basic info on CPU time spent processing and I/O rates.\n> More information: <https://manned.org/man/sa.8>.\n\n- Display executable invocations per user (username not displayed):\n\n`sudo sa`\n\n- Display executable invocations per user, showing responsible usernames:\n\n`sudo sa --print-users`\n\n- List resources used recently per user:\n\n`sudo sa --user-summary`\n
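The man page above notes that sa prints each column label as a unit suffix after the number (e.g. `79.29re`) rather than as a header row. Such a line can be unpacked with awk by stripping the letter suffixes; the sample line below is invented for illustration, not real sa output:

```shell
# A made-up sa-style record: 5 calls, 12.50 elapsed minutes (re),
# 3.20 cpu minutes (cpu), then the command name. gsub removes the
# alphabetic unit suffixes so the numbers can be used directly.
echo '5 12.50re 3.20cpu cmdname' |
awk '{ gsub(/[a-z]+/, "", $2); gsub(/[a-z]+/, "", $3)
       printf "calls=%s elapsed=%s cpu=%s name=%s\n", $1, $2, $3, $4 }'
```

This suffix-per-value layout is why naive column-based parsing of sa output fails unless the unit letters are stripped first.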
sar
sar(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sar(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | EXAMPLES | BUGS | FILES | AUTHOR | SEE ALSO | COLOPHON SAR(1) Linux User's Manual SAR(1) NAME top sar - Collect, report, or save system activity information. SYNOPSIS top sar [ -A ] [ -B ] [ -b ] [ -C ] [ -D ] [ -d ] [ -F [ MOUNT ] ] [ -H ] [ -h ] [ -p ] [ -r [ ALL ] ] [ -S ] [ -t ] [ -u [ ALL ] ] [ -V ] [ -v ] [ -W ] [ -w ] [ -x ] [ -y ] [ -z ] [ --dec={ 0 | 1 | 2 } ] [ --dev=dev_list ] [ --fs=fs_list ] [ --help ] [ --human ] [ --iface=iface_list ] [ --int=int_list ] [ --pretty ] [ --sadc ] [ -I [ SUM | ALL ] ] [ -P { cpu_list | ALL } ] [ -m { keyword[,...] | ALL } ] [ -n { keyword[,...] | ALL } ] [ -q [ keyword[,...] | ALL ] ] [ -j { SID | ID | LABEL | PATH | UUID | ... } ] [ -f [ filename ] | -o [ filename ] | -[0-9]+ ] [ -i interval ] [ -s [ start_time ] ] [ -e [ end_time ] ] ] [ interval [ count ] ] DESCRIPTION top The sar command writes to standard output the contents of selected cumulative activity counters in the operating system. The accounting system, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds. If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was started. If the interval parameter is specified without the count parameter, then reports are generated continuously. The collected data can also be saved in the file specified by the -o filename flag, in addition to being displayed onto the screen. If filename is omitted, sar uses the standard system activity daily data file (see below). By default all the data available from the kernel are saved in the data file. The sar command extracts and writes to standard output records previously saved in a file. 
This file can be either the one specified by the -f flag or, by default, the standard system activity daily data file. It is also possible to enter -1, -2 etc. as an argument to sar to display data of that many days ago. For example, -1 will point at the standard system activity file of yesterday. Standard system activity daily data files are named saDD or saYYYYMMDD, where YYYY stands for the current year, MM for the current month and DD for the current day. They are the default files used by sar only when no filename has been explicitly specified. When used to write data to files (with its option -o), sar will use saYYYYMMDD if option -D has also been specified, else it will use saDD. When used to display the records previously saved in a file, sar will look for the most recent of saDD and saYYYYMMDD, and use it. Standard system activity daily data files are located in the /var/log/sa directory by default. Yet it is possible to specify an alternate location for them: If a directory (instead of a plain file) is used with options -f or -o then it will be considered as the directory containing the data files. Without the -P flag, the sar command reports system-wide (global among all processors) statistics, which are calculated as averages for values expressed as percentages, and as sums otherwise. If the -P flag is given, the sar command reports activity which relates to the specified processor or processors. If -P ALL is given, the sar command reports statistics for each individual processor and global statistics among all processors. Offline processors are not displayed. You can select information about specific system activities using flags. Not specifying any flags selects only CPU activity. Specifying the -A flag selects all possible activities. The default version of the sar command (CPU utilization report) might be one of the first facilities the user runs to begin system activity investigation, because it monitors major system resources. 
If CPU utilization is near 100 percent (user + nice + system), the workload sampled is CPU-bound. If multiple samples and multiple reports are desired, it is convenient to specify an output file for the sar command. Run the sar command as a background process. The syntax for this is: sar -o datafile interval count >/dev/null 2>&1 & All data are captured in binary form and saved to a file (datafile). The data can then be selectively displayed with the sar command using the -f option. Set the interval and count parameters to select count records at interval second intervals. If the count parameter is not set, all the records saved in the file will be selected. Collection of data in this manner is useful to characterize system usage over a period of time and determine peak usage hours. Note: The sar command only reports on local activities. OPTIONS top -A This is equivalent to specifying -bBdFHISvwWy -m ALL -n ALL -q ALL -r ALL -u ALL. This option also implies specifying -I ALL -P ALL unless these options are explicitly set on the command line. -B Report paging statistics. The following values are displayed: pgpgin/s Total number of kilobytes the system paged in from disk per second. pgpgout/s Total number of kilobytes the system paged out to disk per second. fault/s Number of page faults (major + minor) made by the system per second. This is not a count of page faults that generate I/O, because some page faults can be resolved without I/O. majflt/s Number of major faults the system has made per second, those which have required loading a memory page from disk. pgfree/s Number of pages placed on the free list by the system per second. pgscank/s Number of pages scanned by the kswapd daemon per second. pgscand/s Number of pages scanned directly per second. pgsteal/s Number of pages the system has reclaimed from cache (pagecache and swapcache) per second to satisfy its memory demands. pgprom/s Number of pages promoted (i.e. 
migrated from slow to fast memory types) by the system per second. pgdem/s Number of pages demoted (i.e. migrated from fast to slow memory types) by the system per second. -b Report I/O and transfer rate statistics. The following values are displayed: tps Total number of transfers per second that were issued to physical devices. A transfer is an I/O request to a physical device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size. rtps Total number of read requests per second issued to physical devices. wtps Total number of write requests per second issued to physical devices. dtps Total number of discard requests per second issued to physical devices. bread/s Total amount of data read from the devices in blocks per second. Blocks are equivalent to sectors and therefore have a size of 512 bytes. bwrtn/s Total amount of data written to devices in blocks per second. bdscd/s Total amount of data discarded for devices in blocks per second. -C When reading data from a file, tell sar to display comments that have been inserted by sadc. -D Use saYYYYMMDD instead of saDD as the standard system activity daily data file name. This option works only when used in conjunction with option -o to save data to file. -d Report activity for each block device. When data are displayed, the device name is displayed as it (should) appear in /dev. sar uses data in /sys to determine the device name based on its major and minor numbers. If this name resolution fails, sar will use name mapping controlled by /etc/sysconfig/sysstat.ioconf file. Persistent device names can also be printed if option -j is used (see below). Statistics for all devices are displayed unless a restricted list is specified using option --dev= (see corresponding option entry). Note that disk activity depends on sadc's options -S DISK and -S XDISK to be collected. 
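Since the -b report above counts bread/s and bwrtn/s in 512-byte blocks ("Blocks are equivalent to sectors and therefore have a size of 512 bytes"), converting a rate to kB/s is just a halving. A quick check with awk, using an invented sample value:

```shell
# 512-byte blocks -> kilobytes: multiply by 512 bytes, divide by
# 1024 bytes/kB, i.e. divide by 2. So 256 blocks/s is 128 kB/s.
echo 256 | awk '{ printf "%.1f kB/s\n", $1 * 512 / 1024 }'
```

The same factor applies when comparing -b output against the rkB/s and wkB/s columns of the per-device -d report, which are already in kilobytes.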
The following values are displayed: tps Total number of transfers per second that were issued to physical devices. A transfer is an I/O request to a physical device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size. rkB/s Number of kilobytes read from the device per second. wkB/s Number of kilobytes written to the device per second. dkB/s Number of kilobytes discarded for the device per second. areq-sz The average size (in kilobytes) of the I/O requests that were issued to the device. Note: In previous versions, this field was known as avgrq-sz and was expressed in sectors. aqu-sz The average queue length of the requests that were issued to the device. Note: In previous versions, this field was known as avgqu-sz. await The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. %util Percentage of elapsed time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits. --dec={ 0 | 1 | 2 } Specify the number of decimal places to use (0 to 2, default value is 2). --dev=dev_list Specify the block devices for which statistics are to be displayed by sar. dev_list is a list of comma-separated device names. -e [ hh:mm[:ss] ] -e [ seconds_since_the_epoch ] Set the ending time of the report. The default ending time is 18:00:00. Hours must be given in 24-hour format, or as the number of seconds since the epoch (given as a 10 digit number). This option can be used when data are read from or written to a file (options -f or -o). -F [ MOUNT ] Display statistics for currently mounted filesystems. Pseudo-filesystems are ignored. 
At the end of the report, sar will display a summary of all those filesystems. Use of the MOUNT parameter keyword indicates that mountpoint will be reported instead of filesystem device. Statistics for all filesystems are displayed unless a restricted list is specified using option --fs= (see corresponding option entry). Note that filesystems statistics depend on sadc's option -S XDISK to be collected. The following values are displayed: MBfsfree Total amount of free space in megabytes (including space available only to privileged user). MBfsused Total amount of space used in megabytes. %fsused Percentage of filesystem space used, as seen by a privileged user. %ufsused Percentage of filesystem space used, as seen by an unprivileged user. Ifree Total number of free file nodes in filesystem. Iused Total number of file nodes used in filesystem. %Iused Percentage of file nodes used in filesystem. -f [ filename ] Extract records from filename (created by the -o filename flag). The default value of the filename parameter is the current standard system activity daily data file. If filename is a directory instead of a plain file then it is considered as the directory where the standard system activity daily data files are located. Option -f is exclusive of option -o. --fs=fs_list Specify the filesystems for which statistics are to be displayed by sar. fs_list is a list of comma-separated filesystem names or mountpoints. -H Report hugepages utilization statistics. The following values are displayed: kbhugfree Amount of hugepages memory in kilobytes that is not yet allocated. kbhugused Amount of hugepages memory in kilobytes that has been allocated. %hugused Percentage of total hugepages memory that has been allocated. kbhugrsvd Amount of reserved hugepages memory in kilobytes. kbhugsurp Amount of surplus hugepages memory in kilobytes. -h This option is equivalent to specifying --pretty --human. --help Display a short help message then exit. 
--human
    Print sizes in human readable format (e.g. 1.0k, 1.2M, etc.) The units displayed with this option supersede any other default units (e.g. kilobytes, sectors...) associated with the metrics.

-I [ SUM | ALL ]
    Report statistics for interrupts. The values displayed are the number of interrupts per second for the given processor or among all processors. A list of interrupts can be specified using --int= (see this option). The SUM keyword indicates that the total number of interrupts received per second is to be displayed. The ALL keyword indicates that statistics from all interrupts are to be reported (this is the default). Note that interrupts statistics depend on sadc's option -S INT to be collected.

-i interval
    Select data records at seconds as close as possible to the number specified by the interval parameter.

--iface=iface_list
    Specify the network interfaces for which statistics are to be displayed by sar. iface_list is a list of comma-separated interface names.

--int=int_list
    Specify the interrupt names for which statistics are to be displayed by sar. int_list is a list of comma-separated values or ranges of values (e.g., 0-16,35,40-).

-j { SID | ID | LABEL | PATH | UUID | ... }
    Display persistent device names. Use this option in conjunction with option -d. Keywords ID, LABEL, etc. specify the type of the persistent name. These keywords are not limited; the only prerequisite is that a directory with the required persistent names is present in /dev/disk. Keyword SID tries to get a stable identifier to use as the device name. A stable identifier won't change across reboots for the same physical device. If it exists, this identifier is normally the WWN (World Wide Name) of the device, as read from the /dev/disk/by-id directory.

-m { keyword[,...] | ALL }
    Report power management statistics. Note that these statistics depend on sadc's option -S POWER to be collected. Possible keywords are BAT, CPU, FAN, FREQ, IN, TEMP and USB.
    With the BAT keyword, statistics about battery capacity are reported. The following values are displayed:

%cap     Battery capacity.
cap/min  Capacity lost or gained per minute by the battery.
status   Charging status of the battery: (full), (charging), (not charging), (discharging), ? (unknown).

    With the CPU keyword, statistics about CPU are reported. The following value is displayed:

MHz      Instantaneous CPU clock frequency in MHz.

    With the FAN keyword, statistics about fan speeds are reported. The following values are displayed:

rpm      Fan speed expressed in revolutions per minute.
drpm     This field is calculated as the difference between current fan speed (rpm) and its low limit (fan_min).
DEVICE   Sensor device name.

    With the FREQ keyword, statistics about CPU clock frequency are reported. The following value is displayed:

wghMHz   Weighted average CPU clock frequency in MHz. Note that the cpufreq-stats driver must be compiled in the kernel for this option to work.

    With the IN keyword, statistics about voltage inputs are reported. The following values are displayed:

inV      Voltage input expressed in Volts.
%in      Relative input value. A value of 100% means that voltage input has reached its high limit (in_max) whereas a value of 0% means that it has reached its low limit (in_min).
DEVICE   Sensor device name.

    With the TEMP keyword, statistics about device temperatures are reported. The following values are displayed:

degC     Device temperature expressed in degrees Celsius.
%temp    Relative device temperature. A value of 100% means that temperature has reached its high limit (temp_max).
DEVICE   Sensor device name.

    With the USB keyword, the sar command takes a snapshot of all the USB devices currently plugged into the system. At the end of the report, sar will display a summary of all those USB devices. The following values are displayed:

BUS      Root hub number of the USB device.
idvendor Vendor ID number (assigned by USB organization).
idprod   Product ID number (assigned by Manufacturer).
maxpower Maximum power consumption of the device (expressed in mA).
manufact Manufacturer name.
product  Product name.

    The ALL keyword is equivalent to specifying all the keywords above and therefore all the power management statistics are reported.

-n { keyword[,...] | ALL }
    Report network statistics. Possible keywords are DEV, EDEV, FC, ICMP, EICMP, ICMP6, EICMP6, IP, EIP, IP6, EIP6, NFS, NFSD, SOCK, SOCK6, SOFT, TCP, ETCP, UDP and UDP6.

    With the DEV keyword, statistics from the network devices are reported. Statistics for all network interfaces are displayed unless a restricted list is specified using option --iface= (see corresponding option entry). The following values are displayed:

IFACE     Name of the network interface for which statistics are reported.
rxpck/s   Total number of packets received per second.
txpck/s   Total number of packets transmitted per second.
rxkB/s    Total number of kilobytes received per second.
txkB/s    Total number of kilobytes transmitted per second.
rxcmp/s   Number of compressed packets received per second (for cslip etc.).
txcmp/s   Number of compressed packets transmitted per second.
rxmcst/s  Number of multicast packets received per second.
%ifutil   Utilization percentage of the network interface. For half-duplex interfaces, utilization is calculated using the sum of rxkB/s and txkB/s as a percentage of the interface speed. For full-duplex, this is the greater of rxkB/s or txkB/s.

    With the EDEV keyword, statistics on failures (errors) from the network devices are reported. Statistics for all network interfaces are displayed unless a restricted list is specified using option --iface= (see corresponding option entry). The following values are displayed:

IFACE     Name of the network interface for which statistics are reported.
rxerr/s   Total number of bad packets received per second.
txerr/s   Total number of errors that happened per second while transmitting packets.
coll/s    Number of collisions that happened per second while transmitting packets.
rxdrop/s  Number of received packets dropped per second because of a lack of space in Linux buffers.
txdrop/s  Number of transmitted packets dropped per second because of a lack of space in Linux buffers.
txcarr/s  Number of carrier-errors that happened per second while transmitting packets.
rxfram/s  Number of frame alignment errors that happened per second on received packets.
rxfifo/s  Number of FIFO overrun errors that happened per second on received packets.
txfifo/s  Number of FIFO overrun errors that happened per second on transmitted packets.

    With the FC keyword, statistics about fibre channel traffic are reported. Note that fibre channel statistics depend on sadc's option -S DISK to be collected. The following values are displayed:

FCHOST     Name of the fibre channel host bus adapter (HBA) interface for which statistics are reported.
fch_rxf/s  The total number of frames received per second.
fch_txf/s  The total number of frames transmitted per second.
fch_rxw/s  The total number of transmission words received per second.
fch_txw/s  The total number of transmission words transmitted per second.

    With the ICMP keyword, statistics about ICMPv4 network traffic are reported. Note that ICMPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

imsg/s   The total number of ICMP messages which the entity received per second [icmpInMsgs]. Note that this counter includes all those counted by ierr/s.
omsg/s   The total number of ICMP messages which this entity attempted to send per second [icmpOutMsgs]. Note that this counter includes all those counted by oerr/s.
iech/s   The number of ICMP Echo (request) messages received per second [icmpInEchos].
iechr/s  The number of ICMP Echo Reply messages received per second [icmpInEchoReps].
oech/s   The number of ICMP Echo (request) messages sent per second [icmpOutEchos].
oechr/s  The number of ICMP Echo Reply messages sent per second [icmpOutEchoReps].
itm/s     The number of ICMP Timestamp (request) messages received per second [icmpInTimestamps].
itmr/s    The number of ICMP Timestamp Reply messages received per second [icmpInTimestampReps].
otm/s     The number of ICMP Timestamp (request) messages sent per second [icmpOutTimestamps].
otmr/s    The number of ICMP Timestamp Reply messages sent per second [icmpOutTimestampReps].
iadrmk/s  The number of ICMP Address Mask Request messages received per second [icmpInAddrMasks].
iadrmkr/s The number of ICMP Address Mask Reply messages received per second [icmpInAddrMaskReps].
oadrmk/s  The number of ICMP Address Mask Request messages sent per second [icmpOutAddrMasks].
oadrmkr/s The number of ICMP Address Mask Reply messages sent per second [icmpOutAddrMaskReps].

    With the EICMP keyword, statistics about ICMPv4 error messages are reported. Note that ICMPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

ierr/s    The number of ICMP messages per second which the entity received but determined as having ICMP-specific errors (bad ICMP checksums, bad length, etc.) [icmpInErrors].
oerr/s    The number of ICMP messages per second which this entity did not send due to problems discovered within ICMP such as a lack of buffers [icmpOutErrors].
idstunr/s The number of ICMP Destination Unreachable messages received per second [icmpInDestUnreachs].
odstunr/s The number of ICMP Destination Unreachable messages sent per second [icmpOutDestUnreachs].
itmex/s   The number of ICMP Time Exceeded messages received per second [icmpInTimeExcds].
otmex/s   The number of ICMP Time Exceeded messages sent per second [icmpOutTimeExcds].
iparmpb/s The number of ICMP Parameter Problem messages received per second [icmpInParmProbs].
oparmpb/s The number of ICMP Parameter Problem messages sent per second [icmpOutParmProbs].
isrcq/s   The number of ICMP Source Quench messages received per second [icmpInSrcQuenchs].
osrcq/s   The number of ICMP Source Quench messages sent per second [icmpOutSrcQuenchs].
iredir/s  The number of ICMP Redirect messages received per second [icmpInRedirects].
oredir/s  The number of ICMP Redirect messages sent per second [icmpOutRedirects].

    With the ICMP6 keyword, statistics about ICMPv6 network traffic are reported. Note that ICMPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets):

imsg6/s    The total number of ICMP messages received by the interface per second which includes all those counted by ierr6/s [ipv6IfIcmpInMsgs].
omsg6/s    The total number of ICMP messages which this interface attempted to send per second [ipv6IfIcmpOutMsgs].
iech6/s    The number of ICMP Echo (request) messages received by the interface per second [ipv6IfIcmpInEchos].
iechr6/s   The number of ICMP Echo Reply messages received by the interface per second [ipv6IfIcmpInEchoReplies].
oechr6/s   The number of ICMP Echo Reply messages sent by the interface per second [ipv6IfIcmpOutEchoReplies].
igmbq6/s   The number of ICMPv6 Group Membership Query messages received by the interface per second [ipv6IfIcmpInGroupMembQueries].
igmbr6/s   The number of ICMPv6 Group Membership Response messages received by the interface per second [ipv6IfIcmpInGroupMembResponses].
ogmbr6/s   The number of ICMPv6 Group Membership Response messages sent per second [ipv6IfIcmpOutGroupMembResponses].
igmbrd6/s  The number of ICMPv6 Group Membership Reduction messages received by the interface per second [ipv6IfIcmpInGroupMembReductions].
ogmbrd6/s  The number of ICMPv6 Group Membership Reduction messages sent per second [ipv6IfIcmpOutGroupMembReductions].
irtsol6/s  The number of ICMP Router Solicit messages received by the interface per second [ipv6IfIcmpInRouterSolicits].
ortsol6/s  The number of ICMP Router Solicitation messages sent by the interface per second [ipv6IfIcmpOutRouterSolicits].
irtad6/s   The number of ICMP Router Advertisement messages received by the interface per second [ipv6IfIcmpInRouterAdvertisements].
inbsol6/s  The number of ICMP Neighbor Solicit messages received by the interface per second [ipv6IfIcmpInNeighborSolicits].
onbsol6/s  The number of ICMP Neighbor Solicitation messages sent by the interface per second [ipv6IfIcmpOutNeighborSolicits].
inbad6/s   The number of ICMP Neighbor Advertisement messages received by the interface per second [ipv6IfIcmpInNeighborAdvertisements].
onbad6/s   The number of ICMP Neighbor Advertisement messages sent by the interface per second [ipv6IfIcmpOutNeighborAdvertisements].

    With the EICMP6 keyword, statistics about ICMPv6 error messages are reported. Note that ICMPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets):

ierr6/s    The number of ICMP messages per second which the interface received but determined as having ICMP-specific errors (bad ICMP checksums, bad length, etc.) [ipv6IfIcmpInErrors].
idtunr6/s  The number of ICMP Destination Unreachable messages received by the interface per second [ipv6IfIcmpInDestUnreachs].
odtunr6/s  The number of ICMP Destination Unreachable messages sent by the interface per second [ipv6IfIcmpOutDestUnreachs].
itmex6/s   The number of ICMP Time Exceeded messages received by the interface per second [ipv6IfIcmpInTimeExcds].
otmex6/s   The number of ICMP Time Exceeded messages sent by the interface per second [ipv6IfIcmpOutTimeExcds].
iprmpb6/s  The number of ICMP Parameter Problem messages received by the interface per second [ipv6IfIcmpInParmProblems].
oprmpb6/s  The number of ICMP Parameter Problem messages sent by the interface per second [ipv6IfIcmpOutParmProblems].
iredir6/s  The number of Redirect messages received by the interface per second [ipv6IfIcmpInRedirects].
oredir6/s  The number of Redirect messages sent by the interface per second [ipv6IfIcmpOutRedirects].
ipck2b6/s  The number of ICMP Packet Too Big messages received by the interface per second [ipv6IfIcmpInPktTooBigs].
opck2b6/s  The number of ICMP Packet Too Big messages sent by the interface per second [ipv6IfIcmpOutPktTooBigs].

    With the IP keyword, statistics about IPv4 network traffic are reported. Note that IPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

irec/s     The total number of input datagrams received from interfaces per second, including those received in error [ipInReceives].
fwddgm/s   The number of input datagrams per second, for which this entity was not their final IP destination, as a result of which an attempt was made to find a route to forward them to that final destination [ipForwDatagrams].
idel/s     The total number of input datagrams successfully delivered per second to IP user-protocols (including ICMP) [ipInDelivers].
orq/s      The total number of IP datagrams which local IP user-protocols (including ICMP) supplied per second to IP in requests for transmission [ipOutRequests]. Note that this counter does not include any datagrams counted in fwddgm/s.
asmrq/s    The number of IP fragments received per second which needed to be reassembled at this entity [ipReasmReqds].
asmok/s    The number of IP datagrams successfully re-assembled per second [ipReasmOKs].
fragok/s   The number of IP datagrams that have been successfully fragmented at this entity per second [ipFragOKs].
fragcrt/s  The number of IP datagram fragments that have been generated per second as a result of fragmentation at this entity [ipFragCreates].

    With the EIP keyword, statistics about IPv4 network errors are reported. Note that IPv4 statistics depend on sadc's option -S SNMP to be collected.
    The following values are displayed (formal SNMP names between square brackets):

ihdrerr/s  The number of input datagrams discarded per second due to errors in their IP headers, including bad checksums, version number mismatch, other format errors, time-to-live exceeded, errors discovered in processing their IP options, etc. [ipInHdrErrors].
iadrerr/s  The number of input datagrams discarded per second because the IP address in their IP header's destination field was not a valid address to be received at this entity. This count includes invalid addresses (e.g., 0.0.0.0) and addresses of unsupported Classes (e.g., Class E). For entities which are not IP routers and therefore do not forward datagrams, this counter includes datagrams discarded because the destination address was not a local address [ipInAddrErrors].
iukwnpr/s  The number of locally-addressed datagrams received successfully but discarded per second because of an unknown or unsupported protocol [ipInUnknownProtos].
idisc/s    The number of input IP datagrams per second for which no problems were encountered to prevent their continued processing, but which were discarded (e.g., for lack of buffer space) [ipInDiscards]. Note that this counter does not include any datagrams discarded while awaiting re-assembly.
odisc/s    The number of output IP datagrams per second for which no problem was encountered to prevent their transmission to their destination, but which were discarded (e.g., for lack of buffer space) [ipOutDiscards]. Note that this counter would include datagrams counted in fwddgm/s if any such packets met this (discretionary) discard criterion.
onort/s    The number of IP datagrams discarded per second because no route could be found to transmit them to their destination [ipOutNoRoutes]. Note that this counter includes any packets counted in fwddgm/s which meet this 'no-route' criterion. Note that this includes any datagrams which a host cannot route because all of its default routers are down.
asmf/s     The number of failures detected per second by the IP re-assembly algorithm (for whatever reason: timed out, errors, etc.) [ipReasmFails]. Note that this is not necessarily a count of discarded IP fragments since some algorithms can lose track of the number of fragments by combining them as they are received.
fragf/s    The number of IP datagrams that have been discarded per second because they needed to be fragmented at this entity but could not be, e.g., because their Don't Fragment flag was set [ipFragFails].

    With the IP6 keyword, statistics about IPv6 network traffic are reported. Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets):

irec6/s    The total number of input datagrams received from interfaces per second, including those received in error [ipv6IfStatsInReceives].
fwddgm6/s  The number of output datagrams per second which this entity received and forwarded to their final destinations [ipv6IfStatsOutForwDatagrams].
idel6/s    The total number of datagrams successfully delivered per second to IPv6 user-protocols (including ICMP) [ipv6IfStatsInDelivers].
orq6/s     The total number of IPv6 datagrams which local IPv6 user-protocols (including ICMP) supplied per second to IPv6 in requests for transmission [ipv6IfStatsOutRequests]. Note that this counter does not include any datagrams counted in fwddgm6/s.
asmrq6/s   The number of IPv6 fragments received per second which needed to be reassembled at this interface [ipv6IfStatsReasmReqds].
asmok6/s   The number of IPv6 datagrams successfully reassembled per second [ipv6IfStatsReasmOKs].
imcpck6/s  The number of multicast packets received per second by the interface [ipv6IfStatsInMcastPkts].
omcpck6/s  The number of multicast packets transmitted per second by the interface [ipv6IfStatsOutMcastPkts].
fragok6/s  The number of IPv6 datagrams that have been successfully fragmented at this output interface per second [ipv6IfStatsOutFragOKs].
fragcr6/s  The number of output datagram fragments that have been generated per second as a result of fragmentation at this output interface [ipv6IfStatsOutFragCreates].

    With the EIP6 keyword, statistics about IPv6 network errors are reported. Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets):

ihdrer6/s  The number of input datagrams discarded per second due to errors in their IPv6 headers, including version number mismatch, other format errors, hop count exceeded, errors discovered in processing their IPv6 options, etc. [ipv6IfStatsInHdrErrors].
iadrer6/s  The number of input datagrams discarded per second because the IPv6 address in their IPv6 header's destination field was not a valid address to be received at this entity. This count includes invalid addresses (e.g., ::0) and unsupported addresses (e.g., addresses with unallocated prefixes). For entities which are not IPv6 routers and therefore do not forward datagrams, this counter includes datagrams discarded because the destination address was not a local address [ipv6IfStatsInAddrErrors].
iukwnp6/s  The number of locally-addressed datagrams received successfully but discarded per second because of an unknown or unsupported protocol [ipv6IfStatsInUnknownProtos].
i2big6/s   The number of input datagrams that could not be forwarded per second because their size exceeded the link MTU of outgoing interface [ipv6IfStatsInTooBigErrors].
idisc6/s   The number of input IPv6 datagrams per second for which no problems were encountered to prevent their continued processing, but which were discarded (e.g., for lack of buffer space) [ipv6IfStatsInDiscards]. Note that this counter does not include any datagrams discarded while awaiting re-assembly.
odisc6/s   The number of output IPv6 datagrams per second for which no problem was encountered to prevent their transmission to their destination, but which were discarded (e.g., for lack of buffer space) [ipv6IfStatsOutDiscards]. Note that this counter would include datagrams counted in fwddgm6/s if any such packets met this (discretionary) discard criterion.
inort6/s   The number of input datagrams discarded per second because no route could be found to transmit them to their destination [ipv6IfStatsInNoRoutes].
onort6/s   The number of locally generated IP datagrams discarded per second because no route could be found to transmit them to their destination [unknown formal SNMP name].
asmf6/s    The number of failures detected per second by the IPv6 re-assembly algorithm (for whatever reason: timed out, errors, etc.) [ipv6IfStatsReasmFails]. Note that this is not necessarily a count of discarded IPv6 fragments since some algorithms can lose track of the number of fragments by combining them as they are received.
fragf6/s   The number of IPv6 datagrams that have been discarded per second because they needed to be fragmented at this output interface but could not be [ipv6IfStatsOutFragFails].
itrpck6/s  The number of input datagrams discarded per second because the datagram frame didn't carry enough data [ipv6IfStatsInTruncatedPkts].

    With the NFS keyword, statistics about NFS client activity are reported. The following values are displayed:

call/s     Number of RPC requests made per second.
retrans/s  Number of RPC requests per second which needed to be retransmitted (for example because of a server timeout).
read/s     Number of 'read' RPC calls made per second.
write/s    Number of 'write' RPC calls made per second.
access/s   Number of 'access' RPC calls made per second.
getatt/s   Number of 'getattr' RPC calls made per second.

    With the NFSD keyword, statistics about NFS server activity are reported. The following values are displayed:

scall/s    Number of RPC requests received per second.
badcall/s  Number of bad RPC requests received per second, those whose processing generated an error.
packet/s   Number of network packets received per second.
udp/s      Number of UDP packets received per second.
tcp/s      Number of TCP packets received per second.
hit/s      Number of reply cache hits per second.
miss/s     Number of reply cache misses per second.
sread/s    Number of 'read' RPC calls received per second.
swrite/s   Number of 'write' RPC calls received per second.
saccess/s  Number of 'access' RPC calls received per second.
sgetatt/s  Number of 'getattr' RPC calls received per second.

    With the SOCK keyword, statistics on sockets in use are reported (IPv4). The following values are displayed:

totsck     Total number of sockets used by the system.
tcpsck     Number of TCP sockets currently in use.
udpsck     Number of UDP sockets currently in use.
rawsck     Number of RAW sockets currently in use.
ip-frag    Number of IP fragments currently in queue.
tcp-tw     Number of TCP sockets in TIME_WAIT state.

    With the SOCK6 keyword, statistics on sockets in use are reported (IPv6). Note that IPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed:

tcp6sck    Number of TCPv6 sockets currently in use.
udp6sck    Number of UDPv6 sockets currently in use.
raw6sck    Number of RAWv6 sockets currently in use.
ip6-frag   Number of IPv6 fragments currently in use.

    With the SOFT keyword, statistics about software-based network processing are reported. The following values are displayed:

total/s    The total number of network frames processed per second.
dropd/s    The total number of network frames dropped per second because there was no room on the processing queue.
squeezd/s  The number of times the softirq handler function terminated per second because its budget was consumed or the time limit was reached, but more work could have been done.
rx_rps/s   The number of times the CPU has been woken up per second to process packets via an inter-processor interrupt.
flw_lim/s  The number of times the flow limit has been reached per second. Flow limiting is an optional RPS feature that can be used to limit the number of packets queued to the backlog for each flow to a certain amount. This can help ensure that smaller flows are processed even though much larger flows are pushing packets in.
blg_len    The length of the network backlog.

    With the TCP keyword, statistics about TCPv4 network traffic are reported. Note that TCPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

active/s   The number of times TCP connections have made a direct transition to the SYN-SENT state from the CLOSED state per second [tcpActiveOpens].
passive/s  The number of times TCP connections have made a direct transition to the SYN-RCVD state from the LISTEN state per second [tcpPassiveOpens].
iseg/s     The total number of segments received per second, including those received in error [tcpInSegs]. This count includes segments received on currently established connections.
oseg/s     The total number of segments sent per second, including those on current connections but excluding those containing only retransmitted octets [tcpOutSegs].

    With the ETCP keyword, statistics about TCPv4 network errors are reported. Note that TCPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

atmptf/s   The number of times per second TCP connections have made a direct transition to the CLOSED state from either the SYN-SENT state or the SYN-RCVD state, plus the number of times per second TCP connections have made a direct transition to the LISTEN state from the SYN-RCVD state [tcpAttemptFails].
estres/s   The number of times per second TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state [tcpEstabResets].
retrans/s  The total number of segments retransmitted per second - that is, the number of TCP segments transmitted containing one or more previously transmitted octets [tcpRetransSegs].
isegerr/s  The total number of segments received in error (e.g., bad TCP checksums) per second [tcpInErrs].
orsts/s    The number of TCP segments sent per second containing the RST flag [tcpOutRsts].

    With the UDP keyword, statistics about UDPv4 network traffic are reported. Note that UDPv4 statistics depend on sadc's option -S SNMP to be collected. The following values are displayed (formal SNMP names between square brackets):

idgm/s     The total number of UDP datagrams delivered per second to UDP users [udpInDatagrams].
odgm/s     The total number of UDP datagrams sent per second from this entity [udpOutDatagrams].
noport/s   The total number of received UDP datagrams per second for which there was no application at the destination port [udpNoPorts].
idgmerr/s  The number of received UDP datagrams per second that could not be delivered for reasons other than the lack of an application at the destination port [udpInErrors].

    With the UDP6 keyword, statistics about UDPv6 network traffic are reported. Note that UDPv6 statistics depend on sadc's option -S IPV6 to be collected. The following values are displayed (formal SNMP names between square brackets):

idgm6/s    The total number of UDP datagrams delivered per second to UDP users [udpInDatagrams].
odgm6/s    The total number of UDP datagrams sent per second from this entity [udpOutDatagrams].
noport6/s  The total number of received UDP datagrams per second for which there was no application at the destination port [udpNoPorts].
idgmer6/s  The number of received UDP datagrams per second that could not be delivered for reasons other than the lack of an application at the destination port [udpInErrors].

    The ALL keyword is equivalent to specifying all the keywords above and therefore all the network activities are reported.
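Two of the -n fields described above are derived ratios rather than raw counters: %ifutil (DEV keyword) relates rxkB/s and txkB/s to the interface speed, and the TCP retrans/s counter is often compared against oseg/s to judge retransmission health. A Python sketch of both calculations (the function names, the kB-per-second unit conversion, and the sample values are illustrative assumptions, not sar internals):

```python
def ifutil(rx_kbs, tx_kbs, speed_mbps, duplex):
    """%ifutil as described for the DEV keyword: sum of rx+tx for
    half-duplex links, the greater of the two for full-duplex,
    as a percentage of the interface speed."""
    speed_kbs = speed_mbps * 1000 / 8  # link speed converted to kilobytes/s
    used = rx_kbs + tx_kbs if duplex == "half" else max(rx_kbs, tx_kbs)
    return 100 * used / speed_kbs

def retrans_percent(retrans_per_s, oseg_per_s):
    """Derived retransmission share: retrans/s relative to all segments sent.
    oseg/s excludes segments containing only retransmitted octets, so the
    two counters are added to get the total."""
    total = oseg_per_s + retrans_per_s
    return 100 * retrans_per_s / total if total else 0.0

# 100 Mb/s full-duplex link receiving 2500 kB/s and sending 1250 kB/s
print(ifutil(2500, 1250, 100, "full"))  # 20.0
# 5 retransmitted segments/s against 95 ordinary segments/s
print(retrans_percent(5, 95))           # 5.0
```

Neither percentage is an exact reproduction of sar's arithmetic; they are reading aids for the field definitions above.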
-o [ filename ]
    Save the readings in the file in binary form. Each reading is in a separate record. The default value of the filename parameter is the current standard system activity daily data file. If filename is a directory instead of a plain file then it is considered as the directory where the standard system activity daily data files are located. Option -o is exclusive of option -f. All the data available from the kernel are saved in the file (in fact, sar calls its data collector sadc with the option -S ALL. See sadc(8) manual page).

-P { cpu_list | ALL }
    Report per-processor statistics for the specified processor or processors. cpu_list is a list of comma-separated values or range of values (e.g., 0,2,4-7,12-). Note that processor 0 is the first processor, and processor all is the global average among all processors. Specifying the ALL keyword reports statistics for each individual processor, and globally for all processors. Offline processors are not displayed.

-p, --pretty
    Make reports easier to read by a human. This option may be especially useful when displaying e.g., network interfaces or block devices statistics.

-q [ keyword[,...] | ALL ]
    Report system load and pressure-stall statistics. Possible keywords are CPU, IO, LOAD, MEM and PSI.

    With the CPU keyword, CPU pressure statistics are reported. The following values are displayed:

%scpu-10   Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 10 second window.
%scpu-60   Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 60 second window.
%scpu-300  Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last 300 second window.
%scpu      Percentage of the time that at least some runnable tasks were delayed because the CPU was unavailable to them, over the last time interval.
    With the IO keyword, I/O pressure statistics are reported. The following values are displayed:

%sio-10   Percentage of the time that at least some tasks were stalled waiting for I/O, over the last 10 second window.
%sio-60   Percentage of the time that at least some tasks were stalled waiting for I/O, over the last 60 second window.
%sio-300  Percentage of the time that at least some tasks were stalled waiting for I/O, over the last 300 second window.
%sio      Percentage of the time that at least some tasks were stalled waiting for I/O, over the last time interval.
%fio-10   Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 10 second window.
%fio-60   Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 60 second window.
%fio-300  Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last 300 second window.
%fio      Percentage of the time during which all non-idle tasks were stalled waiting for I/O, over the last time interval.

    With the LOAD keyword, queue length and load average statistics are reported. The following values are displayed:

runq-sz    Run queue length (number of tasks running or waiting for run time).
plist-sz   Number of tasks in the task list.
ldavg-1    System load average for the last minute. The load average is calculated as the average number of runnable or running tasks (R state), plus the number of tasks in uninterruptible sleep (D state), over the specified interval.
ldavg-5    System load average for the past 5 minutes.
ldavg-15   System load average for the past 15 minutes.
blocked    Number of tasks currently blocked, waiting for I/O to complete.

    With the MEM keyword, memory pressure statistics are reported. The following values are displayed:

%smem-10   Percentage of the time during which at least some tasks were waiting for memory resources, over the last 10 second window.
%smem-60 Percentage of the time during which at least some tasks were waiting for memory resources, over the last 60 second window. %smem-300 Percentage of the time during which at least some tasks were waiting for memory resources, over the last 300 second window. %smem Percentage of the time during which at least some tasks were waiting for memory resources, over the last time interval. %fmem-10 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 10 second window. %fmem-60 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 60 second window. %fmem-300 Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last 300 second window. %fmem Percentage of the time during which all non-idle tasks were stalled waiting for memory resources, over the last time interval. The PSI keyword is equivalent to specifying CPU, IO and MEM keywords together and therefore all the pressure-stall statistics are reported. The ALL keyword is equivalent to specifying all the keywords above and therefore all the statistics are reported. -r [ ALL ] Report memory utilization statistics. The ALL keyword indicates that all the memory fields should be displayed. The following values may be displayed: kbmemfree Amount of free memory available in kilobytes. kbavail Estimate of how much memory in kilobytes is available for starting new applications, without swapping. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system. kbmemused Amount of used memory in kilobytes (calculated as total installed memory - kbmemfree - kbbuffers - kbcached - kbslab). %memused Percentage of used memory. kbbuffers Amount of memory used as buffers by the kernel in kilobytes. 
kbcached Amount of memory used to cache data by the kernel in kilobytes. kbcommit Amount of memory in kilobytes needed for current workload. This is an estimate of how much RAM/swap is needed to guarantee that the system never runs out of memory. %commit Percentage of memory needed for current workload in relation to the total amount of memory (RAM+swap). This number may be greater than 100% because the kernel usually overcommits memory. kbactive Amount of active memory in kilobytes (memory that has been used more recently and usually not reclaimed unless absolutely necessary). kbinact Amount of inactive memory in kilobytes (memory which has been less recently used. It is more eligible to be reclaimed for other purposes). kbdirty Amount of memory in kilobytes waiting to get written back to the disk. kbanonpg Amount of non-file backed pages in kilobytes mapped into userspace page tables. kbslab Amount of memory in kilobytes used by the kernel to cache data structures for its own use. kbkstack Amount of memory in kilobytes used for kernel stack space. kbpgtbl Amount of memory in kilobytes dedicated to the lowest level of page tables. kbvmused Amount of memory in kilobytes of used virtual address space. -S Report swap space utilization statistics. The following values are displayed: kbswpfree Amount of free swap space in kilobytes. kbswpused Amount of used swap space in kilobytes. %swpused Percentage of used swap space. kbswpcad Amount of cached swap memory in kilobytes. This is memory that once was swapped out, is swapped back in but still also is in the swap area (if memory is needed it doesn't need to be swapped out again because it is already in the swap area. This saves I/O). %swpcad Percentage of cached swap memory in relation to the amount of used swap space. -s [ hh:mm[:ss] ] -s [ seconds_since_the_epoch ] Set the starting time of the data, causing the sar command to extract records time-tagged at, or following, the time specified. 
The default starting time is 08:00:00. Hours must be given in 24-hour format, or as the number of seconds since the epoch (given as a 10 digit number). This option can be used only when data are read from a file (option -f). --sadc Indicate which data collector is called by sar. If the data collector is sought in PATH then enter "which sadc" to know where it is located. -t When reading data from a daily data file, indicate that sar should display the timestamps in the original local time of the data file creator. Without this option, the sar command displays the timestamps in the user's local time. -u [ ALL ] Report CPU utilization. The ALL keyword indicates that all the CPU fields should be displayed. The report may show the following fields: %user Percentage of CPU utilization that occurred while executing at the user level (application). Note that this field includes time spent running virtual processors. %usr Percentage of CPU utilization that occurred while executing at the user level (application). Note that this field does NOT include time spent running virtual processors. %nice Percentage of CPU utilization that occurred while executing at the user level with nice priority. %system Percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this field includes time spent servicing hardware and software interrupts. %sys Percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this field does NOT include time spent servicing hardware or software interrupts. %iowait Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. %steal Percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor. %irq Percentage of time spent by the CPU or CPUs to service hardware interrupts. %soft Percentage of time spent by the CPU or CPUs to service software interrupts. 
%guest Percentage of time spent by the CPU or CPUs to run a virtual processor. %gnice Percentage of time spent by the CPU or CPUs to run a niced guest. %idle Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request. -V Print version number then exit. -v Report status of inode, file and other kernel tables. The following values are displayed: dentunusd Number of unused cache entries in the directory cache. file-nr Number of file handles used by the system. inode-nr Number of inode handlers used by the system. pty-nr Number of pseudo-terminals used by the system. -W Report swapping statistics. The following values are displayed: pswpin/s Total number of swap pages the system brought in per second. pswpout/s Total number of swap pages the system brought out per second. -w Report task creation and system switching activity. The following values are displayed: proc/s Total number of tasks created per second. cswch/s Total number of context switches per second. -x Extended reports: Display minimum and maximum values in addition to average ones at the end of the report. -y Report TTY devices activity. The following values are displayed: rcvin/s Number of receive interrupts per second for current serial line. Serial line number is given in the TTY column. xmtin/s Number of transmit interrupts per second for current serial line. framerr/s Number of frame errors per second for current serial line. prtyerr/s Number of parity errors per second for current serial line. brk/s Number of breaks per second for current serial line. ovrun/s Number of overrun errors per second for current serial line. -z Tell sar to omit output for any devices for which there was no activity during the sample period. ENVIRONMENT top The sar command takes into account the following environment variables: S_COLORS By default statistics are displayed in color when the output is connected to a terminal. Use this variable to change the settings. 
Possible values for this variable are never, always or auto (the latter is equivalent to the default settings). Please note that the color (being red, yellow, or some other color) used to display a value is not indicative of any kind of issue simply because of the color. It only indicates different ranges of values. S_COLORS_SGR Specify the colors and other attributes used to display statistics on the terminal. Its value is a colon-separated list of capabilities that defaults to C=33;22:I=32;22:N=34;1:R=31;22:W=35;1:X=31;1:Z=34;22. Supported capabilities are: C= SGR (Select Graphic Rendition) substring for comments inserted in the binary daily data files. I= SGR substring for item names or values (e.g., network interfaces, CPU number...) N= SGR substring for non-zero statistics values. R= SGR substring for restart messages. W= (or M=) SGR substring for percentage values in the range from 75% to 90% (or in the range 10% to 25% depending on the metric's meaning). It is also used for negative values in the range from -10 to -5. X= (or H=) SGR substring for percentage values greater than or equal to 90% (or lower than or equal to 10% depending on the metric's meaning). It is also used for negative values lower than or equal to -10. Z= SGR substring for zero values. S_REPEAT_HEADER This variable contains the maximum number of lines after which a header has to be displayed by sar when the output is not a terminal. S_TIME_DEF_TIME If this variable exists and its value is UTC then sar will save its data in UTC time (data will still be displayed in local time). sar will also use UTC time instead of local time to determine the current daily data file located in the /var/log/sa directory. This variable may be useful for servers with users located across several timezones. S_TIME_FORMAT If this variable exists and its value is ISO then the current locale will be ignored when printing the date in the report header. 
The sar command will use the ISO 8601 format (YYYY-MM-DD) instead. The timestamp will also be compliant with ISO 8601 format. EXAMPLES top sar -u 2 5 Report CPU utilization every 2 seconds. 5 lines are displayed. sar -I --int=14 -o int14.file 2 10 Report statistics on IRQ 14 every 2 seconds. 10 lines are displayed. Data are stored in a file called int14.file. sar -r -n DEV -f /var/log/sa/sa16 Display memory and network statistics saved in the daily data file sa16. sar -A Display all the statistics saved in the current daily data file. BUGS top The /proc filesystem must be mounted for the sar command to work. Not all statistics are necessarily available, depending on the kernel version used. sar assumes that you are using at least a 2.6 kernel. Although sar speaks of kilobytes (kB), megabytes (MB)..., it actually uses kibibytes (kiB), mebibytes (MiB)... A kibibyte is equal to 1024 bytes, and a mebibyte is equal to 1024 kibibytes. FILES top /var/log/sa/saDD /var/log/sa/saYYYYMMDD The standard system activity daily data files and their default location. YYYY stands for the current year, MM for the current month and DD for the current day. /proc and /sys contain various files with system statistics. AUTHOR top Sebastien Godard (sysstat <at> orange.fr) SEE ALSO top sadc(8), sa1(8), sa2(8), sadf(1), sysstat(5), pidstat(1), mpstat(1), iostat(1), vmstat(8) https://github.com/sysstat/sysstat https://sysstat.github.io/ COLOPHON top This page is part of the sysstat (sysstat performance monitoring tools) project. Information about the project can be found at http://sebastien.godard.pagesperso-orange.fr/. If you have a bug report for this manual page, send it to sysstat-AT-orange.fr. This page was obtained from the project's upstream Git repository https://github.com/sysstat/sysstat.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-17.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Linux NOVEMBER 2023 SAR(1) Pages that refer to this page: cifsiostat(1), iostat(1), mpstat(1), pidstat(1), pmrep(1), sadf(1), sar2pcp(1), sa1(8), sa2(8), sadc(8), vmstat(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
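The kbmemused formula quoted under -r (total installed memory - kbmemfree - kbbuffers - kbcached - kbslab) can be sketched with a short awk pipeline. The /proc/meminfo values below are made-up sample numbers for illustration, not real readings:

```shell
# Approximate sar's kbmemused / %memused computation from /proc/meminfo
# fields. The here-document holds hypothetical sample values.
awk '
/^MemTotal:/ { total = $2 }
/^MemFree:/  { free = $2 }
/^Buffers:/  { buffers = $2 }
/^Cached:/   { cached = $2 }
/^Slab:/     { slab = $2 }
END {
    used = total - free - buffers - cached - slab
    printf "kbmemused=%d %%memused=%.2f\n", used, 100 * used / total
}' <<'EOF'
MemTotal:       16384000 kB
MemFree:         8192000 kB
Buffers:          512000 kB
Cached:          2048000 kB
Slab:             256000 kB
EOF
# prints: kbmemused=5376000 %memused=32.81
```

On a live system the same pipeline can read /proc/meminfo directly, though sar's own accounting may differ slightly between sysstat versions.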
# sar\n\n> Monitor performance of various Linux subsystems.\n> More information: <https://manned.org/sar>.\n\n- Report I/O and transfer rate issued to physical devices, one per second (press CTRL+C to quit):\n\n`sar -b {{1}}`\n\n- Report a total of 10 network device statistics, one per 2 seconds:\n\n`sar -n DEV {{2}} {{10}}`\n\n- Report CPU utilization, one per 2 seconds:\n\n`sar -u ALL {{2}}`\n\n- Report a total of 20 memory utilization statistics, one per second:\n\n`sar -r ALL {{1}} {{20}}`\n\n- Report the run queue length and load averages, one per second:\n\n`sar -q {{1}}`\n\n- Report paging statistics, one per 5 seconds:\n\n`sar -B {{5}}`\n
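The -s option described above accepts either hh:mm[:ss] or a 10-digit seconds-since-the-epoch value. One way to compute the latter is GNU date's -d flag (this assumes GNU coreutils; the timestamp is an arbitrary example):

```shell
# Convert a wall-clock start time to epoch seconds for sar -s.
# TZ=UTC pins the result so it is independent of the local timezone.
TZ=UTC date -d '2023-11-15 08:00:00' +%s
# prints: 1700035200
# which could then be used as, e.g.:
#   sar -u -f /var/log/sa/sa15 -s 1700035200
```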
scp
scp(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXIT STATUS | SEE ALSO | HISTORY | AUTHORS | CAVEATS | COLOPHON SCP(1) General Commands Manual SCP(1) NAME top scp - OpenSSH secure file copy SYNOPSIS top scp [-346ABCOpqRrsTv] [-c cipher] [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] [-P port] [-S program] [-X sftp_option] source ... target DESCRIPTION top scp copies files between hosts on a network. scp uses the SFTP protocol over a ssh(1) connection for data transfer, and uses the same authentication and provides the same security as a login session. scp will ask for passwords or passphrases if they are needed for authentication. The source and target may be specified as a local pathname, a remote host with optional path in the form [user@]host:[path], or a URI in the form scp://[user@]host[:port][/path]. Local file names can be made explicit using absolute or relative pathnames to avoid treating file names containing : as host specifiers. When copying between two remote hosts, if the URI format is used, a port cannot be specified on the target if the -R option is used. The options are as follows: -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts. Note that, when using the legacy SCP protocol (via the -O flag), this option selects batch mode for the second host as scp cannot ask for passwords or passphrases for both hosts. This mode is the default. -4 Forces scp to use IPv4 addresses only. -6 Forces scp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -B Selects batch mode (prevents asking for passwords or passphrases). -C Compression enable. Passes the -C flag to ssh(1) to enable compression. 
-c cipher Selects the cipher to use for encrypting the data transfer. This option is directly passed to ssh(1). -D sftp_server_path Connect directly to a local SFTP server program rather than a remote one via ssh(1). This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh. This option is directly passed to ssh(1). -i identity_file Selects the file from which the identity (private key) for public key authentication is read. This option is directly passed to ssh(1). -J destination Connect to the target host by first making an connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -O Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the SCP protocol may be necessary for servers that do not implement SFTP, for backwards-compatibility for particular filename wildcard patterns and for expanding paths with a ~ prefix for older SFTP servers. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate scp command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). 
AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. Note that this option is written with a capital P, because -p is already reserved for preserving the times and mode bits of the file. -p Preserves modification times, access times, and file mode bits from the source file. -q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R Copies between two remote hosts are performed by connecting to the origin host and executing scp there. This requires that scp running on the origin host can authenticate to the destination host without requiring a password. -r Recursively copy entire directories. Note that scp follows symbolic links encountered in the tree traversal. -S program Name of program to use for the encrypted connection. The program must understand ssh(1) options. -T Disable strict filename checking. 
By default when copying files from a remote host to a local directory, scp checks that the received filenames match those requested on the command-line to prevent the remote end from sending unexpected or unwanted files. Because of differences in how various operating systems and shells interpret filename wildcards, these checks may cause wanted files to be rejected. This option disables these checks at the expense of fully trusting that the server will not send unexpected filenames. -v Verbose mode. Causes scp and ssh(1) to print debugging messages about their progress. This is helpful in debugging connection, authentication, and configuration problems. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used. EXIT STATUS top The scp utility exits 0 on success, and >0 if an error occurs. SEE ALSO top sftp(1), ssh(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh_config(5), sftp-server(8), sshd(8) HISTORY top scp is based on the rcp program in BSD source code from the Regents of the University of California. Since OpenSSH 9.0, scp has used the SFTP protocol for transfers by default. AUTHORS top Timo Rinne <tri@iki.fi> Tatu Ylonen <ylo@cs.hut.fi> CAVEATS top The legacy SCP protocol (selected by the -O flag) requires execution of the remote user's shell to perform glob(3) pattern matching. This requires careful quoting of any characters that have special meaning to the remote shell, such as quote characters. COLOPHON top This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. 
If you have a bug report for this manual page, see http://www.openssh.com/report.html. This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. GNU December 16, 2022 SCP(1)
# scp\n\n> Secure copy.\n> Copy files between hosts using Secure Copy Protocol over SSH.\n> More information: <https://man.openbsd.org/scp>.\n\n- Copy a local file to a remote host:\n\n`scp {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}`\n\n- Use a specific port when connecting to the remote host:\n\n`scp -P {{port}} {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}`\n\n- Copy a file from a remote host to a local directory:\n\n`scp {{remote_host}}:{{path/to/remote_file}} {{path/to/local_directory}}`\n\n- Recursively copy the contents of a directory from a remote host to a local directory:\n\n`scp -r {{remote_host}}:{{path/to/remote_directory}} {{path/to/local_directory}}`\n\n- Copy a file between two remote hosts transferring through the local host:\n\n`scp -3 {{host1}}:{{path/to/remote_file}} {{host2}}:{{path/to/remote_directory}}`\n\n- Use a specific username when connecting to the remote host:\n\n`scp {{path/to/local_file}} {{remote_username}}@{{remote_host}}:{{path/to/remote_directory}}`\n\n- Use a specific SSH private key for authentication with the remote host:\n\n`scp -i {{~/.ssh/private_key}} {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}`\n\n- Use a specific proxy when connecting to the remote host:\n\n`scp -J {{proxy_username}}@{{proxy_host}} {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}`\n
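The -l option above is specified in Kbit/s, so a bandwidth cap thought of in kilobytes per second has to be multiplied by 8 first. A small sketch (the path and host in the comment are placeholders):

```shell
# Convert a desired cap of 500 kilobytes/s into the Kbit/s value
# that scp's -l option expects.
limit_kbytes=500
echo $((limit_kbytes * 8))
# prints: 4000
# which would be used as, e.g.:
#   scp -l 4000 path/to/local_file remote_host:path/to/remote_file
```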
screen
screen(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | GETTING STARTED | COMMAND-LINE OPTIONS | DEFAULT KEY BINDINGS | CUSTOMIZATION | THE MESSAGE LINE | WINDOW TYPES | STRING ESCAPES | FLOW-CONTROL | TITLES (naming windows) | THE VIRTUAL TERMINAL | INPUT TRANSLATION | SPECIAL TERMINAL CAPABILITIES | CHARACTER TRANSLATION | ENVIRONMENT | FILES | SEE ALSO | AUTHORS | COPYLEFT | CONTRIBUTORS | VERSION | AVAILABILITY | BUGS | COLOPHON SCREEN(1) General Commands Manual SCREEN(1) NAME top screen - screen manager with VT100/ANSI terminal emulation SYNOPSIS top screen [ -options ] [ cmd [ args ] ] screen -r [[pid.]tty[.host]] screen -r sessionowner/[[pid.]tty[.host]] DESCRIPTION top Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows. When screen is called, it creates a single window with a shell in it (or the specified command) and then gets out of your way so that you can use the program as you normally would. Then, at any time, you can create new (full-screen) windows with other programs in them (including more shells), kill existing windows, view a list of windows, turn output logging on and off, copy-and-paste text between windows, view the scrollback history, switch between windows in whatever manner you wish, etc. All windows run their programs completely independent of each other. 
Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user's terminal. When a program terminates, screen (per default) kills the window that contained it. If this window was in the foreground, the display switches to the previous window; if none are left, screen exits. Shells usually distinguish between running as login-shell or sub-shell. Screen runs them as sub-shells, unless told otherwise (See "shell" .screenrc command). Everything you type is sent to the program running in the current window. The only exception to this is the one keystroke that is used to initiate a command to the window manager. By default, each command begins with a control-a (abbreviated C-a from now on), and is followed by one other keystroke. The command character and all the key bindings can be fully customized to be anything you like, though they are always two characters in length. Screen does not understand the prefix "C-" to mean control, although this notation is used in this manual for readability. Please use the caret notation ("^A" instead of "C-a") as arguments to e.g. the escape command or the -e option. Screen will also print out control characters in caret notation. The standard way to create a new window is to type "C-a c". This creates a new window running a shell and switches to that window immediately, regardless of the state of the process running in the current window. Similarly, you can create a new window with a custom command in it by first binding the command to a keystroke (in your .screenrc file or at the "C-a :" command line) and then using it just like the "C-a c" command. In addition, new windows can be created by running a command like: screen emacs prog.c from a shell prompt within a previously created window. 
This will not run another copy of screen, but will instead supply the command name and its arguments to the window manager (specified in the $STY environment variable) which will use it to create the new window. The above example would start the emacs editor (editing prog.c) and switch to its window. Note that you cannot transport environment variables from the invoking shell to the application (emacs in this case), because it is forked from the parent screen process, not from the invoking shell. If "/etc/utmp" is writable by screen, an appropriate record will be written to this file for each window, and removed when the window is terminated. This is useful for working with "talk", "script", "shutdown", "rsend", "sccs" and other similar programs that use the utmp file to determine who you are. As long as screen is active on your terminal, the terminal's own record is removed from the utmp file. See also "C-a L". GETTING STARTED top Before you begin to use screen you'll need to make sure you have correctly selected your terminal type, just as you would for any other termcap/terminfo program. (You can do this by using tset for example.) If you're impatient and want to get started without doing a lot more reading, you should remember this one command: "C-a ?". Typing these two characters will display a list of the available screen commands and their bindings. Each keystroke is discussed in the section "DEFAULT KEY BINDINGS". The manual section "CUSTOMIZATION" deals with the contents of your .screenrc. If your terminal is a "true" auto-margin terminal (it doesn't allow the last position on the screen to be updated without scrolling the screen) consider using a version of your terminal's termcap that has automatic margins turned off. This will ensure an accurate and optimal update of the screen in all circumstances. Most terminals nowadays have "magic" margins (automatic margins plus usable last column). This is the VT100 style type and perfectly suited for screen. 
If all you've got is a "true" auto-margin terminal screen will be content to use it, but updating a character put into the last position on the screen may not be possible until the screen scrolls or the character is moved into a safe position in some other way. This delay can be shortened by using a terminal with insert-character capability. COMMAND-LINE OPTIONS top Screen has the following command-line options: -a include all capabilities (with some minor exceptions) in each window's termcap, even if screen must redraw parts of the display in order to implement a function. -A Adapt the sizes of all windows to the size of the current terminal. By default, screen tries to restore its old window sizes when attaching to resizable terminals (those with "WS" in its description, e.g. suncmd or some xterm). -c file override the default configuration file from "$HOME/.screenrc" to file. -d|-D [pid.tty.host] does not start screen, but detaches the elsewhere running screen session. It has the same effect as typing "C-a d" from screen's controlling terminal. -D is the equivalent to the power detach key. If no session can be detached, this option is ignored. In combination with the -r/-R option more powerful effects can be achieved: -d -r Reattach a session and if necessary detach it first. -d -R Reattach a session and if necessary detach or even create it first. -d -RR Reattach a session and if necessary detach or create it. Use the first session if more than one session is available. -D -r Reattach a session. If necessary detach and logout remotely first. -D -R Attach here and now. In detail this means: If a session is running, then reattach. If necessary detach and logout remotely first. If it was not running create it and notify the user. This is the author's favorite. -D -RR Attach here and now. Whatever that means, just do it. Note: It is always a good idea to check the status of your sessions by means of "screen -list". 
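The note above recommends checking session status with "screen -list". A minimal sketch of picking out detached sessions from such a listing, e.g. before scripting "screen -r". The sample text below is a hypothetical transcript, not real command output:

```shell
# Filter a "screen -ls"-style listing for detached sessions.
# screen_ls_sample is hypothetical sample output for illustration.
screen_ls_sample='There are screens on:
        12345.build    (Detached)
        12346.deploy   (Attached)
2 Sockets in /run/screen/S-user.'
printf '%s\n' "$screen_ls_sample" | awk '/\(Detached\)/ { print $1 }'
# prints: 12345.build
```

On a live system the same filter could be fed from `screen -ls` directly; the exact socket-directory line varies by installation.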
-e xy specifies the command character to be x and the character generating a literal command character to y (when typed after the command character). The default is "C-a" and `a', which can be specified as "-e^Aa". When creating a screen session, this option sets the default command character. In a multiuser session all users added will start off with this command character. But when attaching to an already running session, this option changes only the command character of the attaching user. This option is equivalent to either the commands "defescape" or "escape" respectively. -f, -fn, and -fa turns flow-control on, off, or "automatic switching mode". This can also be defined through the "defflow" .screenrc command. -h num Specifies the history scrollback buffer to be num lines high. -i will cause the interrupt key (usually C-c) to interrupt the display immediately when flow-control is on. See the "defflow" .screenrc command for details. The use of this option is discouraged. -l and -ln turns login mode on or off (for /etc/utmp updating). This can also be defined through the "deflogin" .screenrc command. -ls [match] -list [match] does not start screen, but prints a list of pid.tty.host strings identifying your screen sessions. Sessions marked `detached' can be resumed with "screen -r". Those marked `attached' are running and have a controlling terminal. If the session runs in multiuser mode, it is marked `multi'. Sessions marked as `unreachable' either live on a different host or are `dead'. An unreachable session is considered dead, when its name matches either the name of the local host, or the specified parameter, if any. See the -r flag for a description how to construct matches. Sessions marked as `dead' should be thoroughly checked and removed. Ask your system administrator if you are not sure. Remove sessions with the -wipe option. -L tells screen to turn on automatic output logging for the windows. -Logfile file By default logfile name is "screenlog.0". 
A different logfile name can be set with the "-Logfile" option.

-m causes screen to ignore the $STY environment variable. With "screen -m" creation of a new session is enforced, regardless of whether screen is called from within another screen session or not. This flag has a special meaning in connection with the `-d' option:

-d -m Start screen in "detached" mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts.

-D -m This also starts screen in "detached" mode, but doesn't fork a new process. The command exits if the session terminates.

-O selects an optimal output mode for your terminal rather than true VT100 emulation (only affects auto-margin terminals without `LP'). This can also be set in your .screenrc by specifying `OP' in a "termcap" command.

-p number_or_name|-|=|+ Preselect a window. This is useful when you want to reattach to a specific window or you want to send a command via the "-X" option to a specific window. As with screen's select command, "-" selects the blank window. As a special case for reattach, "=" brings up the windowlist on the blank window, while a "+" will create a new window. The command will not be executed if the specified window could not be found.

-q Suppress printing of error messages. In combination with "-ls" the exit value is as follows: 9 indicates a directory without sessions. 10 indicates a directory with running but not attachable sessions. 11 (or more) indicates 1 (or more) usable sessions. In combination with "-r" the exit value is as follows: 10 indicates that there is no session to resume. 12 (or more) indicates that there are 2 (or more) sessions to resume and you should specify which one to choose. In all other cases "-q" has no effect.

-Q Some commands can now be queried from a remote session using this flag, e.g. "screen -Q windows". The commands will send the response to the stdout of the querying process.
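A common pattern for startup scripts is to combine "-d -m" with "-p" and "-X", sketched below (the session name "build" and the echoed text are arbitrary; $'...' is bash quoting for the trailing newline):

```shell
# Start a detached session with a shell in window 0.
screen -d -m -S build
sleep 1   # give the window a moment to come up

# Type a command into window 0 of that session ("stuff" sends keystrokes).
screen -S build -p 0 -X stuff $'echo hello from screen\n'

# Tear the session down again.
screen -S build -X quit
```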
If there was an error in the command, then the querying process will exit with a non-zero status. The commands that can be queried now are: echo info lastmsg number select time title windows -r [pid.tty.host] -r sessionowner/[pid.tty.host] resumes a detached screen session. No other options (except combinations with -d/-D) may be specified, though an optional prefix of [pid.]tty.host may be needed to distinguish between multiple detached screen sessions. The second form is used to connect to another user's screen session which runs in multiuser mode. This indicates that screen should look for sessions in another user's directory. This requires setuid-root. -R resumes screen only when it's unambiguous which one to attach, usually when only one screen is detached. Otherwise lists available sessions. -RR attempts to resume the first detached screen session it finds. If successful, all other command-line options are ignored. If no detached session exists, starts a new session using the specified options, just as if -R had not been specified. The option is set by default if screen is run as a login-shell (actually screen uses "-xRR" in that case). For combinations with the -d/-D option see there. -s program sets the default shell to the program specified, instead of the value in the environment variable $SHELL (or "/bin/sh" if not defined). This can also be defined through the "shell" .screenrc command. See also there. -S sessionname When creating a new session, this option can be used to specify a meaningful name for the session. This name identifies the session for "screen -list" and "screen -r" actions. It substitutes the default [tty.host] suffix. -t name sets the title (a.k.a.) for the default shell or specified program. See also the "shelltitle" .screenrc command. -T term Set the $TERM environment variable using the specified term as opposed to the default setting of screen. -U Run screen in UTF-8 mode. 
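The "-q -ls" exit values described above can be used from scripts to probe for sessions without parsing any output; a minimal sketch:

```shell
# Query quietly; the exit status encodes the session state:
#   9 = no sessions, 10 = unattachable sessions, 11+ = usable sessions.
screen -q -ls
status=$?
echo "screen -q -ls exit status: $status"
```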
This option tells screen that your terminal sends and understands UTF-8 encoded characters. It also sets the default encoding for new windows to `utf8'. -v Print version number. -wipe [match] does the same as "screen -ls", but removes destroyed sessions instead of marking them as `dead'. An unreachable session is considered dead, when its name matches either the name of the local host, or the explicitly given parameter, if any. See the -r flag for a description how to construct matches. -x Attach to a not detached screen session. (Multi display mode). Screen refuses to attach from within itself. But when cascading multiple screens, loops are not detected; take care. -X Send the specified command to a running screen session. You may use the -S option to specify the screen session if you have several screen sessions running. You can use the -d or -r option to tell screen to look only for attached or detached screen sessions. Note that this command doesn't work if the session is password protected. -4 Resolve hostnames only to IPv4 addresses. -6 Resolve hostnames only to IPv6 addresses. DEFAULT KEY BINDINGS top As mentioned, each screen command consists of a "C-a" followed by one other character. For your convenience, all commands that are bound to lower-case letters are also bound to their control character counterparts (with the exception of "C-a a"; see below), thus, "C-a c" as well as "C-a C-c" can be used to create a window. See section "CUSTOMIZATION" for a description of the command. The following table shows the default key bindings. The trailing commas in boxes with multiple keystroke entries are separators, not part of the bindings. C-a ' (select) Prompt for a window name or number to switch to. C-a " (windowlist -b) Present a list of all windows for selection. C-a digit (select 0-9) Switch to window number 0 - 9 C-a - (select -) Switch to window number 0 - 9, or to the blank window. C-a tab (focus) Switch the input focus to the next region. 
See also split, remove, only.
C-a C-a (other) Toggle to the window displayed previously. Note that this binding defaults to the command character typed twice, unless overridden. For instance, if you use the option "-e]x", this command becomes "]]".
C-a a (meta) Send the command character (C-a) to window. See escape command.
C-a A (title) Allow the user to enter a name for the current window.
C-a b, C-a C-b (break) Send a break to window.
C-a B (pow_break) Reopen the terminal line and send a break.
C-a c, C-a C-c (screen) Create a new window with a shell and switch to that window.
C-a C (clear) Clear the screen.
C-a d, C-a C-d (detach) Detach screen from this terminal.
C-a D D (pow_detach) Detach and logout.
C-a f, C-a C-f (flow) Toggle flow on, off or auto.
C-a F (fit) Resize the window to the current region size.
C-a C-g (vbell) Toggles screen's visual bell mode.
C-a h (hardcopy) Write a hardcopy of the current window to the file "hardcopy.n".
C-a H (log) Begins/ends logging of the current window to the file "screenlog.n".
C-a i, C-a C-i (info) Show info about this window.
C-a k, C-a C-k (kill) Destroy current window.
C-a l, C-a C-l (redisplay) Fully refresh current window.
C-a L (login) Toggle this window's login slot. Available only if screen is configured to update the utmp database.
C-a m, C-a C-m (lastmsg) Repeat the last message displayed in the message line.
C-a M (monitor) Toggles monitoring of the current window.
C-a space, C-a n, C-a C-n (next) Switch to the next window.
C-a N (number) Show the number (and title) of the current window.
C-a backspace, C-a C-h, C-a p, C-a C-p (prev) Switch to the previous window (opposite of C-a n).
C-a q, C-a C-q (xon) Send a control-q to the current window.
C-a Q (only) Delete all regions but the current one. See also split, remove, focus.
C-a r, C-a C-r (wrap) Toggle the current window's line-wrap setting (turn the current window's automatic margins on and off).
C-a s, C-a C-s (xoff) Send a control-s to the current window.
C-a S (split) Split the current region horizontally into two new ones. See also only, remove, focus.
C-a t, C-a C-t (time) Show system information.
C-a u, C-a C-u (parent) Switch to the parent window.
C-a v (version) Display the version and compilation date.
C-a C-v (digraph) Enter digraph.
C-a w, C-a C-w (windows) Show a list of windows.
C-a W (width) Toggle 80/132 columns.
C-a x or C-a C-x (lockscreen) Lock this terminal.
C-a X (remove) Kill the current region. See also split, only, focus.
C-a z, C-a C-z (suspend) Suspend screen. Your system must support BSD-style job-control.
C-a Z (reset) Reset the virtual terminal to its "power-on" values.
C-a . (dumptermcap) Write out a ".termcap" file.
C-a ? (help) Show key bindings.
C-a \ (quit) Kill all windows and terminate screen.
C-a : (colon) Enter command line mode.
C-a [, C-a C-[, C-a esc (copy) Enter copy/scrollback mode.
C-a C-], C-a ] (paste .) Write the contents of the paste buffer to the stdin queue of the current window.
C-a {, C-a } (history) Copy and paste a previous (command) line.
C-a > (writebuf) Write paste buffer to a file.
C-a < (readbuf) Reads the screen-exchange file into the paste buffer.
C-a = (removebuf) Removes the file used by C-a < and C-a >.
C-a , (license) Shows where screen comes from, where it went to and why you can use it.
C-a _ (silence) Start/stop monitoring the current window for inactivity.
C-a | (split -v) Split the current region vertically into two new ones.
C-a * (displays) Show a listing of all currently attached displays.
CUSTOMIZATION top The "socket directory" defaults either to $HOME/.screen or simply to /tmp/screens or preferably to /usr/local/screens chosen at compile-time. If screen is installed setuid-root, then the administrator should compile screen with an adequate (not NFS mounted) socket directory.
If screen is not running setuid-root, the user can specify any mode 700 directory in the environment variable $SCREENDIR. When screen is invoked, it executes initialization commands from the files "/usr/local/etc/screenrc" and ".screenrc" in the user's home directory. These are the "programmer's defaults" that can be overridden in the following ways: for the global screenrc file screen searches for the environment variable $SYSTEM_SCREENRC (this override feature may be disabled at compile-time). The user specific screenrc file is searched in $SCREENRC, then $HOME/.screenrc. The command line option -c takes precedence over the above user screenrc files. Commands in these files are used to set options, bind functions to keys, and to automatically establish one or more windows at the beginning of your screen session. Commands are listed one per line, with empty lines being ignored. A command's arguments are separated by tabs or spaces, and may be surrounded by single or double quotes. A `#' turns the rest of the line into a comment, except in quotes. Unintelligible lines are warned about and ignored. Commands may contain references to environment variables. The syntax is the shell-like "$VAR " or "${VAR}". Note that this causes incompatibility with previous screen versions, as now the '$'-character has to be protected with '\' if no variable substitution shall be performed. A string in single- quotes is also protected from variable substitution. Two configuration files are shipped as examples with your screen distribution: "etc/screenrc" and "etc/etcscreenrc". They contain a number of useful examples for various commands. Customization can also be done 'on-line'. To enter the command mode type `C-a :'. Note that commands starting with "def" change default values, while others change current settings. The following commands are available: acladd usernames [crypted-pw] addacl usernames Enable users to fully access this screen session. 
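A minimal ~/.screenrc illustrating these rules (all values are arbitrary examples; note that `#' starts a comment outside quotes and that `$VAR' is substituted unless escaped or single-quoted):

```screenrc
# One command per line; '#' starts a comment outside quotes.
startup_message off        # skip the copyright page
defscrollback 5000         # "def..." commands set defaults for new windows
hardstatus alwayslastline "%H %n %t"
# Environment variables are substituted, shell-style:
chdir "$HOME/projects"
screen -t editor 1         # open window 1 titled "editor" at startup
```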
Usernames can be one user or a comma-separated list of users. This command enables the named users to attach to the screen session and performs the equivalent of `aclchg usernames +rwx "#?"'. To add a user with restricted access, use the `aclchg' command below. If an optional second parameter is supplied, it should be a crypted password for the named user(s). `Addacl' is a synonym to `acladd'. Multi user mode only.

aclchg usernames permbits list
chacl usernames permbits list Change permissions for a comma-separated list of users. Permission bits are represented as `r', `w' and `x'. Prefixing `+' grants the permission, `-' removes it. The third parameter is a comma-separated list of commands and/or windows (specified either by number or title). The special list `#' refers to all windows, `?' to all commands. If usernames consists of a single `*', all known users are affected. A command can be executed when the user has the `x' bit for it. The user can type input to a window when he has its `w' bit set and no other user obtains a writelock for this window. Other bits are currently ignored. To withdraw the writelock from another user in window 2: `aclchg username -w+w 2'. To allow read-only access to the session: `aclchg username -w "#"'. As soon as a user's name is known to screen he can attach to the session and (per default) has full permissions for all commands and windows. Execution permission for the acl commands, `at' and others should also be removed or the user may be able to regain write permission. Rights of the special username nobody cannot be changed (see the "su" command). `Chacl' is a synonym to `aclchg'. Multi user mode only.

acldel username Remove a user from screen's access control list. If currently attached, all the user's displays are detached from the session. He cannot attach again. Multi user mode only.

aclgrp username [groupname] Creates groups of users that share common access rights. The name of the group is the username of the group leader.
Each member of the group inherits the permissions that are granted to the group leader. That means, if a user fails an access check, another check is made for the group leader. A user is removed from all groups when the special value "none" is used as groupname. If the second parameter is omitted, all groups the user is in are listed.

aclumask [[ users ] +bits | [ users ] -bits... ]
umask [[ users ] +bits | [ users ] -bits... ] This specifies the access other users have to windows that will be created by the caller of the command. Users may be none, one, or a comma-separated list of known usernames. If no users are specified, a list of all currently known users is assumed. Bits is any combination of access control bits as defined with the "aclchg" command. The special username "?" predefines the access that not yet known users will be granted to any window initially. The special username "??" predefines the access that not yet known users are granted to any command. Rights of the special username nobody cannot be changed (see the "su" command). `Umask' is a synonym to `aclumask'.

activity message When any activity occurs in a background window that is being monitored, screen displays a notification in the message line. The notification message can be re-defined by means of the "activity" command. Each occurrence of `%' in message is replaced by the number of the window in which activity has occurred, and each occurrence of `^G' is replaced by the definition for bell in your termcap (usually an audible bell). The default message is 'Activity in window %n'. Note that monitoring is off for all windows by default, but can be altered by use of the "monitor" command (C-a M).

allpartial on|off If set to on, only the current cursor line is refreshed on window change. This affects all windows and is useful for slow terminal lines. The previous setting of full/partial refresh for each window is restored with "allpartial off".
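The multiuser access-control commands above can be combined in a .screenrc or typed at "C-a :". A small sketch, with hypothetical usernames alice and bob, granting bob the read-only access described under "aclchg":

```screenrc
# Multiuser session setup (usernames are hypothetical examples).
multiuser on
acladd alice               # full access, like: aclchg alice +rwx "#?"
aclchg bob -w "#"          # bob may view all windows but not type
aclchg bob -x at           # keep bob from regaining write access via "at"
```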
This is a global flag that immediately takes effect on all windows, overriding the "partial" settings. It does not change the default redraw behavior of newly created windows.

altscreen on|off If set to on, "alternate screen" support is enabled in virtual terminals, just like in xterm. Initial setting is `off'.

at [identifier][#|*|%] command [args ... ] Execute a command at other displays or windows as if it had been entered there. "At" changes the context (the `current window' or `current display' setting) of the command. If the first parameter describes a non-unique context, the command will be executed multiple times. If the first parameter is of the form `identifier*' then identifier is matched against user names. The command is executed once for each display of the selected user(s). If the first parameter is of the form `identifier%' identifier is matched against displays. Displays are named after the ttys they attach. The prefix `/dev/' or `/dev/tty' may be omitted from the identifier. If identifier has a `#' or nothing appended it is matched against window numbers and titles. Omitting an identifier in front of the `#', `*' or `%'-character selects all users, displays or windows because a prefix-match is performed. Note that on the affected display(s) a short message will describe what happened. Permission is checked for the initiator of the "at" command, not for the owners of the affected display(s). Note that the '#' character works as a comment introducer when it is preceded by whitespace. This can be escaped by prefixing a '\'. Caveat: When matching against windows, the command is executed at least once per window. Commands that change the internal arrangement of windows (like "other") may be called again. In shared windows the command will be repeated for each attached display. Beware, when issuing toggle commands like "login"! Some commands (e.g.
"process") require that a display is associated with the target windows. These commands may not work correctly under "at" looping over windows. attrcolor attrib [attribute/color-modifier] This command can be used to highlight attributes by changing the color of the text. If the attribute attrib is in use, the specified attribute/color modifier is also applied. If no modifier is given, the current one is deleted. See the "STRING ESCAPES" chapter for the syntax of the modifier. Screen understands two pseudo-attributes, "i" stands for high-intensity foreground color and "I" for high-intensity background color. Examples: attrcolor b "R" Change the color to bright red if bold text is to be printed. attrcolor u "-u b" Use blue text instead of underline. attrcolor b ".I" Use bright colors for bold text. Most terminal emulators do this already. attrcolor i "+b" Make bright colored text also bold. autodetach on|off Sets whether screen will automatically detach upon hangup, which saves all your running programs until they are resumed with a screen -r command. When turned off, a hangup signal will terminate screen and all the processes it contains. Autodetach is on by default. autonuke on|off Sets whether a clear screen sequence should nuke all the output that has not been written to the terminal. See also "obuflimit". backtick id lifespan autorefresh cmd args... backtick id Program the backtick command with the numerical id id. The output of such a command is used for substitution of the "%`" string escape. The specified lifespan is the number of seconds the output is considered valid. After this time, the command is run again if a corresponding string escape is encountered. The autorefresh parameter triggers an automatic refresh for caption and hardstatus strings after the specified number of seconds. Only the last line of output is used for substitution. 
If both the lifespan and the autorefresh parameters are zero, the backtick program is expected to stay in the background and generate output once in a while. In this case, the command is executed right away and screen stores the last line of output. If a new line gets printed screen will automatically refresh the hardstatus or the captions. The second form of the command deletes the backtick command with the numerical id id. bce [on|off] Change background-color-erase setting. If "bce" is set to on, all characters cleared by an erase/insert/scroll/clear operation will be displayed in the current background color. Otherwise the default background color is used. bell_msg [message] When a bell character is sent to a background window, screen displays a notification in the message line. The notification message can be re-defined by this command. Each occurrence of `%' in message is replaced by the number of the window to which a bell has been sent, and each occurrence of `^G' is replaced by the definition for bell in your termcap (usually an audible bell). The default message is 'Bell in window %n' An empty message can be supplied to the "bell_msg" command to suppress output of a message line (bell_msg ""). Without parameter, the current message is shown. bind [class] key [command [args]] Bind a command to a key. By default, most of the commands provided by screen are bound to one or more keys as indicated in the "DEFAULT KEY BINDINGS" section, e.g. the command to create a new window is bound to "C-c" and "c". The "bind" command can be used to redefine the key bindings and to define new bindings. The key argument is either a single character, a two-character sequence of the form "^x" (meaning "C-x"), a backslash followed by an octal number (specifying the ASCII code of the character), or a backslash followed by a second character, such as "\^" or "\\". The argument can also be quoted, if you like. 
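As a sketch of the two backtick parameters, the following .screenrc fragment (values arbitrary) feeds the last output line of uptime into the hardstatus line via the "%`" string escape:

```screenrc
# Run "uptime" at most every 30 s; refresh caption/hardstatus every 15 s.
backtick 1 30 15 uptime
# "%1`" substitutes the last output line of backtick command 1.
hardstatus alwayslastline "%H | %1`"
```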
If no further argument is given, any previously established binding for this key is removed. The command argument can be any command listed in this section. If a command class is specified via the "-c" option, the key is bound for the specified class. Use the "command" command to activate a class. Command classes can be used to create multiple command keys or multi-character bindings. Some examples:

bind ' ' windows
bind ^k
bind k
bind K kill
bind ^f screen telnet foobar
bind \033 screen -ln -t root -h 1000 9 su

would bind the space key to the command that displays a list of windows (so that the command usually invoked by "C-a C-w" would also be available as "C-a space"). The next three lines remove the default kill binding from "C-a C-k" and "C-a k"; "C-a K" is then bound to the kill command. Then "C-f" is bound to the command "create a window with a TELNET connection to foobar", and "escape" is bound to the command that creates a non-login window with title "root" in slot #9, with a superuser shell and a scrollback buffer of 1000 lines.

bind -c demo1 0 select 10
bind -c demo1 1 select 11
bind -c demo1 2 select 12
bindkey "^B" command -c demo1

makes "C-b 0" select window 10, "C-b 1" window 11, etc.

bind -c demo2 0 select 10
bind -c demo2 1 select 11
bind -c demo2 2 select 12
bind - command -c demo2

makes "C-a - 0" select window 10, "C-a - 1" window 11, etc.

bindkey [-d] [-m] [-a] [[-k|-t] string [cmd-args]] This command manages screen's input translation tables. Every entry in one of the tables tells screen how to react if a certain sequence of characters is encountered. There are three tables: one that should contain actions programmed by the user, one for the default actions used for terminal emulation and one for screen's copy mode to do cursor movement. See section "INPUT TRANSLATION" for a list of default key bindings.
If the -d option is given, bindkey modifies the default table, -m changes the copy mode table and with neither option the user table is selected. The argument string is the sequence of characters to which an action is bound. This can either be a fixed string or a termcap keyboard capability name (selectable with the -k option). Some keys on a VT100 terminal can send a different string if application mode is turned on (e.g. the cursor keys). Such keys have two entries in the translation table. You can select the application mode entry by specifying the -a option. The -t option tells screen not to do inter-character timing. One cannot turn off the timing if a termcap capability is used. Cmd can be any of screen's commands with an arbitrary number of args. If cmd is omitted the key-binding is removed from the table. Here are some examples of keyboard bindings:

bindkey -d
Show all of the default key bindings. The application mode entries are marked with [A].

bindkey -k k1 select 1
Make the "F1" key switch to window one.

bindkey -t foo stuff barfoo
Make "foo" an abbreviation of the word "barfoo". Timeout is disabled so that users can type slowly.

bindkey "\024" mapdefault
This key-binding makes "^T" an escape character for key-bindings. If you did the above "stuff barfoo" binding, you can enter the word "foo" by typing "^Tfoo". If you want to insert a "^T" you have to press the key twice (i.e., escape the escape binding).

bindkey -k F1 command
Make the F11 (not F1!) key an alternative screen escape (besides ^A).

break [duration] Send a break signal for duration*0.25 seconds to this window. For non-POSIX systems the time interval may be rounded up to full seconds. Most useful if a character device is attached to the window rather than a shell process (See also chapter "WINDOW TYPES"). The maximum duration of a break signal is limited to 15 seconds.

blanker Activate the screen blanker. First the screen is cleared.
If no blanker program is defined, the cursor is turned off; otherwise, the program is started and its output is written to the screen. The screen blanker is killed with the first keypress; the read key is discarded. This command is normally used together with the "idle" command.

blankerprg [program-args] Defines a blanker program. Disables the blanker program if an empty argument is given. Shows the currently set blanker program if no arguments are given.

breaktype [tcsendbreak|TIOCSBRK|TCSBRK] Choose one of the available methods of generating a break signal for terminal devices. This command should affect the current window only, but it still behaves identically to "defbreaktype". This will be changed in the future. Calling "breaktype" with no parameter displays the break method for the current window.

bufferfile [exchange-file] Change the filename used for reading and writing with the paste buffer. If the optional argument to the "bufferfile" command is omitted, the default setting ("/tmp/screen-exchange") is reactivated. The following example will paste the system's password file into the screen window (using the paste buffer, where a copy remains):

C-a : bufferfile /etc/passwd
C-a <
C-a ]
C-a : bufferfile

bumpleft Swaps window with previous one on window list.

bumpright Swaps window with next one on window list.

c1 [on|off] Change c1 code processing. "C1 on" tells screen to treat the input characters between 128 and 159 as control functions. Such an 8-bit code is normally the same as ESC followed by the corresponding 7-bit code. The default setting is to process c1 codes and can be changed with the "defc1" command. Users with fonts that have usable characters in the c1 positions may want to turn this off.

caption [ top | bottom ] always|splitonly [string]
caption string [string] This command controls the display of the window captions. Normally a caption is only used if more than one window is shown on the display (split screen mode).
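The "blanker", "blankerprg" and "idle" commands are typically wired up together, as in this .screenrc sketch (the 300-second timeout and the cmatrix program are arbitrary examples; cmatrix may not be installed on your system):

```screenrc
# Run the blanker after 300 seconds of inactivity.
idle 300 blanker
# Use an external program as blanker; without this line the screen is
# simply cleared and the cursor turned off.
blankerprg cmatrix
# Disable the blanker program again with an empty argument:
#   blankerprg
```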
But if the type is set to always, screen shows a caption even if only one window is displayed. The default is splitonly. The second form changes the text used for the caption. You can use all escapes from the "STRING ESCAPES" chapter. Screen uses a default of `%3n %t'. You can mix both forms by providing a string as an additional argument. You can have the caption displayed either at the top or bottom of the window. The default is bottom.

charset set Change the current character set slot designation and charset mapping. The first four characters of set are treated as charset designators while the fifth and sixth character must be in range '0' to '3' and set the GL/GR charset mapping. On every position a '.' may be used to indicate that the corresponding charset/mapping should not be changed (set is padded to six characters internally by appending '.' chars). New windows have "BBBB02" as default charset, unless an "encoding" command is active. The current setting can be viewed with the "info" command.

chdir [directory] Change the current directory of screen to the specified directory or, if called without an argument, to your home directory (the value of the environment variable $HOME). All windows that are created by means of the "screen" command from within ".screenrc" or by means of "C-a : screen ..." or "C-a c" use this as their default directory. Without a chdir command, this would be the directory from which screen was invoked. Hardcopy and log files are always written to the window's default directory, not the current directory of the process running in the window. You can use this command multiple times in your .screenrc to start various windows in different default directories, but the last chdir value will affect all the windows you create interactively.

cjkwidth [ on | off ] Treat ambiguous width characters as full/half width.

clear Clears the current window and saves its image to the scrollback buffer.
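For instance, to show a caption permanently rather than only in split-screen mode, the documented default string `%3n %t' can be extended with a clock (escapes per the "STRING ESCAPES" chapter):

```screenrc
# Caption on every display, at the bottom (the default position):
# window number, title, then the right-aligned current time.
caption always "%3n %t%=%c"
```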
collapse Reorders windows on the window list, removing number gaps between them.

colon [prefix] Allows you to enter ".screenrc" command lines. Useful for on-the-fly modification of key bindings, specific window creation and changing settings. Note that the "set" keyword no longer exists! Usually commands affect the current window rather than default settings for future windows. Change defaults with commands starting with 'def...'. If you consider this as the `Ex command mode' of screen, you may regard "C-a esc" (copy mode) as its `Vi command mode'.

command [-c class] This command has the same effect as typing the screen escape character (^A). It is probably only useful for key bindings. If the "-c" option is given, select the specified command class. See also "bind" and "bindkey".

compacthist [on|off] This tells screen whether to suppress trailing blank lines when scrolling up text into the history buffer.

console [on|off] Grabs or un-grabs the machine's console output to a window. Note: Only the owner of /dev/console can grab the console output. This command is only available if the machine supports the ioctl TIOCCONS.

copy Enter copy/scrollback mode. This allows you to copy text from the current window and its history into the paste buffer. In this mode a vi-like `full screen editor' is active. The editor's movement keys are:

h, C-h, left arrow move the cursor left.
j, C-n, down arrow move the cursor down.
k, C-p, up arrow move the cursor up.
l ('el'), right arrow move the cursor right.
0 (zero), C-a move to the leftmost column.
+ and - position one line up and down.
H, M and L move the cursor to the leftmost column of the top, center or bottom line of the window.
| moves to the specified absolute column.
g or home moves to the beginning of the buffer.
G or end moves to the specified absolute line (default: end of buffer).
% jumps to the specified percentage of the buffer.
^ or $ move to the first or last non-whitespace character on the line.
w, b, and e move the cursor word by word.
B, E move the cursor WORD by WORD (as in vi).
f/F, t/T move the cursor forward/backward to the next occurrence of the target. (e.g., '3fy' will move the cursor to the 3rd 'y' to the right.)
; and , repeat the last f/F/t/T command in the same/opposite direction.
C-e and C-y scroll the display up/down by one line while preserving the cursor position.
C-u and C-d scroll the display up/down by the specified amount of lines while preserving the cursor position. (Default: half screen-full).
C-b and C-f scroll the display up/down a full screen.

Note: Emacs style movement keys can be customized by a .screenrc command. (E.g. markkeys "h=^B:l=^F:$=^E") There is no simple method for a full emacs-style keymap, as this involves multi-character codes.

Some keys are defined to do mark and replace operations. The copy range is specified by setting two marks. The text between these marks will be highlighted. Press:

space or enter to set the first or second mark respectively. If mousetrack is set to `on', marks can also be set using left mouse click.
Y and y are used to mark one whole line or to mark from start of line.
W marks exactly one word.

Any of these commands can be prefixed with a repeat count by pressing digits 0..9. Example: "C-a C-[ H 10 j 5 Y" will copy lines 11 to 15 into the paste buffer.

The following search keys are defined:

/ Vi-like search forward.
? Vi-like search backward.
C-a s Emacs style incremental search forward.
C-r Emacs style reverse i-search.
n Find next search pattern.
N Find previous search pattern.

There are however some keys that act differently than in vi. Vi does not allow one to yank rectangular blocks of text, but screen does. Press:

c or C to set the left or right margin respectively. If no repeat count is given, both default to the current cursor position.

Example: Try this on a rather full text screen: "C-a [ M 20 l SPACE c 10 l 5 j C SPACE".
    This moves one to the middle line of the screen, moves in 20 columns left, marks the beginning of the paste buffer, sets the left column, moves 5 columns down, sets the right column, and then marks the end of the paste buffer. Now try: "C-a [ M 20 l SPACE 10 l 5 j SPACE" and notice the difference in the amount of text copied.

    J    joins lines. It toggles between 4 modes: lines separated by a newline character (012), lines glued seamlessly, lines separated by a single whitespace, and comma-separated lines. Note that you can prepend the newline character with a carriage return character by issuing a "crlf on".

    v or V    is for all the vi users with ":set numbers" - it toggles the left margin between column 9 and 1.

    a    Pressing a before the final space key toggles append mode: the contents of the paste buffer will not be overwritten, but appended to.

    A    toggles append mode and sets a (second) mark.

    >    sets the (second) mark and writes the contents of the paste buffer to the screen-exchange file (/tmp/screen-exchange per default) once copy-mode is finished. This example demonstrates how to dump the whole scrollback buffer to that file: "C-A [ g SPACE G $ >".

    C-g    gives information about the current line and column.

    x or o    exchanges the first mark and the current cursor position. You can use this to adjust an already placed mark.

    C-l ('el')    will redraw the screen.

    @    does nothing. Does not even exit copy mode.

    All keys not described here exit copy mode.

copy_reg [key]
    No longer exists, use "readreg" instead.

crlf [on|off]
    This affects the copying of text regions with the `C-a [' command. If it is set to `on', lines will be separated by the two-character sequence `CR' - `LF'. Otherwise (default) only `LF' is used. When no parameter is given, the state is toggled.

defc1 on|off
    Same as the c1 command except that the default setting for new windows is changed. Initial setting is `on'.
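The copy-mode behavior above can be adjusted from a .screenrc. A minimal sketch, combining the "crlf" command with the markkeys remapping mentioned in the copy-mode notes (the chosen bindings are illustrative, not defaults):

```
# Separate copied lines with CR-LF instead of bare LF (see "crlf")
crlf on
# Remap a few copy-mode keys to Emacs-style movement (see "markkeys")
markkeys h=^B:l=^F:$=^E
```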
defautonuke on|off
    Same as the autonuke command except that the default setting for new displays is changed. Initial setting is `off'. Note that you can use the special `AN' terminal capability if you want to have a dependency on the terminal type.

defbce on|off
    Same as the bce command except that the default setting for new windows is changed. Initial setting is `off'.

defbreaktype [tcsendbreak|TIOCSBRK|TCSBRK]
    Choose one of the available methods of generating a break signal for terminal devices. The preferred methods are tcsendbreak and TIOCSBRK. The third, TCSBRK, blocks the complete screen session for the duration of the break, but it may be the only way to generate long breaks. Tcsendbreak and TIOCSBRK may or may not produce long breaks with spikes (e.g. 4 per second). This is not only system-dependent; it also differs between serial board drivers. Calling "defbreaktype" with no parameter displays the current setting.

defcharset [set]
    Like the charset command except that the default setting for new windows is changed. Shows the current default if called without an argument.

defdynamictitle on|off
    Set the default behaviour for new windows regarding whether screen should change the window title when seeing the proper escape sequence. See also the "TITLES (naming windows)" section.

defescape xy
    Set the default command characters. This is equivalent to "escape" except that it is useful in multiuser sessions only. In a multiuser session "escape" changes the command character of the calling user, whereas "defescape" changes the default command characters for users that will be added later.

defflow on|off|auto [interrupt]
    Same as the flow command except that the default setting for new windows is changed. Initial setting is `auto'. Specifying "defflow auto interrupt" is the same as the command-line options -fa and -i.

defgr on|off
    Same as the gr command except that the default setting for new windows is changed. Initial setting is `off'.
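As a sketch, several of the def* commands above might be collected in a .screenrc like this (the values shown are illustrative choices, not recommendations):

```
# Defaults applied to windows/displays created after this point
defflow auto interrupt   # same effect as command-line options -fa and -i
defbce off               # background-color-erase off for new windows
# In a multiuser session: command character for users added later
defescape ^Bb
```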
defhstatus [status]
    The hardstatus line that all new windows will get is set to status. This command is useful to make the hardstatus of every window display the window number or title or the like. Status may contain the same directives as in the window messages, but the directive escape character is '^E' (octal 005) instead of '%'. This was done to make a misinterpretation of program-generated hardstatus lines impossible. If the parameter status is omitted, the current default string is displayed. Per default the hardstatus line of new windows is empty.

defencoding enc
    Same as the encoding command except that the default setting for new windows is changed. Initial setting is the encoding taken from the terminal.

deflog on|off
    Same as the log command except that the default setting for new windows is changed. Initial setting is `off'.

deflogin on|off
    Same as the login command except that the default setting for new windows is changed. This is initialized with `on' as distributed (see config.h.in).

defmode mode
    The mode of each newly allocated pseudo-tty is set to mode. Mode is an octal number. When no "defmode" command is given, mode 0622 is used.

defmonitor on|off
    Same as the monitor command except that the default setting for new windows is changed. Initial setting is `off'.

defmousetrack on|off
    Same as the mousetrack command except that the default setting for new windows is changed. Initial setting is `off'.

defnonblock on|off|numsecs
    Same as the nonblock command except that the default setting for displays is changed. Initial setting is `off'.

defobuflimit limit
    Same as the obuflimit command except that the default setting for new displays is changed. Initial setting is 256 bytes. Note that you can use the special 'OL' terminal capability if you want to have a dependency on the terminal type.

defscrollback num
    Same as the scrollback command except that the default setting for new windows is changed. Initial setting is 100.
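These per-window defaults are typically set near the top of a .screenrc so they apply to every window created afterwards. A hedged sketch (the numeric values are illustrative, not recommendations):

```
defscrollback 10000   # initial setting is only 100 lines
defmode 0620          # octal pty mode; screen's default is 0622
defobuflimit 512      # larger output buffer for a fast display
defmonitor off        # no activity monitoring by default
```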
defshell command
    Synonym to the shell .screenrc command. See there.

defsilence on|off
    Same as the silence command except that the default setting for new windows is changed. Initial setting is `off'.

defslowpaste msec
    Same as the slowpaste command except that the default setting for new windows is changed. Initial setting is 0 milliseconds, meaning `off'.

defutf8 on|off
    Same as the utf8 command except that the default setting for new windows is changed. Initial setting is `on' if screen was started with "-U", otherwise `off'.

defwrap on|off
    Same as the wrap command except that the default setting for new windows is changed. Initially line-wrap is on and can be toggled with the "wrap" command ("C-a r") or by means of "C-a : wrap on|off".

defwritelock on|off|auto
    Same as the writelock command except that the default setting for new windows is changed. Initially writelocks will be off.

detach [-h]
    Detach the screen session (disconnect it from the terminal and put it into the background). This returns you to the shell where you invoked screen. A detached screen can be resumed by invoking screen with the -r option (see also section "COMMAND-LINE OPTIONS"). The -h option tells screen to immediately close the connection to the terminal ("hangup").

dinfo
    Show what screen thinks about your terminal. Useful if you want to know why features like color or the alternate charset don't work.

displays
    Shows a tabular listing of all currently connected user front-ends (displays). This is most useful for multiuser sessions. The following keys can be used in the displays list:

    k, C-p, or up      Move up one line.
    j, C-n, or down    Move down one line.
    C-a or home        Move to the first line.
    C-e or end         Move to the last line.
    C-u or C-d         Move one half page up or down.
    C-b or C-f         Move one full page up or down.
    mouseclick         Move to the selected line. Available when "mousetrack" is set to on.
    space                  Refresh the list.
    d                      Detach that display.
    D                      Power detach that display.
    C-g, enter, or escape  Exit the list.

    The following is an example of what "displays" could look like:

    xterm 80x42 jnweiger@/dev/ttyp4    0(m11)  &rWx
    facit 80x24 mlschroe@/dev/ttyhf nb 11(tcsh) rwx
    xterm 80x42 jnhollma@/dev/ttyp5    0(m11)  &R.x
    (A)   (B)   (C)      (D)       (E) (F)(G)  (H)(I)

    The legend is as follows:
    (A) The terminal type known by screen for this display.
    (B) Displays geometry as width x height.
    (C) Username who is logged in at the display.
    (D) Device name of the display or the attached device.
    (E) Display is in blocking or nonblocking mode. The available modes are "nb", "NB", "Z<", "Z>", and "BL".
    (F) Number of the window.
    (G) Name/title of window.
    (H) Whether the window is shared.
    (I) Window permissions. Made up of three characters:

        Window permission indicators:
            1st character:  '-' no read,     'r' read
            2nd character:  '-' no write,    'w' write,  'W' own wlock
            3rd character:  '-' no execute,  'x' execute

        Indicators of permissions suppressed by a foreign wlock:
            1st character:  'R' read only
            2nd character:  '.' no write

    "displays" needs a region size of at least 10 characters wide and 5 characters high in order to display.

digraph [preset [unicode-value]]
    This command prompts the user for a digraph sequence. The next two characters typed are looked up in a builtin table and the resulting character is inserted in the input stream. For example, if the user enters 'a"', an a-umlaut will be inserted. If the first character entered is a 0 (zero), screen will treat the following characters (up to three) as an octal number instead. The optional argument preset is treated as user input, thus one can create an "umlaut" key. For example the command "bindkey ^K digraph '"'" enables the user to generate an a-umlaut by typing CTRL-K a. When a non-zero unicode-value is specified, a new digraph is created with the specified preset. The digraph is unset if a zero value is provided for the unicode-value.
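The "umlaut key" example above, as it would appear in a .screenrc (this is the binding from the text itself, shown here as a config fragment):

```
# Make CTRL-K a dead key: CTRL-K followed by 'a' inserts an a-umlaut,
# because the preset '"' plus the next typed character form the digraph.
bindkey ^K digraph '"'
```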
dumptermcap
    Write the termcap entry for the virtual terminal optimized for the currently active window to the file ".termcap" in the user's "$HOME/.screen" directory (or wherever screen stores its sockets; see the "FILES" section below). This termcap entry is identical to the value of the environment variable $TERMCAP that is set up by screen for each window. For terminfo based systems you will need to run a converter like captoinfo and then compile the entry with tic.

dynamictitle on|off
    Change behaviour for windows regarding whether screen should change the window title when seeing the proper escape sequence. See also the "TITLES (naming windows)" section.

echo [-n] message
    The echo command may be used to annoy screen users with a 'message of the day'. Typically installed in a global /local/etc/screenrc. The option "-n" may be used to suppress the line feed. See also "sleep". Echo is also useful for online checking of environment variables.

encoding enc [enc]
    Tell screen how to interpret the input/output. The first argument sets the encoding of the current window. Each window can emulate a different encoding. The optional second parameter overwrites the encoding of the connected terminal. It should never be needed as screen uses the locale setting to detect the encoding. There is also a way to select a terminal encoding depending on the terminal type by using the "KJ" termcap entry.

    Supported encodings are eucJP, SJIS, eucKR, eucCN, Big5, GBK, KOI8-R, KOI8-U, CP1251, UTF-8, ISO8859-2, ISO8859-3, ISO8859-4, ISO8859-5, ISO8859-6, ISO8859-7, ISO8859-8, ISO8859-9, ISO8859-10, ISO8859-15, jis.

    See also "defencoding", which changes the default setting of a new window.

escape xy
    Set the command character to x and the character generating a literal command character (by triggering the "meta" command) to y (similar to the -e option).
    Each argument is either a single character, a two-character sequence of the form "^x" (meaning "C-x"), a backslash followed by an octal number (specifying the ASCII code of the character), or a backslash followed by a second character, such as "\^" or "\\". The default is "^Aa".

eval command1 [command2 ...]
    Parses and executes each argument as a separate command.

exec [[fdpat] newcommand [args ...]]
    Run a unix subprocess (specified by an executable path newcommand and its optional arguments) in the current window. The flow of data between newcommand's stdin/stdout/stderr, the process originally started in the window (let us call it "application-process") and screen itself (window) is controlled by the file descriptor pattern fdpat. This pattern is basically a three-character sequence representing stdin, stdout and stderr of newcommand. A dot (.) connects the file descriptor to screen. An exclamation mark (!) causes the file descriptor to be connected to the application-process. A colon (:) combines both.

    User input will go to newcommand unless newcommand receives the application-process' output (fdpat's first character is `!' or `:') or a pipe symbol (|) is added (as a fourth character) to the end of fdpat.

    Invoking `exec' without arguments shows name and arguments of the currently running subprocess in this window. Only one subprocess at a time can be running in each window.

    When a subprocess is running the `kill' command will affect it instead of the window's process.

    Refer to the postscript file `doc/fdpat.ps' for a confusing illustration of all 21 possible combinations. Each drawing shows the digits 2, 1, 0 representing the three file descriptors of newcommand. The box marked `W' is the usual pty that has the application-process on its slave side. The box marked `P' is the secondary pty that now has screen at its master side.

    Abbreviations:

    Whitespace between the word `exec' and fdpat and the command can be omitted.
    Trailing dots and an fdpat consisting only of dots can be omitted. A simple `|' is synonymous with the pattern `!..|'; the word exec can be omitted here and can always be replaced by `!'.

    Examples:

    exec ... /bin/sh
    exec /bin/sh
    !/bin/sh
        Creates another shell in the same window, while the original shell is still running. Output of both shells is displayed and user input is sent to the new /bin/sh.

    exec !.. stty 19200
    exec ! stty 19200
    !!stty 19200
        Set the speed of the window's tty. If your stty command operates on stdout, then add another `!'.

    exec !..| less
    |less
        This adds a pager to the window output. The special character `|' is needed to give the user control over the pager although it gets its input from the window's process. This works because less listens on stderr (a behavior that screen would not expect without the `|') when its stdin is not a tty. Less versions newer than 177 fail miserably here; good old pg still works.

    !:sed -n s/.*Error.*/\007/p
        Sends window output to both the user and the sed command. The sed inserts an additional bell character (oct. 007) into the window output seen by screen. This will cause "Bell in window x" messages whenever the string "Error" appears in the window.

fit
    Change the window size to the size of the current region. This command is needed because screen doesn't adapt the window size automatically if the window is displayed more than once.

flow [on|off|auto]
    Sets the flow-control mode for this window. Without parameters it cycles the current window's flow-control setting from "automatic" to "on" to "off". See the discussion on "FLOW-CONTROL" later on in this document for full details and note that this is subject to change in future releases. Default is set by `defflow'.

focus [next|prev|up|down|left|right|top|bottom]
    Move the input focus to the next region. This is done in a cyclic way so that the top left region is selected after the bottom right one. If no option is given it defaults to `next'.
    The next region to be selected is determined by how the regions are layered. Normally, the next region in the same layer would be selected. However, if that next region contains one or more layers, the first region in the highest layer is selected first. If you are at the last region of the current layer, `next' will move the focus to the next region in the lower layer (if there is a lower layer). `Prev' cycles in the opposite order. See "split" for more information about layers.

    The rest of the options (`up', `down', `left', `right', `top', and `bottom') are more indifferent to layers. The option `up' will move the focus upward to the region that is touching the upper left corner of the current region. `Down' will move downward to the region that is touching the lower left corner of the current region. The option `left' will move the focus leftward to the region that is touching the upper left corner of the current region, while `right' will move rightward to the region that is touching the upper right corner of the current region. Moving left from a leftmost region or moving right from a rightmost region will result in no action.

    The option `top' will move the focus to the very first region in the upper left corner of the screen, and `bottom' will move to the region in the bottom right corner of the screen. Moving up from a topmost region or moving down from a bottommost region will result in no action.

    Useful bindings are (h, j, k, and l as in vi):

    bind h focus left
    bind j focus down
    bind k focus up
    bind l focus right
    bind t focus top
    bind b focus bottom

    Note that k is traditionally bound to the kill command.

focusminsize [ ( width|max|_ ) ( height|max|_ ) ]
    This forces any currently selected region to be automatically resized to at least a certain width and height. All other surrounding regions will be resized in order to accommodate. This constraint follows every time the "focus" command is used.
    The "resize" command can be used to increase either dimension of a region, but never below what is set with "focusminsize". The underscore `_' is a synonym for max. Setting a width and height of `0 0' (zero zero) will undo any constraints and allow for manual resizing. Without any parameters, the minimum width and height is shown.

gr [on|off]
    Turn GR charset switching on/off. Whenever screen sees an input character with the 8th bit set, it will use the charset stored in the GR slot and print the character with the 8th bit stripped. The default (see also "defgr") is not to process GR switching because otherwise the ISO8859-1 charset would not work.

group [grouptitle]
    Change or show the group the current window belongs to. Windows can be moved around between different groups by specifying the name of the destination group. Without specifying a group, the title of the current group is displayed.

hardcopy [-h] [file]
    Writes out the currently displayed image to the file file, or, if no filename is specified, to hardcopy.n in the default directory, where n is the number of the current window. This either appends to or overwrites the file if it exists. See below. If the option -h is specified, also dump the contents of the scrollback buffer.

hardcopy_append on|off
    If set to "on", screen will append to the "hardcopy.n" files created by the command "C-a h"; otherwise these files are overwritten each time. Default is `off'.

hardcopydir directory
    Defines a directory where hardcopy files will be placed. If unset, hardcopies are dumped in screen's current working directory.

hardstatus [on|off]
hardstatus [always]firstline|lastline|message|ignore [string]
hardstatus string [string]
    This command configures the use and emulation of the terminal's hardstatus line. The first form toggles whether screen will use the hardware status line to display messages. If the flag is set to `off', these messages are overlaid in reverse video mode at the display line. The default setting is `on'.
    The second form tells screen what to do if the terminal doesn't have a hardstatus line (i.e. the termcap/terminfo capabilities "hs", "ts", "fs" and "ds" are not set). When "firstline/lastline" is used, screen will reserve the first/last line of the display for the hardstatus. "message" uses screen's message mechanism and "ignore" tells screen never to display the hardstatus. If you prepend the word "always" to the type (e.g., "alwayslastline"), screen will use the type even if the terminal supports a hardstatus.

    The third form specifies the contents of the hardstatus line. '%h' is used as default string, i.e., the stored hardstatus of the current window (settable via "ESC]0;<string>^G" or "ESC_<string>ESC\") is displayed. You can customize this to any string you like including the escapes from the "STRING ESCAPES" chapter. If you leave out the argument string, the current string is displayed.

    You can mix the second and third form by providing the string as an additional argument.

height [-w|-d] [lines [cols]]
    Set the display height to a specified number of lines. When no argument is given it toggles between 24 and 42 lines display. You can also specify a width if you want to change both values. The -w option tells screen to leave the display size unchanged and just set the window size, -d vice versa.

help [-c class]
    Not really an online help, but displays a help screen showing you all the key bindings. The first pages list all the internal commands followed by their current bindings. Subsequent pages will display the custom commands, one command per key. Press space when you're done reading each page, or return to exit early. All other characters are ignored. If the "-c" option is given, display all bound commands for the specified command class. See also the "DEFAULT KEY BINDINGS" section.

history
    Usually users work with a shell that allows easy access to previous commands. For example csh has the command "!!" to repeat the last command executed.
    Screen allows you to have a primitive way of re-calling "the command that started ...": You just type the first letter of that command, then hit `C-a {' and screen tries to find a previous line that matches with the `prompt character' to the left of the cursor. This line is pasted into this window's input queue. Thus you have a crude command history (made up by the visible window and its scrollback buffer).

hstatus status
    Change the window's hardstatus line to the string status.

idle [timeout [cmd-args]]
    Sets a command that is run after the specified number of seconds of inactivity is reached. This command will normally be the "blanker" command to create a screen blanker, but it can be any screen command. If no command is specified, only the timeout is set. A timeout of zero (or the special timeout off) disables the timer. If no arguments are given, the current settings are displayed.

ignorecase [on|off]
    Tell screen to ignore the case of characters in searches. Default is `off'. Without any options, the state of ignorecase is toggled.

info
    Uses the message line to display some information about the current window: the cursor position in the form "(column,row)" starting with "(1,1)", the terminal width and height plus the size of the scrollback buffer in lines, like in "(80,24)+50", and the current state of window XON/XOFF flow control, shown like this (see also section FLOW CONTROL):

    +flow      automatic flow control, currently on.
    -flow      automatic flow control, currently off.
    +(+)flow   flow control enabled. Agrees with automatic control.
    -(+)flow   flow control disabled. Disagrees with automatic control.
    +(-)flow   flow control enabled. Disagrees with automatic control.
    -(-)flow   flow control disabled. Agrees with automatic control.

    The current line wrap setting (`+wrap' indicates enabled, `-wrap' not) is also shown.
    The flags `ins', `org', `app', `log', `mon' or `nored' are displayed when the window is in insert mode, origin mode, application-keypad mode, has output logging, activity monitoring or partial redraw enabled.

    The currently active character set (G0, G1, G2, or G3) and, in square brackets, the terminal character sets that are currently designated as G0 through G3 are shown. If the window is in UTF-8 mode, the string "UTF-8" is shown instead.

    Additional modes depending on the type of the window are displayed at the end of the status line (see also chapter "WINDOW TYPES").

    If the state machine of the terminal emulator is in a non-default state, the info line is started with a string identifying the current state.

    For system information use the "time" command.

ins_reg [key]
    No longer exists, use "paste" instead.

kill
    Kill the current window. If there is an `exec' command running then it is killed. Otherwise the process (shell) running in the window receives a HANGUP condition, the window structure is removed and screen (your display) switches to another window. When the last window is destroyed, screen exits. After a kill screen switches to the previously displayed window.

    Note: Emacs users should keep this command in mind when killing a line. It is recommended not to use "C-a" as the screen escape key or to rebind kill to "C-a K".

lastmsg
    Redisplay the last contents of the message/status line. Useful if you're typing when a message appears, because the message goes away when you press a key (unless your terminal has a hardware status line). Refer to the commands "msgwait" and "msgminwait" for fine tuning.

layout new [title]
    Create a new layout. The screen will change to one whole region and be switched to the blank window. From here, you build the regions and the windows they show as you desire. The new layout will be numbered with the smallest available integer, starting with zero. You can optionally give a title to your new layout.
    Otherwise, it will have a default title of "layout". You can always change the title later by using the command layout title.

layout remove [n|title]
    Remove, or in other words, delete the specified layout. Either the number or the title can be specified. Without either specification, screen will remove the current layout. Removing a layout does not affect your set windows or regions.

layout next
    Switch to the next layout available.

layout prev
    Switch to the previous layout available.

layout select [n|title]
    Select the desired layout. Either the number or the title can be specified. Without either specification, screen will prompt and ask which layout is desired. To see which layouts are available, use the layout show command.

layout show
    List on the message line the number(s) and title(s) of the available layout(s). The current layout is flagged.

layout title [title]
    Change or display the title of the current layout. A string given will be used to name the layout. Without any options, the current title and number is displayed on the message line.

layout number [n]
    Change or display the number of the current layout. An integer given will be used to number the layout. Without any options, the current number and title is displayed on the message line.

layout attach [title|:last]
    Change or display which layout to reattach back to. The default is :last, which tells screen to reattach back to the last used layout just before detachment. By supplying a title, you can instruct screen to reattach to a particular layout regardless of which one was used at the time of detachment. Without any options, the layout to reattach to will be shown in the message line.

layout save [n|title]
    Remember the current arrangement of regions. When used, screen will remember the arrangement of vertically and horizontally split regions. This arrangement is restored when a screen session is reattached or switched back from a different layout.
    If the session ends or the screen process dies, the layout arrangements are lost. The layout dump command should help in this situation. If a number or title is supplied, screen will remember the arrangement of that particular layout. Without any options, screen will remember the current layout.

    Saving your regions can be done automatically by using the layout autosave command.

layout autosave [on|off]
    Change or display the status of automatically saving layouts. The default is on, meaning when screen is detached or changed to a different layout, the arrangement of regions and windows will be remembered at the time of change and restored upon return. If autosave is set to off, that arrangement will only be restored either to the last manual save, using layout save, or to when the layout was first created, to a single region with a single window. Without either an on or off, the current status is displayed on the message line.

layout dump [filename]
    Write to a file the order of splits made in the current layout. This is useful to recreate the order of your regions used in your current layout. Only the current layout is recorded. While the order of the regions is recorded, the sizes of those regions and which windows correspond to which regions are not. If no filename is specified, the default is layout-dump, saved in the directory that the screen process was started in. If the file already exists, layout dump will append to that file. As an example:

    C-a : layout dump /home/user/.screenrc

    will save or append the layout to the user's .screenrc file.

license
    Display the disclaimer page. This is done whenever screen is started without options, which should be often enough. See also the "startup_message" command.

lockscreen
    Lock this display. Call a screenlock program (/local/bin/lck or /usr/bin/lock or a builtin if no other is available). Screen does not accept any command keys until this program terminates.
    Meanwhile processes in the windows may continue, as the windows are in the `detached' state. The screenlock program may be changed through the environment variable $LOCKPRG (which must be set in the shell from which screen is started) and is executed with the user's uid and gid.

    Warning: When you leave other shells unlocked and you have no password set on screen, the lock is void: One could easily re-attach from an unlocked shell. This feature should rather be called `lockterminal'.

log [on|off]
    Start/stop writing output of the current window to a file "screenlog.n" in the window's default directory, where n is the number of the current window. This filename can be changed with the `logfile' command. If no parameter is given, the state of logging is toggled. The session log is appended to the previous contents of the file if it already exists. The current contents and the contents of the scrollback history are not included in the session log. Default is `off'.

logfile filename
logfile flush secs
    Defines the name the log files will get. The default is "screenlog.%n". The second form changes the number of seconds screen will wait before flushing the logfile buffer to the file-system. The default value is 10 seconds.

login [on|off]
    Adds or removes the entry in the utmp database file for the current window. This controls whether the window is `logged in'. When no parameter is given, the login state of the window is toggled. In addition to this toggle, it is convenient to have `log in' and `log out' keys. E.g. `bind I login on' and `bind O login off' will map these keys to be C-a I and C-a O. The default setting (in config.h.in) should be "on" for a screen that runs under suid-root. Use the "deflogin" command to change the default login state for new windows. Both commands are only present when screen has been compiled with utmp support.

logtstamp [on|off]
logtstamp after [secs]
logtstamp string [string]
    This command controls the logfile time-stamp mechanism of screen.
    If time-stamps are turned "on", screen adds a string containing the current time to the logfile after two minutes of inactivity. When output continues and more than another two minutes have passed, a second time-stamp is added to document the restart of the output. You can change this timeout with the second form of the command. The third form is used for customizing the time-stamp string (`-- %n:%t -- time-stamp -- %M/%d/%y %c:%s --\n' by default).

mapdefault
    Tell screen that the next input character should only be looked up in the default bindkey table. See also "bindkey".

mapnotnext
    Like mapdefault, but don't even look in the default bindkey table.

maptimeout [timeout]
    Set the inter-character timer for input sequence detection to a timeout of timeout ms. The default timeout is 300 ms. Maptimeout with no arguments shows the current setting. See also "bindkey".

markkeys string
    This is a method of changing the keymap used for copy/history mode. The string is made up of oldchar=newchar pairs which are separated by `:'. Example: The string "B=^B:F=^F" will change the keys `C-b' and `C-f' to the vi style binding (scroll up/down a full page). This happens to be the default binding for `B' and `F'. The command "markkeys h=^B:l=^F:$=^E" would set the mode for an emacs-style binding. If your terminal sends characters that cause you to abort copy mode, then this command may help by binding these characters to do nothing. The no-op character is `@' and is used like this: "markkeys @=L=H" if you do not want to use the `H' or `L' commands any longer. As shown in this example, multiple keys can be assigned to one function in a single statement.

maxwin num
    Set the maximum window number screen will create. Doesn't affect already existing windows. The number can be increased only when there are no existing windows.

meta
    Insert the command character (C-a) in the current window's input stream.

monitor [on|off]
    Toggles activity monitoring of windows.
When monitoring is turned on and an affected window is switched into the background, you will receive the activity notification message in the status line at the first sign of output and the window will also be marked with an `@' in the window-status display. Monitoring is initially off for all windows. mousetrack [on|off] This command determines whether screen will watch for mouse clicks. When this command is enabled, regions that have been split in various ways can be selected by pointing to them with a mouse and left-clicking them. Without specifying on or off, the current state is displayed. The default state is determined by the "defmousetrack" command. msgminwait sec Defines the time screen delays a new message when one message is currently displayed. The default is 1 second. msgwait sec Defines the time a message is displayed if screen is not disturbed by other activity. The default is 5 seconds. multiuser on|off Switch between singleuser and multiuser mode. Standard screen operation is singleuser. In multiuser mode the commands `acladd', `aclchg', `aclgrp' and `acldel' can be used to enable (and disable) other users accessing this screen session. nethack on|off Changes the kind of error messages used by screen. When you are familiar with the game "nethack", you may enjoy the nethack-style messages which will often blur the facts a little, but are much funnier to read. Anyway, standard messages often tend to be unclear as well. This option is only available if screen was compiled with the NETHACK flag defined. The default setting is then determined by the presence of the environment variable $NETHACKOPTIONS and the file ~/.nethackrc - if either one is present, the default is on. next Switch to the next window. This command can be used repeatedly to cycle through the list of windows. nonblock [on|off|numsecs] Tell screen how to deal with user interfaces (displays) that cease to accept output. 
This can happen if a user presses ^S or a TCP/modem connection gets cut but no hangup is received. If nonblock is off (this is the default) screen waits until the display restarts to accept the output. If nonblock is on, screen waits until the timeout is reached (on is treated as 1s). If the display still doesn't receive characters, screen will consider it "blocked" and stop sending characters to it. If at some time it starts to accept characters again, screen will unblock the display and redisplay the updated window contents.

number [[+|-]n]
Change the current window's number. If the given number n is already used by another window, both windows exchange their numbers. If no argument is specified, the current window number (and title) is shown. Using `+' or `-' will change the window's number by the relative amount specified.

obuflimit [limit]
If the output buffer contains more bytes than the specified limit, no more data will be read from the windows. The default value is 256. If you have a fast display (like xterm), you can set it to some higher value. If no argument is specified, the current setting is displayed.

only
Kill all regions but the current one.

other
Switch to the window displayed previously. If this window no longer exists, other has the same effect as next.

partial on|off
Defines whether the display should be refreshed (as with redisplay) after switching to the current window. This command only affects the current window. To immediately affect all windows use the allpartial command. Default is `off', of course. This default is fixed, as there is currently no defpartial command.

password [crypted_pw]
Present a crypted password in your ".screenrc" file and screen will ask for it whenever someone attempts to resume a detached session. This is useful if you have privileged programs running under screen and you want to protect your session from reattach attempts by another user masquerading as your uid (i.e. any superuser).
If no crypted password is specified, screen prompts twice for a password and places its encryption in the paste buffer. Default is `none'; this disables password checking.

paste [registers [dest_reg]]
Write the (concatenated) contents of the specified registers to the stdin queue of the current window. The register '.' is treated as the paste buffer. If no parameter is given the user is prompted for a single register to paste. The paste buffer can be filled with the copy, history and readbuf commands. Other registers can be filled with the register, readreg and paste commands. If paste is called with a second argument, the contents of the specified registers are pasted into the named destination register rather than the window. If '.' is used as the second argument, the display's paste buffer is the destination. Note that "paste" uses a wide variety of resources: whenever a second argument is specified no current window is needed. When the source specification only contains registers (not the paste buffer) then there need not be a current display (terminal attached), as the registers are a global resource. The paste buffer exists once for every user.

pastefont [on|off]
Tell screen to include font information in the paste buffer. The default is not to do so. This command is especially useful for multi-character fonts like kanji.

pow_break
Reopen the window's terminal line and send a break condition. See `break'.

pow_detach
Power detach. Mainly the same as detach, but also sends a HANGUP signal to the parent process of screen. CAUTION: This will result in a logout when screen was started from your login-shell.

pow_detach_msg [message]
The message specified here is output whenever a `Power detach' was performed. It may be used as a replacement for a logout message or to reset baud rate, etc. Without a parameter, the current message is shown.

prev
Switch to the window with the next lower number.
This command can be used repeatedly to cycle through the list of windows.

printcmd [cmd]
If cmd is not an empty string, screen will not use the terminal capabilities "po/pf" if it detects an ansi print sequence ESC [ 5 i, but pipe the output into cmd. This should normally be a command like "lpr" or "'cat > /tmp/scrprint'". printcmd without a command displays the current setting. The ansi sequence ESC \ ends printing and closes the pipe. Warning: Be careful with this command! If other users have write access to your terminal, they will be able to fire off print commands.

process [key]
Stuff the contents of the specified register into screen's input queue. If no argument is given you are prompted for a register name. The text is parsed as if it had been typed in from the user's keyboard. This command can be used to bind multiple actions to a single key.

quit
Kill all windows and terminate screen. Note that on VT100-style terminals the keys C-4 and C-\ are identical. This makes the default bindings dangerous: be careful not to type C-a C-4 when selecting window no. 4. Use the empty bind command (as in "bind '^\'") to remove a key binding.

readbuf [encoding] [filename]
Reads the contents of the specified file into the paste buffer. You can tell screen the encoding of the file via the -e option. If no file is specified, the screen-exchange filename is used. See also the "bufferfile" command.

readreg [encoding] [register [filename]]
Does one of two things, depending on the number of arguments: with zero or one arguments it duplicates the paste buffer contents into the register specified or entered at the prompt. With two arguments it reads the contents of the named file into the register, just as readbuf reads the screen-exchange file into the paste buffer. You can tell screen the encoding of the file via the -e option.
The following example will paste the system's password file into the screen window (using register p, where a copy remains):

C-a : readreg p /etc/passwd
C-a : paste p

redisplay
Redisplay the current window. Needed to get a full redisplay when in partial redraw mode.

register [-e encoding] key string
Save the specified string to the register key. The encoding of the string can be specified via the -e option. See also the "paste" command.

remove
Kill the current region. This is a no-op if there is only one region.

removebuf
Unlinks the screen-exchange file used by the commands "writebuf" and "readbuf".

rendition bell|monitor|silence|so attr [color]
Change the way screen renders the titles of windows that have monitor or bell flags set in caption or hardstatus or windowlist. See the "STRING ESCAPES" chapter for the syntax of the modifiers. The default for monitor is currently "=b " (bold, active colors), for bell "=ub " (underline, bold and active colors), and "=u " for silence.

reset
Reset the virtual terminal to its "power-on" values. Useful when strange settings (like scroll regions or graphics character set) are left over from an application.

resize [-h|-v|-b|-l|-p] [[+|-]n[%]|=|max|min|_|0]
Resize the current region. The space will be removed from or added to the surrounding regions depending on the order of the splits. The available options for resizing are `-h' (horizontal), `-v' (vertical), `-b' (both), `-l' (local to layer), and `-p' (perpendicular). Horizontal resizes will add or remove width to a region, vertical will add or remove height, and both will add or remove size from both dimensions. Local and perpendicular are similar to horizontal and vertical, but they take into account how a region was split. If a region's last split was horizontal, a local resize will work like a vertical resize. If a region's last split was vertical, a local resize will work like a horizontal resize. Perpendicular resizes work the opposite of local resizes.
If no option is specified, local is the default. The number of lines to add or remove can be expressed a couple of different ways. Specifying a number n by itself will resize the region by that absolute amount. You can specify a relative amount by prefixing a plus `+' or minus `-' to the amount, such as adding +n lines or removing -n lines. Resizing can also be expressed as an absolute or relative percentage by postfixing a percent sign `%'. Using zero `0' is a synonym for `min' and using an underscore `_' is a synonym for `max'. Some examples are:

resize +N      increase current region by N
resize -N      decrease current region by N
resize N       set current region to N
resize 20%     set current region to 20% of original size
resize +20%    increase current region by 20%
resize -b =    make all regions equal
resize max     maximize current region
resize min     minimize current region

Without any arguments, screen will prompt for how you would like to resize the current region. See "focusminsize" if you want to restrict the minimum size a region can have.

screen [-opts] [n] [cmd [args]|//group]
Establish a new window. The flow-control options (-f, -fn and -fa), title (a.k.a.) option (-t), login options (-l and -ln), terminal type option (-T <term>), the all-capability-flag (-a) and scrollback option (-h <num>) may be specified with each command. The option (-M) turns monitoring on for this window. The option (-L) turns output logging on for this window. If an optional number n in the range 0..MAXWIN-1 is given, the window number n is assigned to the newly created window (or, if this number is already in use, the next available number). If a command is specified after "screen", this command (with the given arguments) is started in the window; otherwise, a shell is created. If //group is supplied, a container-type window is created in which other windows may be created.
Thus, if your ".screenrc" contains the lines

# example for .screenrc:
screen 1
screen -fn -t foobar -L 2 telnet foobar

screen creates a shell window (in window #1) and a window with a TELNET connection to the machine foobar (with no flow-control, using the title "foobar", in window #2) and will write a logfile ("screenlog.2") of the telnet session. Note that, unlike previous versions of screen, no additional default window is created when "screen" commands are included in your ".screenrc" file. When the initialization is completed, screen switches to the last window specified in your .screenrc file or, if none, opens a default window #0. Screen has some built-in functionality of "cu" and "telnet". See also chapter "WINDOW TYPES".

scrollback num
Set the size of the scrollback buffer for the current window to num lines. The default scrollback is 100 lines. See also the "defscrollback" command and use "info" to view the current setting. To access and use the contents in the scrollback buffer, use the "copy" command.

select [WindowID]
Switch to the window identified by WindowID. This can be a prefix of a window title (alphanumeric window name) or a window number. The parameter is optional and if omitted, you get prompted for an identifier. When a new window is established, the first available number is assigned to this window. Thus, the first window can be activated by "select 0". The number of windows is limited at compile-time by the MAXWIN configuration parameter (which defaults to 40). There are two special WindowIDs: "-" selects the internal blank window and "." selects the current window. The latter is useful if used with screen's "-X" option.

sessionname [name]
Rename the current session. Note that for "screen -list" the name shows up with the process-id prepended. If the argument "name" is omitted, the name of this session is displayed. Caution: The $STY environment variable will still reflect the old name in pre-existing shells. This may result in confusion.
Use of this command is generally discouraged. Use the "-S" command-line option if you want to name a new session. The default is constructed from the tty and host names. setenv [var [string]] Set the environment variable var to value string. If only var is specified, the user will be prompted to enter a value. If no parameters are specified, the user will be prompted for both variable and value. The environment is inherited by all subsequently forked shells. setsid [on|off] Normally screen uses different sessions and process groups for the windows. If setsid is turned off, this is not done anymore and all windows will be in the same process group as the screen backend process. This also breaks job-control, so be careful. The default is on, of course. This command is probably useful only in rare circumstances. shell command Set the command to be used to create a new shell. This overrides the value of the environment variable $SHELL. This is useful if you'd like to run a tty-enhancer which is expecting to execute the program specified in $SHELL. If the command begins with a '-' character, the shell will be started as a login-shell. Typical shells do only minimal initialization when not started as a login-shell. E.g. Bash will not read your "~/.bashrc" unless it is a login-shell. shelltitle title Set the title for all shells created during startup or by the C-A C-c command. For details about what a title is, see the discussion entitled "TITLES (naming windows)". silence [on|off|sec] Toggles silence monitoring of windows. When silence is turned on and an affected window is switched into the background, you will receive the silence notification message in the status line after a specified period of inactivity (silence). The default timeout can be changed with the `silencewait' command or by specifying a number of seconds instead of `on' or `off'. Silence is initially off for all windows. 
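The shell, shelltitle and silence settings described above are typically combined in a ".screenrc" file. A minimal sketch; the 15-second threshold and the `s'/`S' key bindings are illustrative choices, not defaults:

```
# Illustrative .screenrc fragment.
# Start new windows with bash as a login shell (leading '-').
shell -bash
shelltitle "shell"
# Report silence after 15 seconds instead of the 30-second default.
silencewait 15
# Toggle silence monitoring of the current window with C-a s / C-a S.
bind s silence on
bind S silence off
```

With this in place, switching a monitored window into the background produces the silence notification in the status line after 15 seconds without output.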
silencewait sec
Define the time that all windows monitored for silence should wait before displaying a message. Default is 30 seconds.

sleep num
This command will pause the execution of a .screenrc file for num seconds. Keyboard activity will end the sleep. It may be used to give users a chance to read the messages output by "echo".

slowpaste msec
Define the speed at which text is inserted into the current window by the paste ("C-a ]") command. If the slowpaste value is nonzero, text is written character by character. screen will pause for msec milliseconds after each single-character write to allow the application to process its input. Only use slowpaste if your underlying system exposes flow-control problems while pasting large amounts of text.

sort
Sort the windows in alphabetical order of the window titles.

source file
Read and execute commands from file file. Source commands may be nested to a maximum recursion level of ten. If file is not an absolute path and screen is already processing a source command, the parent directory of the running source command file is used to search for the new command file before screen's current directory. Note that termcap/terminfo/termcapinfo commands only work at startup and reattach time, so they must be reached via the default screenrc files to have an effect.

sorendition [attr [color]]
This command is deprecated. See "rendition so" instead.

split [-v]
Split the current region into two new ones. All regions on the display are resized to make room for the new region. The blank window is displayed in the new region. The default is to create a horizontal split, stacking the new regions one above the other. Using `-v' will create a vertical split, causing the new regions to appear side by side. Use the "remove" or the "only" command to delete regions. Use "focus" to toggle between regions.
When a region is split opposite of how it was previously split (that is, vertical then horizontal or horizontal then vertical), a new layer is created. The layer is used to group together the regions that are split the same way. Normally, as a user, you should not see nor have to worry about layers, but they will affect how some commands ("focus" and "resize") behave. With the current implementation of screen, scrolling data will appear much slower in a vertically split region than in one that is not. This should be taken into consideration if you need to use system commands such as "cat" or "tail -f".

startup_message on|off
Select whether you want to see the copyright notice during startup. Default is `on', as you probably noticed.

status [top|up|down|bottom] [left|right]
The status window is by default in the bottom-left corner. This command can move status messages to any corner of the screen. top is the same as up, down is the same as bottom.

stuff [string]
Stuff the string string in the input buffer of the current window. This is like the "paste" command but with much less overhead. Without a parameter, screen will prompt for a string to stuff. You cannot paste large buffers with the "stuff" command. It is most useful for key bindings. See also "bindkey".

su [username [password [password2]]]
Substitute the user of a display. The command prompts for all parameters that are omitted. If passwords are specified as parameters, they have to be specified unencrypted. The first password is matched against the system's passwd database, the second password is matched against the screen password as set with the commands "acladd" or "password". "Su" may be useful for the screen administrator to test multiuser setups. When the identification fails, the user has access to the commands available for user nobody. These are "detach", "license", "version", "help" and "displays".

suspend
Suspend screen. The windows are in the `detached' state while screen is suspended.
This feature relies on the shell being able to do job control.

term term
In each window's environment screen opens, the $TERM variable is set to "screen" by default. But when no description for "screen" is installed in the local termcap or terminfo database, you may set $TERM to - say - "vt100". This won't do much harm, as screen is VT100/ANSI compatible. The use of the "term" command is discouraged for non-default purposes. That is, one may want to specify special $TERM settings (e.g. vt100) for the next "screen rlogin othermachine" command. Use the command "screen -T vt100 rlogin othermachine" rather than setting and resetting the default.

termcap term terminal-tweaks [window-tweaks]
terminfo term terminal-tweaks [window-tweaks]
termcapinfo term terminal-tweaks [window-tweaks]
Use this command to modify your terminal's termcap entry without going through all the hassles involved in creating a custom termcap entry. Plus, you can optionally customize the termcap generated for the windows. You have to place these commands in one of the screenrc startup files, as they are meaningless once the terminal emulator is booted. If your system uses the terminfo database rather than termcap, screen will understand the `terminfo' command, which has the same effects as the `termcap' command. Two separate commands are provided, as there are subtle syntactic differences, e.g. when parameter interpolation (using `%') is required. Note that termcap names of the capabilities have to be used with the `terminfo' command. In many cases, where the arguments are valid in both terminfo and termcap syntax, you can use the command `termcapinfo', which is just a shorthand for a pair of `termcap' and `terminfo' commands with identical arguments. The first argument specifies which terminal(s) should be affected by this definition. You can specify multiple terminal names by separating them with `|'s. Use `*' to match all terminals and `vt*' to match all terminals that begin with "vt".
Each tweak argument contains one or more termcap defines (separated by `:'s) to be inserted at the start of the appropriate termcap entry, enhancing it or overriding existing values. The first tweak modifies your terminal's termcap, and contains definitions that your terminal uses to perform certain functions. Specify a null string to leave this unchanged (e.g. ''). The second (optional) tweak modifies all the window termcaps, and should contain definitions that screen understands (see the "VIRTUAL TERMINAL" section). Some examples:

termcap xterm* LP:hs@

Informs screen that all terminals that begin with `xterm' have firm auto-margins that allow the last position on the screen to be updated (LP), but they don't really have a status line (no 'hs' - append `@' to turn entries off). Note that we assume `LP' for all terminal names that start with "vt", but only if you don't specify a termcap command for that terminal.

termcap vt* LP
termcap vt102|vt220 Z0=\E[?3h:Z1=\E[?3l

Specifies the firm-margined `LP' capability for all terminals that begin with `vt', and the second line will also add the escape sequences to switch into (Z0) and back out of (Z1) 132-character-per-line mode if this is a VT102 or VT220. (You must specify Z0 and Z1 in your termcap to use the width-changing commands.)

termcap vt100 "" l0=PF1:l1=PF2:l2=PF3:l3=PF4

This leaves your vt100 termcap alone and adds the function key labels to each window's termcap entry.

termcap h19|z19 am@:im=\E@:ei=\EO dc=\E[P

Takes a h19 or z19 termcap and turns off auto-margins (am@) and enables the insert mode (im) and end-insert (ei) capabilities (the `@' in the `im' string is after the `=', so it is part of the string). Having the `im' and `ei' definitions put into your terminal's termcap will cause screen to automatically advertise the character-insert capability in each window's termcap.
Each window will also get the delete-character capability (dc) added to its termcap, which screen will translate into a line-update for the terminal (we're pretending it doesn't support character deletion). If you would like to fully specify each window's termcap entry, you should instead set the $SCREENCAP variable prior to running screen. See the discussion on the "VIRTUAL TERMINAL" in this manual, and the termcap(5) man page for more information on termcap definitions.

title [windowtitle]
Set the name of the current window to windowtitle. If no name is specified, screen prompts for one. This command was known as `aka' in previous releases.

truecolor [on|off]
Enables truecolor support. Currently autodetection of truecolor support cannot be done reliably, so it is left to the user to enable. Default is off. Known terminals that may support it are: iTerm2, Konsole, st. Xterm includes support for truecolor escapes but converts them back to the indexed 256-color space.

unbindall
Unbind all the bindings. This can be useful when screen is used solely for its detaching abilities, such as when letting a console application run as a daemon. If, for some reason, it is necessary to bind commands after this, use 'screen -X'.

unsetenv var
Unset an environment variable.

utf8 [on|off [on|off]]
Change the encoding used in the current window. If utf8 is enabled, the strings sent to the window will be UTF-8 encoded and vice versa. Omitting the parameter toggles the setting. If a second parameter is given, the display's encoding is also changed (this should rather be done with screen's "-U" option). See also "defutf8", which changes the default setting of a new window.

vbell [on|off]
Sets the visual bell setting for this window. Omitting the parameter toggles the setting. If vbell is switched on, but your terminal does not support a visual bell, a `vbell-message' is displayed in the status line when the bell character (^G) is received.
Visual bell support of a terminal is defined by the termcap variable `vb' (terminfo: 'flash'). Per default, vbell is off, thus the audible bell is used. See also `bell_msg'. vbell_msg [message] Sets the visual bell message. message is printed to the status line if the window receives a bell character (^G), vbell is set to "on", but the terminal does not support a visual bell. The default message is "Wuff, Wuff!!". Without a parameter, the current message is shown. vbellwait sec Define a delay in seconds after each display of screen's visual bell message. The default is 1 second. verbose [on|off] If verbose is switched on, the command name is echoed, whenever a window is created (or resurrected from zombie state). Default is off. Without a parameter, the current setting is shown. version Print the current version and the compile date in the status line. wall message Write a message to all displays. The message will appear in the terminal's status line. width [-w|-d] [cols [lines]] Toggle the window width between 80 and 132 columns or set it to cols columns if an argument is specified. This requires a capable terminal and the termcap entries "Z0" and "Z1". See the "termcap" command for more information. You can also specify a new height if you want to change both values. The -w option tells screen to leave the display size unchanged and just set the window size, -d vice versa. windowlist [-b] [-m] [-g] windowlist string [string] windowlist title [title] Display all windows in a table for visual window selection. If screen was in a window group, screen will back out of the group and then display the windows in that group. If the -b option is given, screen will switch to the blank window before presenting the list, so that the current window is also selectable. The -m option changes the order of the windows, instead of sorting by window numbers screen uses its internal most-recently-used list. 
The -g option will show the windows inside any groups in that level and downwards. The following keys are used to navigate in "windowlist":

k, C-p, or up        Move up one line.
j, C-n, or down      Move down one line.
C-g or escape        Exit windowlist.
C-a or home          Move to the first line.
C-e or end           Move to the last line.
C-u or C-d           Move one half page up or down.
C-b or C-f           Move one full page up or down.
0..9                 Using the number keys, move to the selected line.
mouseclick           Move to the selected line. Available when "mousetrack" is set to "on".
/                    Search.
n                    Repeat search in the forward direction.
N                    Repeat search in the backward direction.
m                    Toggle MRU.
g                    Toggle group nesting.
a                    All window view.
C-h or backspace     Back out of the group.
,                    Switch numbers with the previous window.
.                    Switch numbers with the next window.
K                    Kill that window.
space or enter       Select that window.

The table format can be changed with the string and title options; the title is displayed as the table heading, while the lines are made by using the string setting. The default setting is "Num Name%=Flags" for the title and "%3n %t%=%f" for the lines. See the "STRING ESCAPES" chapter for more codes (e.g. color settings). "Windowlist" needs a region size of at least 10 characters wide and 6 characters high in order to display.

windows [string]
Uses the message line to display a list of all the windows. Each window is listed by number with the name of the process that has been started in the window (or its title); the current window is marked with a `*'; the previous window is marked with a `-'; all the windows that are "logged in" are marked with a `$'; a background window that has received a bell is marked with a `!'; a background window that is being monitored and has had activity occur is marked with an `@'; a window which has output logging turned on is marked with `(L)'; windows occupied by other users are marked with `&'; windows in the zombie state are marked with `Z'.
If this list is too long to fit on the terminal's status line, only the portion around the current window is displayed. The optional string parameter follows the "STRING ESCAPES" format. If a string parameter is passed, the output size is unlimited. The default command without any parameter is limited to a size of 1024 bytes.

wrap [on|off]
Sets the line-wrap setting for the current window. When line-wrap is on, the second consecutive printable character output at the last column of a line will wrap to the start of the following line. As an added feature, backspace (^H) will also wrap through the left margin to the previous line. Default is `on'. Without any options, the state of wrap is toggled.

writebuf [-e encoding] [filename]
Writes the contents of the paste buffer to the specified file, or the publicly accessible screen-exchange file if no filename is given. This is thought of as a primitive means of communication between screen users on the same host. If an encoding is specified, the paste buffer is recoded on the fly to match the encoding. The filename can be set with the bufferfile command and defaults to "/tmp/screen-exchange".

writelock [on|off|auto]
In addition to access control lists, not all users may be able to write to the same window at once. Per default, writelock is in `auto' mode and grants exclusive input permission to the user who is the first to switch to the particular window. When he leaves the window, other users may obtain the writelock (automatically). The writelock of the current window is disabled by the command "writelock off". If the user issues the command "writelock on", he keeps the exclusive write permission while switching to other windows.

xoff
xon
Insert a CTRL-s / CTRL-q character to the stdin queue of the current window.

zmodem [off|auto|catch|pass]
zmodem sendcmd [string]
zmodem recvcmd [string]
Define zmodem support for screen. Screen understands two different modes when it detects a zmodem request: "pass" and "catch".
If the mode is set to "pass", screen will relay all data to the attacher until the end of the transmission is reached. In "catch" mode screen acts as a zmodem endpoint and starts the corresponding rz/sz commands. If the mode is set to "auto", screen will use "catch" if the window is a tty (e.g. a serial line), otherwise it will use "pass". You can define the templates screen uses in "catch" mode via the second and the third form. Note also that this is an experimental feature.

zombie [keys [onerror]]
Per default screen windows are removed from the window list as soon as the window's process (e.g. shell) exits. When a string of two keys is specified to the zombie command, `dead' windows will remain in the list. The kill command may be used to remove such a window. Pressing the first key in the dead window has the same effect. When pressing the second key, screen will attempt to resurrect the window. The process that was initially running in the window will be launched again. Calling zombie without parameters will clear the zombie setting, thus making windows disappear when their process exits. As the zombie setting is manipulated globally for all windows, this command should probably be called defzombie, but it isn't. Optionally you can put the word "onerror" after the keys. This will cause screen to monitor the exit status of the process running in the window. If it exits normally ('0'), the window disappears. Any other exit value causes the window to become a zombie.

zombie_timeout [seconds]
Per default screen windows are removed from the window list as soon as the window's process (e.g. shell) exits. If zombie keys are defined (compare with the zombie command above), it is possible to also set a timeout after which screen tries to automatically resurrect a dead window.

THE MESSAGE LINE

Screen displays informational messages and other diagnostics in a message line.
By default this line appears at the bottom of the screen, but it can be configured at compile time to appear at the top of the screen instead. If your terminal has a status line defined in its termcap, screen will use this for displaying its messages; otherwise a line of the current screen will be temporarily overwritten and output will be momentarily interrupted. The message line is automatically removed after a few seconds' delay, but it can also be removed early (on terminals without a status line) by beginning to type. The message line facility can be used by an application running in the current window by means of the ANSI Privacy message control sequence. For instance, from within the shell, try something like:

    echo '<esc>^Hello world from window '$WINDOW'<esc>\\'

where '<esc>' is an escape character, '^' is a literal caret, and '\\' turns into a single backslash.

WINDOW TYPES
Screen provides three different window types. New windows are created with screen's screen command (see also the entry in chapter "CUSTOMIZATION"). The first parameter to the screen command defines which type of window is created. The different window types are all special cases of the normal type. They have been added in order to allow screen to be used efficiently as a console multiplexer with 100 or more windows.

The normal window contains a shell (the default, if no parameter is given) or any other system command that could be executed from a shell (e.g. slogin, etc.).

If a tty (character special device) name (e.g. "/dev/ttya") is specified as the first parameter, then the window is directly connected to this device. This window type is similar to "screen cu -l /dev/ttya". Read and write access is required on the device node; an exclusive open is attempted on the node to mark the connection line as busy. An optional parameter is allowed, consisting of a comma-separated list of flags in the notation used by stty(1):

<baud_rate>
    Usually 300, 1200, 9600 or 19200.
    This affects transmission as well as receive speed.

cs8 or cs7
    Specify the transmission of eight (or seven) bits per byte.

ixon or -ixon
    Enables (or disables) software flow-control (CTRL-S/CTRL-Q) for sending data.

ixoff or -ixoff
    Enables (or disables) software flow-control for receiving data.

istrip or -istrip
    Clear (or keep) the eighth bit in each received byte.

You may want to specify as many of these options as applicable. Unspecified options cause the terminal driver to make up the parameter values of the connection. These values are system dependent and may be defaults or values saved from a previous connection.

For tty windows, the info command shows some of the modem control lines in the status line. These may include `RTS', `CTS', `DTR', `DSR', `CD' and more. This depends on the available ioctl()'s and system header files as well as on the physical capabilities of the serial board. Signals that are logical low (inactive) have their name preceded by an exclamation mark (!); otherwise the signal is logical high (active). Signals not supported by the hardware but available to the ioctl() interface are usually shown low. When the CLOCAL status bit is true, the whole set of modem signals is placed inside curly braces ({ and }). When the CRTSCTS or TIOCSOFTCAR bit is set, the signals `CTS' or `CD' are shown in parentheses, respectively.

For tty windows, the command break causes the data transmission line (TxD) to go low for a specified period of time. This is expected to be interpreted as a break signal on the other side. No data is sent and no modem control line is changed when a break is issued.

If the first parameter is "//telnet", the second parameter is expected to be a host name, and an optional third parameter may specify a TCP port number (default decimal 23). Screen will connect to a server listening on the remote host and use the telnet protocol to communicate with that server.
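To illustrate, the three window types might be requested from a .screenrc like this (a sketch only; the window numbers, titles, device path and host name below are examples, not defaults):

```
# normal window: window 0 running a shell (or any other command)
screen -t shell 0
# tty window: window 1 attached directly to a serial device,
# 9600 baud, eight bits per byte
screen -t serial 1 /dev/ttyS0 9600,cs8
# telnet window (only if screen was compiled with ENABLE_TELNET)
screen -t remote 2 //telnet localhost 23
```

The same command lines can also be typed at the "C-a :" command prompt of a running session.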
For telnet windows, the command info shows details about the connection in square brackets ([ and ]) at the end of the status line.

b   BINARY. The connection is in binary mode.
e   ECHO. Local echo is disabled.
c   SGA. The connection is in `character mode' (default: `line mode').
t   TTYPE. The terminal type has been requested by the remote host. Screen sends the name "screen" unless instructed otherwise (see also the command `term').
w   NAWS. The remote site is notified about window size changes.
f   LFLOW. The remote host will send flow control information. (Ignored at the moment.)

Additional flags for debugging are x, t and n (XDISPLOC, TSPEED and NEWENV). For telnet windows, the command break sends the telnet code IAC BREAK (decimal 243) to the remote host. This window type is only available if screen was compiled with the ENABLE_TELNET option defined.

STRING ESCAPES
Screen provides an escape mechanism to insert information like the current time into messages or file names. The escape character is '%' with one exception: inside of a window's hardstatus, '^%' ('^E') is used instead. Here is the full list of supported escapes:

%   the escape character itself
C   the count of screen windows. Prefix with '-' to limit to the current window group.
E   sets %? to true if the escape character has been pressed
f   flags of the window; see "windows" for meanings of the various flags
F   sets %? to true if the window has the focus
h   hardstatus of the window
H   hostname of the system
n   window number
P   sets %? to true if the current region is in copy/paste mode
S   session name
s   window size
t   window title
u   all other users on this window
w   all window numbers and names. With '-' qualifier: up to the current window; with '+' qualifier: starting with the window after the current one.
W   all window numbers and names except the current one
x   the executed command including arguments running in this window
X   the executed command without arguments running in this window
?   the part to the next '%?'
    is displayed only if a '%' escape inside the part expands to a non-empty string
:   else part of '%?'
=   pad the string to the display's width (like TeX's hfill). If a number is specified, pad to the percentage of the window's width. A '0' qualifier tells screen to treat the number as an absolute position. You can specify to pad relative to the last absolute pad position by adding a '+' qualifier, or to pad relative to the right margin by using '-'. The padding truncates the string if the specified position lies before the current position. Add the 'L' qualifier to change this.
<   same as '%=' but just do truncation, do not fill with spaces
>   mark the current text position for the next truncation. When screen needs to do truncation, it tries to do it in a way that the marked position gets moved to the specified percentage of the output area. (The area starts from the last absolute pad position and ends with the position specified by the truncation operator.) The 'L' qualifier tells screen to mark the truncated parts with '...'.
{   attribute/color modifier string terminated by the next "}"
`   substitute with the output of a 'backtick' command. The length qualifier is misused to identify one of the commands.

The 'c' and 'C' escapes may be qualified with a '0' to make screen use zero instead of space as the fill character. The '0' qualifier also makes the '=' escape use absolute positions. The 'n' and '=' escapes understand a length qualifier (e.g. '%3n'), 'D' and 'M' can be prefixed with 'L' to generate long names, and 'w' and 'W' also show the window flags if 'L' is given.

An attribute/color modifier is used to change the attributes or the color settings. Its format is "[attribute modifier] [color description]". The attribute modifier must be prefixed by a change type indicator if it can be confused with a color description. The following change types are known:

+   add the specified set to the current attributes
-   remove the set from the current attributes
!
    invert the set in the current attributes
=   change the current attributes to the specified set

The attribute set can either be specified as a hexadecimal number or a combination of the following letters:

d   dim
u   underline
b   bold
r   reverse
s   standout
B   blinking

The old format of specifying colors by letters (k,r,g,y,b,m,c,w) is now deprecated. Colors are coded as 0-7 for basic ANSI, 0-255 for 256 color mode, or, for truecolor, either a hexadecimal code starting with x, or HTML notation as either 3 or 6 hexadecimal digits. Foreground and background are specified by putting a semicolon between them. Ex: "#FFF;#000" or "i7;0" is white on a black background. The following numbers are for basic ANSI:

0   black
1   red
2   green
3   yellow
4   blue
5   magenta
6   cyan
7   white

You can also use the pseudo-color 'i' to set just the brightness and leave the color unchanged. As a special case, "%{-}" restores the attributes and colors that were set before the last change was made (i.e., pops one level of the color-change stack).

Examples:

"i2"
    set color to bright green
"+b r"
    use bold red
"#F00;FFA"
    write in bright red on a pale yellow background
%-Lw%{#AAA;#006}%50>%n%f* %t%{-}%+Lw%<
    The available windows, centered at the current window and truncated to the available width. The current window is displayed white on blue. This can be used with "hardstatus alwayslastline".
%?%F%{;2}%?%3n %t%? [%h]%?
    The window number and title and the window's hardstatus, if one is set. Also use a red background if this is the active focus. Useful for "caption string".

FLOW-CONTROL
Each window has a flow-control setting that determines how screen deals with the XON and XOFF characters (and perhaps the interrupt character). When flow-control is turned off, screen ignores the XON and XOFF characters, which allows the user to send them to the current program by simply typing them (useful for the emacs editor, for instance).
The trade-off is that it will take longer for output from a "normal" program to pause in response to an XOFF. With flow-control turned on, XON and XOFF characters are used to immediately pause the output of the current window. You can still send these characters to the current program, but you must use the appropriate two-character screen commands (typically "C-a q" (xon) and "C-a s" (xoff)). The xon/xoff commands are also useful for typing C-s and C-q past a terminal that intercepts these characters.

Each window has an initial flow-control value set with either the -f option or the "defflow" .screenrc command. Per default the windows are set to automatic flow-switching. It can then be toggled between the three states 'fixed on', 'fixed off' and 'automatic' interactively with the "flow" command bound to "C-a f". The automatic flow-switching mode deals with flow control using the TIOCPKT mode (like "rlogin" does). If the tty driver does not support TIOCPKT, screen tries to find out the right mode based on the current setting of the application keypad: when it is enabled, flow-control is turned off, and vice versa. Of course, you can still manipulate flow-control manually when needed.

If you're running with flow-control enabled and find that pressing the interrupt key (usually C-c) does not interrupt the display until another 6-8 lines have scrolled by, try running screen with the "interrupt" option (add the "interrupt" flag to the "flow" command in your .screenrc, or use the -i command-line option). This causes the output that screen has accumulated from the interrupted program to be flushed. One disadvantage is that the virtual terminal's memory contains the non-flushed version of the output, which in rare cases can cause minor inaccuracies in the output. For example, if you switch screens and return, or update the screen with "C-a l", you would see the version of the output you would have gotten without "interrupt" being on.
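As a sketch, the default flow-control behavior described above can be set for all new windows from a .screenrc:

```
# start new windows with automatic flow-switching and flush
# accumulated output when the interrupt character is typed
defflow auto interrupt
# interactively, "C-a f" still toggles on / off / auto per window
```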
Also, you might need to turn off flow-control (or use auto-flow mode to turn it off automatically) when running a program that expects you to type the interrupt character as input, as it is possible to interrupt the output of the virtual terminal to your physical terminal when flow-control is enabled. If this happens, a simple refresh of the screen with "C-a l" will restore it. Give each mode a try, and use whichever mode you find more comfortable.

TITLES (naming windows)
You can customize each window's name in the window display (viewed with the "windows" command (C-a w)) by setting it with one of the title commands. Normally the name displayed is the actual command name of the program created in the window. However, it is sometimes useful to distinguish various programs of the same name, or to change the name on-the-fly to reflect the current state of the window.

The default name for all shell windows can be set with the "shelltitle" command in the .screenrc file, while all other windows are created with a "screen" command and thus can have their name set with the -t option. Interactively, there is the title-string escape-sequence (<esc>kname<esc>\) and the "title" command (C-a A). The former can be output from an application to control the window's name under software control, and the latter will prompt for a name when typed. You can also bind pre-defined names to keys with the "title" command to set things quickly without prompting. Changing the title by this escape sequence can be controlled with the defdynamictitle and dynamictitle commands.

Finally, screen has a shell-specific heuristic that is enabled by setting the window's name to "search|name" and arranging to have a null title escape-sequence output as a part of your prompt. The search portion specifies an end-of-prompt search string, while the name portion specifies the default shell name for the window.
If the name ends in a `:', screen will add what it believes to be the current command running in the window to the end of the window's shell name (e.g. "name:cmd"). Otherwise the current command name supersedes the shell name while it is running.

Here's how it works: you must modify your shell prompt to output a null title-escape-sequence (<esc>k<esc>\) as a part of your prompt. The last part of your prompt must be the same as the string you specified for the search portion of the title. Once this is set up, screen will use the title-escape-sequence to clear the previous command name and get ready for the next command. Then, when a newline is received from the shell, a search is made for the end of the prompt. If found, it will grab the first word after the matched string and use it as the command name. If the command name begins with '!', '%', or '^', screen will use the first word on the following line (if found) in preference to the just-found name. This helps csh users get better command names when using job control or history recall commands.

Here are some .screenrc examples:

    screen -t top 2 nice top

Adding this line to your .screenrc would start a nice'd version of the "top" command in window 2, named "top" rather than "nice".

    shelltitle '> |csh'
    screen 1

These commands would start a shell with the given shelltitle. The title specified is an auto-title that would expect the prompt and the typed command to look something like the following:

    /usr/joe/src/dir> trn

(it looks for the command name after the '> '). The window status would show the name "trn" while the command was running, and revert to "csh" upon completion.

    bind R screen -t '% |root:' su

Having this command in your .screenrc would bind the key sequence "C-a R" to the "su" command and give it an auto-title name of "root:".
For this auto-title to work, the screen could look something like this:

    % !em
    emacs file.c

Here the user typed the csh history command "!em", which ran the previously entered "emacs" command. The window status would show "root:emacs" during the execution of the command, and revert to simply "root:" at its completion.

    bind o title
    bind E title ""
    bind u title (unknown)

The first binding doesn't have any arguments, so it would prompt you for a title when you type "C-a o". The second binding would clear an auto-title's current setting (C-a E). The third binding would set the current window's title to "(unknown)" (C-a u).

One thing to keep in mind when adding a null title-escape-sequence to your prompt is that some shells (like the csh) count all the non-control characters as part of the prompt's length. If these invisible characters aren't a multiple of 8, then backspacing over a tab will result in an incorrect display. One way to get around this is to use a prompt like this:

    set prompt='^[[0000m^[k^[\% '

The escape-sequence "<esc>[0000m" not only normalizes the character attributes, but all the zeros round the length of the invisible characters up to 8. Bash users will probably want to echo the escape sequence in the PROMPT_COMMAND:

    PROMPT_COMMAND='printf "\033k\033\134"'

(I used "\134" to output a `\' because of a bug in bash v1.04.)

THE VIRTUAL TERMINAL
Each window in a screen session emulates a VT100 terminal, with some extra functions added. The VT100 emulator is hard-coded; no other terminal types can be emulated. Usually screen tries to emulate as much of the VT100/ANSI standard as possible. But if your terminal lacks certain capabilities, the emulation may not be complete. In these cases screen has to tell the applications that some of the features are missing. This is no problem on machines using termcap, because screen can use the $TERMCAP variable to customize the standard screen termcap.
But if you do an rlogin on another machine, or your machine supports only terminfo, this method fails. Because of this, screen offers a way to deal with these cases. Here is how it works:

When screen tries to figure out a terminal name for itself, it first looks for an entry named "screen.<term>", where <term> is the contents of your $TERM variable. If no such entry exists, screen tries "screen" (or "screen-w" if the terminal is wide (132 cols or more)). If even this entry cannot be found, "vt100" is used as a substitute.

The idea is that if you have a terminal which doesn't support an important feature (e.g. delete char or clear to EOS) you can build a new termcap/terminfo entry for screen (named "screen.<dumbterm>") in which this capability has been disabled. If this entry is installed on your machines, you are able to do an rlogin and still keep the correct termcap/terminfo entry. The terminal name is put in the $TERM variable of all new windows. Screen also sets the $TERMCAP variable reflecting the capabilities of the virtual terminal emulated. Note, however, that on machines using the terminfo database this variable has no effect. Furthermore, the variable $WINDOW is set to the window number of each window.

The actual set of capabilities supported by the virtual terminal depends on the capabilities supported by the physical terminal. If, for instance, the physical terminal does not support underscore mode, screen accordingly does not put the `us' and `ue' capabilities into the window's $TERMCAP variable. However, a minimum number of capabilities must be supported by a terminal in order to run screen; namely scrolling, clear screen, and direct cursor addressing (in addition, screen does not run on hardcopy terminals or on terminals that over-strike).

Also, you can customize the $TERMCAP value used by screen by using the "termcap" .screenrc command, or by defining the variable $SCREENCAP prior to startup.
When the latter is defined, its value will be copied verbatim into each window's $TERMCAP variable. This can either be the full terminal definition, or a filename where the terminal "screen" (and/or "screen-w") is defined. Note that screen honors the "terminfo" .screenrc command if the system uses the terminfo database rather than termcap. When the boolean `G0' capability is present in the termcap entry for the terminal on which screen has been called, the terminal emulation of screen supports multiple character sets. This allows an application to make use of, for instance, the VT100 graphics character set or national character sets. The following control functions from ISO 2022 are supported: lock shift G0 (SI), lock shift G1 (SO), lock shift G2, lock shift G3, single shift G2, and single shift G3. When a virtual terminal is created or reset, the ASCII character set is designated as G0 through G3. When the `G0' capability is present, screen evaluates the capabilities `S0', `E0', and `C0' if present. `S0' is the sequence the terminal uses to enable and start the graphics character set rather than SI. `E0' is the corresponding replacement for SO. `C0' gives a character by character translation string that is used during semi-graphics mode. This string is built like the `acsc' terminfo capability. When the `po' and `pf' capabilities are present in the terminal's termcap entry, applications running in a screen window can send output to the printer port of the terminal. This allows a user to have an application in one window sending output to a printer connected to the terminal, while all other windows are still active (the printer port is enabled and disabled again for each chunk of output). As a side-effect, programs running in different windows can send output to the printer simultaneously. Data sent to the printer is not displayed in the window. The info command displays a line starting `PRIN' while the printer is active. 
Screen maintains a hardstatus line for every window. If a window gets selected, the display's hardstatus will be updated to match the window's hardstatus line. If the display has no hardstatus, the line will be displayed as a standard screen message. The hardstatus line can be changed with the ANSI Application Program Command (APC): "ESC_<string>ESC\". As a convenience for xterm users, the sequence "ESC]0..2;<string>^G" is also accepted.

Some capabilities are only put into the $TERMCAP variable of the virtual terminal if they can be efficiently implemented by the physical terminal. For instance, `dl' (delete line) is only put into the $TERMCAP variable if the terminal supports either delete line itself or scrolling regions. Note that this may provoke confusion when the session is reattached on a different terminal, as the value of $TERMCAP cannot be modified by parent processes.

The "alternate screen" capability is not enabled by default. Set the altscreen .screenrc command to enable it.

The following is a list of control sequences recognized by screen. "(V)" and "(A)" indicate VT100-specific and ANSI- or ISO-specific functions, respectively.

ESC E                           Next Line
ESC D                           Index
ESC M                           Reverse Index
ESC H                           Horizontal Tab Set
ESC Z                           Send VT100 Identification String
ESC 7               (V)         Save Cursor and Attributes
ESC 8               (V)         Restore Cursor and Attributes
ESC [s              (A)         Save Cursor and Attributes
ESC [u              (A)         Restore Cursor and Attributes
ESC c                           Reset to Initial State
ESC g                           Visual Bell
ESC Pn p                        Cursor Visibility (97801)
    Pn = 6                      Invisible
    Pn = 7                      Visible
ESC =               (V)         Application Keypad Mode
ESC >               (V)         Numeric Keypad Mode
ESC # 8             (V)         Fill Screen with E's
ESC \               (A)         String Terminator
ESC ^               (A)         Privacy Message String (Message Line)
ESC !                           Global Message String (Message Line)
ESC k                           A.k.a. Definition String (sets the window title)
ESC P               (A)         Device Control String. Outputs a string directly to the host terminal without interpretation.
ESC _               (A)         Application Program Command (Hardstatus)
ESC ] 0 ; string ^G (A)         Operating System Command (Hardstatus, xterm title hack)
ESC ] 83 ; cmd ^G   (A)         Execute screen command. This only works if multi-user support is compiled into screen. The pseudo-user ":window:" is used to check the access control list. Use "addacl :window: -rwx #?" to create a user with no rights and allow only the needed commands.
Control-N           (A)         Lock Shift G1 (SO)
Control-O           (A)         Lock Shift G0 (SI)
ESC n               (A)         Lock Shift G2
ESC o               (A)         Lock Shift G3
ESC N               (A)         Single Shift G2
ESC O               (A)         Single Shift G3
ESC ( Pcs           (A)         Designate character set as G0
ESC ) Pcs           (A)         Designate character set as G1
ESC * Pcs           (A)         Designate character set as G2
ESC + Pcs           (A)         Designate character set as G3
ESC [ Pn ; Pn H                 Direct Cursor Addressing
ESC [ Pn ; Pn f                 same as above
ESC [ Pn J                      Erase in Display
    Pn = None or 0              From Cursor to End of Screen
    Pn = 1                      From Beginning of Screen to Cursor
    Pn = 2                      Entire Screen
ESC [ Pn K                      Erase in Line
    Pn = None or 0              From Cursor to End of Line
    Pn = 1                      From Beginning of Line to Cursor
    Pn = 2                      Entire Line
ESC [ Pn X                      Erase character
ESC [ Pn A                      Cursor Up
ESC [ Pn B                      Cursor Down
ESC [ Pn C                      Cursor Right
ESC [ Pn D                      Cursor Left
ESC [ Pn E                      Cursor next line
ESC [ Pn F                      Cursor previous line
ESC [ Pn G                      Cursor horizontal position
ESC [ Pn `                      same as above
ESC [ Pn d                      Cursor vertical position
ESC [ Ps ;...; Ps m             Select Graphic Rendition
    Ps = None or 0              Default Rendition
    Ps = 1                      Bold
    Ps = 2          (A)         Faint
    Ps = 3          (A)         Standout Mode (ANSI: Italicized)
    Ps = 4                      Underlined
    Ps = 5                      Blinking
    Ps = 7                      Negative Image
    Ps = 22         (A)         Normal Intensity
    Ps = 23         (A)         Standout Mode off (ANSI: Italicized off)
    Ps = 24         (A)         Not Underlined
    Ps = 25         (A)         Not Blinking
    Ps = 27         (A)         Positive Image
    Ps = 30         (A)         Foreground Black
    Ps = 31         (A)         Foreground Red
    Ps = 32         (A)         Foreground Green
    Ps = 33         (A)         Foreground Yellow
    Ps = 34         (A)         Foreground Blue
    Ps = 35         (A)         Foreground Magenta
    Ps = 36         (A)         Foreground Cyan
    Ps = 37         (A)         Foreground White
    Ps = 39         (A)         Foreground Default
    Ps = 40         (A)         Background Black
    Ps = ...
    Ps = 49         (A)         Background Default
ESC [ Pn g                      Tab Clear
    Pn = None or 0              Clear Tab at Current Position
    Pn = 3                      Clear All Tabs
ESC [ Pn ; Pn r     (V)         Set Scrolling Region
ESC [ Pn I          (A)         Horizontal Tab
ESC [ Pn Z          (A)         Backward Tab
ESC [ Pn L          (A)         Insert Line
ESC [ Pn M          (A)         Delete Line
ESC [ Pn @          (A)         Insert Character
ESC [ Pn P          (A)         Delete Character
ESC [ Pn S                      Scroll Scrolling Region Up
ESC [ Pn T                      Scroll Scrolling Region Down
ESC [ Pn ^                      same as above
ESC [ Ps ;...; Ps h             Set Mode
ESC [ Ps ;...; Ps l             Reset Mode
    Ps = 4          (A)         Insert Mode
    Ps = 20         (A)         Automatic Linefeed Mode
    Ps = 34                     Normal Cursor Visibility
    Ps = ?1         (V)         Application Cursor Keys
    Ps = ?3         (V)         Change Terminal Width to 132 columns
    Ps = ?5         (V)         Reverse Video
    Ps = ?6         (V)         Origin Mode
    Ps = ?7         (V)         Wrap Mode
    Ps = ?9                     X10 mouse tracking
    Ps = ?25        (V)         Visible Cursor
    Ps = ?47                    Alternate Screen (old xterm code)
    Ps = ?1000      (V)         VT200 mouse tracking
    Ps = ?1047                  Alternate Screen (new xterm code)
    Ps = ?1049                  Alternate Screen (new xterm code)
ESC [ 5 i           (A)         Start relay to printer (ANSI Media Copy)
ESC [ 4 i           (A)         Stop relay to printer (ANSI Media Copy)
ESC [ 8 ; Ph ; Pw t             Resize the window to `Ph' lines and `Pw' columns (SunView special)
ESC [ c                         Send VT100 Identification String
ESC [ x                         Send Terminal Parameter Report
ESC [ > c                       Send VT220 Secondary Device Attributes String
ESC [ 6 n                       Send Cursor Position Report

INPUT TRANSLATION
In order to do a full VT100 emulation, screen has to detect that a sequence of characters in the input stream was generated by a keypress on the user's keyboard and insert the VT100-style escape sequence. Screen has a very flexible way of doing this by making it possible to map arbitrary commands on arbitrary sequences of characters. For standard VT100 emulation the command will always insert a string in the input buffer of the window (see also the command stuff in the command table).
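For example, a key can be mapped to insert an arbitrary string with the stuff command; a minimal .screenrc sketch (the key and the string are illustrative):

```
# make the F1 key (termcap name k1) type "ls -l" followed by a
# carriage return (\015) into the current window
bindkey -k k1 stuff "ls -l\015"
```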
Because the sequences generated by a keypress can change after a reattach from a different terminal type, it is possible to bind commands to the termcap name of the keys. Screen will insert the correct binding after each reattach. See the bindkey command for further details on the syntax and examples.

Here is the table of the default key bindings. The fourth column shows the string that is inserted instead if the keyboard is switched into application mode.

Key name          Termcap name   Command      App mode
Cursor up         ku             \033[A       \033OA
Cursor down       kd             \033[B       \033OB
Cursor right      kr             \033[C       \033OC
Cursor left       kl             \033[D       \033OD
Function key 0    k0             \033[10~
Function key 1    k1             \033OP
Function key 2    k2             \033OQ
Function key 3    k3             \033OR
Function key 4    k4             \033OS
Function key 5    k5             \033[15~
Function key 6    k6             \033[17~
Function key 7    k7             \033[18~
Function key 8    k8             \033[19~
Function key 9    k9             \033[20~
Function key 10   k;             \033[21~
Function key 11   F1             \033[23~
Function key 12   F2             \033[24~
Home              kh             \033[1~
End               kH             \033[4~
Insert            kI             \033[2~
Delete            kD             \033[3~
Page up           kP             \033[5~
Page down         kN             \033[6~
Keypad 0          f0             0            \033Op
Keypad 1          f1             1            \033Oq
Keypad 2          f2             2            \033Or
Keypad 3          f3             3            \033Os
Keypad 4          f4             4            \033Ot
Keypad 5          f5             5            \033Ou
Keypad 6          f6             6            \033Ov
Keypad 7          f7             7            \033Ow
Keypad 8          f8             8            \033Ox
Keypad 9          f9             9            \033Oy
Keypad +          f+             +            \033Ok
Keypad -          f-             -            \033Om
Keypad *          f*             *            \033Oj
Keypad /          f/             /            \033Oo
Keypad =          fq             =            \033OX
Keypad .          f.             .            \033On
Keypad ,          f,             ,            \033Ol
Keypad enter      fe             \015         \033OM

SPECIAL TERMINAL CAPABILITIES
The following table describes all terminal capabilities that are recognized by screen and are not in the termcap(5) manual. You can place these capabilities in your termcap entries (in `/etc/termcap') or use them with the commands `termcap', `terminfo' and `termcapinfo' in your screenrc files. It is often not possible to place these capabilities in the terminfo database.

LP  (bool)  Terminal has VT100 style margins (`magic margins'). Note that this capability is obsolete because screen uses the standard 'xn' instead.
Z0  (str)   Change width to 132 columns.
Z1  (str)   Change width to 80 columns.
WS  (str)   Resize display. This capability has the desired width and height as arguments. SunView(tm) example: '\E[8;%d;%dt'.
NF  (bool)  Terminal doesn't need flow control. Send ^S and ^Q direct to the application. Same as 'flow off'. The opposite of this capability is 'nx'.
G0  (bool)  Terminal can deal with ISO 2022 font selection sequences.
S0  (str)   Switch charset 'G0' to the specified charset. Default is '\E(%.'.
E0  (str)   Switch charset 'G0' back to standard charset. Default is '\E(B'.
C0  (str)   Use the string as a conversion table for font '0'. See the 'ac' capability for more details.
CS  (str)   Switch cursor-keys to application mode.
CE  (str)   Switch cursor-keys back to normal mode.
AN  (bool)  Turn on autonuke. See the 'autonuke' command for more details.
OL  (num)   Set the output buffer limit. See the 'obuflimit' command for more details.
KJ  (str)   Set the encoding of the terminal. See the 'encoding' command for valid encodings.
AF  (str)   Change character foreground color in an ANSI conform way. This capability will almost always be set to '\E[3%dm' ('\E[3%p1%dm' on terminfo machines).
AB  (str)   Same as 'AF', but change background color.
AX  (bool)  Does understand ANSI set default fg/bg color (\E[39m / \E[49m).
XC  (str)   Describe a translation of characters to strings depending on the current font. More details follow in the next section.
XT  (bool)  Terminal understands special xterm sequences (OSC, mouse tracking).
C8  (bool)  Terminal needs bold to display high-intensity colors (e.g. Eterm).
TF  (bool)  Add missing capabilities to the termcap/info entry. (Set by default.)

CHARACTER TRANSLATION
Screen has a powerful mechanism to translate characters to arbitrary strings depending on the current font and terminal type.
Use this feature if you want to work with a common standard character set (say ISO 8859-1, latin1) even on terminals that scatter the more unusual characters over several national language font pages.

Syntax:

    XC=<charset-mapping>{,,<charset-mapping>}
    <charset-mapping> := <designator><template>{,<mapping>}
    <mapping> := <char-to-be-mapped><template-arg>

The things in braces may be repeated any number of times.

A <charset-mapping> tells screen how to map characters in font <designator> ('B': Ascii, 'A': UK, 'K': German, etc.) to strings. Every <mapping> describes to what string a single character will be translated. A template mechanism is used, as most of the time the codes have a lot in common (for example strings to switch to and from another charset). Each occurrence of '%' in <template> gets substituted with the <template-arg> specified together with the character. If your strings are not similar at all, then use '%' as a template and place the full string in <template-arg>. A quoting mechanism was added to make it possible to use a real '%'. The '\' character quotes the special characters '\', '%', and ','.

Here is an example:

    termcap hp700 'XC=B\E(K%\E(B,\304[,\326\\\\,\334]'

This tells screen how to translate ISO latin1 (charset 'B') upper case umlaut characters on a hp700 terminal that has a German charset. '\304' gets translated to '\E(K[\E(B' and so on. Note that this line gets parsed *three* times before the internal lookup table is built; therefore a lot of quoting is needed to create a single '\'.

Another extension was added to allow more emulation: if a mapping translates the unquoted '%' char, it will be sent to the terminal whenever screen switches to the corresponding <designator>. In this special case the template is assumed to be just '%', because the charset switch sequence and the character mappings normally haven't much in common.
This example shows one use of the extension: termcap xterm 'XC=K%,%\E(B,[\304,\\\\\326,]\334' Here, a part of the German ('K') charset is emulated on an xterm. If screen has to change to the 'K' charset, '\E(B' will be sent to the terminal, i.e. the ASCII charset is used instead. The template is just '%', so the mapping is straightforward: '[' to '\304', '\' to '\326', and ']' to '\334'. ENVIRONMENT top COLUMNS Number of columns on the terminal (overrides termcap entry). HOME Directory in which to look for .screenrc. LINES Number of lines on the terminal (overrides termcap entry). LOCKPRG Screen lock program. NETHACKOPTIONS Turns on nethack option. PATH Used for locating programs to run. SCREENCAP For customizing a terminal's TERMCAP value. SCREENDIR Alternate socket directory. SCREENRC Alternate user screenrc file. SHELL Default shell program for opening windows (default "/bin/sh"). See also "shell" .screenrc command. STY Alternate socket name. SYSTEM_SCREENRC Alternate system screenrc file. TERM Terminal name. TERMCAP Terminal description. WINDOW Window number of a window (at creation time). FILES top .../screen-4.?.??/etc/screenrc .../screen-4.?.??/etc/etcscreenrc Examples in the screen distribution package for private and global initialization files. $SYSTEM_SCREENRC /usr/local/etc/screenrc screen initialization commands $SCREENRC $HOME/.screenrc Read in after /usr/local/etc/screenrc $SCREENDIR/S-<login> /local/screens/S-<login> Socket directories (default) /usr/tmp/screens/S-<login> Alternate socket directories. <socket directory>/.termcap Written by the "termcap" output function /usr/tmp/screens/screen-exchange or /tmp/screen-exchange screen `interprocess communication buffer' hardcopy.[0-9] Screen images created by the hardcopy function screenlog.[0-9] Output log files created by the log function /usr/lib/terminfo/?/* or /etc/termcap Terminal capability databases /etc/utmp Login records $LOCKPRG Program that locks a terminal. 
SEE ALSO top termcap(5), utmp(5), vi(1), captoinfo(1), tic(1) AUTHORS top Originally created by Oliver Laumann. For a long time maintained and developed by Juergen Weigert, Michael Schroeder, Micah Cowan and Sadrul Habib Chowdhury. Since 2015 maintained and developed by Amadeusz Slawinski <amade@asmblr.net> and Alexander Naumov <alexander_naumov@opensuse.org>. COPYLEFT top Copyright (c) 2018-2022 Alexander Naumov <alexander_naumov@opensuse.org> Amadeusz Slawinski <amade@asmblr.net> Copyright (c) 2015-2017 Juergen Weigert <jnweiger@immd4.informatik.uni-erlangen.de> Alexander Naumov <alexander_naumov@opensuse.org> Amadeusz Slawinski <amade@asmblr.net> Copyright (c) 2010-2015 Juergen Weigert <jnweiger@immd4.informatik.uni-erlangen.de> Sadrul Habib Chowdhury <sadrul@users.sourceforge.net> Copyright (c) 2008, 2009 Juergen Weigert <jnweiger@immd4.informatik.uni-erlangen.de> Michael Schroeder <mlschroe@immd4.informatik.uni-erlangen.de> Micah Cowan <micah@cowan.name> Sadrul Habib Chowdhury <sadrul@users.sourceforge.net> Copyright (C) 1993-2003 Juergen Weigert <jnweiger@immd4.informatik.uni-erlangen.de> Michael Schroeder <mlschroe@immd4.informatik.uni-erlangen.de> Copyright (C) 1987 Oliver Laumann This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program (see the file COPYING); if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA CONTRIBUTORS top Eric S. 
Raymond <esr@thyrsus.com>, Thomas Renninger <treen@suse.com>, Axel Beckert <abe@deuxchevaux.org>, Ken Beal <kbeal@amber.ssd.csd.harris.com>, Rudolf Koenig <rfkoenig@immd4.informatik.uni-erlangen.de>, Toerless Eckert <eckert@immd4.informatik.uni-erlangen.de>, Wayne Davison <davison@borland.com>, Patrick Wolfe <pat@kai.com, kailand!pat>, Bart Schaefer <schaefer@cse.ogi.edu>, Nathan Glasser <nathan@brokaw.lcs.mit.edu>, Larry W. Virden <lvirden@cas.org>, Howard Chu <hyc@hanauma.jpl.nasa.gov>, Tim MacKenzie <tym@dibbler.cs.monash.edu.au>, Markku Jarvinen <mta@{cc,cs,ee}.tut.fi>, Marc Boucher <marc@CAM.ORG>, Doug Siebert <dsiebert@isca.uiowa.edu>, Ken Stillson <stillson@tsfsrv.mitre.org>, Ian Frechett <frechett@spot.Colorado.EDU>, Brian Koehmstedt <bpk@gnu.ai.mit.edu>, Don Smith <djs6015@ultb.isc.rit.edu>, Frank van der Linden <vdlinden@fwi.uva.nl>, Martin Schweikert <schweik@cpp.ob.open.de>, David Vrona <dave@sashimi.lcu.com>, E. Tye McQueen <tye%spillman.UUCP@uunet.uu.net>, Matthew Green <mrg@eterna.com.au>, Christopher Williams <cgw@pobox.com>, Matt Mosley <mattm@access.digex.net>, Gregory Neil Shapiro <gshapiro@wpi.WPI.EDU>, Johannes Zellner <johannes@zellner.org>, Pablo Averbuj <pablo@averbuj.com>. VERSION top This is version 4.8.0. Its roots are a merge of a custom version 2.3PR7 by Wayne Davison and several enhancements to Oliver Laumann's version 2.0. Note that all versions numbered 2.x are copyright by Oliver Laumann. AVAILABILITY top The latest official release of screen is available via anonymous ftp from ftp.gnu.org/gnu/screen/ or any other GNU distribution site. The home site of screen is savannah.gnu.org/projects/screen/. If you want to help, send a note to screen-devel@gnu.org. BUGS top `dm' (delete mode) and `xs' are not handled correctly (they are ignored). `xn' is treated as a magic-margin indicator. Screen has no clue about double-high or double-wide characters. But this is the only area where vttest is allowed to fail. 
It is not possible to change the environment variable $TERMCAP when reattaching under a different terminal type. The support of terminfo based systems is very limited. Adding extra capabilities to $TERMCAP may not have any effects. Screen does not make use of hardware tabs. Screen must be installed as set-uid with owner root on most systems in order to be able to correctly change the owner of the tty device file for each window. Special permission may also be required to write the file "/etc/utmp". Entries in "/etc/utmp" are not removed when screen is killed with SIGKILL. This will cause some programs (like "w" or "rwho") to advertise that a user is logged on who really isn't. Screen may give a strange warning when your tty has no utmp entry. When the modem line was hung up, screen may not automatically detach (or quit) unless the device driver is configured to send a HANGUP signal. To detach a screen session use the -D or -d command line option. If a password is set, the command line options -d and -D still detach a session without asking. Both "breaktype" and "defbreaktype" change the break generating method used by all terminal devices. The first should change a window specific setting, where the latter should change only the default for new windows. When attaching to a multiuser session, the user's .screenrc file is not sourced. Each user's personal settings have to be included in the .screenrc file from which the session is booted, or have to be changed manually. A weird imagination is most useful to gain full advantage of all the features. Send bug-reports, fixes, enhancements, t-shirts, money, beer & pizza to screen-devel@gnu.org. COLOPHON top This page is part of the screen (screen manager) project. Information about the project can be found at http://www.gnu.org/software/screen/. If you have a bug report for this manual page, see https://savannah.gnu.org/bugs/?func=additem&group=screen. 
This page was obtained from the project's upstream Git repository https://git.savannah.gnu.org/git/screen.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-08-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 4th Berkeley Distribution Feb 2017 SCREEN(1) Pages that refer to this page: curs_termcap(3x), logind.conf(5), tmpfiles.d(5), user_caps(5), pty(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# screen\n\n> Hold a session open on a remote server. Manage multiple windows with a single SSH connection.\n> See also `tmux` and `zellij`.\n> More information: <https://manned.org/screen>.\n\n- Start a new screen session:\n\n`screen`\n\n- Start a new named screen session:\n\n`screen -S {{session_name}}`\n\n- Start a new daemon and log the output to `screenlog.x`:\n\n`screen -dmLS {{session_name}} {{command}}`\n\n- Show open screen sessions:\n\n`screen -ls`\n\n- Reattach to an open screen:\n\n`screen -r {{session_name}}`\n\n- Detach from inside a screen:\n\n`<Ctrl> + A, D`\n\n- Kill the current screen session:\n\n`<Ctrl> + A, K`\n\n- Kill a detached screen:\n\n`screen -X -S {{session_name}} quit`\n
script
script(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training script(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SIGNALS | ENVIRONMENT | NOTES | HISTORY | BUGS | SEE ALSO | REPORTING BUGS | AVAILABILITY SCRIPT(1) User Commands SCRIPT(1) NAME top script - make typescript of terminal session SYNOPSIS top script [options] [file] DESCRIPTION top script makes a typescript of everything on your terminal session. The terminal data are stored in raw form to the log file and information about timing to another (optional) structured log file. The timing log file is necessary to replay the session later by scriptreplay(1) and to store additional information about the session. Since version 2.35, script supports multiple streams and allows the logging of input and output to separate files or all to one file. This version also supports a new timing file which records additional information. The command scriptreplay --summary then provides all the information. If the argument file or option --log-out file is given, script saves the dialogue in this file. If no filename is given, the dialogue is saved in the file typescript. Note that logging input using --log-in or --log-io may record security-sensitive information as the log file contains all terminal session input (e.g., passwords) independently of the terminal echo flag setting. OPTIONS top Below, the size argument may be followed by the multiplicative suffixes KiB (=1024), MiB (=1024*1024), and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB"), or the suffixes KB (=1000), MB (=1000*1000), and so on for GB, TB, PB, EB, ZB and YB. -a, --append Append the output to file or to typescript, retaining the prior contents. -c, --command command Run the command rather than an interactive shell. This makes it easy for a script to capture the output of a program that behaves differently when its stdout is not a tty. 
-E, --echo when This option controls the ECHO flag for the slave end of the session's pseudoterminal. The supported modes are always, never, or auto. The default is auto: in this case, ECHO is enabled for the pseudoterminal slave; if the current standard input is a terminal, ECHO is disabled for it to prevent double echo; if the current standard input is not a terminal (for example pipe: echo date | script) then keeping ECHO enabled for the pseudoterminal slave enables the standard input data to be viewed on screen while being recorded to the session log simultaneously. Note that 'never' mode affects the content of the session output log, because the user's input is not repeated on output. -e, --return Return the exit status of the child process. Uses the same format as bash termination on signal termination (i.e., exit status is 128 + the signal number). The exit status of the child process is always stored in the typescript file too. -f, --flush Flush output after each write. This is nice for telecooperation: one person does mkfifo foo; script -f foo, and another can supervise in real-time what is being done using cat foo. Note that flush has an impact on performance; it's possible to use SIGUSR1 to flush logs on demand. --force Allow the default output file typescript to be a hard or symbolic link. The command will follow a symbolic link. -B, --log-io file Log input and output to the same file. Note, this option makes sense only if --log-timing is also specified, otherwise it's impossible to separate output and input streams from the log file. -I, --log-in file Log input to the file. The log output is disabled if only --log-in is specified. Use this logging functionality carefully as it logs all input, including input when the terminal has the echo flag disabled (for example, password inputs). -O, --log-out file Log output to the file. The default is to log output to the file with name typescript if the option --log-out or --log-in is not given. 
The log output is disabled if only --log-in is specified. -T, --log-timing file Log timing information to the file. Two timing file formats are supported now. The classic format is used when only one stream (input or output) logging is enabled. The multi-stream format is used on --log-io or when --log-in and --log-out are used together. See also --logging-format. -m, --logging-format format Force use of advanced or classic timing log format. The default is the classic format when only output is logged, and the advanced format when input as well as output logging is requested. Classic format The timing log contains two fields, separated by a space. The first field indicates how much time elapsed since the previous output. The second field indicates how many characters were output this time. Advanced (multi-stream) format The first field is an entry type identifier ('Input', 'Output', 'Header', 'Signal'). The second field is how much time elapsed since the previous entry, and the rest of the entry is type-specific data. -o, --output-limit size Limit the size of the typescript and timing files to size and stop the child process after this size is exceeded. The calculated file size does not include the start and done messages that the script command prepends and appends to the child process output. Due to buffering, the resulting output file might be larger than the specified value. -q, --quiet Be quiet (do not write start and done messages to standard output). -t[file], --timing[=file] Output timing data to standard error, or to file when given. This option is deprecated in favour of --log-timing where the file argument is not optional. -h, --help Display help text and exit. -V, --version Print version and exit. SIGNALS top Upon receiving SIGUSR1, script immediately flushes the output files. ENVIRONMENT top The following environment variable is utilized by script: SHELL If the variable SHELL exists, the shell forked by script will be that shell. 
If SHELL is not set, the Bourne shell is assumed. (Most shells set this variable automatically). NOTES top The script ends when the forked shell exits (a control-D for the Bourne shell (sh(1p)), and exit, logout or control-d (if ignoreeof is not set) for the C-shell, csh(1)). Certain interactive commands, such as vi(1), create garbage in the typescript file. script works best with commands that do not manipulate the screen; the results are meant to emulate a hardcopy terminal. It is not recommended to run script in non-interactive shells. The inner shell of script is always interactive, and this could lead to unexpected results. If you use script in the shell initialization file, you have to avoid entering an infinite loop. You can, for example, use the .profile file, which is read by login shells only: if test -t 0 ; then script exit fi You should also avoid use of script in command pipes, as script can read more input than you would expect. HISTORY top The script command appeared in 3.0BSD. BUGS top script places everything in the log file, including linefeeds and backspaces. This is not what the naive user expects. script is primarily designed for interactive terminal sessions. When stdin is not a terminal (for example: echo foo | script), then the session can hang, because the interactive shell within the script session misses EOF and script has no clue when to close the session. See the NOTES section for more information. SEE ALSO top csh(1) (for the history mechanism), scriptreplay(1), scriptlive(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The script command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. 
If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 SCRIPT(1) Pages that refer to this page: scriptlive(1), scriptreplay(1), pty(7), e2fsck(8)
# script\n\n> Record all terminal output to a file.\n> More information: <https://manned.org/script>.\n\n- Record a new session to a file named `typescript` in the current directory:\n\n`script`\n\n- Record a new session to a custom filepath:\n\n`script {{path/to/session.out}}`\n\n- Record a new session, appending to an existing file:\n\n`script -a {{path/to/session.out}}`\n\n- Record timing information (data is output to `stderr`):\n\n`script -t 2> {{path/to/timingfile}}`\n
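The `--command`/`-c` and `--quiet`/`-q` options described above make it possible to capture one program's terminal output without an interactive session. A minimal sketch, assuming a util-linux `script`; the `/tmp` paths are arbitrary choices for this example:

```shell
# Record one command non-interactively: -q suppresses the start/done
# messages on stdout, and -c runs the command inside the pseudoterminal
# instead of spawning an interactive shell.
script -q -c 'echo captured by script' /tmp/demo.typescript </dev/null >/dev/null

# The raw log contains everything the command wrote to its terminal.
grep 'captured by script' /tmp/demo.typescript
```

Because `script` allocates a pseudoterminal, the recorded command sees a tty on stdout, which is exactly the "behaves differently when its stdout is not a tty" case the manual mentions.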
scriptreplay
scriptreplay(1) - Linux manual page scriptreplay(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | AUTHORS | COPYRIGHT | SEE ALSO | REPORTING BUGS | AVAILABILITY SCRIPTREPLAY(1) User Commands SCRIPTREPLAY(1) NAME top scriptreplay - play back typescripts, using timing information SYNOPSIS top scriptreplay [options] [-t] timingfile [typescript [divisor]] DESCRIPTION top This program replays a typescript, using timing information to ensure that output happens in the same rhythm as it originally appeared when the script was recorded. The replay simply displays the information again; the programs that were run when the typescript was being recorded are not run again. Since the same information is simply being displayed, scriptreplay is only guaranteed to work properly if run on the same type of terminal the typescript was recorded on. Otherwise, any escape characters in the typescript may be interpreted differently by the terminal to which scriptreplay is sending its output. The timing information is what script(1) outputs to the file specified by --log-timing. By default, the typescript to display is assumed to be named typescript, but other filenames may be specified, as the second parameter or with option --log-out. If the third parameter or --divisor is specified, it is used as a speed-up multiplier. For example, a speed-up of 2 makes scriptreplay go twice as fast, and a speed-down of 0.1 makes it go ten times slower than the original session. OPTIONS top -I, --log-in file File containing script's terminal input. -O, --log-out file File containing script's terminal output. -B, --log-io file File containing script's terminal output and input. -t, --timing file File containing script's timing output. This option overrides old-style arguments. -T, --log-timing file This is an alias for -t, maintained for compatibility with script(1) command-line options. 
-s, --typescript file File containing script's terminal output. Deprecated alias to --log-out. This option overrides old-style arguments. -c, --cr-mode mode Specifies how to use the CR (0x0D, carriage return) character from log files. The default mode is auto: in this case, CR is replaced with a line break in the stdin log, because otherwise scriptreplay would overwrite the same line. The other modes are never and always. -d, --divisor number Speed up the replay by this factor. The argument is a floating-point number. It's called divisor because it divides the timings by this factor. This option overrides old-style arguments. -m, --maxdelay number Set the maximum delay between updates to number of seconds. The argument is a floating-point number. This can be used to avoid long pauses in the typescript replay. --summary Display details about the session recorded in the specified timing file and exit. The session has to be recorded using advanced format (see script(1) option --logging-format for more details). -x, --stream type Forces scriptreplay to print only the specified stream. The supported stream types are in, out, signal, or info. This option is recommended for multi-stream logs (e.g., --log-io) in order to print only specified data. -h, --help Display help text and exit. -V, --version Print version and exit. EXAMPLES top % script --log-timing file.tm --log-out script.out Script started, file is script.out % ls <etc, etc> % exit Script done, file is script.out % scriptreplay --log-timing file.tm --log-out script.out AUTHORS top The original scriptreplay program was written by Joey Hess <joey@kitenet.net>. The program was re-written in C by James Youngman <jay@gnu.org> and Karel Zak <kzak@redhat.com>. COPYRIGHT top Copyright 2008 James Youngman Copyright 2008-2019 Karel Zak This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
Released under the GNU General Public License version 2 or later. SEE ALSO top script(1), scriptlive(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The scriptreplay command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 SCRIPTREPLAY(1) Pages that refer to this page: script(1), scriptlive(1)
# scriptreplay\n\n> Replay a typescript created by the `script` command to `stdout`.\n> More information: <https://manned.org/scriptreplay>.\n\n- Replay a typescript at the speed it was recorded:\n\n`scriptreplay {{path/to/timingfile}} {{path/to/typescript}}`\n\n- Replay a typescript at double the original speed:\n\n`scriptreplay {{path/to/timingfile}} {{path/to/typescript}} 2`\n\n- Replay a typescript at half the original speed:\n\n`scriptreplay {{path/to/timingfile}} {{path/to/typescript}} 0.5`\n
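The record-then-replay cycle described above can be sketched end to end. This assumes a util-linux 2.35+ `script` (for `--log-timing`/`--log-out`); the `/tmp` file names are arbitrary, and the large divisor just makes the replay finish almost immediately:

```shell
# Record a command together with classic timing information.
script -q --log-timing /tmp/demo.tm --log-out /tmp/demo.out \
    -c 'echo replayed output' </dev/null >/dev/null

# Replay the session; -d 1000 divides all recorded delays by 1000.
scriptreplay --log-timing /tmp/demo.tm --log-out /tmp/demo.out -d 1000
```

Only the recorded bytes are written again; the `echo` itself is not re-executed, which is why replaying is safe even for sessions that ran destructive commands.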
sdiff
sdiff(1) - Linux manual page sdiff(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SDIFF(1) User Commands SDIFF(1) NAME top sdiff - side-by-side merge of file differences SYNOPSIS top sdiff [OPTION]... FILE1 FILE2 DESCRIPTION top Side-by-side merge of differences between FILE1 and FILE2. Mandatory arguments to long options are mandatory for short options too. -o, --output=FILE operate interactively, sending output to FILE -i, --ignore-case consider upper- and lower-case to be the same -E, --ignore-tab-expansion ignore changes due to tab expansion -Z, --ignore-trailing-space ignore white space at line end -b, --ignore-space-change ignore changes in the amount of white space -W, --ignore-all-space ignore all white space -B, --ignore-blank-lines ignore changes whose lines are all blank -I, --ignore-matching-lines=RE ignore changes whose lines all match RE --strip-trailing-cr strip trailing carriage return on input -a, --text treat all files as text -w, --width=NUM output at most NUM (default 130) print columns -l, --left-column output only the left column of common lines -s, --suppress-common-lines do not output common lines -t, --expand-tabs expand tabs to spaces in output --tabsize=NUM tab stops at every NUM (default 8) print columns -d, --minimal try hard to find a smaller set of changes -H, --speed-large-files assume large files, many scattered small changes --diff-program=PROGRAM use PROGRAM to compare files --help display this help and exit -v, --version output version information and exit If a FILE is '-', read standard input. Exit status is 0 if inputs are the same, 1 if different, 2 if trouble. AUTHOR top Written by Thomas Lord. 
REPORTING BUGS top Report bugs to: bug-diffutils@gnu.org GNU diffutils home page: <https://www.gnu.org/software/diffutils/> General help using GNU software: <https://www.gnu.org/gethelp/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cmp(1), diff(1), diff3(1) The full documentation for sdiff is maintained as a Texinfo manual. If the info and sdiff programs are properly installed at your site, the command info sdiff should give you access to the complete manual. COLOPHON top This page is part of the diffutils (GNU diff utilities) project. Information about the project can be found at http://savannah.gnu.org/projects/diffutils/. If you have a bug report for this manual page, send it to bug-diffutils@gnu.org. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/diffutils.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-09-20.) diffutils 3.10.207-774b December 2023 SDIFF(1) Pages that refer to this page: cmp(1), diff(1), diff3(1)
# sdiff\n\n> Compare the differences between and optionally merge 2 files.\n> More information: <https://manned.org/sdiff>.\n\n- Compare 2 files:\n\n`sdiff {{path/to/file1}} {{path/to/file2}}`\n\n- Compare 2 files, ignoring all tabs and whitespace:\n\n`sdiff -W {{path/to/file1}} {{path/to/file2}}`\n\n- Compare 2 files, ignoring whitespace at the end of lines:\n\n`sdiff -Z {{path/to/file1}} {{path/to/file2}}`\n\n- Compare 2 files in a case-insensitive manner:\n\n`sdiff -i {{path/to/file1}} {{path/to/file2}}`\n\n- Compare and then merge, writing the output to a new file:\n\n`sdiff -o {{path/to/merged_file}} {{path/to/file1}} {{path/to/file2}}`\n
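A small sketch of the side-by-side output and the documented exit codes (0 when the inputs are the same, 1 when they differ), using throwaway files chosen for this example:

```shell
# Two files that differ on their middle line.
printf 'alpha\nbeta\ngamma\n' > /tmp/left.txt
printf 'alpha\nBETA\ngamma\n' > /tmp/right.txt

# -s suppresses common lines, so only the changed pair is printed,
# with the '|' change marker between the two columns.
sdiff -s /tmp/left.txt /tmp/right.txt

# Identical inputs exit 0, so this prints "same".
sdiff /tmp/left.txt /tmp/left.txt >/dev/null && echo same
```

The exit-status convention matches diff(1), which makes `sdiff` usable directly in shell conditionals.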
sed
sed(1) - Linux manual page sed(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMAND SYNOPSIS | REGULAR EXPRESSIONS | BUGS | AUTHOR | COPYRIGHT | SEE ALSO | COLOPHON SED(1) User Commands SED(1) NAME top sed - stream editor for filtering and transforming text SYNOPSIS top sed [-V] [--version] [--help] [-n] [--quiet] [--silent] [-l N] [--line-length=N] [-u] [--unbuffered] [-E] [-r] [--regexp-extended] [-e script] [--expression=script] [-f script-file] [--file=script-file] [script-if-no-other-script] [file...] DESCRIPTION top Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). While in some ways similar to an editor which permits scripted edits (such as ed), sed works by making only one pass over the input(s), and is consequently more efficient. But it is sed's ability to filter text in a pipeline which particularly distinguishes it from other types of editors. -n, --quiet, --silent suppress automatic printing of pattern space --debug annotate program execution -e script, --expression=script add the script to the commands to be executed -f script-file, --file=script-file add the contents of script-file to the commands to be executed --follow-symlinks follow symlinks when processing in place -i[SUFFIX], --in-place[=SUFFIX] edit files in place (makes backup if SUFFIX supplied) -l N, --line-length=N specify the desired line-wrap length for the `l' command --posix disable all GNU extensions. -E, -r, --regexp-extended use extended regular expressions in the script (for portability use POSIX -E). -s, --separate consider files as separate rather than as a single, continuous long stream. --sandbox operate in sandbox mode (disable e/r/w commands). 
-u, --unbuffered load minimal amounts of data from the input files and flush the output buffers more often -z, --null-data separate lines by NUL characters --help display this help and exit --version output version information and exit If no -e, --expression, -f, or --file option is given, then the first non-option argument is taken as the sed script to interpret. All remaining arguments are names of input files; if no input files are specified, then the standard input is read. GNU sed home page: <https://www.gnu.org/software/sed/>. General help using GNU software: <https://www.gnu.org/gethelp/>. E-mail bug reports to: <bug-sed@gnu.org>. COMMAND SYNOPSIS top This is just a brief synopsis of sed commands to serve as a reminder to those who already know sed; other documentation (such as the texinfo document) must be consulted for fuller descriptions. Zero-address ``commands'' : label Label for b and t commands. #comment The comment extends until the next newline (or the end of a -e script fragment). } The closing bracket of a { } block. Zero- or One- address commands = Print the current line number. a \ text Append text, which has each embedded newline preceded by a backslash. i \ text Insert text, which has each embedded newline preceded by a backslash. q [exit-code] Immediately quit the sed script without processing any more input, except that if auto-print is not disabled the current pattern space will be printed. The exit code argument is a GNU extension. Q [exit-code] Immediately quit the sed script without processing any more input. This is a GNU extension. r filename Append text read from filename. R filename Append a line read from filename. Each invocation of the command reads a line from the file. This is a GNU extension. Commands which accept address ranges { Begin a block of commands (end with a }). b label Branch to label; if label is omitted, branch to end of script. 
c \ text Replace the selected lines with text, which has each embedded newline preceded by a backslash. d Delete pattern space. Start next cycle. D If pattern space contains no newline, start a normal new cycle as if the d command was issued. Otherwise, delete text in the pattern space up to the first newline, and restart cycle with the resultant pattern space, without reading a new line of input. h H Copy/append pattern space to hold space. g G Copy/append hold space to pattern space. l List out the current line in a ``visually unambiguous'' form. l width List out the current line in a ``visually unambiguous'' form, breaking it at width characters. This is a GNU extension. n N Read/append the next line of input into the pattern space. p Print the current pattern space. P Print up to the first embedded newline of the current pattern space. s/regexp/replacement/ Attempt to match regexp against the pattern space. If successful, replace that portion matched with replacement. The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp. t label If a s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script. T label If no s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script. This is a GNU extension. w filename Write the current pattern space to filename. W filename Write the first line of the current pattern space to filename. This is a GNU extension. x Exchange the contents of the hold and pattern spaces. y/source/dest/ Transliterate the characters in the pattern space which appear in source to the corresponding character in dest. 
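A few one-liners make these commands concrete; the sample input below is invented for illustration:

```shell
# s///: & is the whole match, \1 the first \(...\) group
printf 'cat hat\n' | sed 's/\(c\)at/[\1]&/'
# -> [c]cat hat

# y///: one-for-one character transliteration
printf 'abc\n' | sed 'y/abc/xyz/'
# -> xyz

# h/G and the hold space: append all prior lines after the current one,
# printing only at the end -- this reverses the input (like tac)
printf '1\n2\n3\n' | sed -n '1!G;h;$p'
# -> 3, 2, 1, one per line
```

The last idiom is a classic use of the hold space; the texinfo manual walks through it cycle by cycle.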
Addresses Sed commands can be given with no addresses, in which case the command will be executed for all input lines; with one address, in which case the command will only be executed for input lines which match that address; or with two addresses, in which case the command will be executed for all input lines which match the inclusive range of lines starting from the first address and continuing to the second address. Three things to note about address ranges: the syntax is addr1,addr2 (i.e., the addresses are separated by a comma); the line which addr1 matched will always be accepted, even if addr2 selects an earlier line; and if addr2 is a regexp, it will not be tested against the line that addr1 matched. After the address (or address-range), and before the command, a ! may be inserted, which specifies that the command shall only be executed if the address (or address-range) does not match. The following address types are supported: number Match only the specified line number (which increments cumulatively across files, unless the -s option is specified on the command line). first~step Match every step'th line starting with line first. For example, ``sed -n 1~2p'' will print all the odd-numbered lines in the input stream, and the address 2~5 will match every fifth line, starting with the second. first can be zero; in this case, sed operates as if it were equal to step. (This is an extension.) $ Match the last line. /regexp/ Match lines matching the regular expression regexp. Matching is performed on the current pattern space, which can be modified with commands such as ``s///''. \cregexpc Match lines matching the regular expression regexp. The c may be any character. GNU sed also supports some special 2-address forms: 0,addr2 Start out in "matched first address" state, until addr2 is found. 
This is similar to 1,addr2, except that if addr2 matches the very first line of input the 0,addr2 form will be at the end of its range, whereas the 1,addr2 form will still be at the beginning of its range. This works only when addr2 is a regular expression. addr1,+N Will match addr1 and the N lines following addr1. addr1,~N Will match addr1 and the lines following addr1 until the next line whose input line number is a multiple of N. REGULAR EXPRESSIONS top POSIX.2 BREs should be supported, but they aren't completely because of performance problems. The \n sequence in a regular expression matches the newline character, and similarly for \a, \t, and other sequences. The -E option switches to using extended regular expressions instead; it has been supported for years by GNU sed, and is now included in POSIX. BUGS top E-mail bug reports to bug-sed@gnu.org. Also, please include the output of ``sed --version'' in the body of your report if at all possible. AUTHOR top Written by Jay Fenlason, Tom Lord, Ken Pizzini, Paolo Bonzini, Jim Meyering, and Assaf Gordon. This sed program was built with SELinux support. SELinux is enabled on this system. GNU sed home page: <https://www.gnu.org/software/sed/>. General help using GNU software: <https://www.gnu.org/gethelp/>. E-mail bug reports to: <bug-sed@gnu.org>. COPYRIGHT top Copyright 2022 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top awk(1), ed(1), grep(1), tr(1), perlre(1), sed.info, any of various books on sed, the sed FAQ (http://sed.sf.net/grabbag/tutorials/sedfaq.txt), http://sed.sf.net/grabbag/. The full documentation for sed is maintained as a Texinfo manual. If the info and sed programs are properly installed at your site, the command info sed should give you access to the complete manual. 
COLOPHON top This page is part of the sed (stream-oriented editor) project. Information about the project can be found at http://www.gnu.org/software/sed/. If you have a bug report for this manual page, send it to bug-sed@gnu.org. This page was obtained from the tarball sed-4.9.tar.gz fetched from https://www.gnu.org/software/sed/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU sed 4.9 November 2022 SED(1) Pages that refer to this page: gawk(1), grep(1), iostat2pcp(1), pmdaopenmetrics(1), pmlogrewrite(1), sheet2pcp(1), cpuset(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
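The address forms described in the manual above (GNU extensions included) can be exercised with a few quick experiments; the input data here is made up:

```shell
# first~step (GNU extension): every 2nd line starting at line 1
seq 6 | sed -n '1~2p'
# -> 1, 3, 5

# $: the last line only
seq 3 | sed -n '$p'
# -> 3

# /re1/,/re2/: inclusive range between two regexp matches
printf 'a\nBEGIN\nb\nEND\nc\n' | sed -n '/BEGIN/,/END/p'
# -> BEGIN, b, END

# 0,/re/ vs 1,/re/ when the regexp matches the very first line (GNU)
printf 'x\ny\n' | sed -n '0,/x/p'   # range closes on line 1: prints only x
printf 'x\ny\n' | sed -n '1,/x/p'   # /x/ is not re-tested on line 1: prints x and y
```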
# sed

> Edit text in a scriptable manner.
> See also: `awk`, `ed`.
> More information: <https://www.gnu.org/software/sed/manual/sed.html>.

- Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in all input lines and print the result to `stdout`:

`{{command}} | sed 's/apple/mango/g'`

- Replace all `apple` (extended regex) occurrences with `APPLE` (extended regex) in all input lines and print the result to `stdout`:

`{{command}} | sed -E 's/(apple)/\U\1/g'`

- Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in a specific file and overwrite the original file in place:

`sed -i 's/apple/mango/g' {{path/to/file}}`

- Execute a specific script [f]ile and print the result to `stdout`:

`{{command}} | sed -f {{path/to/script.sed}}`

- Print just the first line to `stdout`:

`{{command}} | sed -n '1p'`

- [d]elete the first line of a file:

`sed -i 1d {{path/to/file}}`

- [i]nsert a new line at the first line of a file:

`sed -i '1i\your new line text\' {{path/to/file}}`
semanage
semanage(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training semanage(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | AUTHOR | COLOPHON semanage(8) semanage(8) NAME top semanage - SELinux Policy Management tool SYNOPSIS top semanage {import,export,login,user,port,interface,module,node,fcontext,boolean,permissive,dontaudit,ibpkey,ibendport} ... positional arguments: import Import local customizations export Output local customizations login Manage login mappings between linux users and SELinux confined users user Manage SELinux confined users (Roles and levels for an SELinux user) port Manage network port type definitions interface Manage network interface type definitions module Manage SELinux policy modules node Manage network node type definitions fcontext Manage file context mapping definitions boolean Manage booleans to selectively enable functionality permissive Manage process type enforcement mode dontaudit Disable/Enable dontaudit rules in policy ibpkey Manage infiniband pkey type definitions ibendport Manage infiniband end port type definitions DESCRIPTION top semanage is used to configure certain elements of SELinux policy without requiring modification to or recompilation from policy sources. This includes the mapping from Linux usernames to SELinux user identities (which controls the initial security context assigned to Linux users when they login and bounds their authorized role set) as well as security context mappings for various kinds of objects, such as network ports, interfaces, infiniband pkeys and endports, and nodes (hosts) as well as the file context mapping. Note that the semanage login command deals with the mapping from Linux usernames (logins) to SELinux user identities, while the semanage user command deals with the mapping from SELinux user identities to authorized role sets. 
In most cases, only the former mapping needs to be adjusted by the administrator; the latter is principally defined by the base policy and usually does not require modification. OPTIONS top -h, --help List help information SEE ALSO top selinux(8), semanage-boolean(8), semanage-dontaudit(8), semanage-export(8), semanage-fcontext(8), semanage-import(8), semanage-interface(8), semanage-login(8), semanage-module(8), semanage-node(8), semanage-permissive(8), semanage-port(8), semanage-user(8), semanage-ibkey(8), semanage-ibendport(8) AUTHOR top This man page was written by Daniel Walsh <dwalsh@redhat.com> and Russell Coker <rcoker@redhat.com>. Examples by Thomas Bleher <ThomasBleher@gmx.de>. COLOPHON top This page is part of the selinux (Security-Enhanced Linux user-space libraries and tools) project. Information about the project can be found at https://github.com/SELinuxProject/selinux/wiki. If you have a bug report for this manual page, see https://github.com/SELinuxProject/selinux/wiki/Contributing. This page was obtained from the project's upstream Git repository https://github.com/SELinuxProject/selinux on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-05-11.)
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 20100223 semanage(8) Pages that refer to this page: customizable_types(5), semanage.conf(5), chcat(8), genhomedircon(8), sefcontext_compile(8), selinux(8), semanage-boolean(8), semanage-dontaudit(8), semanage-export(8), semanage-fcontext(8), semanage-ibendport(8), semanage-ibpkey(8), semanage-import(8), semanage-interface(8), semanage-login(8), semanage-module(8), semanage-node(8), semanage-permissive(8), semanage-port(8), semanage-user(8), sepolicy-network(8), setsebool(8), system-config-selinux(8), useradd(8), usermod(8)
# semanage

> SELinux Policy Management tool.
> More information: <https://manned.org/semanage>.

- Output local customizations:

`semanage -S {{store}} -o {{path/to/output_file}}`

- Take a set of commands from a specified file and load them in a single transaction:

`semanage -S {{store}} -i {{path/to/input_file}}`

- Manage booleans. Booleans allow the administrator to modify the confinement of processes based on the current configuration:

`semanage boolean -S {{store}} {{--delete|--modify|--list|--noheading|--deleteall}} {{-on|-off}} -F {{boolean|boolean_file}}`

- Manage policy modules:

`semanage module -S {{store}} {{--add|--delete|--list|--modify}} {{--enable|--disable}} {{module_name}}`

- Disable/Enable dontaudit rules in policy:

`semanage dontaudit -S {{store}} {{on|off}}`
semanage-fcontext
semanage-fcontext(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training semanage-fcontext(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLE | SEE ALSO | AUTHOR | COLOPHON semanage-fcontext(8) semanage-fcontext(8) NAME top semanage-fcontext - SELinux Policy Management file context tool SYNOPSIS top semanage fcontext [-h] [-n] [-N] [-S STORE] [ --add ( -t TYPE -f FTYPE -r RANGE -s SEUSER | -e EQUAL ) FILE_SPEC | --delete ( -t TYPE -f FTYPE | -e EQUAL ) FILE_SPEC | --deleteall | --extract | --list [-C] | --modify ( -t TYPE -f FTYPE -r RANGE -s SEUSER | -e EQUAL ) FILE_SPEC ] DESCRIPTION top semanage is used to configure certain elements of SELinux policy without requiring modification to or recompilation from policy sources. semanage fcontext is used to manage the default file system labeling on an SELinux system. This command maps file paths using regular expressions to SELinux labels. FILE_SPEC may contain either a fully qualified path, or a Perl compatible regular expression (PCRE), describing fully qualified path(s). The only PCRE flag in use is PCRE2_DOTALL, which causes a wildcard '.' to match anything, including a new line. Strings representing paths are processed as bytes (as opposed to Unicode), meaning that non-ASCII characters are not matched by a single wildcard. Note, that file context definitions specified using 'semanage fcontext' (i.e. local file context modifications stored in file_contexts.local) have higher priority than those specified in policy modules. This means that whenever a match for given file path is found in file_contexts.local, no other file context definitions are considered. Entries in file_contexts.local are processed from most recent one to the oldest, with first match being used (as opposed to the most specific match, which is used when matching other file context definitions). 
All regular expressions should therefore be as specific as possible, to avoid unintentionally impacting other parts of the filesystem. OPTIONS top -h, --help Show this help message and exit -n, --noheading Do not print heading when listing the specified object type -N, --noreload Do not reload policy after commit -C, --locallist List local customizations -S STORE, --store STORE Select an alternate SELinux Policy Store to manage -a, --add Add a record of the specified object type -d, --delete Delete a record of the specified object type -m, --modify Modify a record of the specified object type -l, --list List records of the specified object type -E, --extract Extract customizable commands, for use within a transaction -D, --deleteall Remove all local customizations -e EQUAL, --equal EQUAL Substitute target path with sourcepath when generating default label. This is used with fcontext. Requires source and target path arguments. The context labeling for the target subtree is made equivalent to that defined for the source. -f [{a,f,d,c,b,s,l,p}], --ftype [{a,f,d,c,b,s,l,p}] File Type. This is used with fcontext. Requires a file type as shown in the mode field by ls, e.g. use 'd' to match only directories or 'f' to match only regular files. The following file type options can be passed: f (regular file),d (directory),c (character device), b (block device),s (socket),l (symbolic link),p (named pipe). If you do not specify a file type, the file type will default to "all files". -s SEUSER, --seuser SEUSER SELinux user name -t TYPE, --type TYPE SELinux Type for the object -r RANGE, --range RANGE MLS/MCS Security Range (MLS/MCS Systems only) SELinux Range for SELinux login mapping defaults to the SELinux user record range. SELinux Range for SELinux user defaults to s0. EXAMPLE top Remember to run restorecon after you set the file context Add file-context httpd_sys_content_t for everything under /web # semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" 
# restorecon -R -v /web Substitute /home1 with /home when setting file context i.e. label everything under /home1 the same way /home is labeled # semanage fcontext -a -e /home /home1 # restorecon -R -v /home1 For home directories under top level directory, for example /disk6/home, execute the following commands. # semanage fcontext -a -t home_root_t "/disk6" # semanage fcontext -a -e /home /disk6/home # restorecon -R -v /disk6 SEE ALSO top selinux(8), semanage(8), restorecon(8), selabel_file(5) AUTHOR top This man page was written by Daniel Walsh <dwalsh@redhat.com> COLOPHON top This page is part of the selinux (Security-Enhanced Linux user-space libraries and tools) project. Information about the project can be found at https://github.com/SELinuxProject/selinux/wiki. If you have a bug report for this manual page, see https://github.com/SELinuxProject/selinux/wiki/Contributing. This page was obtained from the project's upstream Git repository https://github.com/SELinuxProject/selinux on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-05-11.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 20130617 semanage-fcontext(8) Pages that refer to this page: semanage(8)
# semanage fcontext

> Manage persistent SELinux security context rules on files/directories.
> See also: `semanage`, `restorecon`.
> More information: <https://manned.org/semanage-fcontext>.

- List all file labelling rules:

`sudo semanage fcontext --list`

- List all user-defined file labelling rules without headings:

`sudo semanage fcontext --list --locallist --noheading`

- Add a user-defined rule that labels any path which matches a PCRE regex:

`sudo semanage fcontext --add --type {{samba_share_t}} {{'/mnt/share(/.*)?'}}`

- Delete a user-defined rule using its PCRE regex:

`sudo semanage fcontext --delete {{'/mnt/share(/.*)?'}}`

- Relabel a directory recursively by applying the new rules:

`restorecon -R -v {{path/to/directory}}`
sendmail
sendmail(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXIT STATUS | SEE ALSO | AUTHORS | COLOPHON SENDMAIL(8) System Manager's Manual SENDMAIL(8) NAME top sendmail - a mail enqueuer for smtpd(8) SYNOPSIS top sendmail [-tv] [-F name] [-f from] to ... DESCRIPTION top The sendmail utility is a local enqueuer for the smtpd(8) daemon, compatible with mailwrapper(8). The message is read on standard input (stdin) until sendmail encounters an end-of-file. The enqueuer is not intended to be used directly to send mail, but rather via a frontend known as a mail user agent. Unless the optional -t flag is specified, one or more recipients must be specified on the command line. The options are as follows: -F name Set the sender's full name. -f from Set the sender's address. -t Read the message's To:, Cc:, and Bcc: fields for recipients. The Bcc: field will be deleted before sending. -v Enable verbose output. To maintain compatibility with Sendmail, Inc.'s implementation of sendmail, various other flags are accepted, but have no effect. EXIT STATUS top The sendmail utility exits 0 on success, and >0 if an error occurs. SEE ALSO top smtpctl(8), smtpd(8) AUTHORS top OpenSMTPD is primarily developed by Gilles Chehade, Eric Faurot, and Charles Longeau, with contributions from various OpenBSD hackers. It is distributed under the ISC license. This manpage was written by Ryan Kavanagh rak@debian.org for the Debian project and is distributed under the ISC license. COLOPHON top This page is part of the OpenSMTPD (a FREE implementation of the server-side SMTP protocol) project. Information about the project can be found at https://www.opensmtpd.org/. If you have a bug report for this manual page, see https://github.com/OpenSMTPD/OpenSMTPD/issues. This page was obtained from the project's upstream Git repository https://github.com/OpenSMTPD/OpenSMTPD.git on 2023-12-22.
(At that time, the date of the most recent commit that was found in the repository was 2023-12-05.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU October 23, 2015 SENDMAIL(8) Pages that refer to this page: sysexits.h(3head), boot(7), mailaddr(7), cron(8)
# sendmail

> Send email.
> More information: <https://manned.org/sendmail>.

- Send a message with the content of `message.txt` to the mail directory of local user `username`:

`sendmail {{username}} < {{message.txt}}`

- Send an email from you@yourdomain.com (assuming the mail server is configured for this) to test@gmail.com containing the message in `message.txt`:

`sendmail -f {{you@yourdomain.com}} {{test@gmail.com}} < {{message.txt}}`

- Send an email from you@yourdomain.com (assuming the mail server is configured for this) to test@gmail.com containing the file `file.zip`:

`sendmail -f {{you@yourdomain.com}} {{test@gmail.com}} < {{file.zip}}`
seq
seq(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training seq(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SEQ(1) User Commands SEQ(1) NAME top seq - print a sequence of numbers SYNOPSIS top seq [OPTION]... LAST seq [OPTION]... FIRST LAST seq [OPTION]... FIRST INCREMENT LAST DESCRIPTION top Print numbers from FIRST to LAST, in steps of INCREMENT. Mandatory arguments to long options are mandatory for short options too. -f, --format=FORMAT use printf style floating-point FORMAT -s, --separator=STRING use STRING to separate numbers (default: \n) -w, --equal-width equalize width by padding with leading zeroes --help display this help and exit --version output version information and exit If FIRST or INCREMENT is omitted, it defaults to 1. That is, an omitted INCREMENT defaults to 1 even when LAST is smaller than FIRST. The sequence of numbers ends when the sum of the current number and INCREMENT would become greater than LAST. FIRST, INCREMENT, and LAST are interpreted as floating point values. INCREMENT is usually positive if FIRST is smaller than LAST, and INCREMENT is usually negative if FIRST is greater than LAST. INCREMENT must not be 0; none of FIRST, INCREMENT and LAST may be NaN. FORMAT must be suitable for printing one argument of type 'double'; it defaults to %.PRECf if FIRST, INCREMENT, and LAST are all fixed point decimal numbers with maximum precision PREC, and to %g otherwise. AUTHOR top Written by Ulrich Drepper. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/seq> or available locally via: info '(coreutils) seq invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SEQ(1)
# seq

> Output a sequence of numbers to `stdout`.
> More information: <https://www.gnu.org/software/coreutils/seq>.

- Sequence from 1 to 10:

`seq 10`

- Every 3rd number from 5 to 20:

`seq 5 3 20`

- Separate the output with a space instead of a newline:

`seq -s " " 5 3 20`

- Format output width to a minimum of 4 digits padding with zeros as necessary:

`seq -f "%04g" 5 3 20`
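The FORMAT defaulting described in the manual page (a `%.PRECf` built from the most precise operand) is easy to verify; `LC_ALL=C` is forced here so the decimal point is a dot regardless of locale:

```shell
export LC_ALL=C   # keep the radix character a '.' in any locale

# 0.25 has two decimal places, so the implied format is %.2f
seq 1 0.25 1.5
# -> 1.00 1.25 1.50, one per line

# -w pads with leading zeros to a common width
seq -w 8 10
# -> 08 09 10

# -s replaces the newline separator
seq -s, 3
# -> 1,2,3
```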
set
set(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training set(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT SET(1P) POSIX Programmer's Manual SET(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top set set or unset options and positional parameters SYNOPSIS top set [-abCefhmnuvx] [-o option] [argument...] set [+abCefhmnuvx] [+o option] [argument...] set -- [argument...] set -o set +o DESCRIPTION top If no options or arguments are specified, set shall write the names and values of all shell variables in the collation sequence of the current locale. Each name shall start on a separate line, using the format: "%s=%s\n", <name>, <value> The value string shall be written with appropriate quoting; see the description of shell quoting in Section 2.2, Quoting. The output shall be suitable for reinput to the shell, setting or resetting, as far as possible, the variables that are currently set; read-only variables cannot be reset. When options are specified, they shall set or unset attributes of the shell, as described below. When arguments are specified, they cause positional parameters to be set or unset, as described below. Setting or unsetting attributes and positional parameters are not necessarily related actions, but they can be combined in a single invocation of set. 
The set special built-in shall support the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines except that options can be specified with either a leading <hyphen-minus> (meaning enable the option) or <plus-sign> (meaning disable it) unless otherwise specified. Implementations shall support the options in the following list in both their <hyphen-minus> and <plus-sign> forms. These options can also be specified as options to sh. -a When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of POSIX.1-2017, Section 4.23, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset. -b This option shall be supported if the implementation supports the User Portability Utilities option. It shall cause the shell to notify the user asynchronously of background job completions. The following message is written to standard error: "[%d]%c %s%s\n", <job-number>, <current>, <status>, <job-name> where the fields shall be as follows: <current> The character '+' identifies the job that would be used as a default for the fg or bg utilities; this job can also be specified using the job_id "%+" or "%%". The character '-' identifies the job that would become the default if the current default job were to exit; this job can also be specified using the job_id "%-". For other jobs, this field is a <space>. At most one job can be identified with '+' and at most one job can be identified with '-'.
If there is any suspended job, then the current job shall be a suspended job. If there are at least two suspended jobs, then the previous job also shall be a suspended job. <job-number> A number that can be used to identify the process group to the wait, fg, bg, and kill utilities. Using these utilities, the job can be identified by prefixing the job number with '%'. <status> Unspecified. <job-name> Unspecified. When the shell notifies the user a job has been completed, it may remove the job's process ID from the list of those known in the current shell execution environment; see Section 2.9.3.1, Examples. Asynchronous notification shall not be enabled by default. -C (Uppercase C.) Prevent existing files from being overwritten by the shell's '>' redirection operator (see Section 2.7.2, Redirecting Output); the ">|" redirection operator shall override this noclobber option for an individual file. -e When this option is on, when any command fails (for any of the reasons listed in Section 2.8.1, Consequences of Shell Errors or by returning an exit status greater than zero), the shell immediately shall exit, as if by executing the exit special built-in utility with no arguments, with the following exceptions: 1. The failure of any individual command in a multi- command pipeline shall not cause the shell to exit. Only the failure of the pipeline itself shall be considered. 2. The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last. 3. If the exit status of a compound command other than a subshell command was the result of a failure while -e was being ignored, then -e shall not apply to this command. This requirement applies to the shell environment and each subshell environment separately. 
For example, in: set -e; (false; echo one) | cat; echo two the false command causes the subshell to exit without executing echo one; however, echo two is executed because the exit status of the pipeline (false; echo one) | cat is zero. -f The shell shall disable pathname expansion. -h Locate and remember utilities invoked by functions as those functions are defined (the utilities are normally located when the function is executed). -m This option shall be supported if the implementation supports the User Portability Utilities option. All jobs shall be run in their own process groups. Immediately before the shell issues a prompt after completion of the background job, a message reporting the exit status of the background job shall be written to standard error. If a foreground job stops, the shell shall write a message to standard error to that effect, formatted as described by the jobs utility. In addition, if a job changes status other than exiting (for example, if it stops for input or output or is stopped by a SIGSTOP signal), the shell shall write a similar message immediately prior to writing the next prompt. This option is enabled by default for interactive shells. -n The shell shall read commands but does not execute them; this can be used to check for shell script syntax errors. An interactive shell may ignore this option. -o Write the current settings of the options to standard output in an unspecified format. +o Write the current option settings to standard output in a format that is suitable for reinput to the shell as commands that achieve the same options settings. -o option This option is supported if the system supports the User Portability Utilities option. It shall set various options, many of which shall be equivalent to the single option letters. The following values of option shall be supported: allexport Equivalent to -a. errexit Equivalent to -e. ignoreeof Prevent an interactive shell from exiting on end- of-file. 
This setting prevents accidental logouts when <control>D is entered. A user shall explicitly exit to leave the interactive shell. monitor Equivalent to -m. This option is supported if the system supports the User Portability Utilities option. noclobber Equivalent to -C (uppercase C). noglob Equivalent to -f. noexec Equivalent to -n. nolog Prevent the entry of function definitions into the command history; see Command History List. notify Equivalent to -b. nounset Equivalent to -u. verbose Equivalent to -v. vi Allow shell command line editing using the built- in vi editor. Enabling vi mode shall disable any other command line editing mode provided as an implementation extension. It need not be possible to set vi mode on for certain block-mode terminals. xtrace Equivalent to -x. -u When the shell tries to expand an unset parameter other than the '@' and '*' special parameters, it shall write a message to standard error and the expansion shall fail with the consequences specified in Section 2.8.1, Consequences of Shell Errors. -v The shell shall write its input to standard error as it is read. -x The shell shall write to standard error a trace for each command after it expands the command and before it executes it. It is unspecified whether the command that turns tracing off is traced. The default for all these options shall be off (unset) unless stated otherwise in the description of the option or unless the shell was invoked with them on; see sh. The remaining arguments shall be assigned in order to the positional parameters. The special parameter '#' shall be set to reflect the number of positional parameters. All positional parameters shall be unset before any new values are assigned. If the first argument is '-', the results are unspecified. 
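The noclobber (-C) behavior listed above can be checked with a small runnable sketch (POSIX sh assumed; the temporary file name comes from mktemp):

```shell
#!/bin/sh
tmp=$(mktemp)
printf 'original' > "$tmp"

set -C                      # noclobber on: plain '>' refuses existing files
( : > "$tmp" ) 2>/dev/null  # attempted clobber fails, inside a subshell
clobber_status=$?

printf 'forced' >| "$tmp"   # ">|" overrides noclobber for this one file
content=$(cat "$tmp")
set +C

rm -f "$tmp"
```

With noclobber set, `>>` appends and redirections that create new files still work; only truncation of an existing file via plain `>` is refused.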
The special argument "--" immediately following the set command name can be used to delimit the arguments if the first argument begins with '+' or '-', or to prevent inadvertent listing of all shell variables when there are no arguments. The command set -- without argument shall unset all positional parameters and set the special parameter '#' to zero. OPTIONS top See the DESCRIPTION. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top See the DESCRIPTION. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top 0 Successful completion. >0 An invalid option was specified, or an error occurred. CONSEQUENCES OF ERRORS top Default. The following sections are informative. APPLICATION USAGE top Application writers should avoid relying on set -e within functions. For example, in the following script: set -e start() { some_server echo some_server started successfully } start || echo >&2 some_server failed the -e setting is ignored within the function body (because the function is a command in an AND-OR list other than the last). Therefore, if some_server fails, the function carries on to echo "some_server started successfully", and the exit status of the function is zero (which means "some_server failed" is not output). EXAMPLES top Write out all variables and their values: set Set $1, $2, and $3 and set "$#" to 3: set c a b Turn on the -x and -v options: set -xv Unset all positional parameters: set -- Set $1 to the value of x, even if it begins with '-' or '+': set -- "$x" Set the positional parameters to the expansion of x, even if x expands with a leading '-' or '+': set -- $x RATIONALE top The set -- form is listed specifically in the SYNOPSIS even though this usage is implied by the Utility Syntax Guidelines. 
The explanation of this feature removes any ambiguity about whether the set -- form might be misinterpreted as being equivalent to set without any options or arguments. The functionality of this form has been adopted from the KornShell. In System V, set -- only unsets parameters if there is at least one argument; the only way to unset all parameters is to use shift. Using the KornShell version should not affect System V scripts because there should be no reason to issue it without arguments deliberately; if it were issued as, for example: set -- "$@" and there were in fact no arguments resulting from "$@", unsetting the parameters would have no result. The set + form in early proposals was omitted as being an unnecessary duplication of set alone and not widespread historical practice. The noclobber option was changed to allow set -C as well as the set -o noclobber option. The single-letter version was added so that the historical "$-" paradigm would not be broken; see Section 2.5.2, Special Parameters. The description of the -e option is intended to match the behavior of the 1988 version of the KornShell. The -h flag is related to command name hashing. See hash(1p). The following set flags were omitted intentionally with the following rationale: -k The -k flag was originally added by the author of the Bourne shell to make it easier for users of pre-release versions of the shell. In early versions of the Bourne shell the construct set name=value had to be used to assign values to shell variables. The problem with -k is that the behavior affects parsing, virtually precluding writing any compilers. To explain the behavior of -k, it is necessary to describe the parsing algorithm, which is implementation- defined. For example: set -k; echo name=value and: set -k echo name=value behave differently. The interaction with functions is even more complex. What is more, the -k flag is never needed, since the command line could have been reordered. 
-t The -t flag is hard to specify and almost never used. The only known use could be done with here-documents. Moreover, the behavior with ksh and sh differs. The reference page says that it exits after reading and executing one command. What is one command? If the input is date;date, sh executes both date commands while ksh does only the first. Consideration was given to rewriting set to simplify its confusing syntax. A specific suggestion was that the unset utility should be used to unset options instead of using the non- getopt()-able +option syntax. However, the conclusion was reached that the historical practice of using +option was satisfactory and that there was no compelling reason to modify such widespread historical practice. The -o option was adopted from the KornShell to address user needs. In addition to its generally friendly interface, -o is needed to provide the vi command line editing mode, for which historical practice yields no single-letter option name. (Although it might have been possible to invent such a letter, it was recognized that other editing modes would be developed and -o provides ample name space for describing such extensions.) Historical implementations are inconsistent in the format used for -o option status reporting. The +o format without an option- argument was added to allow portable access to the options that can be saved and then later restored using, for instance, a dot script. Historically, sh did trace the command set +x, but ksh did not. The ignoreeof setting prevents accidental logouts when the end- of-file character (typically <control>D) is entered. A user shall explicitly exit to leave the interactive shell. The set -m option was added to apply only to the UPE because it applies primarily to interactive use, not shell script applications. The ability to do asynchronous notification became available in the 1988 version of the KornShell. 
To have it occur, the user had to issue the command: trap "jobs -n" CLD The C shell provides two different levels of an asynchronous notification capability. The environment variable notify is analogous to what is done in set -b or set -o notify. When set, it notifies the user immediately of background job completions. When unset, this capability is turned off. The other notification ability comes through the built-in utility notify. The syntax is: notify [%job ... ] By issuing notify with no operands, it causes the C shell to notify the user asynchronously when the state of the current job changes. If given operands, notify asynchronously informs the user of changes in the states of the specified jobs. To add asynchronous notification to the POSIX shell, neither the KornShell extensions to trap, nor the C shell notify environment variable seemed appropriate (notify is not a proper POSIX environment variable name). The set -b option was selected as a compromise. The notify built-in was considered to have more functionality than was required for simple asynchronous notification. Historically, some shells applied the -u option to all parameters including $@ and $*. The standard developers felt that this was a misfeature since it is normal and common for $@ and $* to be used in shell scripts regardless of whether they were passed any arguments. Treating these uses as an error when no arguments are passed reduces the value of -u for its intended purpose of finding spelling mistakes in variable names and uses of unset positional parameters. FUTURE DIRECTIONS top None. 
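The -u semantics discussed above, including the exemption for the '@' and '*' special parameters, can be probed in child shells so the failures do not kill the probing script (POSIX sh assumed):

```shell
#!/bin/sh
# Expanding an unset variable under -u is fatal in the child shell.
sh -uc ': "${definitely_not_set_here}"' 2>/dev/null
unset_status=$?

# "$@" and "$*" are exempt, even with no positional parameters.
sh -uc ': "$@" "$*"'
special_status=$?
```

The first child exits non-zero with a diagnostic on standard error; the second succeeds, which is why -u stays useful for catching misspelled variable names without breaking ordinary `"$@"` forwarding.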
SEE ALSO top Section 2.14, Special Built-In Utilities, hash(1p) The Base Definitions volume of POSIX.12017, Section 4.23, Variable Assignment, Section 12.2, Utility Syntax Guidelines COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 SET(1P) Pages that refer to this page: pathchk(1p), sh(1p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# set\n\n> Toggle shell options or set the values of positional parameters.\n> More information: <https://manned.org/set.1posix>.\n\n- Display the names and values of shell variables:\n\n`set`\n\n- Export newly initialized variables to child processes:\n\n`set -a`\n\n- Write formatted messages to `stderr` when jobs finish:\n\n`set -b`\n\n- Write and edit text in the command line with `vi`-like keybindings (e.g. `yy`):\n\n`set -o {{vi}}`\n\n- Exit the shell when (some) commands fail:\n\n`set -e`\n
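The `set -e` entry above hides the subtlety spelled out in the standard's example: a failure of one element inside a pipeline does not trip -e; only failure of the pipeline itself does. A runnable version of that example (POSIX sh):

```shell
#!/bin/sh
out=$(
  set -e
  # `false` aborts the inner subshell before `echo one`, but the
  # pipeline's exit status is cat's (zero), so -e does not stop
  # the commands that follow.
  (false; echo one) | cat
  echo two
)
```

Only "two" is captured: "one" never prints because the subshell exited at `false`, yet the outer `set -e` shell carried on because the pipeline as a whole succeeded.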
setcap
setcap(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training setcap(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | EXIT CODE | REPORTING BUGS | SEE ALSO | COLOPHON SETCAP(8) System Manager's Manual SETCAP(8) NAME top setcap - set file capabilities SYNOPSIS top setcap [-q] [-n <rootuid>] [-v] {capabilities|-|-r} filename [ ... capabilitiesN fileN ] DESCRIPTION top In the absence of the -v (verify) option setcap sets the capabilities of each specified filename to the capabilities specified. The optional -n <rootuid> argument can be used to set the file capability for use only in a user namespace with this root user ID owner. The -v option is used to verify that the specified capabilities are currently associated with the file. If -v and -n are supplied, the -n <rootuid> argument is also verified. The capabilities are specified in the form described in cap_from_text(3). The special capability string, '-', can be used to indicate that capabilities are read from the standard input. In such cases, the capability set is terminated with a blank line. The special capability string, '-r', is used to remove a capability set from a file. Note, setting an empty capability set is not the same as removing it. An empty set can be used to guarantee a file is not executed with privilege in spite of the fact that the prevailing ambient+inheritable sets would otherwise bestow capabilities on executed binaries. The '-f' flag is used to force completion even when it is in some way considered an invalid operation. This can affect '-r' and setting file capabilities the kernel will not be able to make sense of. The -q flag is used to make the program less verbose in its output. EXIT CODE top The setcap program will exit with a 0 exit code if successful. On failure, the exit code is 1. 
REPORTING BUGS top Please report bugs via: https://bugzilla.kernel.org/buglist.cgi?component=libcap&list_id=1090757 SEE ALSO top capsh(1), cap_from_text(3), cap_get_file(3), capabilities(7), user_namespaces(7), captree(8), getcap(8) and getpcaps(8). COLOPHON top This page is part of the libcap (capabilities commands and library) project. Information about the project can be found at https://git.kernel.org/pub/scm/libs/libcap/libcap.git/. If you have a bug report for this manual page, send it to morgan@kernel.org (please put "libcap" in the Subject line). This page was obtained from the project's upstream Git repository https://git.kernel.org/pub/scm/libs/libcap/libcap.git/ on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-06-24.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org 2020-08-29 SETCAP(8) Pages that refer to this page: capsh(1), cap_iab(3), libcap(3), capabilities(7), getcap(8), getpcaps(8)
# setcap\n\n> Set capabilities of specified file.\n> See also: `tldr getcap`.\n> More information: <https://manned.org/setcap>.\n\n- Set capability `cap_net_raw` (to use RAW and PACKET sockets) for a given file:\n\n`setcap '{{cap_net_raw}}' {{path/to/file}}`\n\n- Set multiple capabilities on a file (`ep` behind the capability means "effective permitted"):\n\n`setcap '{{cap_dac_read_search,cap_sys_tty_config+ep}}' {{path/to/file}}`\n\n- Remove all capabilities from a file:\n\n`setcap -r {{path/to/file}}`\n\n- Verify that the specified capabilities are currently associated with the specified file:\n\n`setcap -v '{{cap_net_raw}}' {{path/to/file}}`\n\n- The optional `-n root_uid` argument can be used to set the file capability for use only in a user namespace with this root user ID owner:\n\n`setcap -n {{root_uid}} '{{cap_net_admin}}' {{path/to/file}}`\n
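A guarded sketch of the first example above. Setting a file capability requires root and the libcap tools, so this script degrades to a report-and-skip when either is missing; `./demo-server` is a hypothetical binary name, not anything from the page:

```shell
#!/bin/sh
BIN=./demo-server   # hypothetical target binary; substitute your own
cap_result=skipped

if ! command -v setcap >/dev/null 2>&1; then
    echo "setcap not installed; skipping"
elif [ "$(id -u)" -ne 0 ] || [ ! -f "$BIN" ]; then
    echo "need root and an existing $BIN; skipping"
else
    # Grant the file the ability to use RAW/PACKET sockets, then verify.
    if setcap 'cap_net_raw+ep' "$BIN" &&
       setcap -v 'cap_net_raw+ep' "$BIN"; then
        cap_result=ok
    fi
fi
```

The `setcap -v` step mirrors the last tldr entry: it exits 0 only when the file's current capability set matches the text given.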
setfacl
setfacl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training setfacl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 | AUTHOR | SEE ALSO | COLOPHON SETFACL(1) Access Control Lists SETFACL(1) NAME top setfacl - set file access control lists SYNOPSIS top setfacl [-bkndRLPvh] [{-m|-x} acl_spec] [{-M|-X} acl_file] file ... setfacl --restore={file|-} DESCRIPTION top This utility sets Access Control Lists (ACLs) of files and directories. On the command line, a sequence of commands is followed by a sequence of files (which in turn can be followed by another sequence of commands, ...). The -m and -x options expect an ACL on the command line. Multiple ACL entries are separated by comma characters (`,'). The -M and -X options read an ACL from a file or from standard input. The ACL entry format is described in Section ACL ENTRIES. The --set and --set-file options set the ACL of a file or a directory. The previous ACL is replaced. ACL entries for this operation must include permissions. The -m (--modify) and -M (--modify-file) options modify the ACL of a file or directory. ACL entries for this operation must include permissions. The -x (--remove) and -X (--remove-file) options remove ACL entries. It is not an error to remove an entry which does not exist. Only ACL entries without the perms field are accepted as parameters, unless POSIXLY_CORRECT is defined. When reading from files using the -M and -X options, setfacl accepts the output getfacl produces. There is at most one ACL entry per line. After a Pound sign (`#'), everything up to the end of the line is treated as a comment. If setfacl is used on a file system which does not support ACLs, setfacl operates on the file mode permission bits. 
If the ACL does not fit completely in the permission bits, setfacl modifies the file mode permission bits to reflect the ACL as closely as possible, writes an error message to standard error, and returns with an exit status greater than 0. PERMISSIONS The file owner and processes capable of CAP_FOWNER are granted the right to modify ACLs of a file. This is analogous to the permissions required for accessing the file mode. (On current Linux systems, root is the only user with the CAP_FOWNER capability.) OPTIONS top -b, --remove-all Remove all extended ACL entries. The base ACL entries of the owner, group and others are retained. -k, --remove-default Remove the Default ACL. If no Default ACL exists, no warnings are issued. -n, --no-mask Do not recalculate the effective rights mask. The default behavior of setfacl is to recalculate the ACL mask entry, unless a mask entry was explicitly given. The mask entry is set to the union of all permissions of the owning group, and all named user and group entries. (These are exactly the entries affected by the mask entry). --mask Do recalculate the effective rights mask, even if an ACL mask entry was explicitly given. (See the -n option.) -d, --default All operations apply to the Default ACL. Regular ACL entries in the input set are promoted to Default ACL entries. Default ACL entries in the input set are discarded. (A warning is issued if that happens). --restore={file|-} Restore a permission backup created by `getfacl -R' or similar. All permissions of a complete directory subtree are restored using this mechanism. If the input contains owner comments or group comments, setfacl attempts to restore the owner and owning group. If the input contains flags comments (which define the setuid, setgid, and sticky bits), setfacl sets those three bits accordingly; otherwise, it clears them. This option cannot be mixed with other options except `--test'. If the file specified is '-', then it will be read from standard input. 
--test Test mode. Instead of changing the ACLs of any files, the resulting ACLs are listed. -R, --recursive Apply operations to all files and directories recursively. This option cannot be mixed with `--restore'. -L, --logical Logical walk, follow symbolic links to directories. The default behavior is to follow symbolic link arguments, and skip symbolic links encountered in subdirectories. Only effective in combination with -R. This option cannot be mixed with `--restore'. -P, --physical Physical walk, do not follow symbolic links to directories. This also skips symbolic link arguments. Only effective in combination with -R. This option cannot be mixed with `--restore'. -v, --version Print the version of setfacl and exit. -h, --help Print help explaining the command line options. -- End of command line options. All remaining parameters are interpreted as file names, even if they start with a dash. - If the file name parameter is a single dash, setfacl reads a list of files from standard input. ACL ENTRIES The setfacl utility recognizes the following ACL entry formats (blanks inserted for clarity): [d[efault]:] [u[ser]:]uid [:perms] Permissions of a named user. Permissions of the file owner if uid is empty. [d[efault]:] g[roup]:gid [:perms] Permissions of a named group. Permissions of the owning group if gid is empty. [d[efault]:] m[ask][:] [:perms] Effective rights mask [d[efault]:] o[ther][:] [:perms] Permissions of others. Whitespace between delimiter characters and non-delimiter characters is ignored. Proper ACL entries including permissions are used in modify and set operations. (options -m, -M, --set and --set-file). Entries without the perms field are used for deletion of entries (options -x and -X). For uid and gid you can specify either a name or a number. Character literals may be specified with a backslash followed by the 3-digit octal digits corresponding to the ASCII code for the character (e.g., \101 for 'A'). 
If the name contains a literal backslash followed by 3 digits, the backslash must be escaped (i.e., \\). The perms field is a combination of characters that indicate the read (r), write (w), execute (x) permissions. Dash characters in the perms field (-) are ignored. The character X stands for the execute permission if the file is a directory or already has execute permission for some user. Alternatively, the perms field can define the permissions numerically, as a bit-wise combination of read (4), write (2), and execute (1). Zero perms fields or perms fields that only consist of dashes indicate no permissions. AUTOMATICALLY CREATED ENTRIES Initially, files and directories contain only the three base ACL entries for the owner, the group, and others. There are some rules that need to be satisfied in order for an ACL to be valid: * The three base entries cannot be removed. There must be exactly one entry of each of these base entry types. * Whenever an ACL contains named user entries or named group objects, it must also contain an effective rights mask. * Whenever an ACL contains any Default ACL entries, the three Default ACL base entries (default owner, default group, and default others) must also exist. * Whenever a Default ACL contains named user entries or named group objects, it must also contain a default effective rights mask. To help the user ensure these rules, setfacl creates entries from existing entries under the following conditions: * If an ACL contains named user or named group entries, and no mask entry exists, a mask entry containing the same permissions as the group entry is created. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description). 
* If a Default ACL entry is created, and the Default ACL contains no owner, owning group, or others entry, a copy of the ACL owner, owning group, or others entry is added to the Default ACL. * If a Default ACL contains named user entries or named group entries, and no mask entry exists, a mask entry containing the same permissions as the default Default ACL's group entry is added. Unless the -n option is given, the permissions of the mask entry are further adjusted to include the union of all permissions affected by the mask entry. (See the -n option description). EXAMPLES top Granting an additional user read access setfacl -m u:lisa:r file Revoking write access from all groups and all named users (using the effective rights mask) setfacl -m m::rx file Removing a named group entry from a file's ACL setfacl -x g:staff file Copying the ACL of one file to another getfacl file1 | setfacl --set-file=- file2 Copying the access ACL into the Default ACL getfacl --access dir | setfacl -d -M- dir CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 top If the environment variable POSIXLY_CORRECT is defined, the default behavior of setfacl changes as follows: All non-standard options are disabled. The ``default:'' prefix is disabled. The -x and -X options also accept permission fields (and ignore them). AUTHOR top Andreas Gruenbacher, <andreas.gruenbacher@gmail.com>. Please send your bug reports, suggested features and comments to the above address. SEE ALSO top getfacl(1), chmod(1), umask(1), acl(5) COLOPHON top This page is part of the acl (manipulating access control lists) project. Information about the project can be found at http://savannah.nongnu.org/projects/acl. If you have a bug report for this manual page, see http://savannah.nongnu.org/bugs/?group=acl. This page was obtained from the project's upstream Git repository git://git.savannah.nongnu.org/acl.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-01.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org May 2000 ACL File Utilities SETFACL(1) Pages that refer to this page: chacl(1), getfacl(1), nfs4_setfacl(1), tmpfiles.d(5), systemd-journald.service(8)
# setfacl\n\n> Set file access control lists (ACL).\n> More information: <https://manned.org/setfacl>.\n\n- [M]odify ACL of a file for user with read and write access:\n\n`setfacl --modify u:{{username}}:rw {{path/to/file_or_directory}}`\n\n- [M]odify [d]efault ACL of a file for all users:\n\n`setfacl --modify --default u::rw {{path/to/file_or_directory}}`\n\n- Remove ACL of a file for a user:\n\n`setfacl --remove u:{{username}} {{path/to/file_or_directory}}`\n\n- Remove all ACL entries of a file:\n\n`setfacl --remove-all {{path/to/file_or_directory}}`\n
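A guarded sketch of the automatic-mask rule from the man page: adding a named-user entry makes setfacl create an effective-rights mask entry on its own. It needs the acl tools and an ACL-capable filesystem, and skips cleanly otherwise; `u:root:r` is used only because the root user always exists:

```shell
#!/bin/sh
tmp=$(mktemp)
acl_demo=skipped

if command -v setfacl >/dev/null 2>&1 &&
   setfacl -m u:root:r "$tmp" 2>/dev/null; then
    # The named entry forces a mask entry into the ACL;
    # getfacl -c prints the ACL without the header comments.
    getfacl -c "$tmp" 2>/dev/null | grep -q '^mask::' && acl_demo=ok
fi

rm -f "$tmp"
```

Passing `-n` in the `setfacl -m` call would suppress the recalculation of that mask entry's permissions, as the option descriptions above explain.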
setsid
setsid(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training setsid(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY SETSID(1) User Commands SETSID(1) NAME top setsid - run a program in a new session SYNOPSIS top setsid [options] program [arguments] DESCRIPTION top setsid runs a program in a new session. The command calls fork(2) if already a process group leader. Otherwise, it executes a program in the current process. This default behavior can be overridden with the --fork option. OPTIONS top -c, --ctty Set the controlling terminal to the current one. -f, --fork Always create a new process. -w, --wait Wait for the execution of the program to end, and return the exit status of this program as the exit status of setsid. -V, --version Display version information and exit. -h, --help Display help text and exit. AUTHORS top Rick Sladkey <jrs@world.std.com> SEE ALSO top setsid(2) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The setsid command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 SETSID(1) Pages that refer to this page: setsid(2)
# setsid\n\n> Run a program in a new session if the calling process is not a process group leader.\n> The created session is by default not controlled by the current terminal.\n> More information: <https://manned.org/setsid>.\n\n- Run a program in a new session:\n\n`setsid {{program}}`\n\n- Run a program in a new session discarding the resulting output and error:\n\n`setsid {{program}} > /dev/null 2>&1`\n\n- Run a program creating a new process:\n\n`setsid --fork {{program}}`\n\n- Return the exit code of a program as the exit code of setsid when the program exits:\n\n`setsid --wait {{program}}`\n\n- Run a program in a new session setting the current terminal as the controlling terminal:\n\n`setsid --ctty {{program}}`\n
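A guarded sketch of the `--wait` behavior described above: setsid normally exits as soon as the program is launched, but with `--wait` it reports the program's own exit status (util-linux setsid assumed; the script skips if the tool is absent):

```shell
#!/bin/sh
if command -v setsid >/dev/null 2>&1; then
    # The child prints to stdout and exits 7; --wait makes setsid
    # wait for it and propagate that status.
    out=$(setsid --wait sh -c 'echo detached; exit 7')
    sid_status=$?
else
    sid_status=skip
fi
```

Without `--wait` the exit status observed here would usually be setsid's own 0, not the program's, which is why scripts that care about the child's result need the flag.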
sftp
sftp(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sftp(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | INTERACTIVE COMMANDS | SEE ALSO | COLOPHON SFTP(1) General Commands Manual SFTP(1) NAME top sftp — OpenSSH secure file transfer SYNOPSIS top sftp [-46AaCfNpqrv] [-B buffer_size] [-b batchfile] [-c cipher] [-D sftp_server_command] [-F ssh_config] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] [-P port] [-R num_requests] [-S program] [-s subsystem | sftp_server] [-X sftp_option] destination DESCRIPTION top sftp is a file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. The destination may be specified either as [user@]host[:path] or as a URI in the form sftp://[user@]host[:port][/path]. If the destination includes a path and it is not a directory, sftp will retrieve files automatically if a non-interactive authentication method is used; otherwise it will do so after successful interactive authentication. If no path is specified, or if the path is a directory, sftp will log in to the specified host and enter interactive command mode, changing to the remote directory if one was specified. An optional trailing slash can be used to force the path to be interpreted as a directory. Since the destination formats use colon characters to delimit host names from path names or port numbers, IPv6 addresses must be enclosed in square brackets to avoid ambiguity. The options are as follows: -4 Forces sftp to use IPv4 addresses only. -6 Forces sftp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -a Attempt to continue interrupted transfers rather than overwriting existing partial or complete copies of files. 
If the partial contents differ from those being transferred, then the resultant file is likely to be corrupt. -B buffer_size Specify the size of the buffer that sftp uses when transferring files. Larger buffers require fewer round trips at the cost of higher memory consumption. The default is 32768 bytes. -b batchfile Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction, it should be used in conjunction with non- interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details). A batchfile of - may be used to indicate standard input. sftp will abort if any of the following commands fail: get, put, reget, reput, rename, ln, rm, mkdir, chdir, ls, lchdir, copy, cp, chmod, chown, chgrp, lpwd, df, symlink, and lmkdir. Termination on error can be suppressed on a command by command basis by prefixing the command with a - character (for example, -rm /tmp/blah*). Echo of the command may be suppressed by prefixing the command with a @ character. These two prefixes may be combined in any order, for example -@ls /bsd. -C Enables compression (via ssh's -C flag). -c cipher Selects the cipher to use for encrypting the data transfers. This option is directly passed to ssh(1). -D sftp_server_command Connect directly to a local sftp server (rather than via ssh(1)). A command and arguments may be specified, for example "/path/sftp-server -el debug3". This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh(1). This option is directly passed to ssh(1). -f Requests that files be flushed to disk immediately after transfer. When uploading files, this feature is only enabled if the server implements the "fsync@openssh.com" extension. -i identity_file Selects the file from which the identity (private key) for public key authentication is read. This option is directly passed to ssh(1). 
-J destination Connect to the target host by first making an sftp connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -N Disables quiet mode, e.g. to override the implicit quiet mode set by the -b flag. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate sftp command-line flag. For example, to specify an alternate port use: sftp -oPort=24. For full details of the options listed below, and their possible values, see ssh_config(5). AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. -p Preserves modification times, access times, and modes from the original files transferred. 
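The -J and -o flags above map directly onto ssh_config(5) directives; a sketch (the host names are hypothetical) of the same jump-host and port setup expressed as per-host configuration, so a bare `sftp inner.example.com` picks it up:

```
# ~/.ssh/config -- equivalent of: sftp -J jump.example.com -o Port=2222 inner.example.com
Host inner.example.com
    ProxyJump jump.example.com
    Port 2222
```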
-q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R num_requests Specify how many requests may be outstanding at any one time. Increasing this may slightly improve file transfer speed but will increase memory usage. The default is 64 outstanding requests. -r Recursively copy entire directories when uploading and downloading. Note that sftp does not follow symbolic links encountered in the tree traversal. -S program Name of the program to use for the encrypted connection. The program must understand ssh(1) options. -s subsystem | sftp_server Specifies the SSH2 subsystem or the path for an sftp server on the remote host. A path is useful when the remote sshd(8) does not have an sftp subsystem configured. -v Raise logging level. This option is also passed to ssh. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used. INTERACTIVE COMMANDS top Once in interactive mode, sftp understands a set of commands similar to those of ftp(1). Commands are case insensitive. Pathnames that contain spaces must be enclosed in quotes. Any special characters contained within pathnames that are recognized by glob(3) must be escaped with backslashes (\). bye Quit sftp. cd [path] Change remote directory to path. If path is not specified, then change directory to the one the session started in. chgrp [-h] grp path Change group of file path to grp. path may contain glob(7) characters and may match multiple files. grp must be a numeric GID. If the -h flag is specified, then symlinks will not be followed. 
Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. chmod [-h] mode path Change permissions of file path to mode. path may contain glob(7) characters and may match multiple files. If the -h flag is specified, then symlinks will not be followed. Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. chown [-h] own path Change owner of file path to own. path may contain glob(7) characters and may match multiple files. own must be a numeric UID. If the -h flag is specified, then symlinks will not be followed. Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. copy oldpath newpath Copy remote file from oldpath to newpath. Note that this is only supported by servers that implement the "copy-data" extension. cp oldpath newpath Alias to copy command. df [-hi] [path] Display usage information for the filesystem holding the current directory (or path if specified). If the -h flag is specified, the capacity information will be displayed using "human-readable" suffixes. The -i flag requests display of inode information in addition to capacity information. This command is only supported on servers that implement the statvfs@openssh.com extension. exit Quit sftp. get [-afpR] remote-path [local-path] Retrieve the remote-path and store it on the local machine. If the local path name is not specified, it is given the same name it has on the remote machine. remote-path may contain glob(7) characters and may match multiple files. If it does and local-path is specified, then local-path must specify a directory. If the -a flag is specified, then attempt to resume partial transfers of existing files. Note that resumption assumes that any partial copy of the local file matches the remote copy. If the remote file contents differ from the partial local copy then the resultant file is likely to be corrupt. 
If the -f flag is specified, then fsync(2) will be called after the file transfer has completed to flush the file to disk. If the -p flag is specified, then full file permissions and access times are copied too. If the -R flag is specified then directories will be copied recursively. Note that sftp does not follow symbolic links when performing recursive transfers. help Display help text. lcd [path] Change local directory to path. If path is not specified, then change directory to the local user's home directory. lls [ls-options [path]] Display local directory listing of either path or current directory if path is not specified. ls-options may contain any flags supported by the local system's ls(1) command. path may contain glob(7) characters and may match multiple files. lmkdir path Create local directory specified by path. ln [-s] oldpath newpath Create a link from oldpath to newpath. If the -s flag is specified the created link is a symbolic link, otherwise it is a hard link. lpwd Print local working directory. ls [-1afhlnrSt] [path] Display a remote directory listing of either path or the current directory if path is not specified. path may contain glob(7) characters and may match multiple files. The following flags are recognized and alter the behaviour of ls accordingly: -1 Produce single columnar output. -a List files beginning with a dot (.). -f Do not sort the listing. The default sort order is lexicographical. -h When used with a long format option, use unit suffixes: Byte, Kilobyte, Megabyte, Gigabyte, Terabyte, Petabyte, and Exabyte in order to reduce the number of digits to four or fewer using powers of 2 for sizes (K=1024, M=1048576, etc.). -l Display additional details including permissions and ownership information. -n Produce a long listing with user and group information presented numerically. -r Reverse the sort order of the listing. -S Sort the listing by file size. -t Sort the listing by last modification time. lumask umask Set local umask to umask. 
mkdir path Create remote directory specified by path. progress Toggle display of progress meter. put [-afpR] local-path [remote-path] Upload local-path and store it on the remote machine. If the remote path name is not specified, it is given the same name it has on the local machine. local-path may contain glob(7) characters and may match multiple files. If it does and remote-path is specified, then remote-path must specify a directory. If the -a flag is specified, then attempt to resume partial transfers of existing files. Note that resumption assumes that any partial copy of the remote file matches the local copy. If the local file contents differ from the remote partial copy then the resultant file is likely to be corrupt. If the -f flag is specified, then a request will be sent to the server to call fsync(2) after the file has been transferred. Note that this is only supported by servers that implement the "fsync@openssh.com" extension. If the -p flag is specified, then full file permissions and access times are copied too. If the -R flag is specified then directories will be copied recursively. Note that sftp does not follow symbolic links when performing recursive transfers. pwd Display remote working directory. quit Quit sftp. reget [-fpR] remote-path [local-path] Resume download of remote-path. Equivalent to get with the -a flag set. reput [-fpR] local-path [remote-path] Resume upload of local-path. Equivalent to put with the -a flag set. rename oldpath newpath Rename remote file from oldpath to newpath. rm path Delete remote file specified by path. rmdir path Remove remote directory specified by path. symlink oldpath newpath Create a symbolic link from oldpath to newpath. version Display the protocol version. !command Execute command in local shell. ! Escape to local shell. ? Synonym for help. SEE ALSO top ftp(1), ls(1), scp(1), ssh(1), ssh-add(1), ssh-keygen(1), ssh_config(5), glob(7), sftp-server(8), sshd(8) T. Ylonen and S. 
Lehtinen, SSH File Transfer Protocol, draft-ietf-secsh-filexfer-00.txt, January 2001, work in progress material. COLOPHON top This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. If you have a bug report for this manual page, see http://www.openssh.com/report.html. This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU December 16, 2022 SFTP(1) Pages that refer to this page: sshfs(1) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# sftp\n\n> Secure File Transfer Program.\n> Interactive program to copy files between hosts over SSH.\n> For non-interactive file transfers, see `scp` or `rsync`.\n> More information: <https://manned.org/sftp>.\n\n- Connect to a remote server and enter an interactive command mode:\n\n`sftp {{remote_user}}@{{remote_host}}`\n\n- Connect using an alternate port:\n\n`sftp -P {{remote_port}} {{remote_user}}@{{remote_host}}`\n\n- Connect using a predefined host (in `~/.ssh/config`):\n\n`sftp {{host}}`\n\n- Transfer remote file to the local system:\n\n`get {{/path/remote_file}}`\n\n- Transfer local file to the remote system:\n\n`put {{/path/local_file}}`\n\n- Transfer remote directory to the local system recursively (works with `put` too):\n\n`get -R {{/path/remote_directory}}`\n\n- Get list of files on local machine:\n\n`lls`\n\n- Get list of files on remote machine:\n\n`ls`\n
sg
sg(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | CONFIGURATION | FILES | SEE ALSO | COLOPHON SG(1) User Commands SG(1) NAME top sg - execute command as different group ID SYNOPSIS top sg [-] [group [-c command]] DESCRIPTION top The sg command works similarly to newgrp but accepts a command. The command will be executed with the /bin/sh shell. With most shells you may run sg from, you need to enclose multi-word commands in quotes. Another difference between newgrp and sg is that some shells treat newgrp specially, replacing themselves with a new instance of a shell that newgrp creates. This doesn't happen with sg, so upon exit from a sg command you are returned to your previous group ID. CONFIGURATION top The following configuration variables in /etc/login.defs change the behavior of this tool: SYSLOG_SG_ENAB (boolean) Enable "syslog" logging of sg activity. FILES top /etc/passwd User account information. /etc/shadow Secure user account information. /etc/group Group account information. /etc/gshadow Secure group account information. SEE ALSO top id(1), login(1), newgrp(1), su(1), gpasswd(1), group(5), gshadow(5). COLOPHON top This page is part of the shadow-utils (utilities for managing accounts and shadow password files) project. Information about the project can be found at https://github.com/shadow-maint/shadow. If you have a bug report for this manual page, send it to pkg-shadow-devel@alioth-lists.debian.net. This page was obtained from the project's upstream Git repository https://github.com/shadow-maint/shadow on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-15.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org shadow-utils 4.11.1 12/22/2023 SG(1) Pages that refer to this page: newgrp(1), su(1), group(5), credentials(7)
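A minimal runnable sketch of sg; it uses the caller's own primary group purely so the example works without extra group membership or a password prompt:

```shell
# Run one command with an explicit group and return to the original
# shell (and original group ID) when it exits. Any group the user
# belongs to works; the primary group is used here for portability.
group="$(id -gn)"
sg "$group" -c "id -gn"
```

Unlike newgrp, no replacement shell lingers afterwards: control comes straight back to the invoking shell.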
# sg\n\n> Ast-grep is a tool for code structural search, lint, and rewriting.\n> More information: <https://ast-grep.github.io/guide/introduction.html>.\n\n- Scan for possible queries using interactive mode:\n\n`sg scan --interactive`\n\n- Rewrite code in the current directory using patterns:\n\n`sg run --pattern '{{foo}}' --rewrite '{{bar}}' --lang {{python}}`\n\n- Visualize possible changes without applying them:\n\n`sg run --pattern '{{useState<number>($A)}}' --rewrite '{{useState($A)}}' --lang {{typescript}}`\n\n- Output results as JSON, extract information using `jq` and interactively view it using `jless`:\n\n`sg run --pattern '{{Some($A)}}' --rewrite '{{None}}' --json | jq '{{.[].replacement}}' | jless`\n
sh
sh(1p) - Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT SH(1P) POSIX Programmer's Manual SH(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top sh - shell, the standard command language interpreter SYNOPSIS top sh [-abCefhimnuvx] [-o option]... [+abCefhimnuvx] [+o option]... [command_file [argument...]] sh -c [-abCefhimnuvx] [-o option]... [+abCefhimnuvx] [+o option]... command_string [command_name [argument...]] sh -s [-abCefhimnuvx] [-o option]... [+abCefhimnuvx] [+o option]... [argument...] DESCRIPTION top The sh utility is a command language interpreter that shall execute commands read from a command line string, the standard input, or a specified file. The application shall ensure that the commands to be executed are expressed in the language described in Chapter 2, Shell Command Language. Pathname expansion shall not fail due to the size of a file. Shell input and output redirections have an implementation-defined offset maximum that is established in the open file description. OPTIONS top The sh utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines, with an extension for support of a leading <plus-sign> ('+') as noted below. The -a, -b, -C, -e, -f, -m, -n, -o option, -u, -v, and -x options are described as part of the set utility in Section 2.14, Special Built-In Utilities. 
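The set-style invocation flags can be combined with -c; a minimal sketch of -e (exit on the first failing command) and -u (treat expansion of an unset variable as an error):

```shell
# -e: the shell exits as soon as "false" fails, so only "start" prints.
sh -euc 'echo start; false; echo never-reached'

# -u: expanding an unset variable is a fatal error; the shell exits
# nonzero, so the fallback message is printed instead.
sh -uc 'echo "$THIS_VAR_IS_UNSET"' 2>/dev/null || echo "rejected unset variable"
```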
The option letters derived from the set special built-in shall also be accepted with a leading <plus-sign> ('+') instead of a leading <hyphen-minus> (meaning the reverse case of the option as described in this volume of POSIX.1-2017). The following additional options shall be supported: -c Read commands from the command_string operand. Set the value of special parameter 0 (see Section 2.5.2, Special Parameters) from the value of the command_name operand and the positional parameters ($1, $2, and so on) in sequence from the remaining argument operands. No commands shall be read from the standard input. -i Specify that the shell is interactive; see below. An implementation may treat specifying the -i option as an error if the real user ID of the calling process does not equal the effective user ID or if the real group ID does not equal the effective group ID. -s Read commands from the standard input. If there are no operands and the -c option is not specified, the -s option shall be assumed. If the -i option is present, or if there are no operands and the shell's standard input and standard error are attached to a terminal, the shell is considered to be interactive. OPERANDS top The following operands shall be supported: - A single <hyphen-minus> shall be treated as the first operand and then ignored. If both '-' and "--" are given as arguments, or if other operands precede the single <hyphen-minus>, the results are undefined. argument The positional parameters ($1, $2, and so on) shall be set to arguments, if any. command_file The pathname of a file containing commands. If the pathname contains one or more <slash> characters, the implementation attempts to read that file; the file need not be executable. If the pathname does not contain a <slash> character: * The implementation shall attempt to read that file from the current working directory; the file need not be executable. 
* If the file is not in the current working directory, the implementation may perform a search for an executable file using the value of PATH, as described in Section 2.9.1.1, Command Search and Execution. Special parameter 0 (see Section 2.5.2, Special Parameters) shall be set to the value of command_file. If sh is called using a synopsis form that omits command_file, special parameter 0 shall be set to the value of the first argument passed to sh from its parent (for example, argv[0] for a C program), which is normally a pathname used to execute the sh utility. command_name A string assigned to special parameter 0 when executing the commands in command_string. If command_name is not specified, special parameter 0 shall be set to the value of the first argument passed to sh from its parent (for example, argv[0] for a C program), which is normally a pathname used to execute the sh utility. command_string A string that shall be interpreted by the shell as one or more commands, as if the string were the argument to the system() function defined in the System Interfaces volume of POSIX.1-2017. If the command_string operand is an empty string, sh shall exit with a zero exit status. STDIN top The standard input shall be used only if one of the following is true: * The -s option is specified. * The -c option is not specified and no operands are specified. * The script executes one or more commands that require input from standard input (such as a read command that does not redirect its input). See the INPUT FILES section. When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. 
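The assignment of special parameter 0 and the positional parameters described above can be demonstrated directly:

```shell
# With -c, the operand after the command string becomes $0 and the
# remaining operands become $1, $2, ...
sh -c 'echo "zero=$0 one=$1"' myname hello

# With -s, commands come from standard input and every operand becomes
# a positional parameter.
echo 'echo "args: $1 $2"' | sh -s one two
```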
It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell. If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes. INPUT FILES top The input file shall be a text file, except that line lengths shall be unlimited. If the input file consists solely of zero or more blank lines and comments, sh shall exit with a zero exit status. ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of sh: ENV This variable, when and only when an interactive shell is invoked, shall be subjected to parameter expansion (see Section 2.6.2, Parameter Expansion) by the shell, and the resulting value shall be used as a pathname of a file containing shell commands to execute in the current environment. The file need not be executable. If the expanded value of ENV is not an absolute pathname, the results are unspecified. ENV shall be ignored if the real and effective user IDs or real and effective group IDs of the process are different. FCEDIT This variable, when expanded by the shell, shall determine the default value for the -e editor option's editor option-argument. If FCEDIT is null or unset, ed shall be used as the editor. HISTFILE Determine a pathname naming a command history file. If the HISTFILE variable is not set, the shell may attempt to access or create a file .sh_history in the directory referred to by the HOME environment variable. 
If the shell cannot obtain both read and write access to, or create, the history file, it shall use an unspecified mechanism that allows the history to operate properly. (References to history ``file'' in this section shall be understood to mean this unspecified mechanism in such cases.) An implementation may choose to access this variable only when initializing the history file; this initialization shall occur when fc or sh first attempt to retrieve entries from, or add entries to, the file, as the result of commands issued by the user, the file named by the ENV variable, or implementation-defined system start-up files. Implementations may choose to disable the history list mechanism for users with appropriate privileges who do not set HISTFILE; the specific circumstances under which this occurs are implementation-defined. If more than one instance of the shell is using the same history file, it is unspecified how updates to the history file from those shells interact. As entries are deleted from the history file, they shall be deleted oldest first. It is unspecified when history file entries are physically removed from the history file. HISTSIZE Determine a decimal number representing the limit to the number of previous commands that are accessible. If this variable is unset, an unspecified default greater than or equal to 128 shall be used. The maximum number of commands in the history list is unspecified, but shall be at least 128. An implementation may choose to access this variable only when initializing the history file, as described under HISTFILE. Therefore, it is unspecified whether changes made to HISTSIZE after the history file has been initialized are effective. HOME Determine the pathname of the user's home directory. The contents of HOME are used in tilde expansion as described in Section 2.6.1, Tilde Expansion. LANG Provide a default value for the internationalization variables that are unset or null. 
(See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_COLLATE Determine the behavior of range expressions, equivalence classes, and multi-character collating elements within pattern matching. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files), which characters are defined as letters (character class alpha), and the behavior of character classes within pattern matching. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. MAIL Determine a pathname of the user's mailbox file for purposes of incoming mail notification. If this variable is set, the shell shall inform the user if the file named by the variable is created or if its modification time has changed. Informing the user shall be accomplished by writing a string of unspecified format to standard error prior to the writing of the next primary prompt string. Such check shall be performed only after the completion of the interval defined by the MAILCHECK variable after the last such check. The user shall be informed only if MAIL is set and MAILPATH is not set. MAILCHECK Establish a decimal integer value that specifies how often (in seconds) the shell shall check for the arrival of mail in the files specified by the MAILPATH or MAIL variables. The default value shall be 600 seconds. If set to zero, the shell shall check before issuing each primary prompt. MAILPATH Provide a list of pathnames and optional messages separated by <colon> characters. 
If this variable is set, the shell shall inform the user if any of the files named by the variable are created or if any of their modification times change. (See the preceding entry for MAIL for descriptions of mail arrival and user informing.) Each pathname can be followed by '%' and a string that shall be subjected to parameter expansion and written to standard error when the modification time changes. If a '%' character in the pathname is preceded by a <backslash>, it shall be treated as a literal '%' in the pathname. The default message is unspecified. The MAILPATH environment variable takes precedence over the MAIL variable. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. PATH Establish a string formatted as described in the Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, used to effect command interpretation; see Section 2.9.1.1, Command Search and Execution. PWD This variable shall represent an absolute pathname of the current working directory. Assignments to this variable may be ignored. ASYNCHRONOUS EVENTS top The sh utility shall take the standard action for all signals (see Section 1.4, Utility Description Defaults) with the following exceptions. If the shell is interactive, SIGINT signals received during command line editing shall be handled as described in the EXTENDED DESCRIPTION, and SIGINT signals received at other times shall be caught but no action performed. If the shell is interactive: * SIGQUIT and SIGTERM signals shall be ignored. * If the -m option is in effect, SIGTTIN, SIGTTOU, and SIGTSTP signals shall be ignored. * If the -m option is not in effect, it is unspecified whether SIGTTIN, SIGTTOU, and SIGTSTP signals are ignored, set to the default action, or caught. If they are caught, the shell shall, in the signal-catching function, set the signal to the default action and raise the signal (after taking any appropriate steps, such as restoring terminal settings). 
The standard actions, and the actions described above for interactive shells, can be overridden by use of the trap special built-in utility (see trap(1p) and Section 2.11, Signals and Error Handling). STDOUT top See the STDERR section. STDERR top Except as otherwise stated (by the descriptions of any invoked utilities or in interactive mode), standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top See Chapter 2, Shell Command Language. The functionality described in the rest of the EXTENDED DESCRIPTION section shall be provided on implementations that support the User Portability Utilities option (and the rest of this section is not further shaded for this option). Command History List When the sh utility is being used interactively, it shall maintain a list of commands previously entered from the terminal in the file named by the HISTFILE environment variable. The type, size, and internal format of this file are unspecified. Multiple sh processes can share access to the file for a user, if file access permissions allow this; see the description of the HISTFILE environment variable. Command Line Editing When sh is being used interactively from a terminal, the current command and the command history (see fc(1p)) can be edited using vi-mode command line editing. This mode uses commands, described below, similar to a subset of those described in the vi utility. Implementations may offer other command line editing modes corresponding to other editing utilities. The command set -o vi shall enable vi-mode editing and place sh into vi insert mode (see Command Line Editing (vi-mode)). This command also shall disable any other editing mode that the implementation may provide. The command set +o vi disables vi-mode editing. Certain block-mode terminals may be unable to support shell command line editing. 
If a terminal is unable to provide either edit mode, it need not be possible to set -o vi when using the shell on this terminal. In the following sections, the characters erase, interrupt, kill, and end-of-file are those set by the stty utility. Command Line Editing (vi-mode) In vi editing mode, there shall be a distinguished line, the edit line. All the editing operations which modify a line affect the edit line. The edit line is always the newest line in the command history buffer. With vi-mode enabled, sh can be switched between insert mode and command mode. When in insert mode, an entered character shall be inserted into the command line, except as noted in vi Line Editing Insert Mode. Upon entering sh and after termination of the previous command, sh shall be in insert mode. Typing an escape character shall switch sh into command mode (see vi Line Editing Command Mode). In command mode, an entered character shall either invoke a defined operation, be used as part of a multi-character operation, or be treated as an error. A character that is not recognized as part of an editing command shall terminate any specific editing command and shall alert the terminal. If sh receives a SIGINT signal in command mode (whether generated by typing the interrupt character or by other means), it shall terminate command line editing on the current command line, reissue the prompt on the next line of the terminal, and reset the command history (see fc(1p)) so that the most recently executed command is the previous command (that is, the command that was being edited when it was interrupted is not re-entered into the history). In the following sections, the phrase ``move the cursor to the beginning of the word'' shall mean ``move the cursor to the first character of the current word'' and the phrase ``move the cursor to the end of the word'' shall mean ``move the cursor to the last character of the current word''. 
The phrase ``beginning of the command line'' indicates the point between the end of the prompt string issued by the shell (or the beginning of the terminal line, if there is no prompt string) and the first character of the command text. vi Line Editing Insert Mode While in insert mode, any character typed shall be inserted in the current command line, unless it is from the following set. <newline> Execute the current command line. If the current command line is not empty, this line shall be entered into the command history (see fc(1p)). erase Delete the character previous to the current cursor position and move the current cursor position back one character. In insert mode, characters shall be erased from both the screen and the buffer when backspacing. interrupt If sh receives a SIGINT signal in insert mode (whether generated by typing the interrupt character or by other means), it shall terminate command line editing with the same effects as described for interrupting command mode; see Command Line Editing (vi-mode). kill Clear all the characters from the input line. <control>V Insert the next character input, even if the character is otherwise a special insert mode character. <control>W Delete the characters from the one preceding the cursor to the preceding word boundary. The word boundary in this case is the closer to the cursor of either the beginning of the line or a character that is in neither the blank nor punct character classification of the current locale. end-of-file Interpreted as the end of input in sh. This interpretation shall occur only at the beginning of an input line. If end-of-file is entered other than at the beginning of the line, the results are unspecified. <ESC> Place sh into command mode. vi Line Editing Command Mode In command mode for the command line editing feature, decimal digits not beginning with 0 that precede a command letter shall be remembered. 
Some commands use these decimal digits as a count number that affects the operation. The term motion command represents one of the commands: <space> 0 b F l W ^ $ ; E f T w | , B e h t If the current line is not the edit line, any command that modifies the current line shall cause the content of the current line to replace the content of the edit line, and the current line shall become the edit line. This replacement cannot be undone (see the u and U commands below). The modification requested shall then be performed to the edit line. When the current line is the edit line, the modification shall be done directly to the edit line. Any command that is preceded by count shall take a count (the numeric value of any preceding decimal digits). Unless otherwise noted, this count shall cause the specified operation to repeat by the number of times specified by the count. Also unless otherwise noted, a count that is out of range is considered an error condition and shall alert the terminal, but neither the cursor position, nor the command line, shall change. The terms word and bigword are used as defined in the vi description. The term save buffer corresponds to the term unnamed buffer in vi. The following commands shall be recognized in command mode: <newline> Execute the current command line. If the current command line is not empty, this line shall be entered into the command history (see fc(1p)). <control>L Redraw the current command line. Position the cursor at the same location on the redrawn line. # Insert the character '#' at the beginning of the current command line and treat the resulting edit line as a comment. This line shall be entered into the command history; see fc(1p). = Display the possible shell word expansions (see Section 2.6, Word Expansions) of the bigword at the current command line position. Note: This does not modify the content of the current line, and therefore does not cause the current line to become the edit line. 
These expansions shall be displayed on subsequent terminal lines. If the bigword contains none of the characters '?', '*', or '[', an <asterisk> ('*') shall be implicitly assumed at the end. If any directories are matched, these expansions shall have a '/' character appended. After the expansion, the line shall be redrawn, the cursor repositioned at the current cursor position, and sh shall be placed in command mode. \ Perform pathname expansion (see Section 2.6.6, Pathname Expansion) on the current bigword, up to the largest set of characters that can be matched uniquely. If the bigword contains none of the characters '?', '*', or '[', an <asterisk> ('*') shall be implicitly assumed at the end. This maximal expansion then shall replace the original bigword in the command line, and the cursor shall be placed after this expansion. If the resulting bigword completely and uniquely matches a directory, a '/' character shall be inserted directly after the bigword. If some other file is completely matched, a single <space> shall be inserted after the bigword. After this operation, sh shall be placed in insert mode. * Perform pathname expansion on the current bigword and insert all expansions into the command to replace the current bigword, with each expansion separated by a single <space>. If at the end of the line, the current cursor position shall be moved to the first column position following the expansions and sh shall be placed in insert mode. Otherwise, the current cursor position shall be the last column position of the first character after the expansions and sh shall be placed in insert mode. If the current bigword contains none of the characters '?', '*', or '[', before the operation, an <asterisk> ('*') shall be implicitly assumed at the end. @letter Insert the value of the alias named _letter. The symbol letter represents a single alphabetic character from the portable character set; implementations may support additional characters as an extension. 
If the alias _letter contains other editing commands, these commands shall be performed as part of the insertion. If no alias _letter is enabled, this command shall have no effect. [count]~ Convert, if the current character is a lowercase letter, to the equivalent uppercase letter and vice versa, as prescribed by the current locale. The current cursor position then shall be advanced by one character. If the cursor was positioned on the last character of the line, the case conversion shall occur, but the cursor shall not advance. If the '~' command is preceded by a count, that number of characters shall be converted, and the cursor shall be advanced to the character position after the last character converted. If the count is larger than the number of characters after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]. Repeat the most recent non-motion command, even if it was executed on an earlier command line. If the previous command was preceded by a count, and no count is given on the '.' command, the count from the previous command shall be included as part of the repeated command. If the '.' command is preceded by a count, this shall override any count argument to the previous command. The count specified in the '.' command shall become the count for subsequent '.' commands issued without a count. [number]v Invoke the vi editor to edit the current command line in a temporary file. When the editor exits, the commands in the temporary file shall be executed and placed in the command history. If a number is included, it specifies the command number in the command history to be edited, rather than the current command line. [count]l (ell) [count]<space> Move the current cursor position to the next character position. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. 
If the count is larger than the number of characters after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]h Move the current cursor position to the countth (default 1) previous character position. If the cursor was positioned on the first character of the line, the terminal shall be alerted and the cursor shall not be moved. If the count is larger than the number of characters before the cursor, this shall not be considered an error; the cursor shall move to the first character on the line. [count]w Move to the start of the next word. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. If the count is larger than the number of words after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]W Move to the start of the next bigword. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. If the count is larger than the number of bigwords after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]e Move to the end of the current word. If at the end of a word, move to the end of the next word. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. If the count is larger than the number of words after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]E Move to the end of the current bigword. If at the end of a bigword, move to the end of the next bigword. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. 
If the count is larger than the number of bigwords after the cursor, this shall not be considered an error; the cursor shall advance to the last character on the line. [count]b Move to the beginning of the current word. If at the beginning of a word, move to the beginning of the previous word. If the cursor was positioned on the first character of the line, the terminal shall be alerted and the cursor shall not be moved. If the count is larger than the number of words preceding the cursor, this shall not be considered an error; the cursor shall return to the first character on the line. [count]B Move to the beginning of the current bigword. If at the beginning of a bigword, move to the beginning of the previous bigword. If the cursor was positioned on the first character of the line, the terminal shall be alerted and the cursor shall not be moved. If the count is larger than the number of bigwords preceding the cursor, this shall not be considered an error; the cursor shall return to the first character on the line. ^ Move the current cursor position to the first character on the input line that is not a <blank>. $ Move to the last character position on the current command line. 0 (Zero.) Move to the first character position on the current command line. [count]| Move to the countth character position on the current command line. If no number is specified, move to the first position. The first character position shall be numbered 1. If the count is larger than the number of characters on the line, this shall not be considered an error; the cursor shall be placed on the last character on the line. [count]fc Move to the first occurrence of the character 'c' that occurs after the current cursor position. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. 
If the character 'c' does not occur in the line after the current cursor position, the terminal shall be alerted and the cursor shall not be moved. [count]Fc Move to the first occurrence of the character 'c' that occurs before the current cursor position. If the cursor was positioned on the first character of the line, the terminal shall be alerted and the cursor shall not be moved. If the character 'c' does not occur in the line before the current cursor position, the terminal shall be alerted and the cursor shall not be moved. [count]tc Move to the character before the first occurrence of the character 'c' that occurs after the current cursor position. If the cursor was positioned on the last character of the line, the terminal shall be alerted and the cursor shall not be advanced. If the character 'c' does not occur in the line after the current cursor position, the terminal shall be alerted and the cursor shall not be moved. [count]Tc Move to the character after the first occurrence of the character 'c' that occurs before the current cursor position. If the cursor was positioned on the first character of the line, the terminal shall be alerted and the cursor shall not be moved. If the character 'c' does not occur in the line before the current cursor position, the terminal shall be alerted and the cursor shall not be moved. [count]; Repeat the most recent f, F, t, or T command. Any number argument on that previous command shall be ignored. Errors are those described for the repeated command. [count], Repeat the most recent f, F, t, or T command. Any number argument on that previous command shall be ignored. However, reverse the direction of that command. a Enter insert mode after the current cursor position. Characters that are entered shall be inserted before the next character. A Enter insert mode after the end of the current command line. i Enter insert mode at the current cursor position. 
Characters that are entered shall be inserted before the current character. I Enter insert mode at the beginning of the current command line. R Enter insert mode, replacing characters from the command line beginning at the current cursor position. [count]cmotion Delete the characters between the current cursor position and the cursor position that would result from the specified motion command. Then enter insert mode before the first character following any deleted characters. If count is specified, it shall be applied to the motion command. A count shall be ignored for the following motion commands: 0 ^ $ c If the motion command is the character 'c', the current command line shall be cleared and insert mode shall be entered. If the motion command would move the current cursor position toward the beginning of the command line, the character under the current cursor position shall not be deleted. If the motion command would move the current cursor position toward the end of the command line, the character under the current cursor position shall be deleted. If the count is larger than the number of characters between the current cursor position and the end of the command line toward which the motion command would move the cursor, this shall not be considered an error; all of the remaining characters in the aforementioned range shall be deleted and insert mode shall be entered. If the motion command is invalid, the terminal shall be alerted, the cursor shall not be moved, and no text shall be deleted. C Delete from the current character to the end of the line and enter insert mode at the new end-of-line. S Clear the entire edit line and enter insert mode. [count]rc Replace the current character with the character 'c'. With a number count, replace the current and the following count-1 characters. After this command, the current cursor position shall be on the last character that was changed. 
If the count is larger than the number of characters after the cursor, this shall not be considered an error; all of the remaining characters shall be changed. [count]_ Append a <space> after the current character position and then append the last bigword in the previous input line after the <space>. Then enter insert mode after the last character just appended. With a number count, append the countth bigword in the previous line. [count]x Delete the character at the current cursor position and place the deleted characters in the save buffer. If the cursor was positioned on the last character of the line, the character shall be deleted and the cursor position shall be moved to the previous character (the new last character). If the count is larger than the number of characters after the cursor, this shall not be considered an error; all the characters from the cursor to the end of the line shall be deleted. [count]X Delete the character before the current cursor position and place the deleted characters in the save buffer. The character under the current cursor position shall not change. If the cursor was positioned on the first character of the line, the terminal shall be alerted, and the X command shall have no effect. If the line contained a single character, the X command shall have no effect. If the line contained no characters, the terminal shall be alerted and the cursor shall not be moved. If the count is larger than the number of characters before the cursor, this shall not be considered an error; all the characters from before the cursor to the beginning of the line shall be deleted. [count]dmotion Delete the characters between the current cursor position and the character position that would result from the motion command. A number count repeats the motion command count times. If the motion command would move toward the beginning of the command line, the character under the current cursor position shall not be deleted. 
If the motion command is d, the entire current command line shall be cleared. If the count is larger than the number of characters between the current cursor position and the end of the command line toward which the motion command would move the cursor, this shall not be considered an error; all of the remaining characters in the aforementioned range shall be deleted. The deleted characters shall be placed in the save buffer. D Delete all characters from the current cursor position to the end of the line. The deleted characters shall be placed in the save buffer. [count]ymotion Yank (that is, copy) the characters from the current cursor position to the position resulting from the motion command into the save buffer. A number count shall be applied to the motion command. If the motion command would move toward the beginning of the command line, the character under the current cursor position shall not be included in the set of yanked characters. If the motion command is y, the entire current command line shall be yanked into the save buffer. The current cursor position shall be unchanged. If the count is larger than the number of characters between the current cursor position and the end of the command line toward which the motion command would move the cursor, this shall not be considered an error; all of the remaining characters in the aforementioned range shall be yanked. Y Yank the characters from the current cursor position to the end of the line into the save buffer. The current character position shall be unchanged. [count]p Put a copy of the current contents of the save buffer after the current cursor position. The current cursor position shall be advanced to the last character put from the save buffer. A count shall indicate how many copies of the save buffer shall be put. [count]P Put a copy of the current contents of the save buffer before the current cursor position. 
The current cursor position shall be moved to the last character put from the save buffer. A count shall indicate how many copies of the save buffer shall be put. u Undo the last command that changed the edit line. This operation shall not undo the copy of any command line to the edit line. U Undo all changes made to the edit line. This operation shall not undo the copy of any command line to the edit line. [count]k [count]- Set the current command line to be the countth previous command line in the shell command history. If count is not specified, it shall default to 1. The cursor shall be positioned on the first character of the new command. If a k or - command would retreat past the maximum number of commands in effect for this shell (affected by the HISTSIZE environment variable), the terminal shall be alerted, and the command shall have no effect. [count]j [count]+ Set the current command line to be the countth next command line in the shell command history. If count is not specified, it shall default to 1. The cursor shall be positioned on the first character of the new command. If a j or + command advances past the edit line, the current command line shall be restored to the edit line and the terminal shall be alerted. [number]G Set the current command line to be the oldest command line stored in the shell command history. With a number number, set the current command line to be the command line number in the history. If command line number does not exist, the terminal shall be alerted and the command line shall not be changed. /pattern<newline> Move backwards through the command history, searching for the specified pattern, beginning with the previous command line. Patterns use the pattern matching notation described in Section 2.13, Pattern Matching Notation, except that the '^' character shall have special meaning when it appears as the first character of pattern. 
In this case, the '^' is discarded and the characters after the '^' shall be matched only at the beginning of a line. Commands in the command history shall be treated as strings, not as filenames. If the pattern is not found, the current command line shall be unchanged and the terminal shall be alerted. If it is found in a previous line, the current command line shall be set to that line and the cursor shall be set to the first character of the new command line. If pattern is empty, the last non-empty pattern provided to / or ? shall be used. If there is no previous non-empty pattern, the terminal shall be alerted and the current command line shall remain unchanged. ?pattern<newline> Move forwards through the command history, searching for the specified pattern, beginning with the next command line. Patterns use the pattern matching notation described in Section 2.13, Pattern Matching Notation, except that the '^' character shall have special meaning when it appears as the first character of pattern. In this case, the '^' is discarded and the characters after the '^' shall be matched only at the beginning of a line. Commands in the command history shall be treated as strings, not as filenames. If the pattern is not found, the current command line shall be unchanged and the terminal shall be alerted. If it is found in a following line, the current command line shall be set to that line and the cursor shall be set to the first character of the new command line. If pattern is empty, the last non-empty pattern provided to / or ? shall be used. If there is no previous non-empty pattern, the terminal shall be alerted and the current command line shall remain unchanged. n Repeat the most recent / or ? command. If there is no previous / or ?, the terminal shall be alerted and the current command line shall remain unchanged. N Repeat the most recent / or ? command, reversing the direction of the search.
If there is no previous / or ?, the terminal shall be alerted and the current command line shall remain unchanged.

EXIT STATUS
The following exit values shall be returned:

0     The script to be executed consisted solely of zero or more blank lines or comments, or both.
1-125 A non-interactive shell detected an error other than command_file not found or executable, including but not limited to syntax, redirection, or variable assignment errors.
126   A specified command_file could not be executed due to an [ENOEXEC] error (see Section 2.9.1.1, Command Search and Execution, item 2).
127   A specified command_file could not be found by a non-interactive shell.

Otherwise, the shell shall return the exit status of the last command it invoked or attempted to invoke (see also the exit utility in Section 2.14, Special Built-In Utilities).

CONSEQUENCES OF ERRORS
See Section 2.8.1, Consequences of Shell Errors.

The following sections are informative.

APPLICATION USAGE
Standard input and standard error are the files that determine whether a shell is interactive when -i is not specified. For example:

    sh > file

and:

    sh 2> file

create interactive and non-interactive shells, respectively. Although both accept terminal input, the results of error conditions are different, as described in Section 2.8.1, Consequences of Shell Errors; in the second example a redirection error encountered by a special built-in utility aborts the shell. A conforming application must protect its first operand, if it starts with a <plus-sign>, by preceding it with the "--" argument that denotes the end of the options. Applications should note that the standard PATH to the shell cannot be assumed to be either /bin/sh or /usr/bin/sh, and should be determined by interrogation of the PATH returned by getconf PATH, ensuring that the returned pathname is an absolute pathname and not a shell built-in.
For example, to determine the location of the standard sh utility:

    command -v sh

On some implementations this might return:

    /usr/xpg4/bin/sh

Furthermore, on systems that support executable scripts (the "#!" construct), it is recommended that applications using executable scripts install them using getconf PATH to determine the shell pathname and update the "#!" script appropriately as it is being installed (for example, with sed). For example:

    #
    # Installation time script to install correct POSIX shell pathname
    #
    # Get list of paths to check
    #
    Sifs=$IFS
    Sifs_set=${IFS+y}
    IFS=:
    set -- $(getconf PATH)
    if [ "$Sifs_set" = y ]
    then
        IFS=$Sifs
    else
        unset IFS
    fi
    #
    # Check each path for 'sh'
    #
    for i
    do
        if [ -x "${i}"/sh ]
        then
            Pshell=${i}/sh
        fi
    done
    #
    # This is the list of scripts to update. They should be of the
    # form '${name}.source' and will be transformed to '${name}'.
    # Each script should begin:
    #
    # #!INSTALLSHELLPATH
    #
    scripts="a b c"
    #
    # Transform each script
    #
    for i in ${scripts}
    do
        sed -e "s|INSTALLSHELLPATH|${Pshell}|" < ${i}.source > ${i}
    done

EXAMPLES
1. Execute a shell command from a string:

    sh -c "cat myfile"

2. Execute a shell script from a file in the current directory:

    sh my_shell_cmds

RATIONALE
The sh utility and the set special built-in utility share a common set of options. The name IFS was originally an abbreviation of ``Input Field Separators''; however, this name is misleading as the IFS characters are actually used as field terminators. One justification for ignoring the contents of IFS upon entry to the script, beyond security considerations, is to assist possible future shell compilers. Allowing IFS to be imported from the environment prevents many optimizations that might otherwise be performed via dataflow analysis of the script itself.
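The field-terminator behavior of IFS noted above can be seen in a short sketch (the sample data and variable names are illustrative):

```shell
# IFS characters terminate fields rather than separate them, so a
# trailing delimiter does not create an extra, empty field.
data="a:b:c:"
save_ifs=$IFS
IFS=:
set -- $data            # unquoted expansion undergoes field splitting
IFS=$save_ifs
echo "$#"               # 3 fields: a, b, c -- the final ':' terminates "c"
echo "$3"
```

Had the IFS characters been true separators, "a:b:c:" would have produced four fields, the last of them empty.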
The text in the STDIN section about non-blocking reads concerns an instance of sh that has been invoked, probably by a C-language program, with standard input that has been opened using the O_NONBLOCK flag; see open() in the System Interfaces volume of POSIX.1-2017. If the shell did not reset this flag, it would immediately terminate because no input data would be available yet and that would be considered the same as end-of-file. The options associated with a restricted shell (command name rsh and the -r option) were excluded because the standard developers considered that the implied level of security could not be achieved and they did not want to raise false expectations. On systems that support set-user-ID scripts, a historical trapdoor has been to link a script to the name -i. When it is called by a sequence such as: sh - or by: #! /usr/bin/sh - the historical systems have assumed that no option letters follow. Thus, this volume of POSIX.1-2017 allows the single <hyphen-minus> to mark the end of the options, in addition to the use of the regular "--" argument, because it was considered that the older practice was so pervasive. An alternative approach is taken by the KornShell, where real and effective user/group IDs must match for an interactive shell; this behavior is specifically allowed by this volume of POSIX.1-2017. Note: There are other problems with set-user-ID scripts that the two approaches described here do not resolve. The initialization process for the history file can be dependent on the system start-up files, in that they may contain commands that effectively preempt the user's settings of HISTFILE and HISTSIZE. For example, function definition commands are recorded in the history file, unless the set -o nolog option is set. If the system administrator includes function definitions in some system start-up file called before the ENV file, the history file is initialized before the user gets a chance to influence its characteristics.
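As a sketch of the interaction described above, a user might place history settings in the file named by ENV; whether they take effect depends on whether a system start-up file has already initialized the history file. The filename and values below are assumptions for illustration, not requirements of the standard:

```shell
# Hypothetical contents of the file named by ENV (e.g. ~/.shrc).
# Whether these settings take effect depends on whether the history
# file has already been initialized, as discussed above.
HISTFILE=$HOME/.sh_history    # file in which command history is kept
HISTSIZE=500                  # number of previous commands accessible via fc
export HISTFILE HISTSIZE
```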
In some historical shells, the history file is initialized just after the ENV file has been processed. Therefore, it is implementation-defined whether changes made to HISTFILE after the history file has been initialized are effective. The default messages for the various MAIL-related messages are unspecified because they vary across implementations. Typical messages are: "you have mail\n" or: "you have new mail\n" It is important that the descriptions of command line editing refer to the same shell as that in POSIX.1-2008 so that interactive users can also be application programmers without having to deal with programmatic differences in their two environments. It is also essential that the utility name sh be specified because this explicit utility name is too firmly rooted in historical practice of application programs for it to change. Consideration was given to mandating a diagnostic message when attempting to set vi-mode on terminals that do not support command line editing. However, it is not historical practice for the shell to be cognizant of all terminal types and thus be able to detect inappropriate terminals in all cases. Implementations are encouraged to supply diagnostics in this case whenever possible, rather than leaving the user in a state where editing commands work incorrectly. In early proposals, the KornShell-derived emacs mode of command line editing was included, even though the emacs editor itself was not. The community of emacs proponents was adamant that the full emacs editor not be standardized because they were concerned that an attempt to standardize this very powerful environment would encourage vendors to ship strictly conforming versions lacking the extensibility required by the community. The author of the original emacs program also expressed his desire to omit the program.
Furthermore, there were a number of historical systems that did not include emacs, or included it without supporting it, but there were very few that did not include and support vi. The shell emacs command line editing mode was finally omitted because it became apparent that the KornShell version and the editor being distributed with the GNU system had diverged in some respects. The author of emacs requested that the POSIX emacs mode either be deleted or have a significant number of unspecified conditions. Although the KornShell author agreed to consider changes to bring the shell into alignment, the standard developers decided to defer specification at that time. At the time, it was assumed that convergence on an acceptable definition would occur for a subsequent draft, but that has not happened, and there appears to be no impetus to do so. In any case, implementations are free to offer additional command line editing modes based on the exact models of editors their users are most comfortable with. Early proposals had the following list entry in vi Line Editing Insert Mode: \ If followed by the erase or kill character, that character shall be inserted into the input line. Otherwise, the <backslash> itself shall be inserted into the input line. However, this is not actually a feature of sh command line editing insert mode, but one of some historical terminal line drivers. Some conforming implementations continue to do this when the stty iexten flag is set. In interactive shells, SIGTERM is ignored so that kill 0 does not kill the shell, and SIGINT is caught so that wait is interruptible. If the shell does not ignore SIGTTIN, SIGTTOU, and SIGTSTP signals when it is interactive and the -m option is not in effect, these signals suspend the shell if it is not a session leader. 
If it is a session leader, the signals are discarded if they would stop the process, as required by the System Interfaces volume of POSIX.1-2017, Section 2.4.3, Signal Actions for orphaned process groups. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.9.1.1, Command Search and Execution, Chapter 2, Shell Command Language, cd(1p), echo(1p), exit(1p), fc(1p), pwd(1p), set(1p), stty(1p), test(1p), trap(1p), umask(1p), vi(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Section 12.2, Utility Syntax Guidelines The System Interfaces volume of POSIX.1-2017, dup(3p), exec(1p), exit(3p), fork(3p), open(3p), pipe(3p), signal(3p), system(3p), ulimit(3p), umask(3p), wait(3p) COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 SH(1P) Pages that refer to this page: command(1p), ed(1p), ex(1p), fc(1p), find(1p), make(1p), newgrp(1p), nohup(1p), script(1), time(1p), wait(1p), popen(3p), system(3p) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. 
Hosting by jambit GmbH.
# sh\n\n> Bourne shell, the standard command language interpreter.\n> See also `histexpand` for history expansion.\n> More information: <https://manned.org/sh>.\n\n- Start an interactive shell session:\n\n`sh`\n\n- Execute a command and then exit:\n\n`sh -c "{{command}}"`\n\n- Execute a script:\n\n`sh {{path/to/script.sh}}`\n\n- Read and execute commands from `stdin`:\n\n`sh -s`\n
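The `-c` and `-s` invocation forms above have a subtlety worth sketching: with `-c`, the first word after the command string becomes `$0`, and the remaining words become the positional parameters. A minimal sketch (the names `argv0`, `first`, `second` are illustrative):

```shell
# With -c, the word after the command string is $0, and later
# words become $1, $2, ... inside the one-liner.
sh -c 'echo "$1 and $2"' argv0 first second   # prints "first and second"

# With -s, commands come from stdin while operands still become
# positional parameters (-- ends option parsing).
printf 'echo "$#"\n' | sh -s -- a b c         # prints "3"
```

Forgetting the `$0` slot after `-c` is a common source of off-by-one surprises when passing arguments to one-liners.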
sha1sum
sha1sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sha1sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | BUGS | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHA1SUM(1) User Commands SHA1SUM(1) NAME top sha1sum - compute and check SHA1 message digest SYNOPSIS top sha1sum [OPTION]... [FILE]... DESCRIPTION top Print or check SHA1 (160-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-1. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. BUGS top Do not use the SHA-1 algorithm for security related purposes. Instead, use an SHA-2 algorithm, implemented in the programs sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1), or the BLAKE2 algorithm, implemented in b2sum(1) AUTHOR top Written by Ulrich Drepper, Scott Miller, and David Madore. 
REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha1sum> or available locally via: info '(coreutils) sha1sum invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHA1SUM(1) Pages that refer to this page: pmlogmv(1), prelink(8)
# sha1sum\n\n> Calculate SHA1 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/sha1sum>.\n\n- Calculate the SHA1 checksum for one or more files:\n\n`sha1sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of SHA1 checksums to a file:\n\n`sha1sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha1}}`\n\n- Calculate a SHA1 checksum from `stdin`:\n\n`{{command}} | sha1sum`\n\n- Read a file of SHA1 sums and filenames and verify all files have matching checksums:\n\n`sha1sum --check {{path/to/file.sha1}}`\n\n- Only show a message for missing files or when verification fails:\n\n`sha1sum --check --quiet {{path/to/file.sha1}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`sha1sum --ignore-missing --check --quiet {{path/to/file.sha1}}`\n
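The `--check` round trip above can be exercised end to end. A minimal sketch (the scratch directory and file names under `mktemp` are illustrative):

```shell
tmpdir=$(mktemp -d)
printf 'hello\n' > "$tmpdir/data.txt"

# Record the checksum, then verify it: --check re-hashes each listed
# file and compares against the stored sum (prints "data.txt: OK").
( cd "$tmpdir" && sha1sum data.txt > sums.sha1 && sha1sum --check sums.sha1 )

# After tampering, verification exits non-zero.
printf 'tampered\n' > "$tmpdir/data.txt"
if ( cd "$tmpdir" && sha1sum --check --quiet sums.sha1 >/dev/null 2>&1 ); then
  echo "unexpected: verification passed"
else
  echo "tamper detected"
fi
rm -r "$tmpdir"
```

Running the verify step from the directory where the sums were recorded keeps the relative file names in the `.sha1` file valid.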
sha224sum
sha224sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sha224sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHA224SUM(1) User Commands SHA224SUM(1) NAME top sha224sum - compute and check SHA224 message digest SYNOPSIS top sha224sum [OPTION]... [FILE]... DESCRIPTION top Print or check SHA224 (224-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 3874. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR top Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha224sum> or available locally via: info '(coreutils) sha2 utilities' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHA224SUM(1) Pages that refer to this page: md5sum(1), sha1sum(1)
# sha224sum\n\n> Calculate SHA224 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html>.\n\n- Calculate the SHA224 checksum for one or more files:\n\n`sha224sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of SHA224 checksums to a file:\n\n`sha224sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha224}}`\n\n- Calculate a SHA224 checksum from `stdin`:\n\n`{{command}} | sha224sum`\n\n- Read a file of SHA224 sums and filenames and verify all files have matching checksums:\n\n`sha224sum --check {{path/to/file.sha224}}`\n\n- Only show a message for missing files or when verification fails:\n\n`sha224sum --check --quiet {{path/to/file.sha224}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`sha224sum --ignore-missing --check --quiet {{path/to/file.sha224}}`\n
sha256sum
sha256sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sha256sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHA256SUM(1) User Commands SHA256SUM(1) NAME top sha256sum - compute and check SHA256 message digest SYNOPSIS top sha256sum [OPTION]... [FILE]... DESCRIPTION top Print or check SHA256 (256-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR top Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha256sum> or available locally via: info '(coreutils) sha2 utilities' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHA256SUM(1) Pages that refer to this page: md5sum(1), pmlogmv(1), sha1sum(1), sysupdate.d(5)
# sha256sum\n\n> Calculate SHA256 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html>.\n\n- Calculate the SHA256 checksum for one or more files:\n\n`sha256sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of SHA256 checksums to a file:\n\n`sha256sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha256}}`\n\n- Calculate a SHA256 checksum from `stdin`:\n\n`{{command}} | sha256sum`\n\n- Read a file of SHA256 sums and filenames and verify all files have matching checksums:\n\n`sha256sum --check {{path/to/file.sha256}}`\n\n- Only show a message for missing files or when verification fails:\n\n`sha256sum --check --quiet {{path/to/file.sha256}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`sha256sum --ignore-missing --check --quiet {{path/to/file.sha256}}`\n
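Because the digest of empty input is a fixed constant, it makes a handy sanity check that the tool agrees with FIPS-180-2; note the trailing `-` in the output, which names stdin as the input:

```shell
# SHA-256 of zero bytes of input is a well-known constant.
printf '' | sha256sum
# → e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  -
```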
sha384sum
sha384sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sha384sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHA384SUM(1) User Commands SHA384SUM(1) NAME top sha384sum - compute and check SHA384 message digest SYNOPSIS top sha384sum [OPTION]... [FILE]... DESCRIPTION top Print or check SHA384 (384-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR top Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha384sum> or available locally via: info '(coreutils) sha2 utilities' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHA384SUM(1) Pages that refer to this page: md5sum(1), sha1sum(1)
# sha384sum\n\n> Calculate SHA384 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html>.\n\n- Calculate the SHA384 checksum for one or more files:\n\n`sha384sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of SHA384 checksums to a file:\n\n`sha384sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha384}}`\n\n- Calculate a SHA384 checksum from `stdin`:\n\n`{{command}} | sha384sum`\n\n- Read a file of SHA384 sums and filenames and verify all files have matching checksums:\n\n`sha384sum --check {{path/to/file.sha384}}`\n\n- Only show a message for missing files or when verification fails:\n\n`sha384sum --check --quiet {{path/to/file.sha384}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`sha384sum --ignore-missing --check --quiet {{path/to/file.sha384}}`\n
sha512sum
sha512sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sha512sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHA512SUM(1) User Commands SHA512SUM(1) NAME top sha512sum - compute and check SHA512 message digest SYNOPSIS top sha512sum [OPTION]... [FILE]... DESCRIPTION top Print or check SHA512 (512-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR top Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha512sum> or available locally via: info '(coreutils) sha2 utilities' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHA512SUM(1) Pages that refer to this page: md5sum(1), sha1sum(1)
# sha512sum\n\n> Calculate SHA512 cryptographic checksums.\n> More information: <https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html>.\n\n- Calculate the SHA512 checksum for one or more files:\n\n`sha512sum {{path/to/file1 path/to/file2 ...}}`\n\n- Calculate and save the list of SHA512 checksums to a file:\n\n`sha512sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha512}}`\n\n- Calculate a SHA512 checksum from `stdin`:\n\n`{{command}} | sha512sum`\n\n- Read a file of SHA512 sums and filenames and verify all files have matching checksums:\n\n`sha512sum --check {{path/to/file.sha512}}`\n\n- Only show a message for missing files or when verification fails:\n\n`sha512sum --check --quiet {{path/to/file.sha512}}`\n\n- Only show a message when verification fails, ignoring missing files:\n\n`sha512sum --ignore-missing --check --quiet {{path/to/file.sha512}}`\n
shift
shift(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training shift(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT SHIFT(1P) POSIX Programmer's Manual SHIFT(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top shift shift positional parameters SYNOPSIS top shift [n] DESCRIPTION top The positional parameters shall be shifted. Positional parameter 1 shall be assigned the value of parameter (1+n), parameter 2 shall be assigned the value of parameter (2+n), and so on. The parameters represented by the numbers "$#" down to "$#-n+1" shall be unset, and the parameter '#' is updated to reflect the new number of positional parameters. The value n shall be an unsigned decimal integer less than or equal to the value of the special parameter '#'. If n is not given, it shall be assumed to be 1. If n is 0, the positional and special parameters are not changed. OPTIONS top None. OPERANDS top See the DESCRIPTION. STDIN top Not used. INPUT FILES top None. ENVIRONMENT VARIABLES top None. ASYNCHRONOUS EVENTS top Default. STDOUT top Not used. STDERR top The standard error shall be used only for diagnostic messages. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top If the n operand is invalid or is greater than "$#", this may be considered a syntax error and a non-interactive shell may exit; if the shell does not exit in this case, a non-zero exit status shall be returned. Otherwise, zero shall be returned. CONSEQUENCES OF ERRORS top Default. 
The following sections are informative. APPLICATION USAGE top None. EXAMPLES top $ set a b c d e $ shift 2 $ echo $* c d e RATIONALE top None. FUTURE DIRECTIONS top None. SEE ALSO top Section 2.14, Special Built-In Utilities COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . IEEE/The Open Group 2017 SHIFT(1P)
# shift\n\n> Move positional parameters.\n> More information: <https://manned.org/shift.1posix>.\n\n- Remove the first positional parameter:\n\n`shift`\n\n- Remove the first `N` positional parameters:\n\n`shift {{N}}`\n
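The page's description — parameter 1 takes the value of parameter 1+n, and `$#` shrinks by n — is easiest to see in the classic argument-walking loop. A minimal sketch with illustrative values:

```shell
# Walk and consume all positional parameters one at a time.
set -- alpha beta gamma
while [ "$#" -gt 0 ]; do
  echo "arg: $1"
  shift          # drop $1; the old $2 becomes $1, $# decreases by one
done
```

This prints `arg: alpha`, `arg: beta`, `arg: gamma`, one per line, and leaves `$#` at 0.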
showkey
showkey(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training showkey(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | 2.6 KERNELS | SEE ALSO | COLOPHON SHOWKEY(1) General Commands Manual SHOWKEY(1) NAME top showkey - examine the codes sent by the keyboard SYNOPSIS top showkey [-h|--help] [-a|--ascii] [-s|--scancodes] [-k|--keycodes] [-V|--version] DESCRIPTION top showkey prints to standard output either the scan codes or the keycode or the `ascii' code of each key pressed. In the first two modes the program runs until 10 seconds have elapsed since the last key press or release event, or until it receives a suitable signal, like SIGTERM, from another process. In `ascii' mode the program terminates when the user types ^D. When in scancode dump mode, showkey prints in hexadecimal format each byte received from the keyboard to the standard output. A new line is printed when an interval of about 0.1 seconds occurs between the bytes received, or when the internal receive buffer fills up. This can be used to determine roughly what byte sequences the keyboard sends at once on a given key press. The scan code dumping mode is primarily intended for debugging the keyboard driver or other low level interfaces. As such it shouldn't be of much interest to the regular end-user. However, some modern keyboards have keys or buttons that produce scancodes to which the kernel does not associate a keycode, and, after finding out what these are, the user can assign keycodes with setkeycodes(8). When in the default keycode dump mode, showkey prints to the standard output the keycode number of each key pressed or released. The kind of the event, press or release, is also reported. Keycodes are numbers assigned by the kernel to each individual physical key. Every key has always only one associated keycode number, whether the keyboard sends single or multiple scan codes when pressing it. 
Using showkey in this mode, you can find out what numbers to use in your personalized keymap files. When in `ascii' dump mode, showkey prints to the standard output the decimal, octal, and hexadecimal value(s) of the key pressed, according to the present keymap. OPTIONS top -h --help showkey prints to the standard error output its version number, a compile option and a short usage message, then exits. -s --scancodes Starts showkey in scan code dump mode. -k --keycodes Starts showkey in keycode dump mode. This is the default, when no command line options are present. -a --ascii Starts showkey in `ascii' dump mode. -V --version showkey prints version number and exits. 2.6 KERNELS top In 2.6 kernels key codes lie in the range 1-255, instead of 1-127. Key codes larger than 127 are returned as three bytes of which the low order 7 bits are: zero, bits 13-7, and bits 6-0 of the key code. The high order bits are: 0/1 for make/break, 1, 1. In 2.6 kernels raw mode, or scancode mode, is not very raw at all. Scan codes are first translated to key codes, and when scancodes are desired, the key codes are translated back. Various transformations are involved, and there is no guarantee at all that the final result corresponds to what the keyboard hardware did send. So, if you want to know the scan codes sent by various keys it is better to boot a 2.4 kernel. Since 2.6.9 there also is the boot option atkbd.softraw=0 that tells the 2.6 kernel to return the actual scan codes. SEE ALSO top loadkeys(1), dumpkeys(1), keymaps(5), setkeycodes(8) COLOPHON top This page is part of the kbd (Linux keyboard tools) project. Information about the project can be found at http://www.kbd-project.org/. If you have a bug report for this manual page, send it to kbd@lists.altlinux.org. This page was obtained from the project's upstream Git repository https://github.com/legionus/kbd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-13.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org kbd 1 Feb 1998 SHOWKEY(1) Pages that refer to this page: loadkeys(1), keymaps(5), setkeycodes(8)
# showkey\n\n> Display the keycode of pressed keys on the keyboard, helpful for debugging keyboard-related issues and key remapping.\n> More information: <https://manned.org/showkey>.\n\n- View keycodes in decimal:\n\n`sudo showkey`\n\n- Display [s]cancodes in hexadecimal:\n\n`sudo showkey {{-s|--scancodes}}`\n\n- Display [k]eycodes in decimal (default):\n\n`sudo showkey {{-k|--keycodes}}`\n\n- Display keycodes in [a]SCII, decimal, and hexadecimal:\n\n`sudo showkey {{-a|--ascii}}`\n\n- Exit the program:\n\n`Ctrl + d`\n
shred
shred(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training shred(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SHRED(1) User Commands SHRED(1) NAME top shred - overwrite a file to hide its contents, and optionally delete it SYNOPSIS top shred [OPTION]... FILE... DESCRIPTION top Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data. If FILE is -, shred standard output. Mandatory arguments to long options are mandatory for short options too. -f, --force change permissions to allow writing if necessary -n, --iterations=N overwrite N times instead of the default (3) --random-source=FILE get random bytes from FILE -s, --size=N shred this many bytes (suffixes like K, M, G accepted) -u deallocate and remove file after overwriting --remove[=HOW] like -u but give control on HOW to delete; See below -v, --verbose show progress -x, --exact do not round file sizes up to the next full block; this is the default for non-regular files -z, --zero add a final overwrite with zeros to hide shredding --help display this help and exit --version output version information and exit Delete FILE(s) if --remove (-u) is specified. The default is not to remove the files because it is common to operate on device files like /dev/hda, and those files usually should not be removed. The optional HOW parameter indicates how to remove a directory entry: 'unlink' => use a standard unlink call. 'wipe' => also first obfuscate bytes in the name. 'wipesync' => also sync each obfuscated byte to the device. The default mode is 'wipesync', but note it can be expensive. CAUTION: shred assumes the file system and hardware overwrite data in place. Although this is common, many platforms operate otherwise. Also, backups and mirrors may contain unremovable copies that will let a shredded file be recovered later. 
See the GNU coreutils manual for details. AUTHOR top Written by Colin Plumb. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/shred> or available locally via: info '(coreutils) shred invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SHRED(1) Pages that refer to this page: rm(1), logrotate(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# shred\n\n> Overwrite files to securely delete data.\n> More information: <https://www.gnu.org/software/coreutils/shred>.\n\n- Overwrite a file:\n\n`shred {{path/to/file}}`\n\n- Overwrite a file and show progress on the screen:\n\n`shred --verbose {{path/to/file}}`\n\n- Overwrite a file, leaving [z]eros instead of random data:\n\n`shred --zero {{path/to/file}}`\n\n- Overwrite a file a specific [n]umber of times:\n\n`shred --iterations {{25}} {{path/to/file}}`\n\n- Overwrite a file and remove it:\n\n`shred --remove {{path/to/file}}`\n\n- Overwrite a file 100 times, add a final overwrite with [z]eros, remove the file after overwriting it and show [v]erbose progress on the screen:\n\n`shred -vzun 100 {{path/to/file}}`\n
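The flags in the last example compose in a way that is easy to verify. As a quick sketch (the scratch-file path is illustrative, not from the page above), one random pass plus a final [z]ero pass with the e[x]act-size option leaves a file of unchanged length containing only NUL bytes:

```shell
# Create an 11-byte scratch file (path is arbitrary for the demo).
printf 'secret data' > /tmp/shred-demo

# 1 random pass (-n 1), final zero pass (-z), keep the exact size (-x).
shred -n 1 -z -x /tmp/shred-demo

# Length is unchanged, and no non-NUL byte survives the -z pass.
wc -c < /tmp/shred-demo
tr -d '\0' < /tmp/shred-demo | wc -c
rm /tmp/shred-demo
```

Without `-x`, shred rounds a regular file up to the next full block, so the first `wc -c` would report the block size instead of the original length.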
shuf
shuf(1) - Linux manual page SHUF(1) User Commands SHUF(1) NAME top shuf - generate random permutations SYNOPSIS top shuf [OPTION]... [FILE] shuf -e [OPTION]... [ARG]... shuf -i LO-HI [OPTION]... DESCRIPTION top Write a random permutation of the input lines to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -e, --echo treat each ARG as an input line -i, --input-range=LO-HI treat each number LO through HI as an input line -n, --head-count=COUNT output at most COUNT lines -o, --output=FILE write result to FILE instead of standard output --random-source=FILE get random bytes from FILE -r, --repeat output lines can be repeated -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit AUTHOR top Written by Paul Eggert. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/shuf> or available locally via: info '(coreutils) shuf invocation' GNU coreutils 9.4 August 2023 SHUF(1) Pages that refer to this page: sort(1)
# shuf\n\n> Generate random permutations.\n> More information: <https://www.gnu.org/software/coreutils/shuf>.\n\n- Randomize the order of lines in a file and output the result:\n\n`shuf {{path/to/file}}`\n\n- Only output the first 5 entries of the result:\n\n`shuf --head-count={{5}} {{path/to/file}}`\n\n- Write the output to another file:\n\n`shuf {{path/to/input}} --output={{path/to/output}}`\n\n- Generate 3 random numbers in the range 1-10 (inclusive):\n\n`shuf --head-count={{3}} --input-range={{1-10}} --repeat`\n
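Since `--input-range` emits each number exactly once (unless `--repeat` is given), a permutation can be checked by re-sorting it. A sketch of typical pipelines, with made-up example words:

```shell
# A permutation of 1-5 contains each value exactly once;
# piping through sort -n restores the original order.
shuf -i 1-5 | sort -n

# Pick one line at random from ad-hoc arguments with -e/--echo.
shuf -e apple banana cherry -n 1
```

The second command prints one of the three words; which one varies per run, which is why sampling commands like this are usually wrapped in `$(...)` rather than compared against fixed output.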
shutdown
shutdown(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training shutdown(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | COMPATIBILITY | SEE ALSO | COLOPHON SHUTDOWN(8) shutdown SHUTDOWN(8) NAME top shutdown - Halt, power off or reboot the machine SYNOPSIS top shutdown [OPTIONS...] [TIME] [WALL...] DESCRIPTION top shutdown may be used to halt, power off, or reboot the machine. The first argument may be a time string (which is usually "now"). Optionally, this may be followed by a wall message to be sent to all logged-in users before going down. The time string may either be in the format "hh:mm" for hour/minutes specifying the time to execute the shutdown at, specified in 24h clock format. Alternatively it may be in the syntax "+m" referring to the specified number of minutes m from now. "now" is an alias for "+0", i.e. for triggering an immediate shutdown. If no time argument is specified, "+1" is implied. Note that to specify a wall message you must specify a time argument, too. If the time argument is used, 5 minutes before the system goes down the /run/nologin file is created to ensure that further logins shall not be allowed. OPTIONS top The following options are understood: --help Print a short help text and exit. -H, --halt Halt the machine. -P, --poweroff Power the machine off (the default). -r, --reboot Reboot the machine. -h The same as --poweroff, but does not override the action to take if it is "halt". E.g. shutdown --reboot -h means "poweroff", but shutdown --halt -h means "halt". -k Do not halt, power off, or reboot, but just write the wall message. --no-wall Do not send wall message before halt, power off, or reboot. -c Cancel a pending shutdown. This may be used to cancel the effect of an invocation of shutdown with a time argument that is not "+0" or "now". --show Show a pending shutdown action and time if there is any. Added in version 250. 
EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. COMPATIBILITY top The shutdown command in previous init systems (including sysvinit) defaulted to single-user mode instead of powering off the machine. To change into single-user mode, use systemctl rescue instead. SEE ALSO top systemd(1), systemctl(1), halt(8), wall(1) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) systemd 255 SHUTDOWN(8) Pages that refer to this page: last(1@@util-linux), login(1), wall(1), reboot(2), nologin(5), boot(7), systemd.directives(7), systemd.index(7), kexec(8), poweroff(8)
# shutdown\n\n> Halt, power off, or reboot the machine.\n> More information: <https://manned.org/shutdown.8>.\n\n- Power off ([h]alt) immediately:\n\n`shutdown -h now`\n\n- [r]eboot immediately:\n\n`shutdown -r now`\n\n- [r]eboot in 5 minutes:\n\n`shutdown -r +{{5}} &`\n\n- Power off ([h]alt) at 1:00 pm (using the 24-hour clock):\n\n`shutdown -h 13:00`\n\n- [c]ancel a pending shutdown/reboot operation:\n\n`shutdown -c`\n
size
size(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training size(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON SIZE(1) GNU Development Tools SIZE(1) NAME top size - list section sizes and total size of binary files SYNOPSIS top size [-A|-B|-G|--format=compatibility] [--help] [-d|-o|-x|--radix=number] [--common] [-t|--totals] [--target=bfdname] [-V|--version] [-f] [objfile...] DESCRIPTION top The GNU size utility lists the section sizes and the total size for each of the binary files objfile on its argument list. By default, one line of output is generated for each file or each module if the file is an archive. objfile... are the files to be examined. If none are specified, the file "a.out" will be used instead. OPTIONS top The command-line options have the following meanings: -A -B -G --format=compatibility Using one of these options, you can choose whether the output from GNU size resembles output from System V size (using -A, or --format=sysv), or Berkeley size (using -B, or --format=berkeley). The default is the one-line format similar to Berkeley's. Alternatively, you can choose the GNU format output (using -G, or --format=gnu), this is similar to Berkeley's output format, but sizes are counted differently. Here is an example of the Berkeley (default) format of output from size: $ size --format=Berkeley ranlib size text data bss dec hex filename 294880 81920 11592 388392 5ed28 ranlib 294880 81920 11888 388688 5ee50 size The Berkeley style output counts read only data in the "text" column, not in the "data" column, the "dec" and "hex" columns both display the sum of the "text", "data", and "bss" columns in decimal and hexadecimal respectively. The GNU format counts read only data in the "data" column, not the "text" column, and only displays the sum of the "text", "data", and "bss" columns once, in the "total" column. 
The --radix option can be used to change the number base for all columns. Here is the same data displayed with GNU conventions: $ size --format=GNU ranlib size text data bss total filename 279880 96920 11592 388392 ranlib 279880 96920 11888 388688 size This is the same data, but displayed closer to System V conventions: $ size --format=SysV ranlib size ranlib : section size addr .text 294880 8192 .data 81920 303104 .bss 11592 385024 Total 388392 size : section size addr .text 294880 8192 .data 81920 303104 .bss 11888 385024 Total 388688 --help -h -H -? Show a summary of acceptable arguments and options. -d -o -x --radix=number Using one of these options, you can control whether the size of each section is given in decimal (-d, or --radix=10); octal (-o, or --radix=8); or hexadecimal (-x, or --radix=16). In --radix=number, only the three values (8, 10, 16) are supported. The total size is always given in two radices; decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using -o. --common Print total size of common symbols in each file. When using Berkeley or GNU format these are included in the bss size. -t --totals Show totals of all objects listed (Berkeley or GNU format mode only). --target=bfdname Specify that the object-code format for objfile is bfdname. This option may not be necessary; size can automatically recognize many formats. -v -V --version Display the version number of size. -f Ignored. This option is used by other versions of the size program, but it is not supported by the GNU Binutils version. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. 
Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. SEE ALSO top ar(1), objdump(1), readelf(1), and the Info entries for binutils. COPYRIGHT top Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". COLOPHON top This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. binutils-2.41 2023-12-22 SIZE(1) Pages that refer to this page: elf(5)
# size\n\n> Display the sizes of sections inside binary files.\n> More information: <https://sourceware.org/binutils/docs/binutils/size.html>.\n\n- Display the size of sections in a given object or executable file:\n\n`size {{path/to/file}}`\n\n- Display the size of sections in a given object or executable file in [o]ctal:\n\n`size {{-o|--radix=8}} {{path/to/file}}`\n\n- Display the size of sections in a given object or executable file in [d]ecimal:\n\n`size {{-d|--radix=10}} {{path/to/file}}`\n\n- Display the size of sections in a given object or executable file in he[x]adecimal:\n\n`size {{-x|--radix=16}} {{path/to/file}}`\n
sleep
sleep(1) - Linux manual page SLEEP(1) User Commands SLEEP(1) NAME top sleep - delay for a specified amount of time SYNOPSIS top sleep NUMBER[SUFFIX]... sleep OPTION DESCRIPTION top Pause for NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. NUMBER need not be an integer. Given two or more arguments, pause for the amount of time specified by the sum of their values. --help display this help and exit --version output version information and exit AUTHOR top Written by Jim Meyering and Paul Eggert. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top sleep(3) Full documentation <https://www.gnu.org/software/coreutils/sleep> or available locally via: info '(coreutils) sleep invocation' GNU coreutils 9.4 August 2023 SLEEP(1) Pages that refer to this page: dbpmda(1), pmsleep(1), ioctl_ns(2), sleep(3)
# sleep\n\n> Delay for a specified amount of time.\n> More information: <https://www.gnu.org/software/coreutils/sleep>.\n\n- Delay in seconds:\n\n`sleep {{seconds}}`\n\n- Delay in [m]inutes (other suffixes: [s]econds, [h]ours, [d]ays; GNU sleep also accepts [inf]inity):\n\n`sleep {{minutes}}m`\n\n- Delay for 1 [d]ay 3 [h]ours:\n\n`sleep 1d 3h`\n\n- Execute a specific command after a 20-[m]inute delay:\n\n`sleep 20m && {{command}}`\n
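The summing behaviour of multiple arguments, noted in the manual above, can be observed directly. A small sketch (assuming GNU sleep, which accepts suffixed values):

```shell
# Two arguments are summed: 1s + 1 = 2 seconds of total delay.
start=$(date +%s)
sleep 1s 1
end=$(date +%s)
echo "slept for $((end - start)) seconds"
```

Because `date +%s` has one-second granularity, the measured difference is at least 2 but may occasionally read 3.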
sort
sort(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sort(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SORT(1) User Commands SORT(1) NAME top sort - sort lines of text files SYNOPSIS top sort [OPTION]... [FILE]... sort [OPTION]... --files0-from=F DESCRIPTION top Write sorted concatenation of all FILE(s) to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. Ordering options: -b, --ignore-leading-blanks ignore leading blanks -d, --dictionary-order consider only blanks and alphanumeric characters -f, --ignore-case fold lower case to upper case characters -g, --general-numeric-sort compare according to general numerical value -i, --ignore-nonprinting consider only printable characters -M, --month-sort compare (unknown) < 'JAN' < ... < 'DEC' -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G) -n, --numeric-sort compare according to string numerical value -R, --random-sort shuffle, but group identical keys. 
See shuf(1) --random-source=FILE get random bytes from FILE -r, --reverse reverse the result of comparisons --sort=WORD sort according to WORD: general-numeric -g, human-numeric -h, month -M, numeric -n, random -R, version -V -V, --version-sort natural sort of (version) numbers within text Other options: --batch-size=NMERGE merge at most NMERGE inputs at once; for more use temp files -c, --check, --check=diagnose-first check for sorted input; do not sort -C, --check=quiet, --check=silent like -c, but do not report first bad line --compress-program=PROG compress temporaries with PROG; decompress them with PROG -d --debug annotate the part of the line used to sort, and warn about questionable usage to stderr --files0-from=F read input from the files specified by NUL-terminated names in file F; If F is - then read names from standard input -k, --key=KEYDEF sort via a key; KEYDEF gives location and type -m, --merge merge already sorted files; do not sort -o, --output=FILE write result to FILE instead of standard output -s, --stable stabilize sort by disabling last-resort comparison -S, --buffer-size=SIZE use SIZE for main memory buffer -t, --field-separator=SEP use SEP instead of non-blank to blank transition -T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp; multiple options specify multiple directories --parallel=N change the number of sorts run concurrently to N -u, --unique with -c, check for strict ordering; without -c, output only the first of an equal run -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position, where F is a field number and C a character position in the field; both are origin 1, and the stop position defaults to the line's end. If neither -t nor -b is in effect, characters in a field are counted from the beginning of the preceding whitespace. 
OPTS is one or more single-letter ordering options [bdfgiMhnRrV], which override global ordering options for that key. If no key is given, use the entire line as the key. Use --debug to diagnose incorrect key usage. SIZE may be followed by the following multiplicative suffixes: % 1% of memory, b 1, K 1024 (default), and so on for M, G, T, P, E, Z, Y, R, Q. *** WARNING *** The locale specified by the environment affects sort order. Set LC_ALL=C to get the traditional sort order that uses native byte values. AUTHOR top Written by Mike Haertel and Paul Eggert. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top shuf(1), uniq(1) Full documentation <https://www.gnu.org/software/coreutils/sort> or available locally via: info '(coreutils) sort invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
GNU coreutils 9.4 August 2023 SORT(1) Pages that refer to this page: column(1), grep(1), look(1), prlimit(1), ps(1), uniq(1), qsort(3), environ(7)
# sort\n\n> Sort lines of text files.\n> More information: <https://www.gnu.org/software/coreutils/sort>.\n\n- Sort a file in ascending order:\n\n`sort {{path/to/file}}`\n\n- Sort a file in descending order:\n\n`sort --reverse {{path/to/file}}`\n\n- Sort a file in a case-insensitive way:\n\n`sort --ignore-case {{path/to/file}}`\n\n- Sort a file using numeric rather than alphabetic order:\n\n`sort --numeric-sort {{path/to/file}}`\n\n- Sort `/etc/passwd` by the 3rd field of each line numerically, using ":" as a field separator:\n\n`sort --field-separator={{:}} --key={{3n}} {{/etc/passwd}}`\n\n- Sort a file preserving only unique lines:\n\n`sort --unique {{path/to/file}}`\n\n- Sort a file, printing the output to the specified output file (can be used to sort a file in-place):\n\n`sort --output={{path/to/file}} {{path/to/file}}`\n\n- Sort numbers with exponents:\n\n`sort --general-numeric-sort {{path/to/file}}`\n
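The field-separator/key pairing from the `/etc/passwd` example works on any delimited input. A minimal sketch with inline data (the records are made up for illustration):

```shell
# Sort colon-separated records numerically by the 2nd field (-t ':' -k 2n).
# A plain string sort would instead order them "100" < "30" < "4".
printf 'bob:30\nalice:4\ncarol:100\n' | sort -t ':' -k 2n
```

This prints `alice:4`, `bob:30`, `carol:100`; the `n` modifier on the key overrides the global (lexicographic) ordering for that key only.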
split
split(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training split(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SPLIT(1) User Commands SPLIT(1) NAME top split - split a file into pieces SYNOPSIS top split [OPTION]... [FILE [PREFIX]] DESCRIPTION top Output pieces of FILE to PREFIXaa, PREFIXab, ...; default size is 1000 lines, and default PREFIX is 'x'. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --suffix-length=N generate suffixes of length N (default 2) --additional-suffix=SUFFIX append an additional SUFFIX to file names -b, --bytes=SIZE put SIZE bytes per output file -C, --line-bytes=SIZE put at most SIZE bytes of records per output file -d use numeric suffixes starting at 0, not alphabetic --numeric-suffixes[=FROM] same as -d, but allow setting the start value -x use hex suffixes starting at 0, not alphabetic --hex-suffixes[=FROM] same as -x, but allow setting the start value -e, --elide-empty-files do not generate empty output files with '-n' --filter=COMMAND write to shell COMMAND; file name is $FILE -l, --lines=NUMBER put NUMBER lines/records per output file -n, --number=CHUNKS generate CHUNKS output files; see explanation below -t, --separator=SEP use SEP instead of newline as the record separator; '\0' (zero) specifies the NUL character -u, --unbuffered immediately copy input to output with '-n r/...' --verbose print a diagnostic just before each output file is opened --help display this help and exit --version output version information and exit The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. 
CHUNKS may be: N split into N files based on size of input K/N output Kth of N to stdout l/N split into N files without splitting lines/records l/K/N output Kth of N to stdout without splitting lines/records r/N like 'l' but use round robin distribution r/K/N likewise but only output Kth of N to stdout AUTHOR top Written by Torbjorn Granlund and Richard M. Stallman. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/split> or available locally via: info '(coreutils) split invocation' GNU coreutils 9.4 August 2023 SPLIT(1)
# split\n\n> Split a file into pieces.\n> More information: <https://www.gnu.org/software/coreutils/split>.\n\n- Split a file, with each piece containing 10 lines (except the last):\n\n`split -l {{10}} {{path/to/file}}`\n\n- Split a file into 5 pieces of equal size (except the last):\n\n`split -n {{5}} {{path/to/file}}`\n\n- Split a file into pieces of 512 bytes each (except the last; use 512k for kilobytes and 512m for megabytes):\n\n`split -b {{512}} {{path/to/file}}`\n\n- Split a file into pieces of at most 512 bytes each without breaking lines:\n\n`split -C {{512}} {{path/to/file}}`\n
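Pieces produced by split concatenate back into the original in suffix order (`xaa`, `xab`, ...), which makes round-trips easy to check. A sketch with illustrative paths and prefix:

```shell
# Make a 10-line sample file (names are arbitrary for the demo).
seq 1 10 > /tmp/split-demo.txt

# 3 lines per piece: demo-aa, demo-ab, demo-ac hold 3 lines; demo-ad holds 1.
(cd /tmp && split -l 3 split-demo.txt demo-)

# Concatenating the pieces in suffix order restores the original byte-for-byte.
cat /tmp/demo-a? | cmp - /tmp/split-demo.txt && echo "round-trip OK"
```

The shell glob `demo-a?` expands in sorted order, which matches split's alphabetic suffix sequence, so no explicit ordering is needed.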
ss
ss(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training ss(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | STATE-FILTER | EXPRESSION | HOST SYNTAX | USAGE EXAMPLES | SEE ALSO | AUTHOR | COLOPHON SS(8) System Manager's Manual SS(8) NAME top ss - another utility to investigate sockets SYNOPSIS top ss [options] [ FILTER ] DESCRIPTION top ss is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools. OPTIONS top When no option is used ss displays a list of open non-listening sockets (e.g. TCP/UNIX/UDP) that have established connection. -h, --help Show summary of options. -V, --version Output version information. -H, --no-header Suppress header line. -O, --oneline Print each socket's data on a single line. -n, --numeric Do not try to resolve service names. Show exact bandwidth values, instead of human-readable. -r, --resolve Try to resolve numeric address/ports. -a, --all Display both listening and non-listening (for TCP this means established connections) sockets. -l, --listening Display only listening sockets (these are omitted by default). -o, --options Show timer information. For TCP protocol, the output format is: timer:(<timer_name>,<expire_time>,<retrans>) <timer_name> the name of the timer, there are five kind of timer names: on : means one of these timers: TCP retrans timer, TCP early retrans timer and tail loss probe timer keepalive: tcp keep alive timer timewait: timewait stage timer persist: zero window probe timer unknown: none of the above timers <expire_time> how long time the timer will expire <retrans> how many times the retransmission occurred -e, --extended Show detailed socket information. 
The output format is: uid:<uid_number> ino:<inode_number> sk:<cookie> <uid_number> the user id the socket belongs to <inode_number> the socket's inode number in VFS <cookie> an uuid of the socket -m, --memory Show socket memory usage. The output format is: skmem:(r<rmem_alloc>,rb<rcv_buf>,t<wmem_alloc>,tb<snd_buf>, f<fwd_alloc>,w<wmem_queued>,o<opt_mem>, bl<back_log>,d<sock_drop>) <rmem_alloc> the memory allocated for receiving packet <rcv_buf> the total memory can be allocated for receiving packet <wmem_alloc> the memory used for sending packet (which has been sent to layer 3) <snd_buf> the total memory can be allocated for sending packet <fwd_alloc> the memory allocated by the socket as cache, but not used for receiving/sending packet yet. If need memory to send/receive packet, the memory in this cache will be used before allocate additional memory. <wmem_queued> The memory allocated for sending packet (which has not been sent to layer 3) <opt_mem> The memory used for storing socket option, e.g., the key for TCP MD5 signature <back_log> The memory used for the sk backlog queue. On a process context, if the process is receiving packet, and a new packet is received, it will be put into the sk backlog queue, so it can be received by the process immediately <sock_drop> the number of packets dropped before they are de- multiplexed into the socket -p, --processes Show process using socket. -T, --threads Show thread using socket. Implies -p. -i, --info Show internal TCP information. 
The following fields may appear: ts show string "ts" if the timestamp option is set sack show string "sack" if the sack option is set ecn show string "ecn" if the explicit congestion notification option is set ecnseen show string "ecnseen" if the saw ecn flag is found in received packets fastopen show string "fastopen" if the fastopen option is set cong_alg the congestion algorithm name; the default congestion algorithm is "cubic" wscale:<snd_wscale>:<rcv_wscale> if the window scale option is used, this field shows the send scale factor and receive scale factor rto:<icsk_rto> tcp retransmission timeout value, in milliseconds backoff:<icsk_backoff> used for exponential backoff retransmission; the actual retransmission timeout value is icsk_rto << icsk_backoff rtt:<rtt>/<rttvar> rtt is the average round trip time, rttvar is the mean deviation of rtt, both in milliseconds ato:<ato> ack timeout, in milliseconds, used for delayed ack mode mss:<mss> max segment size cwnd:<cwnd> congestion window size pmtu:<pmtu> path MTU value ssthresh:<ssthresh> tcp congestion window slow start threshold bytes_acked:<bytes_acked> bytes acked bytes_received:<bytes_received> bytes received segs_out:<segs_out> segments sent out segs_in:<segs_in> segments received send <send_bps>bps egress bps lastsnd:<lastsnd> time since the last packet was sent, in milliseconds lastrcv:<lastrcv> time since the last packet was received, in milliseconds lastack:<lastack> time since the last ack was received, in milliseconds pacing_rate <pacing_rate>bps/<max_pacing_rate>bps the pacing rate and max pacing rate rcv_space:<rcv_space> a helper variable for TCP internal auto tuning of the socket receive buffer tcp-ulp-mptcp flags:[MmBbJjecv] token:<rem_token(rem_id)/loc_token(loc_id)> seq:<sn> sfseq:<ssn> ssnoff:<off> maplen:<maplen> MPTCP subflow information --tos Show ToS and priority information. 
The following fields may appear: tos IPv4 Type-of-Service byte tclass IPv6 Traffic Class byte class_id Class id set by the net_cls cgroup. If class is zero this shows the priority set by SO_PRIORITY. --cgroup Show cgroup information. The following fields may appear: cgroup Cgroup v2 pathname. This pathname is relative to the mount point of the hierarchy. --tipcinfo Show internal tipc socket information. -K, --kill Attempts to forcibly close sockets. This option displays sockets that are successfully closed and silently skips sockets that the kernel does not support closing. It supports IPv4 and IPv6 sockets only. -s, --summary Print summary statistics. This option does not parse socket lists; it obtains its summary from various sources. It is useful when the number of sockets is so huge that parsing /proc/net/tcp is painful. -E, --events Continually display sockets as they are destroyed. -Z, --context As the -p option but also shows the process security context. If the -T option is used, also shows the thread security context. For netlink(7) sockets the initiating process context is displayed as follows: 1. If valid pid show the process context. 2. If destination is kernel (pid = 0) show kernel initial context. 3. If a unique identifier has been allocated by the kernel or netlink user, show context as "unavailable". This will generally indicate that a process has more than one netlink socket active. -z, --contexts As the -Z option but also shows the socket context. The socket context is taken from the associated inode and is not the actual socket context held by the kernel. Sockets are typically labeled with the context of the creating process; however, the context shown will reflect any policy role, type and/or range transition rules applied, and is therefore a useful reference. -N NSNAME, --net=NSNAME Switch to the specified network namespace name. -b, --bpf Show socket classic BPF filters (only administrators are allowed to get this information). 
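The -m and -i outputs described above are plain space-separated text, so they are easy to post-process with standard tools. A minimal sketch, assuming made-up sample strings in the documented formats (not taken from a live system), that extracts the receive-buffer limit from a skmem field and the round-trip-time pair from an info line:

```shell
# Made-up sample fields in the formats documented above for -m and -i.
mem='skmem:(r0,rb374400,t0,tb87040,f0,w0,o0,bl0,d0)'
info='cubic wscale:7,7 rto:204 rtt:1.5/0.75 ato:40 mss:1448 cwnd:10'

# <rcv_buf>: the number after the "rb" tag inside skmem:(...).
rcv_buf=$(printf '%s\n' "$mem" | sed -n 's/.*rb\([0-9]*\).*/\1/p')

# rtt:<rtt>/<rttvar>: isolate the token, then split it on "/".
pair=$(printf '%s\n' "$info" | tr ' ' '\n' | grep '^rtt:' | cut -d: -f2)

echo "rcv_buf=$rcv_buf avg_rtt=${pair%/*}ms rttvar=${pair#*/}ms"
```

The same sed pattern works for the other skmem tags (tb, f, w, bl, ...) by swapping the prefix in the expression; on a real system the input would come from `ss -m` or `ss -ti`.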
-4, --ipv4 Display only IP version 4 sockets (alias for -f inet). -6, --ipv6 Display only IP version 6 sockets (alias for -f inet6). -0, --packet Display PACKET sockets (alias for -f link). -t, --tcp Display TCP sockets. -u, --udp Display UDP sockets. -d, --dccp Display DCCP sockets. -w, --raw Display RAW sockets. -x, --unix Display Unix domain sockets (alias for -f unix). -S, --sctp Display SCTP sockets. --tipc Display tipc sockets (alias for -f tipc). --vsock Display vsock sockets (alias for -f vsock). --xdp Display XDP sockets (alias for -f xdp). -M, --mptcp Display MPTCP sockets. --inet-sockopt Display inet socket options. -f FAMILY, --family=FAMILY Display sockets of type FAMILY. Currently the following families are supported: unix, inet, inet6, link, netlink, vsock, tipc, xdp. -A QUERY, --query=QUERY, --socket=QUERY List of socket tables to dump, separated by commas. The following identifiers are understood: all, inet, tcp, udp, raw, unix, packet, netlink, unix_dgram, unix_stream, unix_seqpacket, packet_raw, packet_dgram, dccp, sctp, tipc, vsock_stream, vsock_dgram, xdp, mptcp. Any item in the list may optionally be prefixed by an exclamation mark (!) to exclude that socket table from being dumped. -D FILE, --diag=FILE Do not display anything; just dump raw information about TCP sockets to FILE after applying filters. If FILE is - stdout is used. -F FILE, --filter=FILE Read filter information from FILE. Each line of FILE is interpreted like a single command-line option. If FILE is - stdin is used. FILTER := [ state STATE-FILTER ] [ EXPRESSION ] Please take a look at the official documentation for details regarding filters. STATE-FILTER top STATE-FILTER allows one to construct an arbitrary set of states to match. Its syntax is a sequence of the keywords state and exclude followed by an identifier of a state. 
Available identifiers are: All standard TCP states: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listening and closing. all - for all the states connected - all the states except for listening and closed synchronized - all the connected states except for syn-sent bucket - states which are maintained as minisockets, i.e. time-wait and syn-recv big - opposite to bucket EXPRESSION top EXPRESSION allows filtering based on specific criteria. EXPRESSION consists of a series of predicates combined by boolean operators. The possible operators in increasing order of precedence are or (or | or ||), and (or & or &&), and not (or !). If no operator is between consecutive predicates, an implicit and operator is assumed. Subexpressions can be grouped with "(" and ")". The following predicates are supported: {dst|src} [=] HOST Test if the destination or source matches HOST. See HOST SYNTAX for details. {dport|sport} [OP] [FAMILY:]:PORT Compare the destination or source port to PORT. OP can be any of "<", "<=", "=", "!=", ">=" and ">", following normal arithmetic rules. FAMILY and PORT are as described in HOST SYNTAX below. dev [=|!=] DEVICE Match based on the device the connection uses. DEVICE can either be a device name or the index of the interface. fwmark [=|!=] MASK Matches based on the fwmark value for the connection. This can either be a specific mark value or a mark value followed by a "/" and a bitmask of which bits to use in the comparison. For example "fwmark = 0x01/0x03" would match if the two least significant bits of the fwmark were 0x01. cgroup [=|!=] PATH Match if the connection is part of a cgroup at the given path. autobound Match if the port or path of the source address was automatically allocated (rather than explicitly specified). Most operators have aliases. If no operator is supplied "=" is assumed. 
Each of the following groups of operators are all equivalent: = == eq != ne neq > gt < lt >= ge geq <= le leq ! not | || or & && and HOST SYNTAX top The general host syntax is [FAMILY:]ADDRESS[:PORT]. FAMILY must be one of the families supported by the -f option. If not given it defaults to the family given with the -f option, and if that is also missing, will assume either inet or inet6. Note that all host conditions in the expression should either all be the same family or be only inet and inet6. If there is some other mixture of families, the results will probably be unexpected. The form of ADDRESS and PORT depends on the family used. "*" can be used as a wildcard for either the address or port. The details for each family are as follows: unix ADDRESS is a glob pattern (see fnmatch(3)) that will be matched case-insensitively against the unix socket's address. Both path and abstract names are supported. Unix addresses do not support a port, and "*" cannot be used as a wildcard. link ADDRESS is the case-insensitive name of an Ethernet protocol to match. PORT is either a device name or a device index for the desired link device, as seen in the output of ip link. netlink ADDRESS is a descriptor of the netlink family. Possible values come from /etc/iproute2/nl_protos. PORT is the port id of the socket, which is usually the same as the owning process id. The value "kernel" can be used to represent the kernel (port id of 0). vsock ADDRESS is an integer representing the CID address, and PORT is the port. inet and inet6 ADDRESS is an ip address (either v4 or v6 depending on the family) or a DNS hostname that resolves to an ip address of the required version. An ipv6 address must be enclosed in "[" and "]" to disambiguate the port separator. The address may additionally have a prefix length given in CIDR notation (a slash followed by the prefix length in bits). PORT is either the numerical socket port, or the service name for the port to match. 
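State filters, predicates, boolean operators and the host syntax above combine into a single command line. A hedged sketch (the address and port are placeholders, and the command is only echoed here rather than run, since the sockets it would match depend on the running system):

```shell
# An implicit "and" joins the state filter, the grouped port test and the
# dst test; parentheses group the subexpression. An inet6 destination would
# need brackets, e.g. dst [2001:db8::1]:443.
filter='( dport = :443 or sport = :443 ) and dst 203.0.113.0/24'
echo ss -tn state established "$filter"
```

On a real system, dropping the leading `echo` would list established TCP connections to or from port 443 involving the 203.0.113.0/24 subnet.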
USAGE EXAMPLES top ss -t -a Display all TCP sockets. ss -t -a -Z Display all TCP sockets with process SELinux security contexts. ss -u -a Display all UDP sockets. ss -o state established '( dport = :ssh or sport = :ssh )' Display all established ssh connections. ss -x src /tmp/.X11-unix/* Find all local processes connected to X server. ss -o state fin-wait-1 '( sport = :http or sport = :https )' dst 193.233.7/24 List all the tcp sockets in state FIN-WAIT-1 for our apache to network 193.233.7/24 and look at their timers. ss -a -A 'all,!tcp' List sockets in all states from all socket tables but TCP. SEE ALSO top ip(8), RFC 793 - https://tools.ietf.org/rfc/rfc793.txt (TCP states) AUTHOR top ss was written by Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>. This manual page was written by Michael Prokop <mika@grml.org> for the Debian project (but may be used by others). COLOPHON top This page is part of the iproute2 (utilities for controlling TCP/IP networking and traffic) project. Information about the project can be found at http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2. If you have a bug report for this manual page, send it to netdev@vger.kernel.org, shemminger@osdl.org. This page was obtained from the project's upstream Git repository https://git.kernel.org/pub/scm/network/iproute2/iproute2.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) SS(8) 
# ss

> Utility to investigate sockets.
> More information: <https://manned.org/ss.8>.

- Show all TCP/UDP/RAW/UNIX sockets:

`ss -a {{-t|-u|-w|-x}}`

- Filter TCP sockets by states, only/exclude:

`ss {{state/exclude}} {{bucket/big/connected/synchronized/...}}`

- Show all TCP sockets connected to the local HTTPS port (443):

`ss -t src :{{443}}`

- Show all TCP sockets listening on the local 8080 port:

`ss -lt src :{{8080}}`

- Show all TCP sockets along with processes connected to a remote SSH port:

`ss -pt dst :{{ssh}}`

- Show all UDP sockets connected on specific source and destination ports:

`ss -u 'sport == :{{source_port}} and dport == :{{destination_port}}'`

- Show all TCP IPv4 sockets locally connected on the subnet 192.168.0.0/16:

`ss -4t src {{192.168/16}}`

- Kill IPv4 or IPv6 socket connection with destination IP 192.168.1.17 and destination port 8080:

`ss --kill dst {{192.168.1.17}} dport = {{8080}}`
ssh
ssh(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHENTICATION | ESCAPE CHARACTERS | TCP FORWARDING | X11 FORWARDING | VERIFYING HOST KEYS | SSH-BASED VIRTUAL PRIVATE NETWORKS | ENVIRONMENT | FILES | EXIT STATUS | SEE ALSO | STANDARDS | AUTHORS | COLOPHON SSH(1) General Commands Manual SSH(1) NAME top ssh - OpenSSH remote login client SYNOPSIS top ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port] [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11] [-i identity_file] [-J destination] [-L address] [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-P tag] [-p port] [-R address] [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] destination [command [argument ...]] [-Q query_option] DESCRIPTION top ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and Unix-domain sockets can also be forwarded over the secure channel. ssh connects and logs into the specified destination, which may be specified as either [user@]hostname or a URI of the form ssh://[user@]hostname[:port]. The user must prove their identity to the remote machine using one of several methods (see below). If a command is specified, it will be executed on the remote host instead of a login shell. A complete command line may be specified as command, or it may have additional arguments. If supplied, the arguments will be appended to the command, separated by spaces, before it is sent to the server to be executed. The options are as follows: -4 Forces ssh to use IPv4 addresses only. -6 Forces ssh to use IPv6 addresses only. -A Enables forwarding of connections from an authentication agent such as ssh-agent(1). 
This can also be specified on a per-host basis in a configuration file. Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's Unix-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J). -a Disables forwarding of the authentication agent connection. -B bind_interface Bind to the address of bind_interface before attempting to connect to the destination host. This is only useful on systems with more than one address. -b bind_address Use bind_address on the local machine as the source address of the connection. Only useful on systems with more than one address. -C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11, TCP and Unix-domain connections). The compression algorithm is the same used by gzip(1). Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by-host basis in the configuration files; see the Compression option in ssh_config(5). -c cipher_spec Selects the cipher specification for encrypting the session. cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information. -D [bind_address:]port Specifies a local dynamic application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. 
Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only the superuser can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file. IPv6 addresses can be specified by enclosing the address in square brackets. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of localhost indicates that the listening port be bound for local use only, while an empty address or * indicates that the port should be available from all interfaces. -E log_file Append debug logs to log_file instead of standard error. -e escape_char Sets the escape character for sessions with a pty (default: ~). The escape character is only recognized at the beginning of a line. The escape character followed by a dot (.) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to none disables any escapes and makes the session fully transparent. -F configfile Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config. If set to none, no configuration files will be read. -f Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm. If the ExitOnForwardFailure configuration option is set to yes, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background. 
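Combining -f, -N and -D as described above gives a backgrounded SOCKS proxy. A hedged sketch: the host names are placeholders and the commands are only echoed, since no real gateway is assumed here.

```shell
# -f backgrounds ssh after authentication, -N skips the remote command,
# -D 1080 opens a local SOCKS4/5 listener tunnelled via the gateway.
cmd='ssh -f -N -D 1080 user@gateway.example.com'
echo "$cmd"
# Applications then point at localhost:1080 as a SOCKS proxy, e.g.:
echo 'curl --socks5-hostname localhost:1080 https://intranet.example/'
```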
Refer to the description of ForkAfterAuthentication in ssh_config(5) for details. -G Causes ssh to print its configuration after evaluating Host and Match blocks and exit. -g Allows remote hosts to connect to local forwarded ports. If used on a multiplexed connection, then this option must be specified on the master process. -I pkcs11 Specify the PKCS#11 shared library ssh should use to communicate with a PKCS#11 token providing keys for user authentication. -i identity_file Selects a file from which the identity (private key) for public key authentication is read. You can also specify a public key file to use the corresponding private key that is loaded in ssh-agent(1) when the private key file is not present locally. The default is ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk and ~/.ssh/id_dsa. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). If no certificates have been explicitly specified by the CertificateFile directive, ssh will also try to load certificate information from the filename obtained by appending -cert.pub to identity filenames. -J destination Connect to the target host by first making an ssh connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. Note that configuration directives supplied on the command-line generally apply to the destination host and not any specified jump hosts. Use ~/.ssh/config to specify configuration for jump hosts. -K Enables GSSAPI-based authentication and forwarding (delegation) of GSSAPI credentials to the server. -k Disables forwarding (delegation) of GSSAPI credentials to the server. 
-L [bind_address:]port:host:hostport -L [bind_address:]port:remote_socket -L local_socket:host:hostport -L local_socket:remote_socket Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port, or Unix socket, on the remote side. This works by allocating a socket to listen to either a TCP port on the local side, optionally bound to the specified bind_address, or to a Unix socket. Whenever a connection is made to the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host port hostport, or the Unix socket remote_socket, from the remote machine. Port forwardings can also be specified in the configuration file. Only the superuser can forward privileged ports. IPv6 addresses can be specified by enclosing the address in square brackets. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of localhost indicates that the listening port be bound for local use only, while an empty address or * indicates that the port should be available from all interfaces. -l login_name Specifies the user to log in as on the remote machine. This also may be specified on a per-host basis in the configuration file. -M Places the ssh client into master mode for connection sharing. Multiple -M options place ssh into master mode with confirmation required using ssh-askpass(1) before each operation that changes the multiplexing state (e.g. opening a new session). Refer to the description of ControlMaster in ssh_config(5) for details. -m mac_spec A comma-separated list of MAC (message authentication code) algorithms, specified in order of preference. See the MACs keyword in ssh_config(5) for more information. -N Do not execute a remote command. This is useful for just forwarding ports. 
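A common pairing of -N with -L is reaching a service that is only visible from the remote side. A hedged sketch: all names are placeholders, and the command is echoed rather than run since no real bastion host is assumed.

```shell
# Local connections to port 5433 are carried over the SSH channel and
# delivered to db.internal:5432 as seen from bastion.example.com.
fwd='5433:db.internal:5432'
echo "ssh -f -N -L $fwd user@bastion.example.com"
# A local client would then connect to localhost:5433 as if it were the database.
```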
Refer to the description of SessionType in ssh_config(5) for details. -n Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.) Refer to the description of StdinNull in ssh_config(5) for details. -O ctl_cmd Control an active connection multiplexing master process. When the -O option is specified, the ctl_cmd argument is interpreted and passed to the master process. Valid commands are: check (check that the master process is running), forward (request forwardings without command execution), cancel (cancel forwardings), exit (request the master to exit), and stop (request the master to stop accepting further multiplexing requests). -o option Can be used to give options in the format used in the configuration file. This is useful for specifying options for which there is no separate command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). 
AddKeysToAgent AddressFamily BatchMode BindAddress CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers ClearAllForwardings Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist DynamicForward EnableEscapeCommandline EscapeChar ExitOnForwardFailure FingerprintHash ForkAfterAuthentication ForwardAgent ForwardX11 ForwardX11Timeout ForwardX11Trusted GatewayPorts GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LocalCommand LocalForward LogLevel MACs Match NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PermitLocalCommand PermitRemoteOpen PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump ProxyUseFdpass PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RemoteCommand RemoteForward RequestTTY RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SessionType SetEnv StdinNull StreamLocalBindMask StreamLocalBindUnlink StrictHostKeyChecking TCPKeepAlive Tunnel TunnelDevice UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS VisualHostKey XAuthLocation -P tag Specify a tag name that may be used to select configuration in ssh_config(5). Refer to the Tag and Match keywords in ssh_config(5) for more information. -p port Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file. 
-Q query_option Queries for the algorithms supported by one of the following features: cipher (supported symmetric ciphers), cipher-auth (supported symmetric ciphers that support authenticated encryption), help (supported query terms for use with the -Q flag), mac (supported message integrity codes), kex (key exchange algorithms), key (key types), key-ca-sign (valid CA signature algorithms for certificates), key-cert (certificate key types), key-plain (non-certificate key types), key-sig (all key types and signature algorithms), protocol-version (supported SSH protocol versions), and sig (supported signature algorithms). Alternatively, any keyword from ssh_config(5) or sshd_config(5) that takes an algorithm list may be used as an alias for the corresponding query_option. -q Quiet mode. Causes most warning and diagnostic messages to be suppressed. -R [bind_address:]port:host:hostport -R [bind_address:]port:local_socket -R remote_socket:host:hostport -R remote_socket:local_socket -R [bind_address:]port Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side. This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client. Port forwardings can also be specified in the configuration file. Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6 addresses can be specified by enclosing the address in square brackets. By default, TCP listening sockets on the server will be bound to the loopback interface only. 
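For instance, -R can publish a local development server on the remote host's loopback interface. A hedged sketch: the names and ports are placeholders, and the command is only echoed since no real server is assumed.

```shell
# On server.example.com, connections to port 8080 (loopback by default,
# per the paragraph above) are forwarded back to localhost:3000 here.
echo 'ssh -f -N -R 8080:localhost:3000 user@server.example.com'
```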
This may be overridden by specifying a bind_address. An empty bind_address, or the address *, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)). If the port argument is 0, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward, the allocated port will be printed to the standard output. -S ctl_path Specifies the location of a control socket for connection sharing, or the string none to disable connection sharing. Refer to the description of ControlPath and ControlMaster in ssh_config(5) for details. -s May be used to request invocation of a subsystem on the remote system. Subsystems facilitate the use of SSH as a secure transport for other applications (e.g. sftp(1)). The subsystem is specified as the remote command. Refer to the description of SessionType in ssh_config(5) for details. -T Disable pseudo-terminal allocation. -t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty. -V Display the version number and exit. -v Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3. -W host:port Requests that standard input and output on the client be forwarded to host on port over the secure channel. Implies -N, -T, ExitOnForwardFailure and ClearAllForwardings, though these can be overridden in the configuration file or using -o command line options. -w local_tun[:remote_tun] Requests tunnel device forwarding with the specified tun(4) devices between the client (local_tun) and the server (remote_tun). 
The devices may be specified by numerical ID or the keyword any, which uses the next available tunnel device. If remote_tun is not specified, it defaults to any. See also the Tunnel and TunnelDevice directives in ssh_config(5). If the Tunnel directive is unset, it will be set to the default tunnel mode, which is point-to-point. If a different Tunnel forwarding mode is desired, then it should be specified before -w. -X Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file. X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Refer to the -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information. -x Disables X11 forwarding. -Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls. -y Send log information using the syslog(3) system module. By default this information is sent to stderr. ssh may additionally obtain configuration data from a per-user configuration file and a system-wide configuration file. The file format and configuration options are described in ssh_config(5). AUTHENTICATION top The OpenSSH SSH client supports SSH protocol 2. The methods available for authentication are: GSSAPI-based authentication, host-based authentication, public key authentication, keyboard-interactive authentication, and password authentication. Authentication methods are tried in the order specified above, though PreferredAuthentications can be used to change the default order. 
Host-based authentication works as follows: If the machine the user logs in from is listed in /etc/hosts.equiv or /etc/shosts.equiv on the remote machine, the user is non-root and the user names are the same on both sides, or if the files ~/.rhosts or ~/.shosts exist in the user's home directory on the remote machine and contain a line containing the name of the client machine and the name of the user on that machine, the user is considered for login. Additionally, the server must be able to verify the client's host key (see the description of /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts, below) for login to be permitted. This authentication method closes security holes due to IP spoofing, DNS spoofing, and routing spoofing. [Note to the administrator: /etc/hosts.equiv, ~/.rhosts, and the rlogin/rsh protocol in general, are inherently insecure and should be disabled if security is desired.] Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are done using separate keys, and it is unfeasible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements the public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms. The HISTORY section of ssl(8) contains a brief discussion of the DSA and RSA algorithms. The file ~/.ssh/authorized_keys lists the public keys that are permitted for logging in. When the user logs in, the ssh program tells the server which key pair it would like to use for authentication. The client proves that it has access to the private key and the server checks that the corresponding public key is authorized to accept the account. 
The server may inform the client of errors that prevented public key authentication from succeeding after authentication completes using a different method. These may be viewed by increasing the LogLevel to DEBUG or higher (e.g. by using the -v flag). The user creates their key pair by running ssh-keygen(1). This stores the private key in ~/.ssh/id_dsa (DSA), ~/.ssh/id_ecdsa (ECDSA), ~/.ssh/id_ecdsa_sk (authenticator-hosted ECDSA), ~/.ssh/id_ed25519 (Ed25519), ~/.ssh/id_ed25519_sk (authenticator-hosted Ed25519), or ~/.ssh/id_rsa (RSA) and stores the public key in ~/.ssh/id_dsa.pub (DSA), ~/.ssh/id_ecdsa.pub (ECDSA), ~/.ssh/id_ecdsa_sk.pub (authenticator-hosted ECDSA), ~/.ssh/id_ed25519.pub (Ed25519), ~/.ssh/id_ed25519_sk.pub (authenticator-hosted Ed25519), or ~/.ssh/id_rsa.pub (RSA) in the user's home directory. The user should then copy the public key to ~/.ssh/authorized_keys in their home directory on the remote machine. The authorized_keys file corresponds to the conventional ~/.rhosts file, and has one key per line, though the lines can be very long. After this, the user can log in without giving the password. A variation on public key authentication is available in the form of certificate authentication: instead of a set of public/private keys, signed certificates are used. This has the advantage that a single trusted certification authority can be used in place of many public/private keys. See the CERTIFICATES section of ssh-keygen(1) for more information. The most convenient way to use public key or certificate authentication may be with an authentication agent. See ssh-agent(1) and (optionally) the AddKeysToAgent directive in ssh_config(5) for more information. Keyboard-interactive authentication works as follows: The server sends an arbitrary "challenge" text and prompts for a response, possibly multiple times. Examples of keyboard-interactive authentication include BSD Authentication (see login.conf(5)) and PAM (some non-OpenBSD systems). 
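The key-creation steps above can be sketched as shell commands. This is a minimal sketch, not the only workflow: the Ed25519 key type, the scratch path, and the empty passphrase are illustrative choices (use a real passphrase in practice), and installing the public half on the server is shown only as a comment because it needs a reachable remote host.

```shell
# Generate an Ed25519 key pair with ssh-keygen(1); -q is quiet, -N '' sets
# an empty passphrase (illustrative only), -f picks the output path.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/id_ed25519"

# The .pub half is what gets appended to ~/.ssh/authorized_keys on the
# server, e.g. with: ssh-copy-id -i "$dir/id_ed25519.pub" user@host.example.com
cat "$dir/id_ed25519.pub"
```

The private key stays on the client; only the single `ssh-ed25519 ...` line printed above ever needs to reach the server.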
Finally, if other authentication methods fail, ssh prompts the user for a password. The password is sent to the remote host for checking; however, since all communications are encrypted, the password cannot be seen by someone listening on the network. ssh automatically maintains and checks a database containing identification for all hosts it has ever been used with. Host keys are stored in ~/.ssh/known_hosts in the user's home directory. Additionally, the file /etc/ssh/ssh_known_hosts is automatically checked for known hosts. Any new hosts are automatically added to the user's file. If a host's identification ever changes, ssh warns about this and disables password authentication to prevent server spoofing or man-in-the-middle attacks, which could otherwise be used to circumvent the encryption. The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed. When the user's identity has been accepted by the server, the server either executes the given command in a non-interactive session or, if no command has been specified, logs into the machine and gives the user a normal shell as an interactive session. All communication with the remote command or shell will be automatically encrypted. If an interactive session is requested, ssh by default will only request a pseudo-terminal (pty) for interactive sessions when the client has one. The flags -T and -t can be used to override this behaviour. If a pseudo-terminal has been allocated, the user may use the escape characters noted below. If no pseudo-terminal has been allocated, the session is transparent and can be used to reliably transfer binary data. On most systems, setting the escape character to none will also make the session transparent even if a tty is used. The session terminates when the command or shell on the remote machine exits and all X11 and TCP connections have been closed. 
ESCAPE CHARACTERS top When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character. A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option. The supported escapes (assuming the default ~) are: ~. Disconnect. ~^Z Background ssh. ~# List forwarded connections. ~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate. ~? Display a list of escape characters. ~B Send a BREAK to the remote system (only useful if the peer supports it). ~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and -KD[bind_address:]port for dynamic port-forwardings. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option. ~R Request rekeying of the connection (only useful if the peer supports it). ~V Decrease the verbosity (LogLevel) when errors are being written to stderr. ~v Increase the verbosity (LogLevel) when errors are being written to stderr. TCP FORWARDING top Forwarding of arbitrary TCP connections over a secure channel can be specified either on the command line or in a configuration file. One possible application of TCP forwarding is a secure connection to a mail server; another is going through firewalls. In the example below, we look at encrypting communication for an IRC client, even though the IRC server it connects to does not directly support encrypted communication. 
This works as follows: the user connects to the remote host using ssh, specifying the ports to be used to forward the connection. After that it is possible to start the program locally, and ssh will encrypt and forward the connection to the remote server. The following example tunnels an IRC session from the client to an IRC server at server.example.com, joining channel #users, nickname pinky, using the standard IRC port, 6667: $ ssh -f -L 6667:localhost:6667 server.example.com sleep 10 $ irc -c '#users' pinky IRC/127.0.0.1 The -f option backgrounds ssh and the remote command sleep 10 is specified to allow an amount of time (10 seconds, in the example) to start the program which is going to use the tunnel. If no connections are made within the time specified, ssh will exit. X11 FORWARDING top If the ForwardX11 variable is set to yes (or see the description of the -X, -x, and -Y options above) and the user is using X11 (the DISPLAY environment variable is set), the connection to the X11 display is automatically forwarded to the remote side in such a way that any X11 programs started from the shell (or command) will go through the encrypted channel, and the connection to the real X server will be made from the local machine. The user should not manually set DISPLAY. Forwarding of X11 connections can be configured on the command line or in configuration files. The DISPLAY value set by ssh will point to the server machine, but with a display number greater than zero. This is normal, and happens because ssh creates a proxy X server on the server machine for forwarding the connections over the encrypted channel. ssh will also automatically set up Xauthority data on the server machine. For this purpose, it will generate a random authorization cookie, store it in Xauthority on the server, and verify that any forwarded connections carry this cookie and replace it by the real cookie when the connection is opened. 
The real authentication cookie is never sent to the server machine (and no cookies are sent in the plain). If the ForwardAgent variable is set to yes (or see the description of the -A and -a options above) and the user is using an authentication agent, the connection to the agent is automatically forwarded to the remote side. VERIFYING HOST KEYS top When connecting to a server for the first time, a fingerprint of the server's public key is presented to the user (unless the option StrictHostKeyChecking has been disabled). Fingerprints can be determined using ssh-keygen(1): $ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key If the fingerprint is already known, it can be matched and the key can be accepted or rejected. If only legacy (MD5) fingerprints for the server are available, the ssh-keygen(1) -E option may be used to downgrade the fingerprint algorithm to match. Because of the difficulty of comparing host keys just by looking at fingerprint strings, there is also support to compare host keys visually, using random art. By setting the VisualHostKey option to yes, a small ASCII graphic gets displayed on every login to a server, no matter if the session itself is interactive or not. By learning the pattern a known server produces, a user can easily find out that the host key has changed when a completely different pattern is displayed. Because these patterns are not unambiguous however, a pattern that looks similar to the pattern remembered only gives a good probability that the host key is the same, not guaranteed proof. To get a listing of the fingerprints along with their random art for all known hosts, the following command line can be used: $ ssh-keygen -lv -f ~/.ssh/known_hosts If the fingerprint is unknown, an alternative method of verification is available: SSH fingerprints verified by DNS. An additional resource record (RR), SSHFP, is added to a zonefile and the connecting client is able to match the fingerprint with that of the key presented. 
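The fingerprint checks described above can be exercised entirely locally before they are needed against a real server. A minimal sketch: generate a throwaway key pair standing in for a host key, then print its default SHA256 fingerprint and the legacy MD5 form with ssh-keygen(1) (the paths are scratch files, not real host keys):

```shell
# A throwaway key standing in for /etc/ssh/ssh_host_ed25519_key.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/hostkey"

# Default (SHA256) fingerprint, as presented at first connect.
ssh-keygen -l -f "$dir/hostkey.pub"

# Legacy MD5 form, for comparing against old records (-E picks the hash).
ssh-keygen -l -E md5 -f "$dir/hostkey.pub"
```

Adding -v to either command prints the random-art pattern alongside the fingerprint, as with the known_hosts listing shown above.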
In this example, we are connecting a client to a server, host.example.com. The SSHFP resource records should first be added to the zonefile for host.example.com: $ ssh-keygen -r host.example.com. The output lines will have to be added to the zonefile. To check that the zone is answering fingerprint queries: $ dig -t SSHFP host.example.com Finally the client connects: $ ssh -o "VerifyHostKeyDNS ask" host.example.com [...] Matching host key fingerprint found in DNS. Are you sure you want to continue connecting (yes/no)? See the VerifyHostKeyDNS option in ssh_config(5) for more information. SSH-BASED VIRTUAL PRIVATE NETWORKS top ssh contains support for Virtual Private Network (VPN) tunnelling using the tun(4) network pseudo-device, allowing two networks to be joined securely. The sshd_config(5) configuration option PermitTunnel controls whether the server supports this, and at what level (layer 2 or 3 traffic). The following example would connect client network 10.0.50.0/24 with remote network 10.0.99.0/24 using a point-to-point connection from 10.1.1.1 to 10.1.1.2, provided that the SSH server running on the gateway to the remote network, at 192.168.1.15, allows it. On the client: # ssh -f -w 0:1 192.168.1.15 true # ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252 # route add 10.0.99.0/24 10.1.1.2 On the server: # ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252 # route add 10.0.50.0/24 10.1.1.1 Client access may be more finely tuned via the /root/.ssh/authorized_keys file (see below) and the PermitRootLogin server option. The following entry would permit connections on tun(4) device 1 from user jane and on tun device 2 from user john, if PermitRootLogin is set to forced-commands-only: tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john Since an SSH-based setup entails a fair amount of overhead, it may be more suited to temporary setups, such as for wireless VPNs. 
More permanent VPNs are better provided by tools such as ipsecctl(8) and isakmpd(8). ENVIRONMENT top ssh will normally set the following environment variables: DISPLAY The DISPLAY variable indicates the location of the X11 server. It is automatically set by ssh to point to a value of the form hostname:n, where hostname indicates the host where the shell runs, and n is an integer ≥ 1. ssh uses this special value to forward X11 connections over the secure channel. The user should normally not set DISPLAY explicitly, as that will render the X11 connection insecure (and will require the user to manually copy any required authorization cookies). HOME Set to the path of the user's home directory. LOGNAME Synonym for USER; set for compatibility with systems that use this variable. MAIL Set to the path of the user's mailbox. PATH Set to the default PATH, as specified when compiling ssh. SSH_ASKPASS If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.) SSH_ASKPASS_REQUIRE Allows further control over the use of an askpass program. If this variable is set to never then ssh will never attempt to use one. If it is set to prefer, then ssh will prefer to use the askpass program instead of the TTY when requesting passwords. Finally, if the variable is set to force, then the askpass program will be used for all passphrase input regardless of whether DISPLAY is set. SSH_AUTH_SOCK Identifies the path of a Unix-domain socket used to communicate with the agent. SSH_CONNECTION Identifies the client and server ends of the connection. 
The variable contains four space-separated values: client IP address, client port number, server IP address, and server port number. SSH_ORIGINAL_COMMAND This variable contains the original command line if a forced command is executed. It can be used to extract the original arguments. SSH_TTY This is set to the name of the tty (path to the device) associated with the current shell or command. If the current session has no tty, this variable is not set. SSH_TUNNEL Optionally set by sshd(8) to contain the interface names assigned if tunnel forwarding was requested by the client. SSH_USER_AUTH Optionally set by sshd(8), this variable may contain a pathname to a file that lists the authentication methods successfully used when the session was established, including any public keys that were used. TZ This variable is set to indicate the present time zone if it was set when the daemon was started (i.e. the daemon passes the value on to new connections). USER Set to the name of the user logging in. Additionally, ssh reads ~/.ssh/environment, and adds lines of the format VARNAME=value to the environment if the file exists and users are allowed to change their environment. For more information, see the PermitUserEnvironment option in sshd_config(5). FILES top ~/.rhosts This file is used for host-based authentication (see above). On some machines this file may need to be world- readable if the user's home directory is on an NFS partition, because sshd(8) reads it as root. Additionally, this file must be owned by the user, and must not have write permissions for anyone else. The recommended permission for most machines is read/write for the user, and not accessible by others. ~/.shosts This file is used in exactly the same way as .rhosts, but allows host-based authentication without permitting login with rlogin/rsh. ~/.ssh/ This directory is the default location for all user- specific configuration and authentication information. 
There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others. ~/.ssh/authorized_keys Lists the public keys (DSA, ECDSA, Ed25519, RSA) that can be used for logging in as this user. The format of this file is described in the sshd(8) manual page. This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. ~/.ssh/config This is the per-user configuration file. The file format and configuration options are described in ssh_config(5). Because of the potential for abuse, this file must have strict permissions: read/write for the user, and not writable by others. ~/.ssh/environment Contains additional definitions for environment variables; see ENVIRONMENT, above. ~/.ssh/id_dsa ~/.ssh/id_ecdsa ~/.ssh/id_ecdsa_sk ~/.ssh/id_ed25519 ~/.ssh/id_ed25519_sk ~/.ssh/id_rsa Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key which will be used to encrypt the sensitive part of this file using AES-128. ~/.ssh/id_dsa.pub ~/.ssh/id_ecdsa.pub ~/.ssh/id_ecdsa_sk.pub ~/.ssh/id_ed25519.pub ~/.ssh/id_ed25519_sk.pub ~/.ssh/id_rsa.pub Contains the public key for authentication. These files are not sensitive and can (but need not) be readable by anyone. ~/.ssh/known_hosts Contains a list of host keys for all hosts the user has logged into that are not already in the systemwide list of known host keys. See sshd(8) for further details of the format of this file. ~/.ssh/rc Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information. 
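The recommended permissions above can be applied mechanically. A minimal sketch, run against a scratch directory standing in for ~/.ssh so it is safe to execute anywhere (the file names mirror the defaults described above; `stat -c %a` used below for checking is the GNU coreutils form):

```shell
dir=$(mktemp -d)                         # stand-in for ~/.ssh
touch "$dir/id_ed25519" "$dir/id_ed25519.pub" \
      "$dir/authorized_keys" "$dir/config"

chmod 700 "$dir"                         # directory: rwx for the user only
chmod 600 "$dir/id_ed25519" "$dir/config" "$dir/authorized_keys"
                                         # private key, config, authorized_keys:
                                         # read/write for the user, nothing for others
chmod 644 "$dir/id_ed25519.pub"          # public keys may be world-readable
```

Remember that ssh silently ignores a private key file that is accessible by others, so a too-permissive mode shows up as an authentication failure rather than an error about the file itself.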
/etc/hosts.equiv This file is for host-based authentication (see above). It should only be writable by root. /etc/shosts.equiv This file is used in exactly the same way as hosts.equiv, but allows host-based authentication without permitting login with rlogin/rsh. /etc/ssh/ssh_config Systemwide configuration file. The file format and configuration options are described in ssh_config(5). /etc/ssh/ssh_host_key /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_ed25519_key /etc/ssh/ssh_host_rsa_key These files contain the private parts of the host keys and are used for host-based authentication. /etc/ssh/ssh_known_hosts Systemwide list of known host keys. This file should be prepared by the system administrator to contain the public host keys of all machines in the organization. It should be world-readable. See sshd(8) for further details of the format of this file. /etc/ssh/sshrc Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information. EXIT STATUS top ssh exits with the exit status of the remote command or with 255 if an error occurred. SEE ALSO top scp(1), sftp(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh-keyscan(1), tun(4), ssh_config(5), ssh-keysign(8), sshd(8) STANDARDS top S. Lehtinen and C. Lonvick, The Secure Shell (SSH) Protocol Assigned Numbers, RFC 4250, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Protocol Architecture, RFC 4251, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Authentication Protocol, RFC 4252, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Transport Layer Protocol, RFC 4253, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Connection Protocol, RFC 4254, January 2006. J. Schlyter and W. Griffin, Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, January 2006. F. Cusack and M. 
Forssen, Generic Message Exchange Authentication for the Secure Shell Protocol (SSH), RFC 4256, January 2006. J. Galbraith and P. Remaker, The Secure Shell (SSH) Session Channel Break Extension, RFC 4335, January 2006. M. Bellare, T. Kohno, and C. Namprempre, The Secure Shell (SSH) Transport Layer Encryption Modes, RFC 4344, January 2006. B. Harris, Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol, RFC 4345, January 2006. M. Friedl, N. Provos, and W. Simpson, Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol, RFC 4419, March 2006. J. Galbraith and R. Thayer, The Secure Shell (SSH) Public Key File Format, RFC 4716, November 2006. D. Stebila and J. Green, Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer, RFC 5656, December 2009. A. Perrig and D. Song, Hash Visualization: a New Technique to improve Real-World Security, 1999, International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC '99). AUTHORS top OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. COLOPHON top This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. If you have a bug report for this manual page, see http://www.openssh.com/report.html. This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up- to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU October 11, 2023 SSH(1) Pages that refer to this page: stap-jupyter(1), systemd-stdio-bridge(1), tar(1), sd_bus_default(3), environment.d(5), proc(5), user@.service(5), pty(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# ssh\n\n> Secure Shell is a protocol used to securely log onto remote systems.\n> It can be used for logging into or executing commands on a remote server.\n> More information: <https://man.openbsd.org/ssh>.\n\n- Connect to a remote server:\n\n`ssh {{username}}@{{remote_host}}`\n\n- Connect to a remote server with a specific identity (private key):\n\n`ssh -i {{path/to/key_file}} {{username}}@{{remote_host}}`\n\n- Connect to a remote server using a specific [p]ort:\n\n`ssh {{username}}@{{remote_host}} -p {{2222}}`\n\n- Run a command on a remote server with a [t]ty allocation allowing interaction with the remote command:\n\n`ssh {{username}}@{{remote_host}} -t {{command}} {{command_arguments}}`\n\n- SSH tunneling: [D]ynamic port forwarding (SOCKS proxy on `localhost:1080`):\n\n`ssh -D {{1080}} {{username}}@{{remote_host}}`\n\n- SSH tunneling: Forward a specific port (`localhost:9999` to `example.org:80`) along with disabling pseudo-[T]ty allocation and executio[N] of remote commands:\n\n`ssh -L {{9999}}:{{example.org}}:{{80}} -N -T {{username}}@{{remote_host}}`\n\n- SSH [J]umping: Connect through a jumphost to a remote server (Multiple jump hops may be specified separated by comma characters):\n\n`ssh -J {{username}}@{{jump_host}} {{username}}@{{remote_host}}`\n\n- Agent forwarding: Forward the authentication information to the remote machine (see `man ssh_config` for available options):\n\n`ssh -A {{username}}@{{remote_host}}`\n
ssh-add
ssh-add(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | ENVIRONMENT | FILES | EXIT STATUS | SEE ALSO | AUTHORS | COLOPHON SSH-ADD(1) General Commands Manual SSH-ADD(1) NAME top ssh-add adds private key identities to the OpenSSH authentication agent SYNOPSIS top ssh-add [-cCDdKkLlqvXx] [-E fingerprint_hash] [-H hostkey_file] [-h destination_constraint] [-S provider] [-t life] [file ...] ssh-add -s pkcs11 [-vC] [certificate ...] ssh-add -e pkcs11 ssh-add -T pubkey ... DESCRIPTION top ssh-add adds private key identities to the authentication agent, ssh-agent(1). When run without arguments, it adds the files ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk, and ~/.ssh/id_dsa. After loading a private key, ssh-add will try to load corresponding certificate information from the filename obtained by appending -cert.pub to the name of the private key file. Alternative file names can be given on the command line. If any file requires a passphrase, ssh-add asks for the passphrase from the user. The passphrase is read from the user's tty. ssh-add retries the last passphrase if multiple identity files are given. The authentication agent must be running and the SSH_AUTH_SOCK environment variable must contain the name of its socket for ssh-add to work. The options are as follows: -c Indicates that added identities should be subject to confirmation before being used for authentication. Confirmation is performed by ssh-askpass(1). Successful confirmation is signaled by a zero exit status from ssh-askpass(1), rather than text entered into the requester. -C When loading keys into or deleting keys from the agent, process certificates only and skip plain keys. -D Deletes all identities from the agent. -d Instead of adding identities, removes identities from the agent. 
If ssh-add has been run without arguments, the keys for the default identities and their corresponding certificates will be removed. Otherwise, the argument list will be interpreted as a list of paths to public key files to specify keys and certificates to be removed from the agent. If no public key is found at a given path, ssh-add will append .pub and retry. If the argument list consists of - then ssh-add will read public keys to be removed from standard input. -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: md5 and sha256. The default is sha256. -e pkcs11 Remove keys provided by the PKCS#11 shared library pkcs11. -H hostkey_file Specifies a known hosts file to look up hostkeys when using destination-constrained keys via the -h flag. This option may be specified multiple times to allow multiple files to be searched. If no files are specified, ssh-add will use the default ssh_config(5) known hosts files: ~/.ssh/known_hosts, ~/.ssh/known_hosts2, /etc/ssh/ssh_known_hosts, and /etc/ssh/ssh_known_hosts2. -h destination_constraint When adding keys, constrain them to be usable only through specific hosts or to specific destinations. Destination constraints of the form [user@]dest-hostname permit use of the key only from the origin host (the one running ssh-agent(1)) to the listed destination host, with optional user name. Constraints of the form src-hostname>[user@]dst-hostname allow a key available on a forwarded ssh-agent(1) to be used through a particular host (as specified by src-hostname) to authenticate to a further host, specified by dst-hostname. Multiple destination constraints may be added when loading keys. When attempting authentication with a key that has destination constraints, the whole connection path, including ssh-agent(1) forwarding, is tested against those constraints and each hop must be permitted for the attempt to succeed. 
For example, if a key is forwarded to a remote host, host-b, and is attempting authentication to another host, host-c, then the operation will be successful only if host-b was permitted from the origin host and the subsequent host-b>host-c hop is also permitted by destination constraints. Hosts are identified by their host keys, and are looked up from known hosts files by ssh-add. Wildcard patterns may be used for hostnames and certificate host keys are supported. By default, keys added by ssh-add are not destination constrained. Destination constraints were added in OpenSSH release 8.9. Support in both the remote SSH client and server is required when using destination-constrained keys over a forwarded ssh-agent(1) channel. It is also important to note that destination constraints can only be enforced by ssh-agent(1) when a key is used, or when it is forwarded by a cooperating ssh(1). Specifically, it does not prevent an attacker with access to a remote SSH_AUTH_SOCK from forwarding it again and using it on a different host (but only to a permitted destination). -K Load resident keys from a FIDO authenticator. -k When loading keys into or deleting keys from the agent, process plain private keys only and skip certificates. -L Lists public key parameters of all identities currently represented by the agent. -l Lists fingerprints of all identities currently represented by the agent. -q Be quiet after a successful operation. -S provider Specifies a path to a library that will be used when adding FIDO authenticator-hosted keys, overriding the default of using the internal USB HID support. -s pkcs11 Add keys provided by the PKCS#11 shared library pkcs11. Certificate files may optionally be listed as command- line arguments. If these are present, then they will be loaded into the agent using any corresponding private keys loaded from the PKCS#11 token. -T pubkey ... 
Tests whether the private keys that correspond to the specified pubkey files are usable by performing sign and verify operations on each. -t life Set a maximum lifetime when adding identities to an agent. The lifetime may be specified in seconds or in a time format specified in sshd_config(5). -v Verbose mode. Causes ssh-add to print debugging messages about its progress. This is helpful in debugging problems. Multiple -v options increase the verbosity. The maximum is 3. -X Unlock the agent. -x Lock the agent with a password. ENVIRONMENT top DISPLAY, SSH_ASKPASS and SSH_ASKPASS_REQUIRE If ssh-add needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh-add does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS (by default ssh-askpass) and open an X11 window to read the passphrase. This is particularly useful when calling ssh-add from a .xsession or related script. SSH_ASKPASS_REQUIRE allows further control over the use of an askpass program. If this variable is set to never then ssh-add will never attempt to use one. If it is set to prefer, then ssh-add will prefer to use the askpass program instead of the TTY when requesting passwords. Finally, if the variable is set to force, then the askpass program will be used for all passphrase input regardless of whether DISPLAY is set. SSH_AUTH_SOCK Identifies the path of a Unix-domain socket used to communicate with the agent. SSH_SK_PROVIDER Specifies a path to a library that will be used when loading any FIDO authenticator-hosted keys, overriding the default of using the built-in USB HID support. FILES top ~/.ssh/id_dsa ~/.ssh/id_ecdsa ~/.ssh/id_ecdsa_sk ~/.ssh/id_ed25519 ~/.ssh/id_ed25519_sk ~/.ssh/id_rsa Contains the DSA, ECDSA, authenticator-hosted ECDSA, Ed25519, authenticator-hosted Ed25519 or RSA authentication identity of the user. Identity files should not be readable by anyone but the user. 
Note that ssh-add ignores identity files if they are accessible by others. EXIT STATUS top Exit status is 0 on success, 1 if the specified command fails, and 2 if ssh-add is unable to contact the authentication agent. SEE ALSO top ssh(1), ssh-agent(1), ssh-askpass(1), ssh-keygen(1), sshd(8) AUTHORS top OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. GNU December 18, 2023 SSH-ADD(1)
# ssh-add\n\n> Manage loaded SSH keys in the `ssh-agent`.\n> Ensure that `ssh-agent` is up and running for the keys to be loaded in it.\n> More information: <https://man.openbsd.org/ssh-add>.\n\n- Add the default SSH keys in `~/.ssh` to the ssh-agent:\n\n`ssh-add`\n\n- Add a specific key to the ssh-agent:\n\n`ssh-add {{path/to/private_key}}`\n\n- List fingerprints of currently loaded keys:\n\n`ssh-add -l`\n\n- Delete a key from the ssh-agent:\n\n`ssh-add -d {{path/to/private_key}}`\n\n- Delete all currently loaded keys from the ssh-agent:\n\n`ssh-add -D`\n\n- Add a key to the ssh-agent and store its passphrase in the macOS keychain (`-K` on older OpenSSH versions):\n\n`ssh-add --apple-use-keychain {{path/to/private_key}}`\n
ssh-agent
ssh-agent(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | ENVIRONMENT | FILES | SEE ALSO | AUTHORS | COLOPHON SSH-AGENT(1) General Commands Manual SSH-AGENT(1) NAME top ssh-agent - OpenSSH authentication agent SYNOPSIS top ssh-agent [-c | -s] [-Dd] [-a bind_address] [-E fingerprint_hash] [-O option] [-P allowed_providers] [-t life] ssh-agent [-a bind_address] [-E fingerprint_hash] [-O option] [-P allowed_providers] [-t life] command [arg ...] ssh-agent [-c | -s] -k DESCRIPTION top ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1). The options are as follows: -a bind_address Bind the agent to the Unix-domain socket bind_address. The default is $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid>. -c Generate C-shell commands on stdout. This is the default if SHELL looks like it's a csh style of shell. -D Foreground mode. When this option is specified, ssh-agent will not fork. -d Debug mode. When this option is specified, ssh-agent will not fork and will write debug information to standard error. -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: md5 and sha256. The default is sha256. -k Kill the current agent (given by the SSH_AGENT_PID environment variable). -O option Specify an option when starting ssh-agent. Currently two options are supported: allow-remote-pkcs11 and no-restrict-websafe. The allow-remote-pkcs11 option allows clients of a forwarded agent to load PKCS#11 or FIDO provider libraries. By default only local clients may perform this operation. Note that signalling that a client is remote is performed by ssh(1), and use of other tools to forward access to the agent socket may circumvent this restriction.
The no-restrict-websafe option instructs ssh-agent to permit signatures using FIDO keys that might be web authentication requests. By default, ssh-agent refuses signature requests for FIDO keys where the key application string does not start with ssh: and when the data to be signed does not appear to be a ssh(1) user authentication request or a ssh-keygen(1) signature. The default behaviour prevents forwarded access to a FIDO key from also implicitly forwarding the ability to authenticate to websites. -P allowed_providers Specify a pattern-list of acceptable paths for PKCS#11 provider and FIDO authenticator middleware shared libraries that may be used with the -S or -s options to ssh-add(1). Libraries that do not match the pattern list will be refused. See PATTERNS in ssh_config(5) for a description of pattern-list syntax. The default list is /usr/lib*/*,/usr/local/lib*/*. -s Generate Bourne shell commands on stdout. This is the default if SHELL does not look like it's a csh style of shell. -t life Set a default value for the maximum lifetime of identities added to the agent. The lifetime may be specified in seconds or in a time format specified in sshd_config(5). A lifetime specified for an identity with ssh-add(1) overrides this value. Without this option the default maximum lifetime is forever. command [arg ...] If a command (and optional arguments) is given, this is executed as a subprocess of the agent. The agent exits automatically when the command given on the command line terminates. There are two main ways to get an agent set up. The first is at the start of an X session, where all other windows or programs are started as children of the ssh-agent program. The agent starts a command under which its environment variables are exported, for example ssh-agent xterm &. When the command terminates, so does the agent. The second method is used for a login session.
When ssh-agent is started, it prints the shell commands required to set its environment variables, which in turn can be evaluated in the calling shell, for example eval `ssh-agent -s`. In both cases, ssh(1) looks at these environment variables and uses them to establish a connection to the agent. The agent initially does not have any private keys. Keys are added using ssh-add(1) or by ssh(1) when AddKeysToAgent is set in ssh_config(5). Multiple identities may be stored in ssh-agent concurrently and ssh(1) will automatically use them if present. ssh-add(1) is also used to remove keys from ssh-agent and to query the keys that are held in one. Connections to ssh-agent may be forwarded from further remote hosts using the -A option to ssh(1) (but see the caveats documented therein), avoiding the need for authentication data to be stored on other machines. Authentication passphrases and private keys never go over the network: the connection to the agent is forwarded over SSH remote connections and the result is returned to the requester, allowing the user access to their identities anywhere in the network in a secure fashion. ENVIRONMENT top SSH_AGENT_PID When ssh-agent starts, it stores the agent's process ID (PID) in this variable. SSH_AUTH_SOCK When ssh-agent starts, it creates a Unix-domain socket and stores its pathname in this variable. It is accessible only to the current user, but is easily abused by root or another instance of the same user. FILES top $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid> Unix-domain sockets used to contain the connection to the authentication agent. These sockets should only be readable by the owner. The sockets should get automatically removed when the agent exits. SEE ALSO top ssh(1), ssh-add(1), ssh-keygen(1), ssh_config(5), sshd(8) AUTHORS top OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH.
Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. COLOPHON top This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. If you have a bug report for this manual page, see http://www.openssh.com/report.html. This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU August 10, 2023 SSH-AGENT(1)
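The two agent setup styles described above can be seen side by side; a minimal sketch (any POSIX shell; the xterm example from the text is replaced by a plain `sh -c` command so it runs without X11):

```shell
# Style 1: login-session setup. ssh-agent prints the commands that
# export SSH_AUTH_SOCK and SSH_AGENT_PID; eval runs them in this shell.
# -s forces Bourne-shell output regardless of what $SHELL looks like.
eval "$(ssh-agent -s)"
echo "agent pid $SSH_AGENT_PID, socket $SSH_AUTH_SOCK"

# Kill that agent again; -k uses SSH_AGENT_PID to find it.
ssh-agent -k

# Style 2: run a command as a child of the agent. The environment
# variables are already exported to the child, and the agent exits
# automatically when the command terminates.
ssh-agent sh -c 'ssh-add -l; echo "child saw socket $SSH_AUTH_SOCK"'
```

In style 2, `ssh-add -l` reports "The agent has no identities." because the freshly started agent holds no keys yet.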
# ssh-agent\n\n> Spawn an SSH Agent process.\n> An SSH Agent holds SSH keys decrypted in memory until removed or the process is killed.\n> See also `ssh-add`, which can add and manage keys held by an SSH Agent.\n> More information: <https://man.openbsd.org/ssh-agent>.\n\n- Start an SSH Agent for the current shell:\n\n`eval $(ssh-agent)`\n\n- Kill the currently running agent:\n\n`ssh-agent -k`\n
ssh-keygen
ssh-keygen(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | MODULI GENERATION | CERTIFICATES | FIDO AUTHENTICATOR | KEY REVOCATION LISTS | ALLOWED SIGNERS | ENVIRONMENT | FILES | SEE ALSO | AUTHORS | COLOPHON SSH-KEYGEN(1) General Commands Manual SSH-KEYGEN(1) NAME top ssh-keygen - OpenSSH authentication key utility SYNOPSIS top ssh-keygen [-q] [-a rounds] [-b bits] [-C comment] [-f output_keyfile] [-m format] [-N new_passphrase] [-O option] [-t dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa] [-w provider] [-Z cipher] ssh-keygen -p [-a rounds] [-f keyfile] [-m format] [-N new_passphrase] [-P old_passphrase] [-Z cipher] ssh-keygen -i [-f input_keyfile] [-m key_format] ssh-keygen -e [-f input_keyfile] [-m key_format] ssh-keygen -y [-f input_keyfile] ssh-keygen -c [-a rounds] [-C comment] [-f keyfile] [-P passphrase] ssh-keygen -l [-v] [-E fingerprint_hash] [-f input_keyfile] ssh-keygen -B [-f input_keyfile] ssh-keygen -D pkcs11 ssh-keygen -F hostname [-lv] [-f known_hosts_file] ssh-keygen -H [-f known_hosts_file] ssh-keygen -K [-a rounds] [-w provider] ssh-keygen -R hostname [-f known_hosts_file] ssh-keygen -r hostname [-g] [-f input_keyfile] ssh-keygen -M generate [-O option] output_file ssh-keygen -M screen [-f input_file] [-O option] output_file ssh-keygen -I certificate_identity -s ca_key [-hU] [-D pkcs11_provider] [-n principals] [-O option] [-V validity_interval] [-z serial_number] file ... ssh-keygen -L [-f input_keyfile] ssh-keygen -A [-a rounds] [-f prefix_path] ssh-keygen -k -f krl_file [-u] [-s ca_public] [-z version_number] file ... ssh-keygen -Q [-l] -f krl_file file ...
ssh-keygen -Y find-principals [-O option] -s signature_file -f allowed_signers_file ssh-keygen -Y match-principals -I signer_identity -f allowed_signers_file ssh-keygen -Y check-novalidate [-O option] -n namespace -s signature_file ssh-keygen -Y sign [-O option] -f key_file -n namespace file ... ssh-keygen -Y verify [-O option] -f allowed_signers_file -I signer_identity -n namespace -s signature_file [-r revocation_file] DESCRIPTION top ssh-keygen generates, manages and converts authentication keys for ssh(1). ssh-keygen can create keys for use by SSH protocol version 2. The type of key to be generated is specified with the -t option. If invoked without any arguments, ssh-keygen will generate an Ed25519 key. ssh-keygen is also used to generate groups for use in Diffie-Hellman group exchange (DH-GEX). See the MODULI GENERATION section for details. Finally, ssh-keygen can be used to generate and update Key Revocation Lists, and to test whether given keys have been revoked by one. See the KEY REVOCATION LISTS section for details. Normally each user wishing to use SSH with public key authentication runs this once to create the authentication key in ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk or ~/.ssh/id_rsa. Additionally, the system administrator may use this to generate host keys, as seen in /etc/rc. Normally this program generates the key and asks for a file in which to store the private key. The public key is stored in a file with the same name but .pub appended. The program also asks for a passphrase. The passphrase may be empty to indicate no passphrase (host keys must have an empty passphrase), or it may be a string of arbitrary length. A passphrase is similar to a password, except it can be a phrase with a series of words, punctuation, numbers, whitespace, or any string of characters you want.
Good passphrases are 10-30 characters long, are not simple sentences or otherwise easily guessable (English prose has only 1-2 bits of entropy per character, and provides very bad passphrases), and contain a mix of upper and lowercase letters, numbers, and non-alphanumeric characters. The passphrase can be changed later by using the -p option. There is no way to recover a lost passphrase. If the passphrase is lost or forgotten, a new key must be generated and the corresponding public key copied to other machines. ssh-keygen will by default write keys in an OpenSSH-specific format. This format is preferred as it offers better protection for keys at rest as well as allowing storage of key comments within the private key file itself. The key comment may be useful to help identify the key. The comment is initialized to user@host when the key is created, but can be changed using the -c option. It is still possible for ssh-keygen to write the previously-used PEM format private keys using the -m flag. This may be used when generating new keys, and existing new-format keys may be converted using this option in conjunction with the -p (change passphrase) flag. After a key is generated, ssh-keygen will ask where the keys should be placed to be activated. The options are as follows: -A Generate host keys of all default key types (rsa, ecdsa, and ed25519) if they do not already exist. The host keys are generated with the default key file path, an empty passphrase, default bits for the key type, and default comment. If -f has also been specified, its argument is used as a prefix to the default path for the resulting host key files. This is used by /etc/rc to generate new host keys. -a rounds When saving a private key, this option specifies the number of KDF (key derivation function, currently bcrypt_pbkdf(3)) rounds used. Higher numbers result in slower passphrase verification and increased resistance to brute-force password cracking (should the keys be stolen). The default is 16 rounds.
-B Show the bubblebabble digest of specified private or public key file. -b bits Specifies the number of bits in the key to create. For RSA keys, the minimum size is 1024 bits and the default is 3072 bits. Generally, 3072 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2. For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. Attempting to use bit lengths other than these three values for ECDSA keys will fail. ECDSA-SK, Ed25519 and Ed25519-SK keys have a fixed length and the -b flag will be ignored. -C comment Provides a new comment. -c Requests changing the comment in the private and public key files. The program will prompt for the file containing the private keys, for the passphrase if the key has one, and for the new comment. -D pkcs11 Download the public keys provided by the PKCS#11 shared library pkcs11. When used in combination with -s, this option indicates that a CA key resides in a PKCS#11 token (see the CERTIFICATES section for details). -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: md5 and sha256. The default is sha256. -e This option will read a private or public OpenSSH key file and print to stdout a public key in one of the formats specified by the -m option. The default export format is RFC4716. This option allows exporting OpenSSH keys for use by other programs, including several commercial SSH implementations. -F hostname | [hostname]:port Search for the specified hostname (with optional port number) in a known_hosts file, listing any occurrences found. This option is useful to find hashed host names or addresses and may also be used in conjunction with the -H option to print found keys in a hashed format. -f filename Specifies the filename of the key file. -g Use generic DNS format when printing fingerprint resource records using the -r command. 
-H Hash a known_hosts file. This replaces all hostnames and addresses with hashed representations within the specified file; the original content is moved to a file with a .old suffix. These hashes may be used normally by ssh and sshd, but they do not reveal identifying information should the file's contents be disclosed. This option will not modify existing hashed hostnames and is therefore safe to use on files that mix hashed and non-hashed names. -h When signing a key, create a host certificate instead of a user certificate. See the CERTIFICATES section for details. -I certificate_identity Specify the key identity when signing a public key. See the CERTIFICATES section for details. -i This option will read an unencrypted private (or public) key file in the format specified by the -m option and print an OpenSSH compatible private (or public) key to stdout. This option allows importing keys from other software, including several commercial SSH implementations. The default import format is RFC4716. -K Download resident keys from a FIDO authenticator. Public and private key files will be written to the current directory for each downloaded key. If multiple FIDO authenticators are attached, keys will be downloaded from the first touched authenticator. See the FIDO AUTHENTICATOR section for more information. -k Generate a KRL file. In this mode, ssh-keygen will generate a KRL file at the location specified via the -f flag that revokes every key or certificate presented on the command line. Keys/certificates to be revoked may be specified by public key file or using the format described in the KEY REVOCATION LISTS section. -L Prints the contents of one or more certificates. -l Show fingerprint of specified public key file. For RSA and DSA keys ssh-keygen tries to find the matching public key file and prints its fingerprint. If combined with -v, a visual ASCII art representation of the key is supplied with the fingerprint.
-M generate Generate candidate Diffie-Hellman Group Exchange (DH-GEX) parameters for eventual use by the diffie-hellman-group-exchange-* key exchange methods. The numbers generated by this operation must be further screened before use. See the MODULI GENERATION section for more information. -M screen Screen candidate parameters for Diffie-Hellman Group Exchange. This will accept a list of candidate numbers and test that they are safe (Sophie Germain) primes with acceptable group generators. The results of this operation may be added to the /etc/moduli file. See the MODULI GENERATION section for more information. -m key_format Specify a key format for key generation, the -i (import), -e (export) conversion options, and the -p change passphrase operation. The latter may be used to convert between OpenSSH private key and PEM private key formats. The supported key formats are: RFC4716 (RFC 4716/SSH2 public or private key), PKCS8 (PKCS8 public or private key) or PEM (PEM public key). By default OpenSSH will write newly-generated private keys in its own format, but when converting public keys for export the default format is RFC4716. Setting a format of PEM when generating or updating a supported private key type will cause the key to be stored in the legacy PEM private key format. -N new_passphrase Provides the new passphrase. -n principals Specify one or more principals (user or host names) to be included in a certificate when signing a key. Multiple principals may be specified, separated by commas. See the CERTIFICATES section for details. -O option Specify a key/value option. These are specific to the operation that ssh-keygen has been requested to perform. When signing certificates, one of the options listed in the CERTIFICATES section may be specified here. When performing moduli generation or screening, one of the options listed in the MODULI GENERATION section may be specified.
When generating FIDO authenticator-backed keys, the options listed in the FIDO AUTHENTICATOR section may be specified. When performing signature-related options using the -Y flag, the following options are accepted: hashalg=algorithm Selects the hash algorithm to use for hashing the message to be signed. Valid algorithms are sha256 and sha512. The default is sha512. print-pubkey Print the full public key to standard output after signature verification. verify-time=timestamp Specifies a time to use when validating signatures instead of the current time. The time may be specified as a date or time in the YYYYMMDD[Z] or in YYYYMMDDHHMM[SS][Z] formats. Dates and times will be interpreted in the current system time zone unless suffixed with a Z character, which causes them to be interpreted in the UTC time zone. When generating SSHFP DNS records from public keys using the -r flag, the following options are accepted: hashalg=algorithm Selects a hash algorithm to use when printing SSHFP records using the -D flag. Valid algorithms are sha1 and sha256. The default is to print both. The -O option may be specified multiple times. -P passphrase Provides the (old) passphrase. -p Requests changing the passphrase of a private key file instead of creating a new private key. The program will prompt for the file containing the private key, for the old passphrase, and twice for the new passphrase. -Q Test whether keys have been revoked in a KRL. If the -l option is also specified then the contents of the KRL will be printed. -q Silence ssh-keygen. -R hostname | [hostname]:port Removes all keys belonging to the specified hostname (with optional port number) from a known_hosts file. This option is useful to delete hashed hosts (see the -H option above). -r hostname Print the SSHFP fingerprint resource record named hostname for the specified public key file. -s ca_key Certify (sign) a public key using the specified CA key. See the CERTIFICATES section for details. 
When generating a KRL, -s specifies a path to a CA public key file used to revoke certificates directly by key ID or serial number. See the KEY REVOCATION LISTS section for details. -t dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa Specifies the type of key to create. The possible values are dsa, ecdsa, ecdsa-sk, ed25519, ed25519-sk, or rsa. This flag may also be used to specify the desired signature type when signing certificates using an RSA CA key. The available RSA signature variants are ssh-rsa (SHA1 signatures, not recommended), rsa-sha2-256, and rsa-sha2-512 (the default). -U When used in combination with -s or -Y sign, this option indicates that a CA key resides in a ssh-agent(1). See the CERTIFICATES section for more information. -u Update a KRL. When specified with -k, keys listed via the command line are added to the existing KRL rather than a new KRL being created. -V validity_interval Specify a validity interval when signing a certificate. A validity interval may consist of a single time, indicating that the certificate is valid beginning now and expiring at that time, or may consist of two times separated by a colon to indicate an explicit time interval. The start time may be specified as: The string always to indicate the certificate has no specified start time. A date or time in the system time zone formatted as YYYYMMDD or YYYYMMDDHHMM[SS]. A date or time in the UTC time zone as YYYYMMDDZ or YYYYMMDDHHMM[SS]Z. A relative time before the current system time consisting of a minus sign followed by an interval in the format described in the TIME FORMATS section of sshd_config(5). A raw seconds since epoch (Jan 1 1970 00:00:00 UTC) as a hexadecimal number beginning with 0x. The end time may be specified similarly to the start time: The string forever to indicate the certificate has no specified end time. A date or time in the system time zone formatted as YYYYMMDD or YYYYMMDDHHMM[SS]. 
A date or time in the UTC time zone as YYYYMMDDZ or YYYYMMDDHHMM[SS]Z. A relative time after the current system time consisting of a plus sign followed by an interval in the format described in the TIME FORMATS section of sshd_config(5). A raw seconds since epoch (Jan 1 1970 00:00:00 UTC) as a hexadecimal number beginning with 0x. For example: +52w1d Valid from now to 52 weeks and one day from now. -4w:+4w Valid from four weeks ago to four weeks from now. 20100101123000:20110101123000 Valid from 12:30 PM, January 1st, 2010 to 12:30 PM, January 1st, 2011. 20100101123000Z:20110101123000Z Similar, but interpreted in the UTC time zone rather than the system time zone. -1d:20110101 Valid from yesterday to midnight, January 1st, 2011. 0x1:0x2000000000 Valid from roughly early 1970 to May 2033. -1m:forever Valid from one minute ago and never expiring. -v Verbose mode. Causes ssh-keygen to print debugging messages about its progress. This is helpful for debugging moduli generation. Multiple -v options increase the verbosity. The maximum is 3. -w provider Specifies a path to a library that will be used when creating FIDO authenticator-hosted keys, overriding the default of using the internal USB HID support. -Y find-principals Find the principal(s) associated with the public key of a signature, provided using the -s flag in an authorized signers file provided using the -f flag. The format of the allowed signers file is documented in the ALLOWED SIGNERS section below. If one or more matching principals are found, they are returned on standard output. -Y match-principals Find principal matching the principal name provided using the -I flag in the authorized signers file specified using the -f flag. If one or more matching principals are found, they are returned on standard output. -Y check-novalidate Checks that a signature generated using -Y sign has a valid structure. This does not validate if a signature comes from an authorized signer.
When testing a signature, ssh-keygen accepts a message on standard input and a signature namespace using -n. A file containing the corresponding signature must also be supplied using the -s flag. Successful testing of the signature is signalled by returning a zero exit status. -Y sign Cryptographically sign a file or some data using an SSH key. When signing, ssh-keygen accepts zero or more files to sign on the command-line - if no files are specified then ssh-keygen will sign data presented on standard input. Signatures are written to the path of the input file with .sig appended, or to standard output if the message to be signed was read from standard input. The key used for signing is specified using the -f option and may refer to either a private key, or a public key with the private half available via ssh-agent(1). An additional signature namespace, used to prevent signature confusion across different domains of use (e.g. file signing vs email signing) must be provided via the -n flag. Namespaces are arbitrary strings, and may include: file for file signing, email for email signing. For custom uses, it is recommended to use names following a NAMESPACE@YOUR.DOMAIN pattern to generate unambiguous namespaces. -Y verify Request to verify a signature generated using -Y sign as described above. When verifying a signature, ssh-keygen accepts a message on standard input and a signature namespace using -n. A file containing the corresponding signature must also be supplied using the -s flag, along with the identity of the signer using -I and a list of allowed signers via the -f flag. The format of the allowed signers file is documented in the ALLOWED SIGNERS section below. A file containing revoked keys can be passed using the -r flag. The revocation file may be a KRL or a one-per-line list of public keys. Successful verification by an authorized signer is signalled by returning a zero exit status. -y This option will read a private OpenSSH format file and print an OpenSSH public key to stdout.
-Z cipher Specifies the cipher to use for encryption when writing an OpenSSH-format private key file. The list of available ciphers may be obtained using "ssh -Q cipher". The default is aes256-ctr. -z serial_number Specifies a serial number to be embedded in the certificate to distinguish this certificate from others from the same CA. If the serial_number is prefixed with a + character, then the serial number will be incremented for each certificate signed on a single command-line. The default serial number is zero. When generating a KRL, the -z flag is used to specify a KRL version number. MODULI GENERATION top ssh-keygen may be used to generate groups for the Diffie-Hellman Group Exchange (DH-GEX) protocol. Generating these groups is a two-step process: first, candidate primes are generated using a fast, but memory intensive process. These candidate primes are then tested for suitability (a CPU-intensive process). Generation of primes is performed using the -M generate option. The desired length of the primes may be specified by the -O bits option. For example: # ssh-keygen -M generate -O bits=2048 moduli-2048.candidates By default, the search for primes begins at a random point in the desired length range. This may be overridden using the -O start option, which specifies a different start point (in hex). Once a set of candidates have been generated, they must be screened for suitability. This may be performed using the -M screen option. In this mode ssh-keygen will read candidates from standard input (or a file specified using the -f option). For example: # ssh-keygen -M screen -f moduli-2048.candidates moduli-2048 By default, each candidate will be subjected to 100 primality tests. This may be overridden using the -O prime-tests option. The DH generator value will be chosen automatically for the prime under consideration. If a specific generator is desired, it may be requested using the -O generator option. Valid generator values are 2, 3, and 5.
Screened DH groups may be installed in /etc/moduli. It is important that this file contains moduli of a range of bit lengths. A number of options are available for moduli generation and screening via the -O flag: lines=number Exit after screening the specified number of lines while performing DH candidate screening. start-line=line-number Start screening at the specified line number while performing DH candidate screening. checkpoint=filename Write the last line processed to the specified file while performing DH candidate screening. This will be used to skip lines in the input file that have already been processed if the job is restarted. memory=mbytes Specify the amount of memory to use (in megabytes) when generating candidate moduli for DH-GEX. start=hex-value Specify start point (in hex) when generating candidate moduli for DH-GEX. generator=value Specify desired generator (in decimal) when testing candidate moduli for DH-GEX. CERTIFICATES top ssh-keygen supports signing of keys to produce certificates that may be used for user or host authentication. Certificates consist of a public key, some identity information, zero or more principal (user or host) names and a set of options that are signed by a Certification Authority (CA) key. Clients or servers may then trust only the CA key and verify its signature on a certificate rather than trusting many user/host keys. Note that OpenSSH certificates are a different, and much simpler, format to the X.509 certificates used in ssl(8). ssh-keygen supports two types of certificates: user and host. User certificates authenticate users to servers, whereas host certificates authenticate server hosts to users. To generate a user certificate: $ ssh-keygen -s /path/to/ca_key -I key_id /path/to/user_key.pub The resultant certificate will be placed in /path/to/user_key-cert.pub.
A host certificate requires the -h option: $ ssh-keygen -s /path/to/ca_key -I key_id -h /path/to/host_key.pub The host certificate will be output to /path/to/host_key-cert.pub. It is possible to sign using a CA key stored in a PKCS#11 token by providing the token library using -D and identifying the CA key by providing its public half as an argument to -s: $ ssh-keygen -s ca_key.pub -D libpkcs11.so -I key_id user_key.pub Similarly, it is possible for the CA key to be hosted in a ssh-agent(1). This is indicated by the -U flag and, again, the CA key must be identified by its public half. $ ssh-keygen -Us ca_key.pub -I key_id user_key.pub In all cases, key_id is a "key identifier" that is logged by the server when the certificate is used for authentication. Certificates may be limited to be valid for a set of principal (user/host) names. By default, generated certificates are valid for all users or hosts. To generate a certificate for a specified set of principals: $ ssh-keygen -s ca_key -I key_id -n user1,user2 user_key.pub $ ssh-keygen -s ca_key -I key_id -h -n host.domain host_key.pub Additional limitations on the validity and use of user certificates may be specified through certificate options. A certificate option may disable features of the SSH session, may be valid only when presented from particular source addresses or may force the use of a specific command. The options that are valid for user certificates are: clear Clear all enabled permissions. This is useful for clearing the default set of permissions so permissions may be added individually. critical:name[=contents] extension:name[=contents] Includes an arbitrary certificate critical option or extension. The specified name should include a domain suffix, e.g. name@example.com. If contents is specified then it is included as the contents of the extension/option encoded as a string, otherwise the extension/option is created with no contents (usually indicating a flag). 
Extensions may be ignored by a client or server that does not recognise them, whereas unknown critical options will cause the certificate to be refused. force-command=command Forces the execution of command instead of any shell or command specified by the user when the certificate is used for authentication. no-agent-forwarding Disable ssh-agent(1) forwarding (permitted by default). no-port-forwarding Disable port forwarding (permitted by default). no-pty Disable PTY allocation (permitted by default). no-user-rc Disable execution of ~/.ssh/rc by sshd(8) (permitted by default). no-x11-forwarding Disable X11 forwarding (permitted by default). permit-agent-forwarding Allows ssh-agent(1) forwarding. permit-port-forwarding Allows port forwarding. permit-pty Allows PTY allocation. permit-user-rc Allows execution of ~/.ssh/rc by sshd(8). permit-X11-forwarding Allows X11 forwarding. no-touch-required Do not require signatures made using this key include demonstration of user presence (e.g. by having the user touch the authenticator). This option only makes sense for the FIDO authenticator algorithms ecdsa-sk and ed25519-sk. source-address=address_list Restrict the source addresses from which the certificate is considered valid. The address_list is a comma- separated list of one or more address/netmask pairs in CIDR format. verify-required Require signatures made using this key indicate that the user was first verified. This option only makes sense for the FIDO authenticator algorithms ecdsa-sk and ed25519-sk. Currently PIN authentication is the only supported verification method, but other methods may be supported in the future. At present, no standard options are valid for host keys. Finally, certificates may be defined with a validity lifetime. The -V option allows specification of certificate start and end times. A certificate that is presented at a time outside this range will not be considered valid. 
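The -V argument accepts absolute YYYYMMDD[HHMM[SS]] times or relative offsets from the current time. For instance, a certificate valid from five minutes ago (to absorb clock skew) until one year from now could be issued as follows (paths illustrative; CA and user keys assumed to exist):

```shell
# Issue a certificate with an explicit validity window: 5 minutes in the past
# through 52 weeks in the future, relative to now.
ssh-keygen -s ca_key -I key_id -n user1 -V '-5m:+52w' user_key.pub
ssh-keygen -L -f user_key-cert.pub    # the "Valid:" line shows the from/to times
```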
By default, certificates are valid from the Unix Epoch to the distant future. For certificates to be used for user or host authentication, the CA public key must be trusted by sshd(8) or ssh(1). Refer to those manual pages for details. FIDO AUTHENTICATOR ssh-keygen is able to generate FIDO authenticator-backed keys, after which they may be used much like any other key type supported by OpenSSH, so long as the hardware authenticator is attached when the keys are used. FIDO authenticators generally require the user to explicitly authorise operations by touching or tapping them. FIDO keys consist of two parts: a key handle part stored in the private key file on disk, and a per-device private key that is unique to each FIDO authenticator and that cannot be exported from the authenticator hardware. These are combined by the hardware at authentication time to derive the real key that is used to sign authentication challenges. Supported key types are ecdsa-sk and ed25519-sk. The options that are valid for FIDO keys are: application Override the default FIDO application/origin string of ssh:. This may be useful when generating host or domain-specific resident keys. The specified application string must begin with ssh:. challenge=path Specifies a path to a challenge string that will be passed to the FIDO authenticator during key generation. The challenge string may be used as part of an out-of-band protocol for key enrollment (a random challenge is used by default). device Explicitly specify a fido(4) device to use, rather than letting the authenticator middleware select one. no-touch-required Indicate that the generated private key should not require touch events (user presence) when making signatures. Note that sshd(8) will refuse such signatures by default, unless overridden via an authorized_keys option. resident Indicate that the key handle should be stored on the FIDO authenticator itself. This makes it easier to use the authenticator on multiple computers.
Resident keys may be supported on FIDO2 authenticators and typically require that a PIN be set on the authenticator prior to generation. Resident keys may be loaded off the authenticator using ssh-add(1). Storing both parts of a key on a FIDO authenticator increases the likelihood of an attacker being able to use a stolen authenticator device. user A username to be associated with a resident key, overriding the empty default username. Specifying a username may be useful when generating multiple resident keys for the same application name. verify-required Indicate that this private key should require user verification for each signature. Not all FIDO authenticators support this option. Currently PIN authentication is the only supported verification method, but other methods may be supported in the future. write-attestation=path May be used at key generation time to record the attestation data returned from FIDO authenticators during key generation. This information is potentially sensitive. By default, this information is discarded. KEY REVOCATION LISTS ssh-keygen is able to manage OpenSSH format Key Revocation Lists (KRLs). These binary files specify keys or certificates to be revoked using a compact format, taking as little as one bit per certificate if they are being revoked by serial number. KRLs may be generated using the -k flag. This option reads one or more files from the command line and generates a new KRL. The files may either contain a KRL specification (see below) or public keys, listed one per line. Plain public keys are revoked by listing their hash or contents in the KRL and certificates revoked by serial number or key ID (if the serial is zero or not available). Revoking keys using a KRL specification offers explicit control over the types of record used to revoke keys and may be used to directly revoke certificates by serial number or key ID without having the complete original certificate on hand.
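Generating a KRL from a plain public key and then testing that key against it might look like the following sketch (throwaway key, illustrative paths; -Q, described below, exits non-zero for a revoked key):

```shell
# Revoke a key by generating a fresh KRL from its public half, then query it.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/key"
ssh-keygen -k -f "$dir/revoked.krl" "$dir/key.pub"
# -Q exits with non-zero status because the key is revoked:
ssh-keygen -Q -f "$dir/revoked.krl" "$dir/key.pub" || echo "key is revoked"
```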
A KRL specification consists of lines containing one of the following directives followed by a colon and some directive-specific information. serial: serial_number[-serial_number] Revokes a certificate with the specified serial number. Serial numbers are 64-bit values, not including zero and may be expressed in decimal, hex or octal. If two serial numbers are specified separated by a hyphen, then the range of serial numbers including and between each is revoked. The CA key must have been specified on the command line using the -s option. id: key_id Revokes a certificate with the specified key ID string. The CA key must have been specified on the command line using the -s option. key: public_key Revokes the specified key. If a certificate is listed, then it is revoked as a plain public key. sha1: public_key Revokes the specified key by including its SHA1 hash in the KRL. sha256: public_key Revokes the specified key by including its SHA256 hash in the KRL. KRLs that revoke keys by SHA256 hash are not supported by OpenSSH versions prior to 7.9. hash: fingerprint Revokes a key using a fingerprint hash, as obtained from a sshd(8) authentication log message or the -l flag. Only SHA256 fingerprints are supported here and resultant KRLs are not supported by OpenSSH versions prior to 7.9. KRLs may be updated using the -u flag in addition to -k. When this option is specified, keys listed via the command line are merged into the KRL, adding to those already there. It is also possible, given a KRL, to test whether it revokes a particular key (or keys). The -Q flag will query an existing KRL, testing each key specified on the command line. If any key listed on the command line has been revoked (or an error encountered) then ssh-keygen will exit with a non-zero exit status. A zero exit status will only be returned if no key was revoked. ALLOWED SIGNERS When verifying signatures, ssh-keygen uses a simple list of identities and keys to determine whether a signature comes from an authorized source.
This "allowed signers" file uses a format patterned after the AUTHORIZED_KEYS FILE FORMAT described in sshd(8). Each line of the file contains the following space-separated fields: principals, options, keytype, base64-encoded key. Empty lines and lines starting with a # are ignored as comments. The principals field is a pattern-list (see PATTERNS in ssh_config(5)) consisting of one or more comma-separated USER@DOMAIN identity patterns that are accepted for signing. When verifying, the identity presented via the -I option must match a principals pattern in order for the corresponding key to be considered acceptable for verification. The options (if present) consist of comma-separated option specifications. No spaces are permitted, except within double quotes. The following option specifications are supported (note that option keywords are case-insensitive): cert-authority Indicates that this key is accepted as a certificate authority (CA) and that certificates signed by this CA may be accepted for verification. namespaces=namespace-list Specifies a pattern-list of namespaces that are accepted for this key. If this option is present, the signature namespace embedded in the signature object and presented on the verification command-line must match the specified list before the key will be considered acceptable. valid-after=timestamp Indicates that the key is valid for use at or after the specified timestamp, which may be a date or time in the YYYYMMDD[Z] or YYYYMMDDHHMM[SS][Z] formats. Dates and times will be interpreted in the current system time zone unless suffixed with a Z character, which causes them to be interpreted in the UTC time zone. valid-before=timestamp Indicates that the key is valid for use at or before the specified timestamp. When verifying signatures made by certificates, the expected principal name must match both the principals pattern in the allowed signers file and the principals embedded in the certificate itself. 
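The -Y sign and -Y verify operations (documented in the main OPTIONS section of ssh-keygen(1)) tie this file format together: a signature made under a namespace verifies only if the signer's key and principal match an entry in the allowed signers file. A hedged sketch with a throwaway key and an illustrative principal name:

```shell
# Sign a file under the "file" namespace, then verify it against an allowed
# signers list built from the signer's own public key.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/signer"
printf 'release tarball contents\n' > "$dir/msg"
ssh-keygen -Y sign -f "$dir/signer" -n file "$dir/msg"        # writes msg.sig
# allowed_signers line: principal, keytype, base64 key (fields 1-2 of the .pub)
printf 'user1@example.com %s\n' "$(cut -d' ' -f1,2 "$dir/signer.pub")" \
    > "$dir/allowed_signers"
ssh-keygen -Y verify -f "$dir/allowed_signers" -I user1@example.com \
    -n file -s "$dir/msg.sig" < "$dir/msg"
```

Verification reads the signed data from standard input and exits non-zero if the key, principal, or namespace does not match.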
An example allowed signers file:

        # Comments allowed at start of line
        user1@example.com,user2@example.com ssh-rsa AAAAX1...
        # A certificate authority, trusted for all principals in a domain.
        *@example.com cert-authority ssh-ed25519 AAAB4...
        # A key that is accepted only for file signing.
        user2@example.com namespaces="file" ssh-ed25519 AAA41...

ENVIRONMENT SSH_SK_PROVIDER Specifies a path to a library that will be used when loading any FIDO authenticator-hosted keys, overriding the default of using the built-in USB HID support. FILES ~/.ssh/id_dsa ~/.ssh/id_ecdsa ~/.ssh/id_ecdsa_sk ~/.ssh/id_ed25519 ~/.ssh/id_ed25519_sk ~/.ssh/id_rsa Contains the DSA, ECDSA, authenticator-hosted ECDSA, Ed25519, authenticator-hosted Ed25519 or RSA authentication identity of the user. This file should not be readable by anyone but the user. It is possible to specify a passphrase when generating the key; that passphrase will be used to encrypt the private part of this file using 128-bit AES. This file is not automatically accessed by ssh-keygen but it is offered as the default file for the private key. ssh(1) will read this file when a login attempt is made. ~/.ssh/id_dsa.pub ~/.ssh/id_ecdsa.pub ~/.ssh/id_ecdsa_sk.pub ~/.ssh/id_ed25519.pub ~/.ssh/id_ed25519_sk.pub ~/.ssh/id_rsa.pub Contains the DSA, ECDSA, authenticator-hosted ECDSA, Ed25519, authenticator-hosted Ed25519 or RSA public key for authentication. The contents of this file should be added to ~/.ssh/authorized_keys on all machines where the user wishes to log in using public key authentication. There is no need to keep the contents of this file secret. /etc/moduli Contains Diffie-Hellman groups used for DH-GEX. The file format is described in moduli(5). SEE ALSO ssh(1), ssh-add(1), ssh-agent(1), moduli(5), sshd(8) The Secure Shell (SSH) Public Key File Format, RFC 4716, 2006. AUTHORS OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen.
Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. COLOPHON This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. If you have a bug report for this manual page, see http://www.openssh.com/report.html. This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. GNU September 4, 2023 SSH-KEYGEN(1)
# ssh-keygen

> Generate SSH keys used for authentication, password-less logins, and other things.
> More information: <https://man.openbsd.org/ssh-keygen>.

- Generate a key interactively:

`ssh-keygen`

- Generate an ed25519 key with 32 key derivation function rounds and save the key to a specific file:

`ssh-keygen -t {{ed25519}} -a {{32}} -f {{~/.ssh/filename}}`

- Generate an RSA 4096-bit key with email as a comment:

`ssh-keygen -t {{rsa}} -b {{4096}} -C "{{comment|email}}"`

- Remove the keys of a host from the known_hosts file (useful when a known host has a new key):

`ssh-keygen -R {{remote_host}}`

- Retrieve the fingerprint of a key in MD5 Hex:

`ssh-keygen -l -E {{md5}} -f {{~/.ssh/filename}}`

- Change the passphrase of a key:

`ssh-keygen -p -f {{~/.ssh/filename}}`

- Change the key format (for example from OPENSSH to PEM); the file will be rewritten in place:

`ssh-keygen -p -N "" -m {{PEM}} -f {{~/.ssh/OpenSSH_private_key}}`

- Retrieve the public key from a private key:

`ssh-keygen -y -f {{~/.ssh/OpenSSH_private_key}}`
ssh-keyscan
SSH-KEYSCAN(1) General Commands Manual SSH-KEYSCAN(1) NAME ssh-keyscan - gather SSH public keys from servers SYNOPSIS ssh-keyscan [-46cDHv] [-f file] [-O option] [-p port] [-T timeout] [-t type] [host | addrlist namelist] DESCRIPTION ssh-keyscan is a utility for gathering the public SSH host keys of a number of hosts. It was designed to aid in building and verifying ssh_known_hosts files, the format of which is documented in sshd(8). ssh-keyscan provides a minimal interface suitable for use by shell and perl scripts. ssh-keyscan uses non-blocking socket I/O to contact as many hosts as possible in parallel, so it is very efficient. The keys from a domain of 1,000 hosts can be collected in tens of seconds, even when some of those hosts are down or do not run sshd(8). For scanning, one does not need login access to the machines that are being scanned, nor does the scanning process involve any encryption. Hosts to be scanned may be specified by hostname, address or by CIDR network range (e.g. 192.168.16/28). If a network range is specified, then all addresses in that range will be scanned. The options are as follows: -4 Force ssh-keyscan to use IPv4 addresses only. -6 Force ssh-keyscan to use IPv6 addresses only. -c Request certificates from target hosts instead of plain keys. -D Print keys found as SSHFP DNS records. The default is to print keys in a format usable as a ssh(1) known_hosts file. -f file Read hosts or addrlist namelist pairs from file, one per line. If - is supplied instead of a filename, ssh-keyscan will read from the standard input. Names read from a file must start with an address, hostname or CIDR network range to be scanned. Addresses and hostnames may optionally be followed by comma-separated name or address aliases that will be copied to the output.
For example:

        192.168.11.0/24
        10.20.1.1 happy.example.org
        10.0.0.1,sad.example.org

-H Hash all hostnames and addresses in the output. Hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed. -O option Specify a key/value option. At present, only a single option is supported: hashalg=algorithm Selects a hash algorithm to use when printing SSHFP records using the -D flag. Valid algorithms are sha1 and sha256. The default is to print both. -p port Connect to port on the remote host. -T timeout Set the timeout for connection attempts. If timeout seconds have elapsed since a connection was initiated to a host or since the last time anything was read from that host, the connection is closed and the host in question considered unavailable. The default is 5 seconds. -t type Specify the type of the key to fetch from the scanned hosts. The possible values are dsa, ecdsa, ed25519, ecdsa-sk, ed25519-sk, or rsa. Multiple values may be specified by separating them with commas. The default is to fetch rsa, ecdsa, ed25519, ecdsa-sk, and ed25519-sk keys. -v Verbose mode: print debugging messages about progress. If an ssh_known_hosts file is constructed using ssh-keyscan without verifying the keys, users will be vulnerable to man in the middle attacks. On the other hand, if the security model allows such a risk, ssh-keyscan can help in the detection of tampered keyfiles or man in the middle attacks which have begun after the ssh_known_hosts file was created.
FILES

        /etc/ssh/ssh_known_hosts

EXAMPLES

Print the RSA host key for machine hostname:

        $ ssh-keyscan -t rsa hostname

Search a network range, printing all supported key types:

        $ ssh-keyscan 192.168.0.64/25

Find all hosts from the file ssh_hosts which have new or different keys from those in the sorted file ssh_known_hosts:

        $ ssh-keyscan -t rsa,dsa,ecdsa,ed25519 -f ssh_hosts | \
            sort -u - ssh_known_hosts | diff ssh_known_hosts -

SEE ALSO ssh(1), sshd(8) Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, 2006. AUTHORS David Mazieres <dm@lcs.mit.edu> wrote the initial version, and Wayne Davison <wayned@users.sourceforge.net> added support for protocol version 2. GNU February 10, 2023 SSH-KEYSCAN(1)
# ssh-keyscan

> Get the public SSH keys of remote hosts.
> More information: <https://man.openbsd.org/ssh-keyscan>.

- Retrieve all public SSH keys of a remote host:

`ssh-keyscan {{host}}`

- Retrieve all public SSH keys of a remote host listening on a specific port:

`ssh-keyscan -p {{port}} {{host}}`

- Retrieve certain types of public SSH keys of a remote host:

`ssh-keyscan -t {{rsa,dsa,ecdsa,ed25519}} {{host}}`

- Manually update the SSH known_hosts file with the fingerprint of a given host:

`ssh-keyscan -H {{host}} >> ~/.ssh/known_hosts`
sshd
SSHD(8) System Manager's Manual SSHD(8) NAME sshd - OpenSSH daemon SYNOPSIS sshd [-46DdeGiqTtV] [-C connection_spec] [-c host_certificate_file] [-E log_file] [-f config_file] [-g login_grace_time] [-h host_key_file] [-o option] [-p port] [-u len] DESCRIPTION sshd (OpenSSH Daemon) is the daemon program for ssh(1). It provides secure encrypted communications between two untrusted hosts over an insecure network. sshd listens for connections from clients. It is normally started at boot from /etc/rc. It forks a new daemon for each incoming connection. The forked daemons handle key exchange, encryption, authentication, command execution, and data exchange. sshd can be configured using command-line options or a configuration file (by default sshd_config(5)); command-line options override values specified in the configuration file. sshd rereads its configuration file when it receives a hangup signal, SIGHUP, by executing itself with the name and options it was started with, e.g. /usr/sbin/sshd. The options are as follows: -4 Forces sshd to use IPv4 addresses only. -6 Forces sshd to use IPv6 addresses only. -C connection_spec Specify the connection parameters to use for the -T extended test mode. If provided, any Match directives in the configuration file that would apply are applied before the configuration is written to standard output. The connection parameters are supplied as keyword=value pairs and may be supplied in any order, either with multiple -C options or as a comma-separated list. The keywords are addr, user, host, laddr, lport, and rdomain and correspond to source address, user, resolved source host name, local address, local port number and routing domain respectively.
-c host_certificate_file Specifies a path to a certificate file to identify sshd during key exchange. The certificate file must match a host key file specified using the -h option or the HostKey configuration directive. -D When this option is specified, sshd will not detach and does not become a daemon. This allows easy monitoring of sshd. -d Debug mode. The server sends verbose debug output to standard error, and does not put itself in the background. The server also will not fork(2) and will only process one connection. This option is only intended for debugging of the server. Multiple -d options increase the debugging level. Maximum is 3. -E log_file Append debug logs to log_file instead of the system log. -e Write debug logs to standard error instead of the system log. -f config_file Specifies the name of the configuration file. The default is /etc/ssh/sshd_config. sshd refuses to start if there is no configuration file. -G Parse and print configuration file. Check the validity of the configuration file, output the effective configuration to stdout and then exit. Optionally, Match rules may be applied by specifying the connection parameters using one or more -C options. -g login_grace_time Gives the grace time for clients to authenticate themselves (default 120 seconds). If the client fails to authenticate the user within this many seconds, the server disconnects and exits. A value of zero indicates no limit. -h host_key_file Specifies a file from which a host key is read. This option must be given if sshd is not run as root (as the normal host key files are normally not readable by anyone but root). The default is /etc/ssh/ssh_host_ecdsa_key, /etc/ssh/ssh_host_ed25519_key and /etc/ssh/ssh_host_rsa_key. It is possible to have multiple host key files for the different host key algorithms. -i Specifies that sshd is being run from inetd(8). -o option Can be used to give options in the format used in the configuration file.
This is useful for specifying options for which there is no separate command-line flag. For full details of the options, and their values, see sshd_config(5). -p port Specifies the port on which the server listens for connections (default 22). Multiple port options are permitted. Ports specified in the configuration file with the Port option are ignored when a command-line port is specified. Ports specified using the ListenAddress option override command-line ports. -q Quiet mode. Nothing is sent to the system log. Normally the beginning, authentication, and termination of each connection is logged. -T Extended test mode. Check the validity of the configuration file, output the effective configuration to stdout and then exit. Optionally, Match rules may be applied by specifying the connection parameters using one or more -C options. This is similar to the -G flag, but it includes the additional testing performed by the -t flag. -t Test mode. Only check the validity of the configuration file and sanity of the keys. This is useful for updating sshd reliably as configuration options may change. -u len This option is used to specify the size of the field in the utmp structure that holds the remote host name. If the resolved host name is longer than len, the dotted decimal value will be used instead. This allows hosts with very long host names that overflow this field to still be uniquely identified. Specifying -u0 indicates that only dotted decimal addresses should be put into the utmp file. -u0 may also be used to prevent sshd from making DNS requests unless the authentication mechanism or configuration requires it. Authentication mechanisms that may require DNS include HostbasedAuthentication and using a from="pattern-list" option in a key file. Configuration options that require DNS include using a USER@HOST pattern in AllowUsers or DenyUsers. -V Display the version number and exit. AUTHENTICATION The OpenSSH SSH daemon supports SSH protocol 2 only.
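The test modes combine usefully: a non-root check of a candidate configuration can be run against a scratch host key, and -C previews how Match blocks would apply to one particular client. A hedged sketch (assumes sshd and ssh-keygen are installed; the paths, port, user, and addresses are all illustrative):

```shell
# Validate a candidate configuration without touching the system config.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/hostkey"
printf 'Port 2222\nHostKey %s\n' "$dir/hostkey" > "$dir/sshd_config"

# sshd often lives in /usr/sbin, which may not be on PATH for normal users.
sshd_bin=$(command -v sshd || echo /usr/sbin/sshd)
"$sshd_bin" -t -f "$dir/sshd_config"                      # syntax + host key sanity check
"$sshd_bin" -T -f "$dir/sshd_config" \
    -C user=alice,host=client.example.com,addr=192.0.2.1  # effective config for one client
```

Running the check against a copy of the real config before a restart avoids locking yourself out with a typo, since sshd refuses to start on an invalid file.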
Each host has a host-specific key, used to identify the host. Whenever a client connects, the daemon responds with its public host key. The client compares the host key against its own database to verify that it has not changed. Forward secrecy is provided through a Diffie-Hellman key agreement. This key agreement results in a shared session key. The rest of the session is encrypted using a symmetric cipher. The client selects the encryption algorithm to use from those offered by the server. Additionally, session integrity is provided through a cryptographic message authentication code (MAC). Finally, the server and the client enter an authentication dialog. The client tries to authenticate itself using host-based authentication, public key authentication, challenge-response authentication, or password authentication. Regardless of the authentication type, the account is checked to ensure that it is accessible. An account is not accessible if it is locked, listed in DenyUsers or its group is listed in DenyGroups. The definition of a locked account is system dependent. Some platforms have their own account database (e.g. AIX) and some modify the passwd field (*LK* on Solaris and UnixWare, * on HP-UX, containing Nologin on Tru64, a leading *LOCKED* on FreeBSD and a leading ! on most Linuxes). If there is a requirement to disable password authentication for the account while still allowing public-key authentication, then the passwd field should be set to something other than these values (e.g. NP or *NP*). If the client successfully authenticates itself, a dialog for preparing the session is entered. At this time the client may request things like allocating a pseudo-tty, forwarding X11 connections, forwarding TCP connections, or forwarding the authentication agent connection over the secure channel. After this, the client either requests an interactive shell or execution of a non-interactive command, which will execute via the user's shell using its -c option.
The sides then enter session mode. In this mode, either side may send data at any time, and such data is forwarded to/from the shell or command on the server side, and the user terminal in the client side. When the user program terminates and all forwarded X11 and other connections have been closed, the server sends command exit status to the client, and both sides exit. LOGIN PROCESS When a user successfully logs in, sshd does the following: 1. If the login is on a tty, and no command has been specified, prints last login time and /etc/motd (unless prevented in the configuration file or by ~/.hushlogin; see the FILES section). 2. If the login is on a tty, records login time. 3. Checks /etc/nologin; if it exists, prints contents and quits (unless root). 4. Changes to run with normal user privileges. 5. Sets up basic environment. 6. Reads the file ~/.ssh/environment, if it exists, and users are allowed to change their environment. See the PermitUserEnvironment option in sshd_config(5). 7. Changes to user's home directory. 8. If ~/.ssh/rc exists and the sshd_config(5) PermitUserRC option is set, runs it; else if /etc/ssh/sshrc exists, runs it; otherwise runs xauth(1). The rc files are given the X11 authentication protocol and cookie in standard input. See SSHRC, below. 9. Runs user's shell or command. All commands are run under the user's login shell as specified in the system password database. SSHRC If the file ~/.ssh/rc exists, sh(1) runs it after reading the environment files but before starting the user's shell or command. It must not produce any output on stdout; stderr must be used instead. If X11 forwarding is in use, it will receive the "proto cookie" pair in its standard input (and DISPLAY in its environment). The script must call xauth(1) because sshd will not run xauth automatically to add X11 cookies.
The primary purpose of this file is to run any initialization routines which may be needed before the user's home directory becomes accessible; AFS is a particular example of such an environment. This file will probably contain some initialization code followed by something similar to:

        if read proto cookie && [ -n "$DISPLAY" ]; then
                if [ `echo $DISPLAY | cut -c1-10` = 'localhost:' ]; then
                        # X11UseLocalhost=yes
                        echo add unix:`echo $DISPLAY | cut -c11-` $proto $cookie
                else
                        # X11UseLocalhost=no
                        echo add $DISPLAY $proto $cookie
                fi | xauth -q -
        fi

If this file does not exist, /etc/ssh/sshrc is run, and if that does not exist either, xauth is used to add the cookie. AUTHORIZED_KEYS FILE FORMAT AuthorizedKeysFile specifies the files containing public keys for public key authentication; if this option is not specified, the default is ~/.ssh/authorized_keys and ~/.ssh/authorized_keys2. Each line of the file contains one key (empty lines and lines starting with a # are ignored as comments). Public keys consist of the following space-separated fields: options, keytype, base64-encoded key, comment. The options field is optional. The supported key types are: sk-ecdsa-sha2-nistp256@openssh.com ecdsa-sha2-nistp256 ecdsa-sha2-nistp384 ecdsa-sha2-nistp521 sk-ssh-ed25519@openssh.com ssh-ed25519 ssh-dss ssh-rsa The comment field is not used for anything (but may be convenient for the user to identify the key). Note that lines in this file can be several hundred bytes long (because of the size of the public key encoding) up to a limit of 8 kilobytes, which permits RSA keys up to 16 kilobits. You don't want to type them in; instead, copy the id_dsa.pub, id_ecdsa.pub, id_ecdsa_sk.pub, id_ed25519.pub, id_ed25519_sk.pub, or the id_rsa.pub file and edit it. sshd enforces a minimum RSA key modulus size of 1024 bits. The options (if present) consist of comma-separated option specifications. No spaces are permitted, except within double quotes.
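A complete line combining the fields above (options, keytype, base64-encoded key, comment) with several restriction options might look like the following; the command path, network, key material, and comment are all illustrative placeholders:

```
command="/usr/local/bin/backup.sh",from="192.0.2.0/24",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... backup key for host alpha
```

An entry like this limits a key to running one fixed command, from one subnet, with no terminal or forwarding, which is a common pattern for unattended backup keys.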
The following option specifications are supported (note that option keywords are case-insensitive): agent-forwarding Enable authentication agent forwarding previously disabled by the restrict option. cert-authority Specifies that the listed key is a certification authority (CA) that is trusted to validate signed certificates for user authentication. Certificates may encode access restrictions similar to these key options. If both certificate restrictions and key options are present, the most restrictive union of the two is applied. command="command" Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. The command is run on a pty if the client requests a pty; otherwise it is run without a tty. If an 8-bit clean channel is required, one must not request a pty or should specify no-pty. A quote may be included in the command by quoting it with a backslash. This option might be useful to restrict certain public keys to perform just a specific operation. An example might be a key that permits remote backups but nothing else. Note that the client may specify TCP and/or X11 forwarding unless they are explicitly prohibited, e.g. using the restrict key option. The command originally supplied by the client is available in the SSH_ORIGINAL_COMMAND environment variable. Note that this option applies to shell, command or subsystem execution. Also note that this command may be superseded by a sshd_config(5) ForceCommand directive. If a command is specified and a forced-command is embedded in a certificate used for authentication, then the certificate will be accepted only if the two commands are identical. environment="NAME=value" Specifies that the string is to be added to the environment when logging in using this key. Environment variables set this way override other default environment values. Multiple options of this type are permitted. 
Environment processing is disabled by default and is controlled via the PermitUserEnvironment option. expiry-time="timespec" Specifies a time after which the key will not be accepted. The time may be specified as a YYYYMMDD[Z] date or a YYYYMMDDHHMM[SS][Z] time. Dates and times will be interpreted in the system time zone unless suffixed by a Z character, in which case they will be interpreted in the UTC time zone. from="pattern-list" Specifies that in addition to public key authentication, either the canonical name of the remote host or its IP address must be present in the comma-separated list of patterns. See PATTERNS in ssh_config(5) for more information on patterns. In addition to the wildcard matching that may be applied to hostnames or addresses, a from stanza may match IP addresses using CIDR address/masklen notation. The purpose of this option is to optionally increase security: public key authentication by itself does not trust the network or name servers or anything (but the key); however, if somebody somehow steals the key, the key permits an intruder to log in from anywhere in the world. This additional option makes using a stolen key more difficult (name servers and/or routers would have to be compromised in addition to just the key). no-agent-forwarding Forbids authentication agent forwarding when this key is used for authentication. no-port-forwarding Forbids TCP forwarding when this key is used for authentication. Any port forward requests by the client will return an error. This might be used, e.g. in connection with the command option. no-pty Prevents tty allocation (a request to allocate a pty will fail). no-user-rc Disables execution of ~/.ssh/rc. no-X11-forwarding Forbids X11 forwarding when this key is used for authentication. Any X11 forward requests by the client will return an error. permitlisten="[host:]port" Limit remote port forwarding with the ssh(1) -R option such that it may only listen on the specified host (optional) and port. 
IPv6 addresses can be specified by enclosing the address in square brackets. Multiple permitlisten options may be applied separated by commas. Hostnames may include wildcards as described in the PATTERNS section in ssh_config(5). A port specification of * matches any port. Note that the setting of GatewayPorts may further restrict listen addresses. Note that ssh(1) will send a hostname of localhost if a listen host was not specified when the forwarding was requested, and that this name is treated differently to the explicit localhost addresses 127.0.0.1 and ::1. permitopen="host:port" Limit local port forwarding with the ssh(1) -L option such that it may only connect to the specified host and port. IPv6 addresses can be specified by enclosing the address in square brackets. Multiple permitopen options may be applied separated by commas. No pattern matching or name lookup is performed on the specified hostnames, they must be literal host names and/or addresses. A port specification of * matches any port. port-forwarding Enable port forwarding previously disabled by the restrict option. principals="principals" On a cert-authority line, specifies allowed principals for certificate authentication as a comma-separated list. At least one name from the list must appear in the certificate's list of principals for the certificate to be accepted. This option is ignored for keys that are not marked as trusted certificate signers using the cert-authority option. pty Permits tty allocation previously disabled by the restrict option. no-touch-required Do not require demonstration of user presence for signatures made using this key. This option only makes sense for the FIDO authenticator algorithms ecdsa-sk and ed25519-sk. verify-required Require that signatures made using this key attest that they verified the user, e.g. via a PIN. This option only makes sense for the FIDO authenticator algorithms ecdsa-sk and ed25519-sk. restrict Enable all restrictions, i.e. 
disable port, agent and X11 forwarding, as well as disabling PTY allocation and execution of ~/.ssh/rc. If any future restriction capabilities are added to authorized_keys files, they will be included in this set. tunnel="n" Force a tun(4) device on the server. Without this option, the next available device will be used if the client requests a tunnel. user-rc Enables execution of ~/.ssh/rc previously disabled by the restrict option. X11-forwarding Permits X11 forwarding previously disabled by the restrict option. An example authorized_keys file: # Comments are allowed at start of line. Blank lines are allowed. # Plain key, no restrictions ssh-rsa ... # Forced command, disable PTY and all forwarding restrict,command="dump /home" ssh-rsa ... # Restriction of ssh -L forwarding destinations permitopen="192.0.2.1:80",permitopen="192.0.2.2:25" ssh-rsa ... # Restriction of ssh -R forwarding listeners permitlisten="localhost:8080",permitlisten="[::1]:22000" ssh-rsa ... # Configuration for tunnel forwarding tunnel="0",command="sh /etc/netstart tun0" ssh-rsa ... # Override of restriction to allow PTY allocation restrict,pty,command="nethack" ssh-rsa ... # Allow FIDO key without requiring touch no-touch-required sk-ecdsa-sha2-nistp256@openssh.com ... # Require user-verification (e.g. PIN or biometric) for FIDO key verify-required sk-ecdsa-sha2-nistp256@openssh.com ... # Trust CA key, allow touch-less FIDO if requested in certificate cert-authority,no-touch-required,principals="user_a" ssh-rsa ... SSH_KNOWN_HOSTS FILE FORMAT top The /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts files contain host public keys for all known hosts. The global file should be prepared by the administrator (optional), and the per-user file is maintained automatically: whenever the user connects to an unknown host, its key is added to the per-user file. Each line in these files contains the following fields: marker (optional), hostnames, keytype, base64-encoded key, comment. 
The fields are separated by spaces. The marker is optional, but if it is present then it must be one of @cert-authority, to indicate that the line contains a certification authority (CA) key, or @revoked, to indicate that the key contained on the line is revoked and must not ever be accepted. Only one marker should be used on a key line. Hostnames is a comma-separated list of patterns (* and ? act as wildcards); each pattern in turn is matched against the host name. When sshd is authenticating a client, such as when using HostbasedAuthentication, this will be the canonical client host name. When ssh(1) is authenticating a server, this will be the host name given by the user, the value of the ssh(1) HostkeyAlias if it was specified, or the canonical server hostname if the ssh(1) CanonicalizeHostname option was used. A pattern may also be preceded by ! to indicate negation: if the host name matches a negated pattern, it is not accepted (by that line) even if it matched another pattern on the line. A hostname or address may optionally be enclosed within [ and ] brackets then followed by : and a non-standard port number. Alternately, hostnames may be stored in a hashed form which hides host names and addresses should the file's contents be disclosed. Hashed hostnames start with a | character. Only one hashed hostname may appear on a single line and none of the above negation or wildcard operators may be applied. The keytype and base64-encoded key are taken directly from the host key; they can be obtained, for example, from /etc/ssh/ssh_host_rsa_key.pub. The optional comment field continues to the end of the line, and is not used. Lines starting with # and empty lines are ignored as comments. When performing host authentication, authentication is accepted if any matching line has the proper key; either one that matches exactly or, if the server has presented a certificate for authentication, the key of the certification authority that signed the certificate. 
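The hashed hostname form mentioned above is `|1|salt|hash`, where salt and hash are base64-encoded and the hash is an HMAC-SHA1 of the hostname keyed with the salt. A sketch of how such entries can be produced and matched; the function names are illustrative (in practice, use ssh-keygen -H and -F):

```python
import base64
import hashlib
import hmac
import os

def hash_hostname(hostname, salt=None):
    """Build a hashed known_hosts host field of the form |1|salt|hash,
    where hash = HMAC-SHA1(key=salt, msg=hostname)."""
    if salt is None:
        salt = os.urandom(20)  # a random 20-byte salt, as ssh-keygen uses
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|{}|{}".format(
        base64.b64encode(salt).decode(), base64.b64encode(digest).decode()
    )

def match_hashed(entry, hostname):
    """Check whether a hashed host field matches a hostname by
    re-computing the HMAC with the stored salt."""
    _, _, salt_b64, hash_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode() == hash_b64
```

Because the salt is random, the same hostname produces a different entry each time, which is what hides the host list if the file is disclosed; matching always recomputes the HMAC with the stored salt.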
For a key to be trusted as a certification authority, it must use the @cert-authority marker described above. The known hosts file also provides a facility to mark keys as revoked, for example when it is known that the associated private key has been stolen. Revoked keys are specified by including the @revoked marker at the beginning of the key line, and are never accepted for authentication or as certification authorities, but instead will produce a warning from ssh(1) when they are encountered. It is permissible (but not recommended) to have several lines or different host keys for the same names. This will inevitably happen when short forms of host names from different domains are put in the file. It is possible that the files contain conflicting information; authentication is accepted if valid information can be found from either file. Note that the lines in these files are typically hundreds of characters long, and you definitely don't want to type in the host keys by hand. Rather, generate them by a script, ssh-keyscan(1) or by taking, for example, /etc/ssh/ssh_host_rsa_key.pub and adding the host names at the front. ssh-keygen(1) also offers some basic automated editing for ~/.ssh/known_hosts including removing hosts matching a host name and converting all host names to their hashed representations. An example ssh_known_hosts file: # Comments allowed at start of line cvs.example.net,192.0.2.10 ssh-rsa AAAA1234.....= # A hashed hostname |1|JfKTdBh7rNbXkVAQCRp4OQoPfmI=|USECr3SWf1JUPsms5AqfD5QfxkM= ssh-rsa AAAA1234.....= # A revoked key @revoked * ssh-rsa AAAAB5W... # A CA key, accepted for any host in *.mydomain.com or *.mydomain.org @cert-authority *.mydomain.org,*.mydomain.com ssh-rsa AAAAB5W... FILES top ~/.hushlogin This file is used to suppress printing the last login time and /etc/motd, if PrintLastLog and PrintMotd, respectively, are enabled. It does not suppress printing of the banner specified by Banner. 
~/.rhosts This file is used for host-based authentication (see ssh(1) for more information). On some machines this file may need to be world-readable if the user's home directory is on an NFS partition, because sshd reads it as root. Additionally, this file must be owned by the user, and must not have write permissions for anyone else. The recommended permission for most machines is read/write for the user, and not accessible by others. ~/.shosts This file is used in exactly the same way as .rhosts, but allows host-based authentication without permitting login with rlogin/rsh. ~/.ssh/ This directory is the default location for all user-specific configuration and authentication information. There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others. ~/.ssh/authorized_keys Lists the public keys (DSA, ECDSA, Ed25519, RSA) that can be used for logging in as this user. The format of this file is described above. The content of the file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. If this file, the ~/.ssh directory, or the user's home directory are writable by other users, then the file could be modified or replaced by unauthorized users. In this case, sshd will not allow it to be used unless the StrictModes option has been set to no. ~/.ssh/environment This file is read into the environment at login (if it exists). It can only contain empty lines, comment lines (that start with #), and assignment lines of the form name=value. The file should be writable only by the user; it need not be readable by anyone else. Environment processing is disabled by default and is controlled via the PermitUserEnvironment option. ~/.ssh/known_hosts Contains a list of host keys for all hosts the user has logged into that are not already in the systemwide list of known host keys.
The format of this file is described above. This file should be writable only by root/the owner and can, but need not be, world-readable. ~/.ssh/rc Contains initialization routines to be run before the user's home directory becomes accessible. This file should be writable only by the user, and need not be readable by anyone else. /etc/hosts.equiv This file is for host-based authentication (see ssh(1)). It should only be writable by root. /etc/moduli Contains Diffie-Hellman groups used for the "Diffie-Hellman Group Exchange" key exchange method. The file format is described in moduli(5). If no usable groups are found in this file then fixed internal groups will be used. /etc/motd See motd(5). /etc/nologin If this file exists, sshd refuses to let anyone except root log in. The contents of the file are displayed to anyone trying to log in, and non-root connections are refused. The file should be world-readable. /etc/shosts.equiv This file is used in exactly the same way as hosts.equiv, but allows host-based authentication without permitting login with rlogin/rsh. /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_ed25519_key /etc/ssh/ssh_host_rsa_key These files contain the private parts of the host keys. These files should only be owned by root, readable only by root, and not accessible to others. Note that sshd does not start if these files are group/world-accessible. /etc/ssh/ssh_host_ecdsa_key.pub /etc/ssh/ssh_host_ed25519_key.pub /etc/ssh/ssh_host_rsa_key.pub These files contain the public parts of the host keys. These files should be world-readable but writable only by root. Their contents should match the respective private parts. These files are not really used for anything; they are provided for the convenience of the user so their contents can be copied to known hosts files. These files are created using ssh-keygen(1). /etc/ssh/ssh_known_hosts Systemwide list of known host keys.
This file should be prepared by the system administrator to contain the public host keys of all machines in the organization. The format of this file is described above. This file should be writable only by root/the owner and should be world-readable. /etc/ssh/sshd_config Contains configuration data for sshd. The file format and configuration options are described in sshd_config(5). /etc/ssh/sshrc Similar to ~/.ssh/rc, it can be used to specify machine-specific login-time initializations globally. This file should be writable only by root, and should be world-readable. /var/empty chroot(2) directory used by sshd during privilege separation in the pre-authentication phase. The directory should not contain any files and must be owned by root and not group or world-writable. /var/run/sshd.pid Contains the process ID of the sshd listening for connections (if there are several daemons running concurrently for different ports, this contains the process ID of the one started last). The content of this file is not sensitive; it can be world-readable. SEE ALSO top scp(1), sftp(1), ssh(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh-keyscan(1), chroot(2), login.conf(5), moduli(5), sshd_config(5), inetd(8), sftp-server(8) AUTHORS top OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. Niels Provos and Markus Friedl contributed support for privilege separation. COLOPHON top This page is part of the openssh (Portable OpenSSH) project. Information about the project can be found at http://www.openssh.com/portable.html. If you have a bug report for this manual page, see http://www.openssh.com/report.html.
This page was obtained from the tarball openssh-9.6p1.tar.gz fetched from http://ftp.eu.openbsd.org/pub/OpenBSD/OpenSSH/portable/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up- to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU September 19, 2023 SSHD(8) Pages that refer to this page: pts(4), environment.d(5), user@.service(5) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# sshd\n\n> Secure Shell Daemon - allows remote machines to securely log in to the current machine.\n> Remote machines can execute commands as if they were run on this machine.\n> More information: <https://man.openbsd.org/sshd>.\n\n- Start daemon in the background:\n\n`sshd`\n\n- Run sshd in the foreground:\n\n`sshd -D`\n\n- Run with verbose output (for debugging):\n\n`sshd -D -d`\n\n- Run on a specific port:\n\n`sshd -p {{port}}`\n
sshfs
sshfs(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CAVEATS / WORKAROUNDS | MOUNTING FROM /ETC/FSTAB | SEE ALSO | GETTING HELP | AUTHORS | COLOPHON SSHFS(1) User Commands SSHFS(1) NAME top SSHFS - filesystem client based on SSH SYNOPSIS top To mount a filesystem: sshfs [user@]host:[dir] mountpoint [options] If host is a numeric IPv6 address, it needs to be enclosed in square brackets. To unmount it: fusermount3 -u mountpoint # Linux umount mountpoint # OS X, FreeBSD DESCRIPTION top SSHFS allows you to mount a remote filesystem using SSH (more precisely, the SFTP subsystem). Most SSH servers support and enable this SFTP access by default, so SSHFS is very simple to use - there's nothing to do on the server-side. By default, file permissions are ignored by SSHFS. Any user that can access the filesystem will be able to perform any operation that the remote server permits - based on the credentials that were used to connect to the server. If this is undesired, local permission checking can be enabled with -o default_permissions. By default, only the mounting user will be able to access the filesystem. Access for other users can be enabled by passing -o allow_other. In this case you most likely also want to use -o default_permissions. It is recommended to run SSHFS as a regular user (not as root). For this to work the mountpoint must be owned by the user. If the username is omitted SSHFS will use the local username. If the directory is omitted, SSHFS will mount the (remote) home directory. If you need to enter a password sshfs will ask for it (actually it just runs ssh, which asks for the password if needed). OPTIONS top -o opt,[opt...] mount options, see below for details. A variety of SSH options can be given here as well, see the manual pages for sftp(1) and ssh_config(5). -h, --help print help and exit. -V, --version print version information and exit.
-d, --debug print debugging information. -p PORT equivalent to '-o port=PORT' -f do not daemonize, stay in foreground. -s Single threaded operation. -C equivalent to '-o compression=yes' -F ssh_configfile specifies alternative ssh configuration file -1 equivalent to '-o ssh_protocol=1' -o reconnect automatically reconnect to server if connection is interrupted. Attempts to access files that were opened before the reconnection will give errors and need to be re-opened. -o delay_connect Don't immediately connect to server, wait until mountpoint is first accessed. -o sshfs_sync synchronous writes. This will slow things down, but may be useful in some situations. -o no_readahead Only read exactly the data that was requested, instead of speculatively reading more to anticipate the next read request. -o sync_readdir synchronous readdir. This will slow things down, but may be useful in some situations. -o workaround=LIST Enable the specified workaround. See the Caveats section below for some additional information. Possible values are: rename Emulate overwriting an existing file by deleting and renaming. renamexdev Make rename fail with EXDEV instead of the default EPERM to allow moving files across remote filesystems. truncate Work around servers that don't support truncate by copying the whole file, truncating it locally, and sending it back. fstat Work around broken servers that don't support fstat() by using stat instead. buflimit Work around OpenSSH "buffer fillup" bug. createmode Work around broken servers that produce an error when passing a non-zero mode to create, by always passing a mode of 0. -o idmap=TYPE How to map remote UID/GIDs to local values. Possible values are: none no translation of the ID space (default). user map the UID/GID of the remote user to UID/GID of the mounting user. file translate UIDs/GIDs based upon the contents of --uidfile and --gidfile.
-o uidfile=FILE file containing username:uid mappings for -o idmap=file -o gidfile=FILE file containing groupname:gid mappings for -o idmap=file -o nomap=TYPE with idmap=file, how to handle missing mappings: ignore don't do any re-mapping error return an error (default) -o ssh_command=CMD execute CMD instead of 'ssh' -o ssh_protocol=N ssh protocol to use (default: 2) -o sftp_server=SERV path to sftp server or subsystem (default: sftp) -o directport=PORT directly connect to PORT bypassing ssh -o vsock=CID:PORT directly connect using a vsock to CID:PORT bypassing ssh -o passive communicate over stdin and stdout bypassing network. Useful for mounting local filesystem on the remote side. An example using the dpipe command would be dpipe /usr/lib/openssh/sftp-server = ssh RemoteHostname sshfs :/directory/to/be/shared ~/mnt/src -o passive -o disable_hardlink With this option set, attempts to call link(2) will fail with error code ENOSYS. -o transform_symlinks transform absolute symlinks on remote side to relative symlinks. This means that if e.g. on the server side /foo/bar/com is a symlink to /foo/blub, SSHFS will transform the link target to ../blub on the client side. -o follow_symlinks follow symlinks on the server, i.e. present them as regular files on the client. If a symlink is dangling (i.e., the target does not exist) the behavior depends on the remote server - the entry may appear as a symlink on the client, or it may appear as a regular file that cannot be accessed. -o no_check_root don't check for existence of 'dir' on server -o password_stdin read password from stdin (only for pam_mount!) -o dir_cache=BOOL Enables (yes) or disables (no) the SSHFS directory cache. The directory cache holds the names of directory entries. Enabling it allows readdir(3) system calls to be processed without network access. -o dcache_max_size=N sets the maximum size of the directory cache. -o dcache_timeout=N sets timeout for directory cache in seconds.
-o dcache_{stat,link,dir}_timeout=N sets separate timeout for {attributes, symlinks, names} in the directory cache. -o dcache_clean_interval=N sets the interval for automatic cleaning of the directory cache. -o dcache_min_clean_interval=N sets the interval for forced cleaning of the directory cache when full. -o direct_io This option disables the use of page cache (file content cache) in the kernel for this filesystem. This has several effects: 1. Each read() or write() system call will initiate one or more read or write operations, data will not be cached in the kernel. 2. The return value of the read() and write() system calls will correspond to the return values of the read and write operations. This is useful for example if the file size is not known in advance (before reading it), e.g. the /proc filesystem. -o max_conns=N sets the maximum number of simultaneous SSH connections to use. Each connection is established with a separate SSH process. The primary purpose of this feature is to improve the responsiveness of the file system during large file transfers. When using more than one connection, the password_stdin and passive options can not be used, and the buflimit workaround is not supported. In addition, SSHFS accepts several options common to all FUSE file systems. These are described in the mount.fuse manpage (look for "general", "libfuse specific", and "high-level API" options). CAVEATS / WORKAROUNDS top Hardlinks If the SSH server supports the hardlinks extension, SSHFS will allow you to create hardlinks. However, hardlinks will always appear as individual files when seen through an SSHFS mount, i.e. they will appear to have different inodes and an st_nlink value of 1. Rename Some SSH servers do not support atomically overwriting the destination when renaming a file. In this case you will get an error when you attempt to rename a file and the destination already exists. A workaround is to first remove the destination file, and then do the rename.
SSHFS can do this automatically if you call it with -o workaround=rename. However, in this case it is still possible that someone (or something) recreates the destination file after SSHFS has removed it, but before SSHFS had the time to rename the old file. In this case, the rename will still fail. Permission denied when moving files across remote filesystems Most SFTP servers return only a generic "failure" when failing to rename across filesystem boundaries (EXDEV). sshfs normally converts this generic failure to a permission denied error (EPERM). If the option -o workaround=renamexdev is given, generic failures will be considered EXDEV errors which will make programs like mv(1) attempt to actually move the file after the failed rename. SSHFS hangs for no apparent reason In some cases, attempts to access the SSHFS mountpoint may freeze if no filesystem activity has occurred for some time. This is typically caused by the SSH connection being dropped because of inactivity without SSHFS being informed about that. As a workaround, you can try to mount with -o ServerAliveInterval=15. This will force the SSH connection to stay alive even if you have no activity. SSHFS hangs after the connection was interrupted By default, network operations in SSHFS run without timeouts, mirroring the default behavior of SSH itself. As a consequence, if the connection to the remote host is interrupted (e.g. because a network cable was removed), operations on files or directories under the mountpoint will block until the connection is either restored or closed altogether (e.g. manually). Applications that try to access such files or directories will generally appear to "freeze" when this happens. If it is acceptable to discard data being read or written, a quick workaround is to kill the responsible sshfs process, which will make any blocking operations on the mounted filesystem error out and thereby "unfreeze" the relevant applications. 
Note that force unmounting with fusermount -zu, on the other hand, does not help in this case and will leave read/write operations in the blocking state. For a more automatic solution, one can use the -o ServerAliveInterval=15 option mentioned above, which will drop the connection after not receiving a response for 3 * 15 = 45 seconds from the remote host. By also supplying -o reconnect, one can ensure that the connection is re-established as soon as possible afterwards. As before, this will naturally lead to loss of data that was in the process of being read or written at the time when the connection was interrupted. MOUNTING FROM /ETC/FSTAB top To mount an SSHFS filesystem from /etc/fstab, simply use sshfs as the file system type. (For backwards compatibility, you may also use fuse.sshfs). SEE ALSO top The mount.fuse(8) manpage. GETTING HELP top If you need help, please ask on the < fuse-sshfs@lists.sourceforge.net> mailing list (subscribe at https://lists.sourceforge.net/lists/listinfo/fuse-sshfs ). Please report any bugs on the GitHub issue tracker at https://github.com/libfuse/libfuse/issues . AUTHORS top SSHFS is currently maintained by Nikolaus Rath < Nikolaus@rath.org>, and was created by Miklos Szeredi < miklos@szeredi.hu>. This man page was originally written by Bartosz Fenski < fenio@debian.org> for the Debian GNU/Linux distribution (but it may be used by others). COLOPHON top This page is part of the sshfs (SSH Filesystem) project. Information about the project can be found at https://github.com/libfuse/sshfs. If you have a bug report for this manual page, see https://github.com/libfuse/sshfs/issues. This page was obtained from the project's upstream Git repository https://github.com/libfuse/sshfs.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-11-06.) 
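An /etc/fstab entry using sshfs as the filesystem type, as described above, might look like this (the host, paths, and mount options are illustrative):

```
# remote                     mountpoint  type   options                                            dump pass
alice@example.com:/srv/data  /mnt/data   sshfs  defaults,_netdev,reconnect,ServerAliveInterval=15  0    0
```

Here _netdev defers mounting until the network is available, while reconnect and ServerAliveInterval=15 address the hang scenarios discussed in the Caveats section.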
SSHFS(1)
# sshfs\n\n> Filesystem client based on SSH.\n> More information: <https://github.com/libfuse/sshfs>.\n\n- Mount remote directory:\n\n`sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}}`\n\n- Unmount remote directory:\n\n`umount {{mountpoint}}`\n\n- Mount remote directory from server with specific port:\n\n`sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}} -p {{2222}}`\n\n- Use compression:\n\n`sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}} -C`\n\n- Follow symbolic links:\n\n`sshfs -o follow_symlinks {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}}`\n
stat
stat(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON STAT(1) User Commands STAT(1) NAME top stat - display file or file system status SYNOPSIS top stat [OPTION]... FILE... DESCRIPTION top Display file or file system status. Mandatory arguments to long options are mandatory for short options too. -L, --dereference follow links -f, --file-system display file system status instead of file status --cached=MODE specify how to use cached attributes; useful on remote file systems. See MODE below -c --format=FORMAT use the specified FORMAT instead of the default; output a newline after each use of FORMAT --printf=FORMAT like --format, but interpret backslash escapes, and do not output a mandatory trailing newline; if you want a newline, include \n in FORMAT -t, --terse print the information in terse form --help display this help and exit --version output version information and exit The MODE argument of --cached can be: always, never, or default. 'always' will use cached attributes if available, while 'never' will try to synchronize with the latest attributes, and 'default' will leave it up to the underlying file system.
The valid format sequences for files (without --file-system): %a permission bits in octal (note '#' and '0' printf flags) %A permission bits and file type in human readable form %b number of blocks allocated (see %B) %B the size in bytes of each block reported by %b %C SELinux security context string %d device number in decimal (st_dev) %D device number in hex (st_dev) %Hd major device number in decimal %Ld minor device number in decimal %f raw mode in hex %F file type %g group ID of owner %G group name of owner %h number of hard links %i inode number %m mount point %n file name %N quoted file name with dereference if symbolic link %o optimal I/O transfer size hint %s total size, in bytes %r device type in decimal (st_rdev) %R device type in hex (st_rdev) %Hr major device type in decimal, for character/block device special files %Lr minor device type in decimal, for character/block device special files %t major device type in hex, for character/block device special files %T minor device type in hex, for character/block device special files %u user ID of owner %U user name of owner %w time of file birth, human-readable; - if unknown %W time of file birth, seconds since Epoch; 0 if unknown %x time of last access, human-readable %X time of last access, seconds since Epoch %y time of last data modification, human-readable %Y time of last data modification, seconds since Epoch %z time of last status change, human-readable %Z time of last status change, seconds since Epoch Valid format sequences for file systems: %a free blocks available to non-superuser %b total data blocks in file system %c total file nodes in file system %d free file nodes in file system %f free blocks in file system %i file system ID in hex %l maximum length of filenames %n file name %s block size (for faster transfers) %S fundamental block size (for block counts) %t file system type in hex %T file system type in human readable form --terse is equivalent to the following FORMAT: %n %s %b %f %u %g %D 
%i %h %t %T %X %Y %Z %W %o %C

--terse --file-system is equivalent to the following FORMAT: %n %i %l %t %s %S %b %f %a %c %d

NOTE: your shell may have its own version of stat, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports. AUTHOR top Written by Michael Meskes. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top stat(2), statfs(2), statx(2) Full documentation <https://www.gnu.org/software/coreutils/stat> or available locally via: info '(coreutils) stat invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 STAT(1) Pages that refer to this page: namei(1), stat(2), statx(2), inode(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface.
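The format sequences above can be combined into a custom FORMAT string; a minimal sketch (the file path and its contents are illustrative):

```shell
# Create a small file, then query individual status fields with --format.
printf 'hello' > /tmp/stat_demo.txt

stat --format='%n %s %a' /tmp/stat_demo.txt   # file name, size in bytes, octal permissions
stat --format='%U:%G'    /tmp/stat_demo.txt   # owner user name and group name
stat --terse             /tmp/stat_demo.txt   # the fixed field list shown above
```

The same sequences also work with --printf=, which additionally interprets backslash escapes such as \n and does not append a trailing newline.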
# stat\n\n> Display file and filesystem information.\n> More information: <https://www.gnu.org/software/coreutils/manual/html_node/stat-invocation.html>.\n\n- Display properties about a specific file such as size, permissions, creation and access dates among others:\n\n`stat {{path/to/file}}`\n\n- Display properties about a specific file such as size, permissions, creation and access dates among others without labels:\n\n`stat --terse {{path/to/file}}`\n\n- Display information about the filesystem where a specific file is located:\n\n`stat --file-system {{path/to/file}}`\n\n- Show only octal file permissions:\n\n`stat --format="%a %n" {{path/to/file}}`\n\n- Show the owner and group of a specific file:\n\n`stat --format="%U %G" {{path/to/file}}`\n\n- Show the size of a specific file in bytes:\n\n`stat --format="%s %n" {{path/to/file}}`\n
stdbuf
stdbuf(1) - Linux manual page STDBUF(1) User Commands STDBUF(1) NAME top stdbuf - Run COMMAND, with modified buffering operations for its standard streams. SYNOPSIS top stdbuf OPTION... COMMAND DESCRIPTION top Run COMMAND, with modified buffering operations for its standard streams. Mandatory arguments to long options are mandatory for short options too. -i, --input=MODE adjust standard input stream buffering -o, --output=MODE adjust standard output stream buffering -e, --error=MODE adjust standard error stream buffering --help display this help and exit --version output version information and exit If MODE is 'L' the corresponding stream will be line buffered. This option is invalid with standard input. If MODE is '0' the corresponding stream will be unbuffered. Otherwise MODE is a number which may be followed by one of the following: KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G,T,P,E,Z,Y,R,Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on. In this case the corresponding stream will be fully buffered with the buffer size set to MODE bytes. NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does for example) then that will override corresponding changes by 'stdbuf'. Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O, and are thus unaffected by 'stdbuf' settings. Exit status: 125 if the stdbuf command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise EXAMPLES top tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq This will immediately display unique entries from access.log BUGS top On GLIBC platforms, specifying a buffer size, i.e., using fully buffered mode will result in undefined operation.
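The MODE values described above can be exercised in a pipeline; a small sketch (the input text is illustrative):

```shell
# 'L' line-buffers stdout: uniq flushes each line as soon as it is produced,
# instead of block-buffering because its stdout is a pipe.
printf 'alpha\nalpha\nbeta\n' | stdbuf -oL uniq

# '0' unbuffers a stream; here, stderr of the invoked command.
stdbuf -e0 ls /nonexistent-path 2>/dev/null || true
```

Note the BUGS caveat above: on GLIBC platforms, specifying an explicit buffer size (fully buffered mode) results in undefined operation, so the 'L' and '0' modes are the reliably portable ones.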
AUTHOR top Written by Padraig Brady. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/stdbuf> or available locally via: info '(coreutils) stdbuf invocation' GNU coreutils 9.4 August 2023 STDBUF(1) Pages that refer to this page: setbuf(3)
# stdbuf\n\n> Run a command with modified buffering operations for its standard streams.\n> More information: <https://www.gnu.org/software/coreutils/stdbuf>.\n\n- Change `stdin` buffer size to 512 KiB:\n\n`stdbuf --input={{512K}} {{command}}`\n\n- Change `stdout` buffer to line-buffered:\n\n`stdbuf --output={{L}} {{command}}`\n\n- Change `stderr` buffer to unbuffered:\n\n`stdbuf --error={{0}} {{command}}`\n
strace
strace(1) - Linux manual page STRACE(1) General Commands Manual STRACE(1) NAME top strace - trace system calls and signals SYNOPSIS top strace [-ACdffhikkqqrtttTvVwxxyyYzZ] [-a column] [-b execve] [-e expr]... [-I n] [-o file] [-O overhead] [-p pid]... [-P path]... [-s strsize] [-S sortby] [-U columns] [-X format] [--seccomp-bpf] [--syscall-limit limit] [--secontext[=format]] [--tips[=format]] { -p pid | [-DDD] [-E var[=val]]... [-u username] command [args] } strace -c [-dfwzZ] [-b execve] [-e expr]... [-I n] [-O overhead] [-p pid]... [-P path]... [-S sortby] [-U columns] [--seccomp-bpf] [--syscall-limit limit] [--tips[=format]] { -p pid | [-DDD] [-E var[=val]]... [-u username] command [args] } strace --tips[=format] DESCRIPTION top In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option. strace is a useful diagnostic, instructional, and debugging tool. System administrators, diagnosticians and trouble-shooters will find it invaluable for solving problems with programs for which the source is not readily available since they do not need to be recompiled in order to trace them. Students, hackers and the overly-curious will find that a great deal can be learned about a system and its system calls by tracing even ordinary programs.
And programmers will find that since system calls and signals are events that happen at the user/kernel interface, a close examination of this boundary is very useful for bug isolation, sanity checking and attempting to capture race conditions. Each line in the trace contains the system call name, followed by its arguments in parentheses and its return value. An example from stracing the command "cat /dev/null" is: open("/dev/null", O_RDONLY) = 3 Errors (typically a return value of -1) have the errno symbol and error string appended. open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory) Signals are printed as signal symbol and decoded siginfo structure. An excerpt from stracing and interrupting the command "sleep 666" is: sigsuspend([] <unfinished ...> --- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=...} --- +++ killed by SIGINT +++ If a system call is being executed and meanwhile another one is being called from a different thread/process then strace will try to preserve the order of those events and mark the ongoing call as being unfinished. When the call returns it will be marked as resumed. [pid 28772] select(4, [3], NULL, NULL, NULL <unfinished ...> [pid 28779] clock_gettime(CLOCK_REALTIME, {tv_sec=1130322148, tv_nsec=3977000}) = 0 [pid 28772] <... select resumed> ) = 1 (in [3]) Interruption of a (restartable) system call by a signal delivery is processed differently as kernel terminates the system call and also arranges its immediate reexecution after the signal handler completes. read(0, 0x7ffff72cf5cf, 1) = ? ERESTARTSYS (To be restarted) --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} --- rt_sigreturn({mask=[]}) = 0 read(0, "", 1) = 0 Arguments are printed in symbolic form with passion. 
This example shows the shell performing ">>xyzzy" output redirection: open("xyzzy", O_WRONLY|O_APPEND|O_CREAT, 0666) = 3 Here, the second and the third argument of open(2) are decoded by breaking down the flag argument into its three bitwise-OR constituents and printing the mode value in octal by tradition. Where the traditional or native usage differs from ANSI or POSIX, the latter forms are preferred. In some cases, strace output is proven to be more readable than the source. Structure pointers are dereferenced and the members are displayed as appropriate. In most cases, arguments are formatted in the most C-like fashion possible. For example, the essence of the command "ls -l /dev/null" is captured as: lstat("/dev/null", {st_mode=S_IFCHR|0666, st_rdev=makedev(0x1, 0x3), ...}) = 0 Notice how the 'struct stat' argument is dereferenced and how each member is displayed symbolically. In particular, observe how the st_mode member is carefully decoded into a bitwise-OR of symbolic and numeric values. Also notice in this example that the first argument to lstat(2) is an input to the system call and the second argument is an output. Since output arguments are not modified if the system call fails, arguments may not always be dereferenced. For example, retrying the "ls -l" example with a non-existent file produces the following line: lstat("/foo/bar", 0xb004) = -1 ENOENT (No such file or directory) In this case the porch light is on but nobody is home. Syscalls unknown to strace are printed raw, with the unknown system call number printed in hexadecimal form and prefixed with "syscall_": syscall_0xbad(0x1, 0x2, 0x3, 0x4, 0x5, 0x6) = -1 ENOSYS (Function not implemented) Character pointers are dereferenced and printed as C strings. Non-printing characters in strings are normally represented by ordinary C escape codes. Only the first strsize (32 by default) bytes of strings are printed; longer strings have an ellipsis appended following the closing quote. 
Here is a line from "ls -l" where the getpwuid(3) library routine is reading the password file: read(3, "root::0:0:System Administrator:/"..., 1024) = 422 While structures are annotated using curly braces, pointers to basic types and arrays are printed using square brackets with commas separating the elements. Here is an example from the command id(1) on a system with supplementary group ids: getgroups(32, [100, 0]) = 2 On the other hand, bit-sets are also shown using square brackets, but set elements are separated only by a space. Here is the shell, preparing to execute an external command: sigprocmask(SIG_BLOCK, [CHLD TTOU], []) = 0 Here, the second argument is a bit-set of two signals, SIGCHLD and SIGTTOU. In some cases, the bit-set is so full that printing out the unset elements is more valuable. In that case, the bit- set is prefixed by a tilde like this: sigprocmask(SIG_UNBLOCK, ~[], NULL) = 0 Here, the second argument represents the full set of all signals. OPTIONS top General -e expr A qualifying expression which modifies which events to trace or how to trace them. The format of the expression is: [qualifier=][!]value[,value]... where qualifier is one of trace (or t), trace-fds (or trace-fd or fd or fds), abbrev (or a), verbose (or v), raw (or x), signal (or signals or s), read (or reads or r), write (or writes or w), fault, inject, status, quiet (or silent or silence or q), secontext, decode-fds (or decode-fd), decode-pids (or decode-pid), or kvm, and value is a qualifier-dependent symbol or number. The default qualifier is trace. Using an exclamation mark negates the set of values. For example, -e open means literally -e trace=open which in turn means trace only the open system call. By contrast, -e trace=!open means to trace every system call except open. In addition, the special values all and none have the obvious meanings. Note that some shells use the exclamation point for history expansion even inside quoted arguments. 
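The qualifier syntax described above can be sketched as follows, assuming strace is installed and ptrace is permitted (the log paths are illustrative):

```shell
# Trace only the openat and close syscalls made by ls; everything else is filtered out.
strace -e trace=openat,close -o /tmp/strace_open.log ls /tmp > /dev/null

# Negated set: trace every syscall EXCEPT openat.  Single quotes keep
# interactive shells from treating '!' as history expansion.
strace -e trace='!openat' -o /tmp/strace_noopen.log ls /tmp > /dev/null
```

Since trace is the default qualifier, the first command can also be written as `strace -e openat,close ...`.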
If so, you must escape the exclamation point with a backslash. Startup -E var=val --env=var=val Run command with var=val in its list of environment variables. -E var --env=var Remove var from the inherited list of environment variables before passing it on to the command. -p pid --attach=pid Attach to the process with the process ID pid and begin tracing. The trace may be terminated at any time by a keyboard interrupt signal (CTRL-C). strace will respond by detaching itself from the traced process(es) leaving it (them) to continue running. Multiple -p options can be used to attach to many processes in addition to command (which is optional if at least one -p option is given). Multiple process IDs, separated by either comma (,), space ( ), tab, or newline character, can be provided as an argument to a single -p option, so, for example, -p "$(pidof PROG)" and -p "$(pgrep PROG)" syntaxes are supported. -u username --user=username Run command with the user ID, group ID, and supplementary groups of username. This option is only useful when running as root and enables the correct execution of setuid and/or setgid binaries. Unless this option is used setuid and setgid programs are executed without effective privileges. --argv0=name Set argv[0] of the command being executed to name. Useful for tracing multi-call executables which interpret argv[0], such as busybox or kmod. Tracing -b syscall --detach-on=syscall If specified syscall is reached, detach from traced process. Currently, only execve(2) syscall is supported. This option is useful if you want to trace multi-threaded process and therefore require -f, but don't want to trace its (potentially very complex) children. -D --daemonize --daemonize=grandchild Run tracer process as a grandchild, not as the parent of the tracee. This reduces the visible effect of strace by keeping the tracee a direct child of the calling process. 
-DD --daemonize=pgroup --daemonize=pgrp Run tracer process as tracee's grandchild in a separate process group. In addition to reduction of the visible effect of strace, it also avoids killing of strace with kill(2) issued to the whole process group. -DDD --daemonize=session Run tracer process as tracee's grandchild in a separate session ("true daemonisation"). In addition to reduction of the visible effect of strace, it also avoids killing of strace upon session termination. -f --follow-forks Trace child processes as they are created by currently traced processes as a result of the fork(2), vfork(2) and clone(2) system calls. Note that -p PID -f will attach all threads of process PID if it is multi-threaded, not only the thread with thread_id = PID. --output-separately If the --output=filename option is in effect, each process's trace is written to filename.pid where pid is the numeric process id of each process. -ff --follow-forks --output-separately Combine the effects of --follow-forks and --output-separately options. This is incompatible with -c, since no per-process counts are kept. One might want to consider using strace-log-merge(1) to obtain a combined strace log view. -I interruptible --interruptible=interruptible Specify when strace can be interrupted by signals (such as pressing CTRL-C): 1 (anywhere): no signals are blocked; 2 (waiting): fatal signals are blocked while decoding a syscall (default); 3 (never): fatal signals are always blocked (default if -o FILE PROG); 4 (never_tstp): fatal signals and SIGTSTP (CTRL-Z) are always blocked (useful to make strace -o FILE PROG not stop on CTRL-Z; default if -D). --syscall-limit=limit Detach all tracees when limit number of syscalls have been captured. Syscalls filtered out via --trace, --trace-path or --status options are not considered when keeping track of the number of syscalls that are captured.
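A hedged sketch of following forks with per-process output files, assuming strace is installed and ptrace is permitted (the output prefix is illustrative):

```shell
# -ff follows children and writes each process's trace to /tmp/strace_out.PID,
# one file per process created by the traced shell.
rm -f /tmp/strace_out.*
strace -ff -o /tmp/strace_out sh -c 'ls / > /dev/null'

ls /tmp/strace_out.*   # one file per traced process
```

When the trace is taken with -tt timestamps, strace-log-merge(1) can interleave the per-PID files back into one chronological view.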
--kill-on-exit Set PTRACE_O_EXITKILL ptrace option to all tracee processes (which sends a SIGKILL signal to the tracee if the tracer exits) and do not detach them on cleanup so they will not be left running after the tracer exits. --kill-on-exit is not compatible with -p/--attach options. Filtering -e trace=syscall_set -e t=syscall_set --trace=syscall_set Trace only the specified set of system calls. syscall_set is defined as [!]value[,value], and value can be one of the following: syscall Trace specific syscall, specified by its name (see syscalls(2) for a reference, but also see NOTES). ?value Question mark before the syscall qualification allows suppression of error in case no syscalls matched the qualification provided. value@64 Limit the syscall specification described by value to 64-bit personality. value@32 Limit the syscall specification described by value to 32-bit personality. value@x32 Limit the syscall specification described by value to x32 personality. all Trace all system calls. /regex Trace only those system calls that match the regex. You can use POSIX Extended Regular Expression syntax (see regex(7)). %file file Trace all system calls which take a file name as an argument. You can think of this as an abbreviation for -e trace=open,stat,chmod,unlink,... which is useful for seeing what files the process is referencing. Furthermore, using the abbreviation will ensure that you don't accidentally forget to include a call like lstat(2) in the list. Betchya woulda forgot that one. The syntax without a preceding percent sign ("-e trace=file") is deprecated. %process process Trace system calls associated with process lifecycle (creation, exec, termination). The syntax without a preceding percent sign ("-e trace=process") is deprecated. %net %network network Trace all the network related system calls. The syntax without a preceding percent sign ("-e trace=network") is deprecated. %signal signal Trace all signal related system calls.
The syntax without a preceding percent sign ("-e trace=signal") is deprecated. %ipc ipc Trace all IPC related system calls. The syntax without a preceding percent sign ("-e trace=ipc") is deprecated. %desc desc Trace all file descriptor related system calls. The syntax without a preceding percent sign ("-e trace=desc") is deprecated. %memory memory Trace all memory mapping related system calls. The syntax without a preceding percent sign ("-e trace=memory") is deprecated. %creds Trace system calls that read or modify user and group identifiers or capability sets. %stat Trace stat syscall variants. %lstat Trace lstat syscall variants. %fstat Trace fstat, fstatat, and statx syscall variants. %%stat Trace syscalls used for requesting file status (stat, lstat, fstat, fstatat, statx, and their variants). %statfs Trace statfs, statfs64, statvfs, osf_statfs, and osf_statfs64 system calls. The same effect can be achieved with -e trace=/^(.*_)?statv?fs regular expression. %fstatfs Trace fstatfs, fstatfs64, fstatvfs, osf_fstatfs, and osf_fstatfs64 system calls. The same effect can be achieved with -e trace=/fstatv?fs regular expression. %%statfs Trace syscalls related to file system statistics (statfs-like, fstatfs-like, and ustat). The same effect can be achieved with -e trace=/statv?fs|fsstat|ustat regular expression. %clock Trace system calls that read or modify system clocks. %pure Trace syscalls that always succeed and have no arguments. Currently, this list includes arc_gettls(2), getdtablesize(2), getegid(2), getegid32(2), geteuid(2), geteuid32(2), getgid(2), getgid32(2), getpagesize(2), getpgrp(2), getpid(2), getppid(2), get_thread_area(2) (on architectures other than x86), gettid(2), get_tls(2), getuid(2), getuid32(2), getxgid(2), getxpid(2), getxuid(2), kern_features(2), and metag_get_tls(2) syscalls. The -c option is useful for determining which system calls might be useful to trace. 
For example, trace=open,close,read,write means to only trace those four system calls. Be careful when making inferences about the user/kernel boundary if only a subset of system calls are being monitored. The default is trace=all. -e trace-fd=set -e trace-fds=set -e fd=set -e fds=set --trace-fds=set Trace only the syscalls that operate on the specified subset of (non-negative) file descriptors. Note that usage of this option also filters out all the syscalls that do not operate on file descriptors at all. Applies in (inclusive) disjunction with the --trace-path option. -e signal=set -e signals=set -e s=set --signal=set Trace only the specified subset of signals. The default is signal=all. For example, signal=!SIGIO (or signal=!io) causes SIGIO signals not to be traced. -e status=set --status=set Print only system calls with the specified return status. The default is status=all. When using the status qualifier, because strace waits for system calls to return before deciding whether they should be printed or not, the traditional order of events may not be preserved anymore. If two system calls are executed by concurrent threads, strace will first print both the entry and exit of the first system call to exit, regardless of their respective entry time. The entry and exit of the second system call to exit will be printed afterwards. Here is an example when select(2) is called, but a different thread calls clock_gettime(2) before select(2) finishes: [pid 28779] 1130322148.939977 clock_gettime(CLOCK_REALTIME, {1130322148, 939977000}) = 0 [pid 28772] 1130322148.438139 select(4, [3], NULL, NULL, NULL) = 1 (in [3]) set can include the following elements: successful Trace system calls that returned without an error code. The -z option has the effect of status=successful. failed Trace system calls that returned with an error code. The -Z option has the effect of status=failed. unfinished Trace system calls that did not return. 
This might happen, for example, due to an execve call in a neighbour thread. unavailable Trace system calls that returned but strace failed to fetch the error status. detached Trace system calls for which strace detached before the return. -P path --trace-path=path Trace only system calls accessing path. Multiple -P options can be used to specify several paths. Applies in (inclusive) disjunction with the --trace-fds option. -z --successful-only Print only syscalls that returned without an error code. -Z --failed-only Print only syscalls that returned with an error code. Output format -a column --columns=column Align return values in a specific column (default column 40). -e abbrev=syscall_set -e a=syscall_set --abbrev=syscall_set Abbreviate the output from printing each member of large structures. The syntax of the syscall_set specification is the same as in the -e trace option. The default is abbrev=all. The -v option has the effect of abbrev=none. -e verbose=syscall_set -e v=syscall_set --verbose=syscall_set Dereference structures for the specified set of system calls. The syntax of the syscall_set specification is the same as in the -e trace option. The default is verbose=all. -e raw=syscall_set -e x=syscall_set --raw=syscall_set Print raw, undecoded arguments for the specified set of system calls. The syntax of the syscall_set specification is the same as in the -e trace option. This option has the effect of causing all arguments to be printed in hexadecimal. This is mostly useful if you don't trust the decoding or you need to know the actual numeric value of an argument. See also -X raw option. -e read=set -e reads=set -e r=set --read=set Perform a full hexadecimal and ASCII dump of all the data read from file descriptors listed in the specified set. For example, to see all input activity on file descriptors 3 and 5 use -e read=3,5. Note that this is independent from the normal tracing of the read(2) system call which is controlled by the option -e trace=read. 
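The path and status filters above compose with the syscall-class filters; a sketch under the same assumptions (strace installed, ptrace permitted, paths illustrative):

```shell
# Only syscalls that access /etc/hosts, however the process reaches it:
strace -P /etc/hosts -o /tmp/strace_path.log cat /etc/hosts > /dev/null

# Only failing file-related syscalls (-Z is shorthand for -e status=failed):
strace -Z -e trace=%file -o /tmp/strace_fail.log ls /nonexistent 2> /dev/null || true
```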
-e write=set -e writes=set -e w=set --write=set Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5 use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call which is controlled by the option -e trace=write. -e quiet=set -e silent=set -e silence=set -e q=set --quiet=set --silent=set --silence=set Suppress various information messages. The default is quiet=none. set can include the following elements: attach Suppress messages about attaching and detaching ("[ Process NNNN attached ]", "[ Process NNNN detached ]"). exit Suppress messages about process exits ("+++ exited with SSS +++"). path-resolution Suppress messages about resolution of paths provided via the -P option ("Requested path "..." resolved into "...""). personality Suppress messages about process personality changes ("[ Process PID=NNNN runs in PPP mode. ]"). thread-execve superseded Suppress messages about process being superseded by execve(2) in another thread ("+++ superseded by execve in pid NNNN +++"). -e decode-fds=set --decode-fds=set Decode various information associated with file descriptors. The default is decode-fds=none. set can include the following elements: path Print file paths. Also enables printing of tracee's current working directory when AT_FDCWD constant is used. socket Print socket protocol-specific information. dev Print character/block device numbers. pidfd Print PIDs associated with pidfd file descriptors. signalfd Print signal masks associated with signalfd file descriptors. -e decode-pids=set --decode-pids=set Decode various information associated with process IDs (and also thread IDs, process group IDs, and session IDs). The default is decode-pids=none. set can include the following elements: comm Print command names associated with thread or process IDs.
pidns Print thread, process, process group, and session IDs in strace's PID namespace if the tracee is in a different PID namespace. -e kvm=vcpu --kvm=vcpu Print the exit reason of kvm vcpu. Requires Linux kernel version 4.16.0 or higher. -i --instruction-pointer Print the instruction pointer at the time of the system call. -n --syscall-number Print the syscall number. -k --stack-traces[=symbol] Print the execution stack trace of the traced processes after each system call. -kk --stack-traces[=source] Print the execution stack trace and source code information of the traced processes after each system call. This option expects that the target program be compiled with appropriate debug options: "-g" (gcc), or "-g -gdwarf-aranges" (clang). -o filename --output=filename Write the trace output to the file filename rather than to stderr. filename.pid form is used if -ff option is supplied. If the argument begins with '|' or '!', the rest of the argument is treated as a command and all output is piped to it. This is convenient for piping the debugging output to a program without affecting the redirections of executed programs. The latter is not compatible with -ff option currently. -A --output-append-mode Open the file provided in the -o option in append mode. -q --quiet --quiet=attach,personality Suppress messages about attaching, detaching, and personality changes. This happens automatically when output is redirected to a file and the command is run directly instead of attaching. -qq --quiet=attach,personality,exit Suppress messages about attaching, detaching, personality changes, and process exit status. -qqq --quiet=all Suppress all suppressible messages (please refer to the -e quiet option description for the full list of suppressible messages). -r --relative-timestamps[=precision] Print a relative timestamp upon entry to each system call. This records the time difference between the beginning of successive system calls.
precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds), and allows setting the precision of time value being printed. Default is us (microseconds). Note that since -r option uses the monotonic clock time for measuring time difference and not the wall clock time, its measurements can differ from the difference in time reported by the -t option. -s strsize --string-limit=strsize Specify the maximum string size to print (the default is 32). Note that filenames are not considered strings and are always printed in full. --absolute-timestamps[=[[format:]format],[[precision:]precision]] --timestamps[=[[format:]format],[[precision:]precision]] Prefix each line of the trace with the wall clock time in the specified format with the specified precision. format can be one of the following: none No time stamp is printed. Can be used to override the previous setting. time Wall clock time (strftime(3) format string is %T). unix Number of seconds since the epoch (strftime(3) format string is %s). precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds). Default arguments for the option are format:time,precision:s. -t --absolute-timestamps Prefix each line of the trace with the wall clock time. -tt --absolute-timestamps=precision:us If given twice, the time printed will include the microseconds. -ttt --absolute-timestamps=format:unix,precision:us If given thrice, the time printed will include the microseconds and the leading portion will be printed as the number of seconds since the epoch. -T --syscall-times[=precision] Show the time spent in system calls. This records the time difference between the beginning and the end of each system call. precision can be one of s (for seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds), and allows setting the precision of time value being printed. Default is us (microseconds). 
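The timestamp options above differ in which clock they use and where the value is printed; a sketch, assuming strace is installed and ptrace is permitted (log paths illustrative):

```shell
strace -tt -o /tmp/ts_abs.log true                       # wall clock prefix, microsecond precision
strace --relative-timestamps=ns -o /tmp/ts_rel.log true  # monotonic delta between successive calls, in ns
strace -T -o /tmp/ts_dur.log true                        # time spent inside each call, appended to the line
```

As the -r description notes, relative timestamps use the monotonic clock, so their sums need not match the wall-clock differences shown by -t/-tt.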
-v --no-abbrev Print unabbreviated versions of environment, stat, termios, etc. calls. These structures are very common in calls and so the default behavior displays a reasonable subset of structure members. Use this option to get all of the gory details. --strings-in-hex[=option] Control usage of escape sequences with hexadecimal numbers in the printed strings. Normally (when no --strings-in-hex or -x option is supplied), escape sequences are used to print non-printable and non-ASCII characters (that is, characters with a character code less than 32 or greater than 127), or to disambiguate the output (so, for quotes and other characters that encase the printed string, for example, angle brackets, in case of file descriptor path output); for the former use case, characters (other than white-space characters that have a symbolic escape sequence defined in the C standard, that is, \t for a horizontal tab, \n for a newline, \v for a vertical tab, \f for a form feed page break, and \r for a carriage return) are printed using escape sequences with numbers that correspond to their byte values, with octal being the default number format. option can be one of the following: none Hexadecimal numbers are not used in the output at all. When there is a need to emit an escape sequence, octal numbers are used. non-ascii-chars Hexadecimal numbers are used instead of octal in the escape sequences. non-ascii Strings that contain non-ASCII characters are printed using escape sequences with hexadecimal numbers. all All strings are printed using escape sequences with hexadecimal numbers. When the option is supplied without an argument, all is assumed. -x --strings-in-hex=non-ascii Print all non-ASCII strings in hexadecimal string format. -xx --strings-in-hex[=all] Print all strings in hexadecimal string format. -X format --const-print-style=format Set the format for printing of named constants and flags. Supported format values are: raw Raw number output, without decoding.
abbrev Output a named constant or a set of flags instead of the raw number if they are found. This is the default strace behaviour. verbose Output both the raw value and the decoded string (as a comment). -y --decode-fds --decode-fds=path Print paths associated with file descriptor arguments and with the AT_FDCWD constant. -yy --decode-fds=all Print all available information associated with file descriptors: protocol-specific information associated with socket file descriptors, block/character device number associated with device file descriptors, and PIDs associated with pidfd file descriptors. --pidns-translation --decode-pids=pidns If strace and tracee are in different PID namespaces, print PIDs in strace's namespace, too. -Y --decode-pids=comm Print command names for PIDs. --secontext[=format] -e secontext=format When SELinux is available and is not disabled, print in square brackets SELinux contexts of processes, files, and descriptors. The format argument is a comma-separated list of items being one of the following: full Print the full context (user, role, type level and category). mismatch Also print the context recorded by the SELinux database in case the current context differs. The latter is printed after two exclamation marks (!!). The default value for --secontext is !full,mismatch which prints only the type instead of full context and doesn't check for context mismatches. Statistics -c --summary-only Count time, calls, and errors for each system call and report a summary on program exit, suppressing the regular output. This attempts to show system time (CPU time spent running in the kernel) independent of wall clock time. If -c is used with -f, only aggregate totals for all traced processes are kept. -C --summary Like -c but also print regular output while processes are running. -O overhead --summary-syscall-overhead=overhead Set the overhead for tracing system calls to overhead. 
This is useful for overriding the default heuristic for guessing how much time is spent in mere measuring when timing system calls using the -c option. The accuracy of the heuristic can be gauged by timing a given program run without tracing (using time(1)) and comparing the accumulated system call time to the total produced using -c. The format of overhead specification is described in section Time specification format description. -S sortby --summary-sort-by=sortby Sort the output of the histogram printed by the -c option by the specified criterion. Legal values are time (or time-percent or time-total or total-time), min-time (or shortest or time-min), max-time (or longest or time-max), avg-time (or time-avg), calls (or count), errors (or error), name (or syscall or syscall-name), and nothing (or none); default is time. -U columns --summary-columns=columns Configure a set (and order) of columns being shown in the call summary. The columns argument is a comma-separated list with items being one of the following: time-percent (or time) Percentage of cumulative time consumed by a specific system call. total-time (or time-total) Total system (or wall clock, if -w option is provided) time consumed by a specific system call. min-time (or shortest or time-min) Minimum observed call duration. max-time (or longest or time-max) Maximum observed call duration. avg-time (or time-avg) Average call duration. calls (or count) Call count. errors (or error) Error count. name (or syscall or syscall-name) Syscall name. The default value is time-percent,total-time,avg-time,calls,errors,name. If the name field is not supplied explicitly, it is added as the last column. -w --summary-wall-clock Summarise the time difference between the beginning and end of each system call. The default is to summarise the system time. 
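The statistics options above can be sketched as follows (a minimal example, assuming strace is available; the output file path is arbitrary):

```shell
# Count calls and errors for "true", sort the summary by call count,
# and write it to a file instead of stderr:
strace -c -S calls -o /tmp/summary.txt true

# The summary is a table with columns such as
# "% time  seconds  usecs/call  calls  errors  syscall".
cat /tmp/summary.txt
```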
Tampering -e inject=syscall_set[:error=errno|:retval=value][:signal=sig] [:syscall=syscall][:delay_enter=delay][:delay_exit=delay] [:poke_enter=@argN=DATAN,@argM=DATAM...] [:poke_exit=@argN=DATAN,@argM=DATAM...][:when=expr] --inject=syscall_set[:error=errno|:retval=value][:signal=sig] [:syscall=syscall][:delay_enter=delay][:delay_exit=delay] [:poke_enter=@argN=DATAN,@argM=DATAM...] [:poke_exit=@argN=DATAN,@argM=DATAM...][:when=expr] Perform syscall tampering for the specified set of syscalls. The syntax of the syscall_set specification is the same as in the -e trace option. At least one of error, retval, signal, delay_enter, delay_exit, poke_enter, or poke_exit options has to be specified. error and retval are mutually exclusive. If :error=errno option is specified, a fault is injected into a syscall invocation: the syscall number is replaced by -1 which corresponds to an invalid syscall (unless a syscall is specified with :syscall= option), and the error code is specified using a symbolic errno value like ENOSYS or a numeric value within 1..4095 range. If :retval=value option is specified, success injection is performed: the syscall number is replaced by -1, but a bogus success value is returned to the callee. If :signal=sig option is specified with either a symbolic value like SIGSEGV or a numeric value within 1..SIGRTMAX range, that signal is delivered on entering every syscall specified by the set. If :delay_enter=delay or :delay_exit=delay options are specified, delay injection is performed: the tracee is delayed by time period specified by delay on entering or exiting the syscall, respectively. The format of delay specification is described in section Time specification format description. If :poke_enter=@argN=DATAN,@argM=DATAM... or :poke_exit=@argN=DATAN,@argM=DATAM... 
options are specified, the tracee's memory at the locations pointed to by system call arguments argN and argM (going from arg1 to arg7) is overwritten by the data DATAN and DATAM (specified in hexadecimal format; for example :poke_enter=@arg1=0000DEAD0000BEEF). :poke_enter modifies memory on syscall entry, and :poke_exit on exit. If the :signal=sig option is specified without :error=errno, :retval=value, or :delay_{enter,exit}=usecs options, then only the signal sig is delivered, without a syscall fault or delay injection. Conversely, an :error=errno or :retval=value option without :delay_enter=delay, :delay_exit=delay, or :signal=sig options injects a fault without delivering a signal or injecting a delay, and so on. If the :signal=sig option is specified together with :error=errno or :retval=value, then both the injection of a fault or success and the signal delivery are performed. If the :syscall=syscall option is specified, the corresponding syscall with no side effects is injected instead of -1. Currently, only "pure" (see the -e trace=%pure description) syscalls can be specified there. Unless a :when=expr subexpression is specified, an injection is made into every invocation of each syscall from the set. The format of the subexpression is: first[..last][+[step]] Number first stands for the first invocation number in the range, number last stands for the last invocation number in the range, and step stands for the step between two consecutive invocations. The following combinations are useful: first For every syscall from the set, perform an injection for the syscall invocation number first only. first..last For every syscall from the set, perform an injection for the syscall invocation number first and all subsequent invocations until the invocation number last (inclusive). first+ For every syscall from the set, perform injections for the syscall invocation number first and all subsequent invocations.
first..last+ For every syscall from the set, perform injections for the syscall invocation number first and all subsequent invocations until the invocation number last (inclusive). first+step For every syscall from the set, perform injections for syscall invocations number first, first+step, first+step+step, and so on. first..last+step Same as the previous, but consider only syscall invocations with numbers up to last (inclusive). For example, to fail the third and all subsequent chdir syscalls with ENOENT, use -e inject=chdir:error=ENOENT:when=3+. The valid range for numbers first and step is 1..65535, and for number last is 1..65534. An injection expression can contain only one error= or retval= specification, and only one signal= specification. If an injection expression contains multiple when= specifications, the last one takes precedence. Accounting of syscalls that are subject to injection is done per syscall and per tracee. Specification of syscall injection can be combined with other syscall filtering options, for example, -P /dev/urandom -e inject=file:error=ENOENT. -e fault=syscall_set[:error=errno][:when=expr] --fault=syscall_set[:error=errno][:when=expr] Perform syscall fault injection for the specified set of syscalls. This is equivalent to the more generic -e inject= expression with the default value of the errno option set to ENOSYS. Miscellaneous -d --debug Show some debugging output of strace itself on the standard error. -F This option is deprecated. It is retained for backward compatibility only and may be removed in future releases. Multiple instances of the -F option are still equivalent to a single -f, and -F is ignored entirely if used along with one or more instances of the -f option. -h --help Print the help summary. --seccomp-bpf Try to enable use of seccomp-bpf (see seccomp(2)) to have ptrace(2)-stops only when system calls that are being traced occur in the traced processes. This option has no effect unless -f/--follow-forks is also specified.
--seccomp-bpf is not compatible with --syscall-limit and -b/--detach-on options. It is also not applicable to processes attached using -p/--attach option. An attempt to enable system calls filtering using seccomp-bpf may fail for various reasons, e.g. there are too many system calls to filter, the seccomp API is not available, or strace itself is being traced. In cases when seccomp-bpf filter setup failed, strace proceeds as usual and stops traced processes on every system call. When --seccomp-bpf is activated and -p/--attach option is not used, --kill-on-exit option is activated as well. --tips[=[[id:]id],[[format:]format]] Show strace tips, tricks, and tweaks before exit. id can be a non-negative integer number, which enables printing of specific tip, trick, or tweak (these ID are not guaranteed to be stable), or random (the default), in which case a random tip is printed. format can be one of the following: none No tip is printed. Can be used to override the previous setting. compact Print the tip just big enough to contain all the text. full Print the tip in its full glory. Default is id:random,format:compact. -V --version Print the version number of strace. Multiple instances of the option beyond specific threshold tend to increase Strauss awareness. Time specification format description Time values can be specified as a decimal floating point number (in a format accepted by strtod(3)), optionally followed by one of the following suffices that specify the unit of time: s (seconds), ms (milliseconds), us (microseconds), or ns (nanoseconds). If no suffix is specified, the value is interpreted as microseconds. The described format is used for -O, -e inject=delay_enter, and -e inject=delay_exit options. DIAGNOSTICS top When command exits, strace exits with the same exit status. If command is terminated by a signal, strace terminates itself with the same signal, so that strace can be used as a wrapper process transparent to the invoking parent process. 
Note that the parent-child relationship (signal stop notifications, getppid(2) value, etc.) between the traced process and its parent is not preserved unless -D is used. When using -p without a command, the exit status of strace is zero unless no process has been attached or there was an unexpected error in doing the tracing. SETUID INSTALLATION top If strace is installed setuid to root then the invoking user will be able to attach to and trace processes owned by any user. In addition setuid and setgid programs will be executed and traced with the correct effective privileges. Since only users trusted with full root privileges should be allowed to do these things, it only makes sense to install strace as setuid to root when the users who can execute it are restricted to those users who have this trust. For example, it makes sense to install a special version of strace with mode 'rwsr-xr--', user root and group trace, where members of the trace group are trusted users. If you do use this feature, please remember to install a regular non-setuid version of strace for ordinary users to use. MULTIPLE PERSONALITIES SUPPORT top On some architectures, strace supports decoding of syscalls for processes that use a different ABI than the one strace uses. Specifically, in addition to decoding the native ABI, strace can decode the following ABIs on the following architectures: x86_64: i386, x32 [1] or i386 [2]; AArch64: ARM 32-bit EABI; PowerPC 64-bit [3]: PowerPC 32-bit; s390x: s390; SPARC 64-bit: SPARC 32-bit; TILE 64-bit: TILE 32-bit. ([1] When strace is built as an x86_64 application; [2] when strace is built as an x32 application; [3] big endian only.) This support is optional and relies on the ability to generate and parse structure definitions at build time.
Please refer to the output of the strace -V command in order to figure out what support is available in your strace build ("non-native" refers to an ABI that differs from the ABI strace has): m32-mpers strace can trace and properly decode non-native 32-bit binaries. no-m32-mpers strace can trace, but cannot properly decode non-native 32-bit binaries. mx32-mpers strace can trace and properly decode non-native 32-on-64-bit binaries. no-mx32-mpers strace can trace, but cannot properly decode non-native 32-on-64-bit binaries. If the output contains neither m32-mpers nor no-m32-mpers, then decoding of non-native 32-bit binaries is not implemented at all or not applicable. Likewise, if the output contains neither mx32-mpers nor no- mx32-mpers, then decoding of non-native 32-on-64-bit binaries is not implemented at all or not applicable. NOTES top It is a pity that so much tracing clutter is produced by systems employing shared libraries. It is instructive to think about system call inputs and outputs as data-flow across the user/kernel boundary. Because user-space and kernel-space are separate and address-protected, it is sometimes possible to make deductive inferences about process behavior using inputs and outputs as propositions. In some cases, a system call will differ from the documented behavior or have a different name. For example, the faccessat(2) system call does not have flags argument, and the setrlimit(2) library function uses prlimit64(2) system call on modern (2.6.38+) kernels. These discrepancies are normal but idiosyncratic characteristics of the system call interface and are accounted for by C library wrapper functions. Some system calls have different names in different architectures and personalities. In these cases, system call filtering and printing uses the names that match corresponding __NR_* kernel macros of the tracee's architecture and personality. 
There are two exceptions to this general rule: the arm_fadvise64_64(2) ARM syscall and the xtensa_fadvise64_64(2) Xtensa syscall are filtered and printed as fadvise64_64(2). On x32, syscalls that are intended to be used by 64-bit processes and not x32 ones (for example, readv(2), which has syscall number 19 on x86_64, while its x32 counterpart has syscall number 515), but are called with the __X32_SYSCALL_BIT flag set, are designated with the #64 suffix. On some platforms a process that is attached to with the -p option may observe a spurious EINTR return from the current system call that is not restartable. (Ideally, all system calls should be restarted on strace attach, making the attach invisible to the traced process, but a few system calls aren't. Arguably, every instance of such behavior is a kernel bug.) This may have an unpredictable effect on the process if the process takes no action to restart the system call. As strace executes the specified command directly and does not employ a shell for that, scripts without a shebang that usually run just fine when invoked by a shell fail to execute with an ENOEXEC error. It is advisable to manually supply a shell as the command, with the script as its argument. BUGS top Programs that use the setuid bit do not have effective user ID privileges while being traced. A traced process runs slowly (but check out the --seccomp-bpf option). Unless the --kill-on-exit option is used (or the --seccomp-bpf option is used in a way that implies --kill-on-exit), traced processes which are descended from command may be left running after an interrupt signal (CTRL-C).
In 1993, Rick Sladkey merged strace 2.5 for SunOS and the second release of strace for Linux, added many of the features of truss(1) from SVR4, and produced an strace that worked on both platforms. In 1994 Rick ported strace to SVR4 and Solaris and wrote the automatic configuration support. In 1995 he ported strace to Irix and became tired of writing about himself in the third person. Beginning with 1996, strace was maintained by Wichert Akkerman. During his tenure, strace development migrated to CVS; ports to FreeBSD and many architectures on Linux (including ARM, IA-64, MIPS, PA-RISC, PowerPC, s390, SPARC) were introduced. In 2002, the burden of strace maintainership was transferred to Roland McGrath. Since then, strace gained support for several new Linux architectures (AMD64, s390x, SuperH), bi-architecture support for some of them, and received numerous additions and improvements in syscalls decoders on Linux; strace development migrated to Git during that period. Since 2009, strace is actively maintained by Dmitry Levin. strace gained support for AArch64, ARC, AVR32, Blackfin, Meta, Nios II, OpenRISC 1000, RISC-V, Tile/TileGx, Xtensa architectures since that time. In 2012, unmaintained and apparently broken support for non-Linux operating systems was removed. Also, in 2012 strace gained support for path tracing and file descriptor path decoding. In 2014, support for stack traces printing was added. In 2016, syscall fault injection was implemented. For the additional information, please refer to the NEWS file and strace repository commit log. REPORTING BUGS top Problems with strace should be reported to the strace mailing list mailto:strace-devel@lists.strace.io. SEE ALSO top strace-log-merge(1), ltrace(1), perf-trace(1), trace-cmd(1), time(1), ptrace(2), syscall(2), proc(5), signal(7) strace Home Page https://strace.io/ AUTHORS top The complete list of strace contributors can be found in the CREDITS file. 
COLOPHON top This page is part of the strace (system call tracer) project. Information about the project can be found at http://strace.io/. If you have a bug report for this manual page, send it to strace-devel@lists.sourceforge.net. This page was obtained from the project's upstream Git repository https://github.com/strace/strace.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org strace 6.6.0.29.9c5b2 2023-11-21 STRACE(1) Pages that refer to this page: ltrace(1), strace-log-merge(1), ptrace(2), seccomp(2), proc(5), capabilities(7), mount_namespaces(7), vdso(7), ovs-ctl(8), systemd-sysext(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# strace\n\n> Troubleshooting tool for tracing system calls.\n> More information: <https://manned.org/strace>.\n\n- Start tracing a specific [p]rocess by its PID:\n\n`strace -p {{pid}}`\n\n- Trace a [p]rocess and filt[e]r output by system call:\n\n`strace -p {{pid}} -e {{system_call,system_call2,...}}`\n\n- Count time, calls, and errors for each system call and report a summary on program exit:\n\n`strace -p {{pid}} -c`\n\n- Show the [T]ime spent in every system call:\n\n`strace -p {{pid}} -T`\n\n- Start tracing a program by executing it:\n\n`strace {{program}}`\n\n- Start tracing file operations of a program:\n\n`strace -e trace=file {{program}}`\n\n- Start tracing network operations of a program as well as all its [f]orked and child processes, saving the [o]utput to a file:\n\n`strace -f -e trace=network -o {{trace.txt}} {{program}}`\n
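The tampering options described earlier can be sketched with a small fault-injection example. This is a hypothetical demo: it assumes a reasonably recent strace with -e inject support and uses /bin/pwd as an arbitrary tracee.

```shell
# Make every getcwd syscall fail with EPERM inside the traced process.
# /bin/pwd may or may not recover via a fallback, so ignore its exit status.
strace -e trace=getcwd -e inject=getcwd:error=EPERM -o /tmp/inject.txt /bin/pwd || true

# Injected calls are marked "(INJECTED)" in the trace:
grep INJECTED /tmp/inject.txt
```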
strings
strings(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON STRINGS(1) GNU Development Tools STRINGS(1) NAME top strings - print the sequences of printable characters in files SYNOPSIS top strings [-afovV] [-min-len] [-n min-len] [--bytes=min-len] [-t radix] [--radix=radix] [-e encoding] [--encoding=encoding] [-U method] [--unicode=method] [-] [--all] [--print-file-name] [-T bfdname] [--target=bfdname] [-w] [--include-all-whitespace] [-s] [--output-separator sep_string] [--help] [--version] file... DESCRIPTION top For each file given, GNU strings prints the printable character sequences that are at least 4 characters long (or the number given with the options below) and are followed by an unprintable character. Depending upon how the strings program was configured it will default to either displaying all the printable sequences that it can find in each file, or only those sequences that are in loadable, initialized data sections. If the file type is unrecognizable, or if strings is reading from stdin then it will always display all of the printable sequences that it can find. For backwards compatibility any file that occurs after a command-line option of just - will also be scanned in full, regardless of the presence of any -d option. strings is mainly useful for determining the contents of non-text files. OPTIONS top -a --all - Scan the whole file, regardless of what sections it contains or whether those sections are loaded or initialized. Normally this is the default behaviour, but strings can be configured so that -d is the default instead. The - option is position dependent and forces strings to perform full scans of any file that is mentioned after the - on the command line, even if the -d option has been specified. -d --data Only print strings from initialized, loaded data sections in the file.
This may reduce the amount of garbage in the output, but it also exposes the strings program to any security flaws that may be present in the BFD library used to scan and load sections. Strings can be configured so that this option is the default behaviour. In such cases the -a option can be used to avoid using the BFD library and instead just print all of the strings found in the file. -f --print-file-name Print the name of the file before each string. --help Print a summary of the program usage on the standard output and exit. -min-len -n min-len --bytes=min-len Print sequences of displayable characters that are at least min-len characters long. If not specified a default minimum length of 4 is used. The distinction between displayable and non-displayable characters depends upon the setting of the -e and -U options. Sequences are always terminated at control characters such as new-line and carriage-return, but not the tab character. -o Like -t o. Some other versions of strings have -o act like -t d instead. Since we can not be compatible with both ways, we simply chose one. -t radix --radix=radix Print the offset within the file before each string. The single character argument specifies the radix of the offset---o for octal, x for hexadecimal, or d for decimal. -e encoding --encoding=encoding Select the character encoding of the strings that are to be found. Possible values for encoding are: s = single-7-bit-byte characters (default), S = single-8-bit-byte characters, b = 16-bit bigendian, l = 16-bit littleendian, B = 32-bit bigendian, L = 32-bit littleendian. Useful for finding wide character strings. (l and b apply to, for example, Unicode UTF-16/UCS-2 encodings). -U [d|i|l|e|x|h] --unicode=[default|invalid|locale|escape|hex|highlight] Controls the display of UTF-8 encoded multibyte characters in strings. The default (--unicode=default) is to give them no special treatment, and instead rely upon the setting of the --encoding option. 
The other values for this option automatically enable --encoding=S. The --unicode=invalid option treats them as non-graphic characters and hence not part of a valid string. All the remaining options treat them as valid string characters. The --unicode=locale option displays them in the current locale, which may or may not support UTF-8 encoding. The --unicode=hex option displays them as hex byte sequences enclosed between <> characters. The --unicode=escape option displays them as escape sequences (\uxxxx) and the --unicode=highlight option displays them as escape sequences highlighted in red (if supported by the output device). The colouring is intended to draw attention to the presence of unicode sequences where they might not be expected. -T bfdname --target=bfdname Specify an object code format other than your system's default format. -v -V --version Print the program version number on the standard output and exit. -w --include-all-whitespace By default tab and space characters are included in the strings that are displayed, but other whitespace characters, such as newlines and carriage returns, are not. The -w option changes this so that all whitespace characters are considered to be part of a string. -s --output-separator By default, output strings are delimited by a new-line. This option allows you to supply any string to be used as the output record separator. Useful with --include-all-whitespace where strings may contain new-lines internally. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash.
The file may itself contain additional @file options; any such options will be processed recursively. SEE ALSO top ar(1), nm(1), objdump(1), ranlib(1), readelf(1) and the Info entries for binutils. COPYRIGHT top Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". COLOPHON top This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. binutils-2.41 2023-12-22 STRINGS(1) Pages that refer to this page: elf(5)
# strings\n\n> Find printable strings in an object file or binary.\n> More information: <https://manned.org/strings>.\n\n- Print all strings in a binary:\n\n`strings {{path/to/file}}`\n\n- Limit results to strings at least n characters long:\n\n`strings -n {{n}} {{path/to/file}}`\n\n- Prefix each result with its offset within the file:\n\n`strings -t d {{path/to/file}}`\n\n- Prefix each result with its offset within the file in hexadecimal:\n\n`strings -t x {{path/to/file}}`\n
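To make the -e/--encoding behaviour described above concrete, here is a minimal sketch (the file path is arbitrary; it assumes GNU strings is installed):

```shell
# Build a small binary file holding an ASCII string and a UTF-16LE string:
printf 'plain-ascii-text\0' > /tmp/mixed.bin
printf 'w\0i\0d\0e\0-\0t\0e\0x\0t\0' >> /tmp/mixed.bin

strings /tmp/mixed.bin        # default 7-bit scan: finds "plain-ascii-text"
strings -e l /tmp/mixed.bin   # 16-bit little-endian scan: finds "wide-text"
strings -t x /tmp/mixed.bin   # prefix each match with its hexadecimal offset
```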
strip
strip(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SEE ALSO | COPYRIGHT | COLOPHON STRIP(1) GNU Development Tools STRIP(1) NAME top strip - discard symbols and other data from object files SYNOPSIS top strip [-F bfdname |--target=bfdname] [-I bfdname |--input-target=bfdname] [-O bfdname |--output-target=bfdname] [-s|--strip-all] [-S|-g|-d|--strip-debug] [--strip-dwo] [-K symbolname|--keep-symbol=symbolname] [-M|--merge-notes][--no-merge-notes] [-N symbolname |--strip-symbol=symbolname] [-w|--wildcard] [-x|--discard-all] [-X |--discard-locals] [-R sectionname |--remove-section=sectionname] [--keep-section=sectionpattern] [--remove-relocations=sectionpattern] [--strip-section-headers] [-o file] [-p|--preserve-dates] [-D|--enable-deterministic-archives] [-U|--disable-deterministic-archives] [--keep-section-symbols] [--keep-file-symbols] [--only-keep-debug] [-v |--verbose] [-V|--version] [--help] [--info] objfile... DESCRIPTION top GNU strip discards all symbols from object files objfile. The list of object files may include archives. At least one object file must be given. strip modifies the files named in its argument, rather than writing modified copies under different names. OPTIONS top -F bfdname --target=bfdname Treat the original objfile as a file with the object code format bfdname, and rewrite it in the same format. --help Show a summary of the options to strip and exit. --info Display a list showing all architectures and object formats available. -I bfdname --input-target=bfdname Treat the original objfile as a file with the object code format bfdname. -O bfdname --output-target=bfdname Replace objfile with a file in the output format bfdname. -R sectionname --remove-section=sectionname Remove any section named sectionname from the output file, in addition to whatever sections would otherwise be removed. This option may be given more than once.
Note that using this option inappropriately may make the output file unusable. The wildcard character * may be given at the end of sectionname. If so, then any section starting with sectionname will be removed. If the first character of sectionpattern is the exclamation point (!) then matching sections will not be removed even if an earlier use of --remove-section on the same command line would otherwise remove it. For example: --remove-section=.text.* --remove-section=!.text.foo will remove all sections matching the pattern '.text.*', but will not remove the section '.text.foo'. --keep-section=sectionpattern When removing sections from the output file, keep sections that match sectionpattern. --remove-relocations=sectionpattern Remove relocations from the output file for any section matching sectionpattern. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. Wildcard characters are accepted in sectionpattern. For example: --remove-relocations=.text.* will remove the relocations for all sections matching the pattern '.text.*'. If the first character of sectionpattern is the exclamation point (!) then matching sections will not have their relocations removed even if an earlier use of --remove-relocations on the same command line would otherwise cause the relocations to be removed. For example: --remove-relocations=.text.* --remove-relocations=!.text.foo will remove all relocations for sections matching the pattern '.text.*', but will not remove relocations for the section '.text.foo'. --strip-section-headers Strip section headers. This option is specific to ELF files. Implies --strip-all and --merge-notes. -s --strip-all Remove all symbols. -g -S -d --strip-debug Remove debugging symbols only. --strip-dwo Remove the contents of all DWARF .dwo sections, leaving the remaining debugging sections and all symbols intact. See the description of this option in the objcopy section for more information.
--strip-unneeded Remove all symbols that are not needed for relocation processing in addition to debugging symbols and sections stripped by --strip-debug. -K symbolname --keep-symbol=symbolname When stripping symbols, keep symbol symbolname even if it would normally be stripped. This option may be given more than once. -M --merge-notes --no-merge-notes For ELF files, attempt (or do not attempt) to reduce the size of any SHT_NOTE type sections by removing duplicate notes. The default is to attempt this reduction unless stripping debug or DWO information. -N symbolname --strip-symbol=symbolname Remove symbol symbolname from the source file. This option may be given more than once, and may be combined with strip options other than -K. -o file Put the stripped output in file, rather than replacing the existing file. When this argument is used, only one objfile argument may be specified. -p --preserve-dates Preserve the access and modification dates of the file. -D --enable-deterministic-archives Operate in deterministic mode. When copying archive members and writing the archive index, use zero for UIDs, GIDs, timestamps, and use consistent file modes for all files. If binutils was configured with --enable-deterministic-archives, then this mode is on by default. It can be disabled with the -U option, below. -U --disable-deterministic-archives Do not operate in deterministic mode. This is the inverse of the -D option, above: when copying archive members and writing the archive index, use their actual UID, GID, timestamp, and file mode values. This is the default unless binutils was configured with --enable-deterministic-archives. -w --wildcard Permit regular expressions in symbolnames used in other command line options. The question mark (?), asterisk (*), backslash (\) and square brackets ([]) operators can be used anywhere in the symbol name. If the first character of the symbol name is the exclamation point (!) then the sense of the switch is reversed for that symbol. 
For example: -w -K !foo -K fo* would cause strip to only keep symbols that start with the letters "fo", but to discard the symbol "foo". -x --discard-all Remove non-global symbols. -X --discard-locals Remove compiler-generated local symbols. (These usually start with L or ..) --keep-section-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying section names, which would otherwise get stripped. --keep-file-symbols When stripping a file, perhaps with --strip-debug or --strip-unneeded, retain any symbols specifying source file names, which would otherwise get stripped. --only-keep-debug Strip a file, emptying the contents of any sections that would not be stripped by --strip-debug and leaving the debugging sections intact. In ELF files, this preserves all the note sections in the output as well. Note - the section headers of the stripped sections are preserved, including their sizes, but the contents of the section are discarded. The section headers are preserved so that other tools can match up the debuginfo file with the real executable, even if that executable has been relocated to a different address space. The intention is that this option will be used in conjunction with --add-gnu-debuglink to create a two part executable. One a stripped binary which will occupy less space in RAM and in a distribution and the second a debugging information file which is only needed if debugging abilities are required. The suggested procedure to create these files is as follows: 1. Link the executable as normal. Assuming that it is called "foo" then... 2. Run "objcopy --only-keep-debug foo foo.dbg" to create a file containing the debugging info. 3. Run "objcopy --strip-debug foo" to create a stripped executable. 4. Run "objcopy --add-gnu-debuglink=foo.dbg foo" to add a link to the debugging info into the stripped executable. Note: the choice of ".dbg" as an extension for the debug info file is arbitrary. 
Also the "--only-keep-debug" step is optional. You could instead do this: 1. Link the executable as normal. 2. Copy "foo" to "foo.full". 3. Run "strip --strip-debug foo". 4. Run "objcopy --add-gnu-debuglink=foo.full foo". i.e., the file pointed to by the --add-gnu-debuglink can be the full executable. It does not have to be a file created by the --only-keep-debug switch. Note: this switch is only intended for use on fully linked files. It does not make sense to use it on object files where the debugging information may be incomplete. Besides, the gnu_debuglink feature currently only supports the presence of one filename containing debugging information, not multiple filenames on a one-per-object-file basis. -V --version Show the version number for strip. -v --verbose Verbose output: list all object files modified. In the case of archives, strip -v lists all members of the archive. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. SEE ALSO top the Info entries for binutils. COPYRIGHT top Copyright (c) 1991-2023 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". 
COLOPHON top This page is part of the binutils (a collection of tools for working with executable binaries) project. Information about the project can be found at http://www.gnu.org/software/binutils/. If you have a bug report for this manual page, see http://sourceware.org/bugzilla/enter_bug.cgi?product=binutils. This page was obtained from the tarball binutils-2.41.tar.gz fetched from https://ftp.gnu.org/gnu/binutils/ on 2023-12-22. If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org binutils-2.41 2023-12-22 STRIP(1) Pages that refer to this page: elf(5), warning::debuginfo(7stap), warning::symbols(7stap) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# strip\n\n> Discard symbols from executables or object files.\n> More information: <https://manned.org/strip>.\n\n- Replace the input file with its stripped version:\n\n`strip {{path/to/file}}`\n\n- Strip symbols from a file, saving the output to a specific file:\n\n`strip {{path/to/input_file}} -o {{path/to/output_file}}`\n\n- Strip debug symbols only:\n\n`strip --strip-debug {{path/to/file.o}}`\n
stty
stty(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training stty(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON STTY(1) User Commands STTY(1) NAME top stty - change and print terminal line settings SYNOPSIS top stty [-F DEVICE | --file=DEVICE] [SETTING]... stty [-F DEVICE | --file=DEVICE] [-a|--all] stty [-F DEVICE | --file=DEVICE] [-g|--save] DESCRIPTION top Print or change terminal characteristics. Mandatory arguments to long options are mandatory for short options too. -a, --all print all current settings in human-readable form -g, --save print all current settings in a stty-readable form -F, --file=DEVICE open and use the specified DEVICE instead of stdin --help display this help and exit --version output version information and exit Optional - before SETTING indicates negation. An * marks non-POSIX settings. The underlying system defines which settings are available. Special characters: * discard CHAR CHAR will toggle discarding of output eof CHAR CHAR will send an end of file (terminate the input) eol CHAR CHAR will end the line * eol2 CHAR alternate CHAR for ending the line erase CHAR CHAR will erase the last character typed intr CHAR CHAR will send an interrupt signal kill CHAR CHAR will erase the current line * lnext CHAR CHAR will enter the next character quoted quit CHAR CHAR will send a quit signal * rprnt CHAR CHAR will redraw the current line start CHAR CHAR will restart the output after stopping it stop CHAR CHAR will stop the output susp CHAR CHAR will send a terminal stop signal * swtch CHAR CHAR will switch to a different shell layer * werase CHAR CHAR will erase the last word typed Special settings: N set the input and output speeds to N bauds * cols N tell the kernel that the terminal has N columns * columns N same as cols N * [-]drain wait for transmission before applying settings (on by default) ispeed N set the input speed to N * line N use 
line discipline N min N with -icanon, set N characters minimum for a completed read ospeed N set the output speed to N * rows N tell the kernel that the terminal has N rows * size print the number of rows and columns according to the kernel speed print the terminal speed time N with -icanon, set read timeout of N tenths of a second Control settings: [-]clocal disable modem control signals [-]cread allow input to be received * [-]crtscts enable RTS/CTS handshaking csN set character size to N bits, N in [5..8] [-]cstopb use two stop bits per character (one with '-') [-]hup send a hangup signal when the last process closes the tty [-]hupcl same as [-]hup [-]parenb generate parity bit in output and expect parity bit in input [-]parodd set odd parity (or even parity with '-') * [-]cmspar use "stick" (mark/space) parity Input settings: [-]brkint breaks cause an interrupt signal [-]icrnl translate carriage return to newline [-]ignbrk ignore break characters [-]igncr ignore carriage return [-]ignpar ignore characters with parity errors * [-]imaxbel beep and do not flush a full input buffer on a character [-]inlcr translate newline to carriage return [-]inpck enable input parity checking [-]istrip clear high (8th) bit of input characters * [-]iutf8 assume input characters are UTF-8 encoded * [-]iuclc translate uppercase characters to lowercase * [-]ixany let any character restart output, not only start character [-]ixoff enable sending of start/stop characters [-]ixon enable XON/XOFF flow control [-]parmrk mark parity errors (with a 255-0-character sequence) [-]tandem same as [-]ixoff Output settings: * bsN backspace delay style, N in [0..1] * crN carriage return delay style, N in [0..3] * ffN form feed delay style, N in [0..1] * nlN newline delay style, N in [0..1] * [-]ocrnl translate carriage return to newline * [-]ofdel use delete characters for fill instead of NUL characters * [-]ofill use fill (padding) characters instead of timing for delays * [-]olcuc translate 
lowercase characters to uppercase * [-]onlcr translate newline to carriage return-newline * [-]onlret newline performs a carriage return * [-]onocr do not print carriage returns in the first column [-]opost postprocess output * tabN horizontal tab delay style, N in [0..3] * tabs same as tab0 * -tabs same as tab3 * vtN vertical tab delay style, N in [0..1] Local settings: [-]crterase echo erase characters as backspace-space-backspace * crtkill kill all line by obeying the echoprt and echoe settings * -crtkill kill all line by obeying the echoctl and echok settings * [-]ctlecho echo control characters in hat notation ('^c') [-]echo echo input characters * [-]echoctl same as [-]ctlecho [-]echoe same as [-]crterase [-]echok echo a newline after a kill character * [-]echoke same as [-]crtkill [-]echonl echo newline even if not echoing other characters * [-]echoprt echo erased characters backward, between '\' and '/' * [-]extproc enable "LINEMODE"; useful with high latency links * [-]flusho discard output [-]icanon enable special characters: erase, kill, werase, rprnt [-]iexten enable non-POSIX special characters [-]isig enable interrupt, quit, and suspend special characters [-]noflsh disable flushing after interrupt and quit special characters * [-]prterase same as [-]echoprt * [-]tostop stop background jobs that try to write to the terminal * [-]xcase with icanon, escape with '\' for uppercase characters Combination settings: * [-]LCASE same as [-]lcase cbreak same as -icanon -cbreak same as icanon cooked same as brkint ignpar istrip icrnl ixon opost isig icanon, eof and eol characters to their default values -cooked same as raw crt same as echoe echoctl echoke dec same as echoe echoctl echoke -ixany intr ^c erase 0177 kill ^u * [-]decctlq same as [-]ixany ek erase and kill characters to their default values evenp same as parenb -parodd cs7 -evenp same as -parenb cs8 * [-]lcase same as xcase iuclc olcuc litout same as -parenb -istrip -opost cs8 -litout same as parenb 
istrip opost cs7 nl same as -icrnl -onlcr -nl same as icrnl -inlcr -igncr onlcr -ocrnl -onlret oddp same as parenb parodd cs7 -oddp same as -parenb cs8 [-]parity same as [-]evenp pass8 same as -parenb -istrip cs8 -pass8 same as parenb istrip cs7 raw same as -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -icanon -opost -isig -iuclc -ixany -imaxbel -xcase min 1 time 0 -raw same as cooked sane same as cread -ignbrk brkint -inlcr -igncr icrnl icanon iexten echo echoe echok -echonl -noflsh -ixoff -iutf8 -iuclc -ixany imaxbel -xcase -olcuc -ocrnl opost -ofill onlcr -onocr -onlret nl0 cr0 tab0 bs0 vt0 ff0 isig -tostop -ofdel -echoprt echoctl echoke -extproc -flusho, all special characters to their default values Handle the tty line connected to standard input. Without arguments, prints baud rate, line discipline, and deviations from stty sane. In settings, CHAR is taken literally, or coded as in ^c, 0x37, 0177 or 127; special values ^- or undef used to disable special characters. AUTHOR top Written by David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/stty> or available locally via: info '(coreutils) stty invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. 
This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. GNU coreutils 9.4 August 2023 STTY(1) Pages that refer to this page: setterm(1), tcpdump(1), tput(1), tset(1), ncurses(3x), readline(3), stdin(3), termios(3), dir_colors(5), termcap(5), terminfo(5), termio(7), resizecons(8), tcpdump(8)
# stty\n\n> Set options for a terminal device interface.\n> More information: <https://www.gnu.org/software/coreutils/stty>.\n\n- Display all settings for the current terminal:\n\n`stty --all`\n\n- Set the number of rows or columns:\n\n`stty {{rows|cols}} {{count}}`\n\n- Get the actual transfer speed of a device:\n\n`stty --file {{path/to/device_file}} speed`\n\n- Reset all modes to reasonable values for the current terminal:\n\n`stty sane`\n
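A common pattern built on the `-g` option above is saving the current settings before changing them and restoring them afterwards (a sketch; the `[ -t 0 ]` guard skips the block when stdin is not a terminal, since stty would otherwise fail):

```shell
# Save, modify, and restore terminal settings (no-op when stdin isn't a tty).
if [ -t 0 ]; then
  saved=$(stty -g)   # -g prints all settings in a form stty can read back
  stty -echo         # e.g. hide keystrokes while reading a password
  stty "$saved"      # restore exactly the saved state
fi
```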
su
su(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training Another version of this page is provided by the shadow-utils project su(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | SIGNALS | CONFIG FILES | EXIT STATUS | FILES | NOTES | HISTORY | SEE ALSO | REPORTING BUGS | AVAILABILITY SU(1) User Commands SU(1) NAME top su - run a command with substitute user and group ID SYNOPSIS top su [options] [-] [user [argument...]] DESCRIPTION top su allows commands to be run with a substitute user and group ID. When called with no user specified, su defaults to running an interactive shell as root. When user is specified, additional arguments can be supplied, in which case they are passed to the shell. For backward compatibility, su defaults to not change the current directory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). It is recommended to always use the --login option (instead of its shortcut -) to avoid side effects caused by mixing environments. This version of su uses PAM for authentication, account and session management. Some configuration options found in other su implementations, such as support for a wheel group, have to be configured via PAM. su is mostly designed for unprivileged users, the recommended solution for privileged users (e.g., scripts executed by root) is to use non-set-user-ID command runuser(1) that does not require authentication and provides separate PAM configuration. If the PAM session is not required at all then the recommended solution is to use command setpriv(1). Note that su in all cases uses PAM (pam_getenvlist(3)) to do the final environment modification. Command-line options such as --login and --preserve-environment affect the environment before it is modified by PAM. Since version 2.38 su resets process resource limits RLIMIT_NICE, RLIMIT_RTPRIO, RLIMIT_FSIZE, RLIMIT_AS and RLIMIT_NOFILE. 
OPTIONS top -c, --command=command Pass command to the shell with the -c option. -f, --fast Pass -f to the shell, which may or may not be useful, depending on the shell. -g, --group=group Specify the primary group. This option is available to the root user only. -G, --supp-group=group Specify a supplementary group. This option is available to the root user only. The first specified supplementary group is also used as a primary group if the option --group is not specified. -, -l, --login Start the shell as a login shell with an environment similar to a real login: clears all the environment variables except TERM and variables specified by --whitelist-environment initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH changes to the target user's home directory sets argv[0] of the shell to '-' in order to make the shell a login shell -m, -p, --preserve-environment Preserve the entire environment, i.e., do not set HOME, SHELL, USER or LOGNAME. This option is ignored if the option --login is specified. -P, --pty Create a pseudo-terminal for the session. The independent terminal provides better security as the user does not share a terminal with the original session. This can be used to avoid TIOCSTI ioctl terminal injection and other security attacks against terminal file descriptors. The entire session can also be moved to the background (e.g., su --pty - username -c application &). If the pseudo-terminal is enabled, then su works as a proxy between the sessions (sync stdin and stdout). This feature is mostly designed for interactive sessions. If the standard input is not a terminal, but for example a pipe (e.g., echo "date" | su --pty), then the ECHO flag for the pseudo-terminal is disabled to avoid messy output. -s, --shell=shell Run the specified shell instead of the default. 
The shell to run is selected according to the following rules, in order: the shell specified with --shell the shell specified in the environment variable SHELL, if the --preserve-environment option is used the shell listed in the passwd entry of the target user /bin/sh If the target user has a restricted shell (i.e., not listed in /etc/shells), the --shell option and the SHELL environment variables are ignored unless the calling user is root. --session-command=command Same as -c, but do not create a new session. (Discouraged.) -w, --whitelist-environment=list Don't reset the environment variables specified in the comma-separated list when clearing the environment for --login. The whitelist is ignored for the environment variables HOME, SHELL, USER, LOGNAME, and PATH. -h, --help Display help text and exit. -V, --version Print version and exit. SIGNALS top Upon receiving either SIGINT, SIGQUIT or SIGTERM, su terminates its child and afterwards terminates itself with the received signal. The child is terminated by SIGTERM, after unsuccessful attempt and 2 seconds of delay the child is killed by SIGKILL. CONFIG FILES top su reads the /etc/default/su and /etc/login.defs configuration files. The following configuration items are relevant for su: FAIL_DELAY (number) Delay in seconds in case of an authentication failure. The number must be a non-negative integer. ENV_PATH (string) Defines the PATH environment variable for a regular user. The default value is /usr/local/bin:/bin:/usr/bin. ENV_ROOTPATH (string), ENV_SUPATH (string) Defines the PATH environment variable for root. ENV_SUPATH takes precedence. The default value is /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin. ALWAYS_SET_PATH (boolean) If set to yes and --login and --preserve-environment were not specified su initializes PATH. 
The environment variable PATH may be different on systems where /bin and /sbin are merged into /usr; this variable is also affected by the --login command-line option and the PAM system setting (e.g., pam_env(8)). EXIT STATUS top su normally returns the exit status of the command it executed. If the command was killed by a signal, su returns the number of the signal plus 128. Exit status generated by su itself: 1 Generic error before executing the requested command 126 The requested command could not be executed 127 The requested command was not found FILES top /etc/pam.d/su default PAM configuration file /etc/pam.d/su-l PAM configuration file if --login is specified /etc/default/su command specific logindef config file /etc/login.defs global logindef config file NOTES top For security reasons, su always logs failed log-in attempts to the btmp file, but it does not write to the lastlog file at all. This solution can be used to control su behavior by PAM configuration. If you want to use the pam_lastlog(8) module to print warning message about failed log-in attempts then pam_lastlog(8) has to be configured to update the lastlog file as well. For example by: session required pam_lastlog.so nowtmp HISTORY top This su command was derived from coreutils' su, which was based on an implementation by David MacKenzie. The util-linux version has been refactored by Karel Zak. SEE ALSO top setpriv(1), login.defs(5), shells(5), pam(8), runuser(1) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The su command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. 
If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) util-linux 2.39.594-1e0ad 2023-07-19 SU(1) Pages that refer to this page: flock(1), homectl(1), login(1), login(1@@shadow-utils), machinectl(1), newgrp(1), runuser(1), setpriv(1), sg(1), updatedb(1), pam(3), pts(4), crontab(5), login.defs(5), passwd(5), passwd(5@@shadow-utils), shadow(5), suauth(5), credentials(7), environ(7), PAM(8), pam_rootok(8), pam_xauth(8)
# su\n\n> Switch shell to another user.\n> More information: <https://manned.org/su>.\n\n- Switch to superuser (requires the root password):\n\n`su`\n\n- Switch to a given user (requires the user's password):\n\n`su {{username}}`\n\n- Switch to a given user and simulate a full login shell:\n\n`su - {{username}}`\n\n- Execute a command as another user:\n\n`su - {{username}} -c "{{command}}"`\n
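The login-shell form above is often used in scripts; a sketch ('alice' is a hypothetical user name, and the block is guarded so it does nothing unless run as root with that user present, since su would otherwise block on a password prompt):

```shell
# Run one command as another user via a full login shell.
# Guarded: only meaningful as root (no password prompt) and if 'alice' exists.
if [ "$(id -u)" -eq 0 ] && id alice >/dev/null 2>&1; then
  su - alice -c 'id -un'   # the login shell runs the command, then exits
fi
```

su itself exits with the command's status: 127 if the command was not found, 126 if it could not be executed, or the signal number plus 128 if it was killed by a signal.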
sudo
sudo(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sudo(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMAND EXECUTION | EXIT VALUE | SECURITY NOTES | ENVIRONMENT | FILES | EXAMPLES | DIAGNOSTICS | SEE ALSO | HISTORY | AUTHORS | CAVEATS | BUGS | SUPPORT | DISCLAIMER | COLOPHON SUDO(8) System Manager's Manual SUDO(8) NAME top sudo, sudoedit execute a command as another user SYNOPSIS top sudo -h | -K | -k | -V sudo -v [-ABkNnS] [-g group] [-h host] [-p prompt] [-u user] sudo -l [-ABkNnS] [-g group] [-h host] [-p prompt] [-U user] [-u user] [command [arg ...]] sudo [-ABbEHnPS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-T timeout] [-u user] [VAR=value] [-i | -s] [command [arg ...]] sudoedit [-ABkNnS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-T timeout] [-u user] file ... DESCRIPTION top sudo allows a permitted user to execute a command as the superuser or another user, as specified by the security policy. The invoking user's real (not effective) user-ID is used to determine the user name with which to query the security policy. sudo supports a plugin architecture for security policies, auditing, and input/output logging. Third parties can develop and distribute their own plugins to work seamlessly with the front-end. The default security policy is sudoers, which is configured via the file /etc/sudoers, or via LDAP. See the Plugins section for more information. The security policy determines what privileges, if any, a user has to run sudo. The policy may require that users authenticate themselves with a password or another authentication mechanism. If authentication is required, sudo will exit if the user's password is not entered within a configurable time limit. This limit is policy-specific; the default password prompt timeout for the sudoers security policy is 5 minutes. 
Security policies may support credential caching to allow the user to run sudo again for a period of time without requiring authentication. By default, the sudoers policy caches credentials on a per-terminal basis for 5 minutes. See the timestamp_type and timestamp_timeout options in sudoers(5) for more information. By running sudo with the -v option, a user can update the cached credentials without running a command. On systems where sudo is the primary method of gaining superuser privileges, it is imperative to avoid syntax errors in the security policy configuration files. For the default security policy, sudoers(5), changes to the configuration files should be made using the visudo(8) utility which will ensure that no syntax errors are introduced. When invoked as sudoedit, the -e option (described below), is implied. Security policies and audit plugins may log successful and failed attempts to run sudo. If an I/O plugin is configured, the running command's input and output may be logged as well. The options are as follows: -A, --askpass Normally, if sudo requires a password, it will read it from the user's terminal. If the -A (askpass) option is specified, a (possibly graphical) helper program is executed to read the user's password and output the password to the standard output. If the SUDO_ASKPASS environment variable is set, it specifies the path to the helper program. Otherwise, if sudo.conf(5) contains a line specifying the askpass program, that value will be used. For example: # Path to askpass helper program Path askpass /usr/X11R6/bin/ssh-askpass If no askpass program is available, sudo will exit with an error. -B, --bell Ring the bell as part of the password prompt when a terminal is present. This option has no effect if an askpass program is used. -b, --background Run the given command in the background. It is not possible to use shell job control to manipulate background processes started by sudo. Most interactive commands will fail to work properly in background mode. 
-C num, --close-from=num Close all file descriptors greater than or equal to num before executing a command. Values less than three are not permitted. By default, sudo will close all open file descriptors other than standard input, standard output, and standard error when executing a command. The security policy may restrict the user's ability to use this option. The sudoers policy only permits use of the -C option when the administrator has enabled the closefrom_override option. -D directory, --chdir=directory Run the command in the specified directory instead of the current working directory. The security policy may return an error if the user does not have permission to specify the working directory. -E, --preserve-env Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment. --preserve-env=list Indicates to the security policy that the user wishes to add the comma-separated list of environment variables to those preserved from the user's environment. The security policy may return an error if the user does not have permission to preserve the environment. This option may be specified multiple times. -e, --edit Edit one or more files instead of running a command. In lieu of a path name, the string "sudoedit" is used when consulting the security policy. If the user is authorized by the policy, the following steps are taken: 1. Temporary copies are made of the files to be edited with the owner set to the invoking user. 2. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used. 3. 
If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed. To help prevent the editing of unauthorized files, the following restrictions are enforced unless explicitly allowed by the security policy: Symbolic links may not be edited (version 1.8.15 and higher). Symbolic links along the path to be edited are not followed when the parent directory is writable by the invoking user unless that user is root (version 1.8.16 and higher). Files located in a directory that is writable by the invoking user may not be edited unless that user is root (version 1.8.16 and higher). Users are never allowed to edit device special files. If the specified file does not exist, it will be created. Unlike most commands run by sudo, the editor is run with the invoking user's environment unmodified. If the temporary file becomes empty after editing, the user will be prompted before it is installed. If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the edited copy will remain in a temporary file. -g group, --group=group Run the command with the primary group set to group instead of the primary group specified by the target user's password database entry. The group may be either a group name or a numeric group-ID (GID) prefixed with the # character (e.g., #0 for GID 0). When running a command as a GID, many shells require that the # be escaped with a backslash (\). If no -u option is specified, the command will be run as the invoking user. In either case, the primary group will be set to group. The sudoers policy permits any of the target user's groups to be specified via the -g option as long as the -P option is not in use. -H, --set-home Request that the security policy set the HOME environment variable to the home directory specified by the target user's password database entry. Depending on the policy, this may be the default behavior. 
-h, --help Display a short help message to the standard output and exit. -h host, --host=host Run the command on the specified host if the security policy plugin supports remote commands. The sudoers plugin does not currently support running remote commands. This may also be used in conjunction with the -l option to list a user's privileges for the remote host. -i, --login Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile, or .login will be read by the shell. If a command is specified, it is passed to the shell as a simple command using the -c option. The command and any args are concatenated, separated by spaces, after escaping each character (including white space) with a backslash (\) except for alphanumerics, underscores, hyphens, and dollar signs. If no command is specified, an interactive shell is executed. sudo attempts to change to that user's home directory before running the shell. The command is run with an environment similar to the one a user would receive at log in. Most shells behave differently when a command is specified as compared to an interactive session; consult the shell's manual for details. The Command environment section in the sudoers(5) manual documents how the -i option affects the environment in which a command is run when the sudoers policy is in use. -K, --remove-timestamp Similar to the -k option, except that it removes every cached credential for the user, regardless of the terminal or parent process ID. The next time sudo is run, a password must be entered if the security policy requires authentication. It is not possible to use the -K option in conjunction with a command or other option. This option does not require a password. Not all security policies support credential caching. -k, --reset-timestamp When used without a command, invalidates the user's cached credentials for the current session. 
The next time sudo is run in the session, a password must be entered if the security policy requires authentication. By default, the sudoers policy uses a separate record in the credential cache for each terminal (or parent process ID if no terminal is present). This prevents the -k option from interfering with commands run in a different terminal session. See the timestamp_type option in sudoers(5) for more information. This option does not require a password, and was added to allow a user to revoke permissions from a .logout file. When used in conjunction with a command or an option that may require a password, this option will cause sudo to ignore the user's cached credentials. As a result, sudo will prompt for a password (if one is required by the security policy) and will not update the user's cached credentials. Not all security policies support credential caching. -l, --list If no command is specified, list the privileges for the invoking user (or the user specified by the -U option) on the current host. A longer list format is used if this option is specified multiple times and the security policy supports a verbose output format. If a command is specified and is permitted by the security policy for the invoking user (or the user specified by the -U option) on the current host, the fully-qualified path to the command is displayed along with any args. If -l is specified more than once (and the security policy supports it), the matching rule is displayed in a verbose format along with the command. If a command is specified but not allowed by the policy, sudo will exit with a status value of 1. -N, --no-update Do not update the user's cached credentials, even if the user successfully authenticates. Unlike the -k flag, existing cached credentials are used if they are valid. To detect when the user's cached credentials are valid (or when no authentication is required), the following can be used: sudo -Nnv Not all security policies support credential caching. 
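The `sudo -Nnv` probe succeeds only when cached credentials are valid (or no authentication is required), so it can drive a branch in a script; a sketch, assuming sudo is installed and the policy caches credentials:

```shell
#!/bin/sh
# -N: do not update cached credentials
# -n: non-interactive, never prompt
# -v: validate (no command is run)
if sudo -Nnv 2>/dev/null; then
    echo "cached credentials are valid"
else
    echo "authentication would be required"
fi
```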
-n, --non-interactive Avoid prompting the user for input of any kind. If a password is required for the command to run, sudo will display an error message and exit. -P, --preserve-groups Preserve the invoking user's group vector unaltered. By default, the sudoers policy will initialize the group vector to the list of groups the target user is a member of. The real and effective group-IDs, however, are still set to match the target user. -p prompt, --prompt=prompt Use a custom password prompt with optional escape sequences. The following percent (%) escape sequences are supported by the sudoers policy: %H expanded to the host name including the domain name (only if the machine's host name is fully qualified or the fqdn option is set in sudoers(5)) %h expanded to the local host name without the domain name %p expanded to the name of the user whose password is being requested (respects the rootpw, targetpw, and runaspw flags in sudoers(5)) %U expanded to the login name of the user the command will be run as (defaults to root unless the -u option is also specified) %u expanded to the invoking user's login name %% two consecutive % characters are collapsed into a single % character The custom prompt will override the default prompt specified by either the security policy or the SUDO_PROMPT environment variable. On systems that use PAM, the custom prompt will also override the prompt specified by a PAM module unless the passprompt_override flag is disabled in sudoers. -R directory, --chroot=directory Change to the specified root directory (see chroot(8)) before running the command. The security policy may return an error if the user does not have permission to specify the root directory. -S, --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. -s, --shell Run the shell specified by the SHELL environment variable if it is set or the shell specified by the invoking user's password database entry. 
If a command is specified, it is passed to the shell as a simple command using the -c option. The command and any args are concatenated, separated by spaces, after escaping each character (including white space) with a backslash (\) except for alphanumerics, underscores, hyphens, and dollar signs. If no command is specified, an interactive shell is executed. Most shells behave differently when a command is specified as compared to an interactive session; consult the shell's manual for details. -U user, --other-user=user Used in conjunction with the -l option to list the privileges for user instead of for the invoking user. The security policy may restrict listing other users' privileges. When using the sudoers policy, the -U option is restricted to the root user and users with either the list privilege for the specified user or the ability to run any command as root or user on the current host. -T timeout, --command-timeout=timeout Used to set a timeout for the command. If the timeout expires before the command has exited, the command will be terminated. The security policy may restrict the user's ability to set timeouts. The sudoers policy requires that user-specified timeouts be explicitly enabled. -u user, --user=user Run the command as a user other than the default target user (usually root). The user may be either a user name or a numeric user-ID (UID) prefixed with the # character (e.g., #0 for UID 0). When running commands as a UID, many shells require that the # be escaped with a backslash (\). Some security policies may restrict UIDs to those listed in the password database. The sudoers policy allows UIDs that are not in the password database as long as the targetpw option is not set. Other security policies may not support this. -V, --version Print the version string as well as the version string of any configured plugins. 
If the invoking user is already root, the -V option will display the options passed to configure when sudo was built; plugins may display additional information such as default options. -v, --validate Update the user's cached credentials, authenticating the user if necessary. For the sudoers plugin, this extends the timeout for another 5 minutes by default, but does not run a command. Not all security policies support cached credentials. -- The -- is used to delimit the end of the options. Subsequent options are passed to the command. Options that take a value may only be specified once unless otherwise indicated in the description. This is to help guard against problems caused by poorly written scripts that invoke sudo with user-controlled input. Environment variables to be set for the command may also be passed as options to sudo in the form VAR=value, for example LD_LIBRARY_PATH=/usr/local/pkg/lib. Environment variables may be subject to restrictions imposed by the security policy plugin. The sudoers policy subjects environment variables passed as options to the same restrictions as existing environment variables with one important difference. If the setenv option is set in sudoers, the command to be run has the SETENV tag set, or the command matched is ALL, the user may set variables that would otherwise be forbidden. See sudoers(5) for more information. COMMAND EXECUTION top When sudo executes a command, the security policy specifies the execution environment for the command. Typically, the real and effective user and group IDs are set to match those of the target user, as specified in the password database, and the group vector is initialized based on the group database (unless the -P option was specified). 
The following parameters may be specified by the security policy: the real and effective user-ID, the real and effective group-ID, supplementary group-IDs, the environment list, the current working directory, the file creation mode mask (umask), and the scheduling priority (aka nice value). Process model There are two distinct ways sudo can run a command. If an I/O logging plugin is configured to log terminal I/O, or if the security policy explicitly requests it, a new pseudo-terminal (pty) is allocated and fork(2) is used to create a second process, referred to as the monitor. The monitor creates a new terminal session with itself as the leader and the pty as its controlling terminal, calls fork(2) again, sets up the execution environment as described above, and then uses the execve(2) system call to run the command in the child process. The monitor exists to relay job control signals between the user's terminal and the pty the command is being run in. This makes it possible to suspend and resume the command normally. Without the monitor, the command would be in what POSIX terms an orphaned process group and it would not receive any job control signals from the kernel. When the command exits or is terminated by a signal, the monitor passes the command's exit status to the main sudo process and exits. After receiving the command's exit status, the main sudo process passes the command's exit status to the security policy's close function, as well as the close function of any configured audit plugin, and exits. This mode is the default for sudo versions 1.9.14 and above when using the sudoers policy. If no pty is used, sudo calls fork(2), sets up the execution environment as described above, and uses the execve(2) system call to run the command in the child process. The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's close function, as well as the close function of any configured audit plugins, and exits. 
As a special case, if the policy plugin does not define a close function, sudo will execute the command directly instead of calling fork(2) first. The sudoers policy plugin will only define a close function when I/O logging is enabled, a pty is required, an SELinux role is specified, the command has an associated timeout, or the pam_session or pam_setcred options are enabled. Both pam_session and pam_setcred are enabled by default on systems using PAM. This mode is the default for sudo versions prior to 1.9.14 when using the sudoers policy. On systems that use PAM, the security policy's close function is responsible for closing the PAM session. It may also log the command's exit status. Signal handling When the command is run as a child of the sudo process, sudo will relay signals it receives to the command. The SIGINT and SIGQUIT signals are only relayed when the command is being run in a new pty or when the signal was sent by a user process, not the kernel. This prevents the command from receiving SIGINT twice each time the user enters control-C. Some signals, such as SIGSTOP and SIGKILL, cannot be caught and thus will not be relayed to the command. As a general rule, SIGTSTP should be used instead of SIGSTOP when you wish to suspend a command being run by sudo. As a special case, sudo will not relay signals that were sent by the command it is running. This prevents the command from accidentally killing itself. On some systems, the reboot(8) utility sends SIGTERM to all non-system processes other than itself before rebooting the system. This prevents sudo from relaying the SIGTERM signal it received back to reboot(8), which might then exit before the system was actually rebooted, leaving it in a half-dead state similar to single user mode. Note, however, that this check only applies to the command run by sudo and not any other processes that the command may create. 
As a result, running a script that calls reboot(8) or shutdown(8) via sudo may cause the system to end up in this undefined state unless reboot(8) or shutdown(8) is run using the exec() family of functions instead of system() (which interposes a shell between the command and the calling process). Plugins Plugins may be specified via Plugin directives in the sudo.conf(5) file. They may be loaded as dynamic shared objects (on systems that support them), or compiled directly into the sudo binary. If no sudo.conf(5) file is present, or if it doesn't contain any Plugin lines, sudo will use sudoers(5) for the policy, auditing, and I/O logging plugins. See the sudo.conf(5) manual for details of the /etc/sudo.conf file and the sudo_plugin(5) manual for more information about the plugin architecture. EXIT VALUE top Upon successful execution of a command, the exit status from sudo will be the exit status of the program that was executed. If the command terminated due to receipt of a signal, sudo will send itself the same signal that terminated the command. If the -l option was specified without a command, sudo will exit with a value of 0 if the user is allowed to run sudo and they authenticated successfully (as required by the security policy). If a command is specified with the -l option, the exit value will only be 0 if the command is permitted by the security policy, otherwise it will be 1. If there is an authentication failure, a configuration/permission problem, or if the given command cannot be executed, sudo exits with a value of 1. In the latter case, the error string is printed to the standard error. If sudo cannot stat(2) one or more entries in the user's PATH, an error is printed to the standard error. (If the directory does not exist or if it is not really a directory, the entry is ignored and no error is printed.) This should not happen under normal circumstances. 
The most common reason for stat(2) to return permission denied is if you are running an automounter and one of the directories in your PATH is on a machine that is currently unreachable. SECURITY NOTES top sudo tries to be safe when executing external commands. To prevent command spoofing, sudo checks "." and "" (both denoting current directory) last when searching for a command in the user's PATH (if one or both are in the PATH). Depending on the security policy, the user's PATH environment variable may be modified, replaced, or passed unchanged to the program that sudo executes. Users should never be granted privileges to execute files that are writable by the user or that reside in a directory that is writable by the user. If the user can modify or replace the command there is no way to limit what additional commands they can run. By default, sudo will only log the command it explicitly runs. If a user runs a command such as sudo su or sudo sh, subsequent commands run from that shell are not subject to sudo's security policy. The same is true for commands that offer shell escapes (including most editors). If I/O logging is enabled, subsequent commands will have their input and/or output logged, but there will not be traditional logs for those commands. Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not inadvertently give the user an effective root shell. For information on ways to address this, see the Preventing shell escapes section in sudoers(5). To prevent the disclosure of potentially sensitive information, sudo disables core dumps by default while it is executing (they are re-enabled for the command that is run). This historical practice dates from a time when most operating systems allowed set-user-ID processes to dump core by default. 
To aid in debugging crashes, you may wish to re-enable core dumps by setting disable_coredump to false in the sudo.conf(5) file as follows: Set disable_coredump false See the sudo.conf(5) manual for more information. ENVIRONMENT top sudo utilizes the following environment variables. The security policy has control over the actual content of the command's environment. EDITOR Default editor to use in -e (sudoedit) mode if neither SUDO_EDITOR nor VISUAL is set. MAIL Set to the mail spool of the target user when the -i option is specified, or when env_reset is enabled in sudoers (unless MAIL is present in the env_keep list). HOME Set to the home directory of the target user when the -i or -H options are specified, when the -s option is specified and set_home is set in sudoers, when always_set_home is enabled in sudoers, or when env_reset is enabled in sudoers and HOME is not present in the env_keep list. LOGNAME Set to the login name of the target user when the -i option is specified, when the set_logname option is enabled in sudoers, or when the env_reset option is enabled in sudoers (unless LOGNAME is present in the env_keep list). PATH May be overridden by the security policy. SHELL Used to determine shell to run with -s option. SUDO_ASKPASS Specifies the path to a helper program used to read the password if no terminal is available or if the -A option is specified. SUDO_COMMAND Set to the command run by sudo, including any args. The args are truncated at 4096 characters to prevent a potential execution error. SUDO_EDITOR Default editor to use in -e (sudoedit) mode. SUDO_GID Set to the group-ID of the user who invoked sudo. SUDO_PROMPT Used as the default password prompt unless the -p option was specified. SUDO_PS1 If set, PS1 will be set to its value for the program being run. SUDO_UID Set to the user-ID of the user who invoked sudo. SUDO_USER Set to the login name of the user who invoked sudo. USER Set to the same value as LOGNAME, described above. 
VISUAL Default editor to use in -e (sudoedit) mode if SUDO_EDITOR is not set. FILES top /etc/sudo.conf front-end configuration EXAMPLES top The following examples assume a properly configured security policy. To get a file listing of an unreadable directory: $ sudo ls /usr/local/protected To list the home directory of user yaz on a machine where the file system holding ~yaz is not exported as root: $ sudo -u yaz ls ~yaz To edit the index.html file as user www: $ sudoedit -u www ~www/htdocs/index.html To view system logs only accessible to root and users in the adm group: $ sudo -g adm more /var/log/syslog To run an editor as jim with a different primary group: $ sudoedit -u jim -g audio ~jim/sound.txt To shut down a machine: $ sudo shutdown -r +15 "quick reboot" To make a usage listing of the directories in the /home partition. The commands are run in a sub-shell to allow the cd command and file redirection to work. $ sudo sh -c "cd /home ; du -s * | sort -rn > USAGE" DIAGNOSTICS top Error messages produced by sudo include: editing files in a writable directory is not permitted By default, sudoedit does not permit editing a file when any of the parent directories are writable by the invoking user. This avoids a race condition that could allow the user to overwrite an arbitrary file. See the sudoedit_checkdir option in sudoers(5) for more information. editing symbolic links is not permitted By default, sudoedit does not follow symbolic links when opening files. See the sudoedit_follow option in sudoers(5) for more information. effective uid is not 0, is sudo installed setuid root? sudo was not run with root privileges. The sudo binary must be owned by the root user and have the set-user-ID bit set. Also, it must not be located on a file system mounted with the nosuid option or on an NFS file system that maps uid 0 to an unprivileged uid. effective uid is not 0, is sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? 
sudo was not run with root privileges. The sudo binary has the proper owner and permissions but it still did not run with root privileges. The most common reason for this is that the file system the sudo binary is located on is mounted with the nosuid option or it is an NFS file system that maps uid 0 to an unprivileged uid. fatal error, unable to load plugins An error occurred while loading or initializing the plugins specified in sudo.conf(5). invalid environment variable name One or more environment variable names specified via the -E option contained an equal sign (=). The arguments to the -E option should be environment variable names without an associated value. no password was provided When sudo tried to read the password, it did not receive any characters. This may happen if no terminal is available (or the -S option is specified) and the standard input has been redirected from /dev/null. a terminal is required to read the password sudo needs to read the password but there is no mechanism available for it to do so. A terminal is not present to read the password from, sudo has not been configured to read from the standard input, the -S option was not used, and no askpass helper has been specified either via the sudo.conf(5) file or the SUDO_ASKPASS environment variable. no writable temporary directory found sudoedit was unable to find a usable temporary directory in which to store its intermediate files. The no new privileges flag is set, which prevents sudo from running as root. sudo was run by a process that has the Linux no new privileges flag set. This causes the set-user-ID bit to be ignored when running an executable, which will prevent sudo from functioning. The most likely cause for this is running sudo within a container that sets this flag. Check the documentation to see if it is possible to configure the container such that the flag is not set. sudo must be owned by uid 0 and have the setuid bit set sudo was not run with root privileges. 
The sudo binary does not have the correct owner or permissions. It must be owned by the root user and have the set-user-ID bit set. sudoedit is not supported on this platform It is only possible to run sudoedit on systems that support setting the effective user-ID. timed out reading password The user did not enter a password before the password timeout (5 minutes by default) expired. you do not exist in the passwd database Your user-ID does not appear in the system passwd database. you may not specify environment variables in edit mode It is only possible to specify environment variables when running a command. When editing a file, the editor is run with the user's environment unmodified. SEE ALSO top su(1), stat(2), login_cap(3), passwd(5), sudo.conf(5), sudo_plugin(5), sudoers(5), sudoers_timestamp(5), sudoreplay(8), visudo(8) HISTORY top See the HISTORY.md file in the distribution (https://www.sudo.ws/about/history/) for a brief history of sudo. AUTHORS top Many people have worked on sudo over the years; this version consists of code written primarily by: Todd C. Miller See the CONTRIBUTORS.md file in the distribution (https://www.sudo.ws/about/contributors/) for an exhaustive list of people who have contributed to sudo. CAVEATS top There is no easy way to prevent a user from gaining a root shell if that user is allowed to run arbitrary commands via sudo. Also, many programs (such as editors) allow the user to run commands via shell escapes, thus avoiding sudo's checks. However, on most systems it is possible to prevent shell escapes with the sudoers(5) plugin's noexec functionality. It is not meaningful to run the cd command directly via sudo, e.g., $ sudo cd /usr/local/protected since when the command exits the parent process (your shell) will still be the same. The -D option can be used to run a command in a specific directory. 
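The reason `sudo cd` has no effect can be demonstrated without sudo at all: a directory change made in a child process is lost when that process exits, exactly as it would be when sudo runs cd in a child.

```shell
before=$(pwd)

# The cd happens inside a child shell, just as it would under sudo
sh -c 'cd /tmp && pwd'

# The parent shell's working directory is unchanged
after=$(pwd)
[ "$before" = "$after" ] && echo "parent directory unchanged"
```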
Running shell scripts via sudo can expose the same kernel bugs that make set-user-ID shell scripts unsafe on some operating systems (if your OS has a /dev/fd/ directory, set-user-ID shell scripts are generally safe). BUGS top If you believe you have found a bug in sudo, you can submit a bug report at https://bugzilla.sudo.ws/ SUPPORT top Limited free support is available via the sudo-users mailing list, see https://www.sudo.ws/mailman/listinfo/sudo-users to subscribe or search the archives. DISCLAIMER top sudo is provided AS IS and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. See the LICENSE.md file distributed with sudo or https://www.sudo.ws/about/license/ for complete details. COLOPHON top This page is part of the sudo (execute a command as another user) project. Information about the project can be found at https://www.sudo.ws/. If you have a bug report for this manual page, see https://bugzilla.sudo.ws/. This page was obtained from the project's upstream Git repository https://github.com/sudo-project/sudo on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-21.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org Sudo 1.9.15p4 August 9, 2023 SUDO(8) Pages that refer to this page: homectl(1), journalctl(1), localectl(1), loginctl(1), machinectl(1), portablectl(1), setpriv(1), systemctl(1), systemd(1), systemd-analyze(1), systemd-ask-password(1), systemd-inhibit(1), systemd-nspawn(1), systemd-vmspawn(1), timedatectl(1), uid0(1), userdbctl(1), nsswitch.conf(5), credentials(7), systemd-tmpfiles(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# sudo\n\n> Executes a single command as the superuser or another user.\n> More information: <https://www.sudo.ws/sudo.html>.\n\n- Run a command as the superuser:\n\n`sudo {{less /var/log/syslog}}`\n\n- Edit a file as the superuser with your default editor:\n\n`sudo --edit {{/etc/fstab}}`\n\n- Run a command as another user and/or group:\n\n`sudo --user={{user}} --group={{group}} {{id -a}}`\n\n- Repeat the last command prefixed with `sudo` (only in Bash, Zsh, etc.):\n\n`sudo !!`\n\n- Launch the default shell with superuser privileges and run login-specific files (`.profile`, `.bash_profile`, etc.):\n\n`sudo --login`\n\n- Launch the default shell with superuser privileges without changing the environment:\n\n`sudo --shell`\n\n- Launch the default shell as the specified user, loading the user's environment and reading login-specific files (`.profile`, `.bash_profile`, etc.):\n\n`sudo --login --user={{user}}`\n\n- List the allowed (and forbidden) commands for the invoking user:\n\n`sudo --list`\n
sum
sum(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sum(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SUM(1) User Commands SUM(1) NAME top sum - checksum and count the blocks in a file SYNOPSIS top sum [OPTION]... [FILE]... DESCRIPTION top Print or check BSD (16-bit) checksums. With no FILE, or when FILE is -, read standard input. -r use BSD sum algorithm (the default), use 1K blocks -s, --sysv use System V sum algorithm, use 512 bytes blocks --help display this help and exit --version output version information and exit AUTHOR top Written by Kayvan Aghaiepour and David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top Full documentation <https://www.gnu.org/software/coreutils/sum> or available locally via: info '(coreutils) sum invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
GNU coreutils 9.4 August 2023 SUM(1) Pages that refer to this page: pmdashping(1), pmlogmv(1)
# sum\n\n> Compute checksums and the number of blocks for a file.\n> A predecessor to the more modern `cksum`.\n> More information: <https://www.gnu.org/software/coreutils/sum>.\n\n- Compute a checksum with BSD-compatible algorithm and 1024-byte blocks:\n\n`sum {{path/to/file}}`\n\n- Compute a checksum with System V-compatible algorithm and 512-byte blocks:\n\n`sum --sysv {{path/to/file}}`\n
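A runnable illustration of the two modes (assumes GNU coreutils sum; the file path is illustrative, and the two algorithms produce different checksums and block counts for the same input):

```shell
# Create a small input file (path is illustrative)
printf 'hello world\n' > /tmp/sum_demo.txt

# BSD algorithm (default): 16-bit checksum, block count in 1K blocks
sum /tmp/sum_demo.txt

# System V algorithm: different checksum, block count in 512-byte blocks
sum --sysv /tmp/sum_demo.txt
```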
swaplabel
swaplabel(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training swaplabel(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY SWAPLABEL(8) System Administration SWAPLABEL(8) NAME top swaplabel - print or change the label or UUID of a swap area SYNOPSIS top swaplabel [-L label] [-U UUID] device DESCRIPTION top swaplabel will display or change the label or UUID of a swap partition located on device (or regular file). If the optional arguments -L and -U are not given, swaplabel will simply display the current swap-area label and UUID of device. If an optional argument is present, then swaplabel will change the appropriate value on device. These values can also be set during swap creation using mkswap(8). The swaplabel utility allows changing the label or UUID on an actively used swap device. OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. -L, --label label Specify a new label for the device. Swap partition labels can be at most 16 characters long. If label is longer than 16 characters, swaplabel will truncate it and print a warning message. -U, --uuid UUID Specify a new UUID for the device. The UUID must be in the standard 8-4-4-4-12 character format, such as is output by uuidgen(1). ENVIRONMENT top LIBBLKID_DEBUG=all enables libblkid debug output. AUTHORS top swaplabel was written by Jason Borden <jborden@bluehost.com> and Karel Zak <kzak@redhat.com>. SEE ALSO top uuidgen(1), mkswap(8), swapon(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The swaplabel command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. 
Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 SWAPLABEL(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# swaplabel\n\n> Print or change the label or UUID of a swap area.\n> Note: `path/to/file` can either point to a regular file or a swap partition.\n> More information: <https://manned.org/swaplabel>.\n\n- Display the current label and UUID of a swap area:\n\n`swaplabel {{path/to/file}}`\n\n- Set the label of a swap area:\n\n`swaplabel --label {{new_label}} {{path/to/file}}`\n\n- Set the UUID of a swap area (you can generate a UUID using `uuidgen`):\n\n`swaplabel --uuid {{new_uuid}} {{path/to/file}}`\n
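One detail from the man page worth scripting around: labels longer than 16 characters are truncated with a warning. A hedged sketch that truncates up front so the resulting label is predictable (the label value and device path are made up for illustration):

```sh
# Swap labels hold at most 16 characters; swaplabel truncates longer ones
# and warns. Truncating explicitly makes the final label obvious.
label="backup-swap-area-2024"              # 21 characters, hypothetical
short=$(printf '%.16s' "$label")           # keep only the first 16
echo "$short"                              # → backup-swap-area
# swaplabel --label "$short" /dev/sdXN     # needs root and a real swap device
```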
swapoff
swapon(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training swapon(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | ENVIRONMENT | FILES | NOTES | HISTORY | SEE ALSO | REPORTING BUGS | AVAILABILITY SWAPON(8) System Administration SWAPON(8) NAME top swapon, swapoff - enable/disable devices and files for paging and swapping SYNOPSIS top swapon [options] [specialfile...] swapoff [-va] [specialfile...] DESCRIPTION top swapon is used to specify devices on which paging and swapping are to take place. The device or file used is given by the specialfile parameter. It may be of the form -L label or -U uuid to indicate a device by label or uuid. Calls to swapon normally occur in the system boot scripts making all swap devices available, so that the paging and swapping activity is interleaved across several devices and files. swapoff disables swapping on the specified devices and files. When the -a flag is given, swapping is disabled on all known swap devices and files (as found in /proc/swaps or /etc/fstab). OPTIONS top -a, --all All devices marked as "swap" in /etc/fstab are made available, except for those with the "noauto" option. Devices that are already being used as swap are silently skipped. -T, --fstab path Specifies an alternative fstab file for compatibility with mount(8). If path is a directory, then the files in the directory are sorted by strverscmp(3); files that start with "." or without an .fstab extension are ignored. The option can be specified more than once. This option is mostly designed for initramfs or chroot scripts where additional configuration is specified beyond standard system configuration. -d, --discard[=policy] Enable swap discards, if the swap backing device supports the discard or trim operation. This may improve performance on some Solid State Devices, but often it does not. 
The option allows one to select between two available swap discard policies: --discard=once to perform a single-time discard operation for the whole swap area at swapon; or --discard=pages to asynchronously discard freed swap pages before they are available for reuse. If no policy is selected, the default behavior is to enable both discard types. The /etc/fstab mount options discard, discard=once, or discard=pages may also be used to enable discard flags. -e, --ifexists Silently skip devices that do not exist. The /etc/fstab mount option nofail may also be used to skip non-existing device. -f, --fixpgsz Reinitialize (exec mkswap) the swap space if its page size does not match that of the current running kernel. mkswap(8) initializes the whole device and does not check for bad blocks. -L label Use the partition that has the specified label. (For this, access to /proc/partitions is needed.) -o, --options opts Specify swap options by an fstab-compatible comma-separated string. For example: swapon -o pri=1,discard=pages,nofail /dev/sda2 The opts string is evaluated last and overrides all other command line options. -p, --priority priority Specify the priority of the swap device. priority is a value between 0 and 32767. Higher numbers indicate higher priority. See swapon(2) for a full description of swap priorities. Add pri=value to the option field of /etc/fstab for use with swapon -a. When no priority is defined, Linux kernel defaults to negative numbers. -s, --summary Display swap usage summary by device. Equivalent to cat /proc/swaps. This output format is DEPRECATED in favour of --show that provides better control on output data. --show[=column...] Display a definable table of swap areas. See the --help output for a list of available columns. --output-all Output all available columns. --noheadings Do not print headings when displaying --show output. --raw Display --show output without aligning table columns. 
--bytes Display swap size in bytes in --show output instead of in user-friendly units. -U uuid Use the partition that has the specified uuid. -v, --verbose Be verbose. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top swapoff has the following exit status values since v2.36: 0 success 2 system has insufficient memory to stop swapping (OOM) 4 swapoff(2) syscall failed for another reason 8 non-swapoff(2) syscall system error (out of memory, ...) 16 usage or syntax error 32 all swapoff failed on --all 64 some swapoff succeeded on --all The command swapoff --all returns 0 (all succeeded), 32 (all failed), or 64 (some failed, some succeeded). Versions before v2.36 have no documented exit status; 0 means success in all versions. ENVIRONMENT top LIBMOUNT_DEBUG=all enables libmount debug output. LIBBLKID_DEBUG=all enables libblkid debug output. FILES top /dev/sd?? standard paging devices /etc/fstab ascii filesystem description table NOTES top Files with holes The swap file implementation in the kernel expects to be able to write to the file directly, without the assistance of the filesystem. This is a problem on files with holes or on copy-on-write files on filesystems like Btrfs. Commands like cp(1) or truncate(1) create files with holes. These files will be rejected by swapon. Preallocated files created by fallocate(1) may be interpreted as files with holes too depending on the filesystem. Preallocated swap files are supported on XFS since Linux 4.18. The most portable solution to create a swap file is to use dd(1) and /dev/zero. Btrfs Swap files on Btrfs are supported since Linux 5.0 on files with nocow attribute. See the btrfs(5) manual page for more details. NFS Swap over NFS may not work. Suspend swapon automatically detects and rewrites a swap space signature with old software suspend data (e.g., S1SUSPEND, S2SUSPEND, ...). 
The problem is that if we don't do it, then we get data corruption the next time an attempt at unsuspending is made. HISTORY top The swapon command appeared in 4.0BSD. SEE ALSO top swapoff(2), swapon(2), fstab(5), init(8), fallocate(1), mkswap(8), mount(8), rc(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The swapon command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-08-25 SWAPON(8) Pages that refer to this page: swapon(2), fstab(5), org.freedesktop.systemd1(5), proc(5), systemd.swap(5), mkswap(8), mount(8), swaplabel(8)
# swapoff\n\n> Disable devices and files for swapping.\n> Note: `path/to/file` can either point to a regular file or a swap partition.\n> More information: <https://manned.org/swapoff>.\n\n- Disable a given swap area:\n\n`swapoff {{path/to/file}}`\n\n- Disable all swap areas in `/proc/swaps`:\n\n`swapoff --all`\n\n- Disable a swap partition by its label:\n\n`swapoff -L {{label}}`\n
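The man page above gives `swapoff --all` a multi-valued exit status (since util-linux v2.36). A sketch of handling it in a script; the `describe_swapoff_status` helper is hypothetical, and in real use the argument would be `$?` captured right after running `swapoff --all` as root:

```sh
# Map swapoff --all exit codes (util-linux >= 2.36) to messages.
describe_swapoff_status() {
  case "$1" in
    0)  echo "all swap areas disabled" ;;
    2)  echo "insufficient memory to stop swapping (OOM)" ;;
    4)  echo "swapoff(2) syscall failed" ;;
    8)  echo "non-swapoff(2) system error" ;;
    16) echo "usage or syntax error" ;;
    32) echo "all swapoff calls failed" ;;
    64) echo "some succeeded, some failed" ;;
    *)  echo "unknown status: $1" ;;
  esac
}
describe_swapoff_status 64    # → some succeeded, some failed
```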
swapon
swapon(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training swapon(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | ENVIRONMENT | FILES | NOTES | HISTORY | SEE ALSO | REPORTING BUGS | AVAILABILITY SWAPON(8) System Administration SWAPON(8) NAME top swapon, swapoff - enable/disable devices and files for paging and swapping SYNOPSIS top swapon [options] [specialfile...] swapoff [-va] [specialfile...] DESCRIPTION top swapon is used to specify devices on which paging and swapping are to take place. The device or file used is given by the specialfile parameter. It may be of the form -L label or -U uuid to indicate a device by label or uuid. Calls to swapon normally occur in the system boot scripts making all swap devices available, so that the paging and swapping activity is interleaved across several devices and files. swapoff disables swapping on the specified devices and files. When the -a flag is given, swapping is disabled on all known swap devices and files (as found in /proc/swaps or /etc/fstab). OPTIONS top -a, --all All devices marked as "swap" in /etc/fstab are made available, except for those with the "noauto" option. Devices that are already being used as swap are silently skipped. -T, --fstab path Specifies an alternative fstab file for compatibility with mount(8). If path is a directory, then the files in the directory are sorted by strverscmp(3); files that start with "." or without an .fstab extension are ignored. The option can be specified more than once. This option is mostly designed for initramfs or chroot scripts where additional configuration is specified beyond standard system configuration. -d, --discard[=policy] Enable swap discards, if the swap backing device supports the discard or trim operation. This may improve performance on some Solid State Devices, but often it does not. 
The option allows one to select between two available swap discard policies: --discard=once to perform a single-time discard operation for the whole swap area at swapon; or --discard=pages to asynchronously discard freed swap pages before they are available for reuse. If no policy is selected, the default behavior is to enable both discard types. The /etc/fstab mount options discard, discard=once, or discard=pages may also be used to enable discard flags. -e, --ifexists Silently skip devices that do not exist. The /etc/fstab mount option nofail may also be used to skip non-existing device. -f, --fixpgsz Reinitialize (exec mkswap) the swap space if its page size does not match that of the current running kernel. mkswap(8) initializes the whole device and does not check for bad blocks. -L label Use the partition that has the specified label. (For this, access to /proc/partitions is needed.) -o, --options opts Specify swap options by an fstab-compatible comma-separated string. For example: swapon -o pri=1,discard=pages,nofail /dev/sda2 The opts string is evaluated last and overrides all other command line options. -p, --priority priority Specify the priority of the swap device. priority is a value between 0 and 32767. Higher numbers indicate higher priority. See swapon(2) for a full description of swap priorities. Add pri=value to the option field of /etc/fstab for use with swapon -a. When no priority is defined, Linux kernel defaults to negative numbers. -s, --summary Display swap usage summary by device. Equivalent to cat /proc/swaps. This output format is DEPRECATED in favour of --show that provides better control on output data. --show[=column...] Display a definable table of swap areas. See the --help output for a list of available columns. --output-all Output all available columns. --noheadings Do not print headings when displaying --show output. --raw Display --show output without aligning table columns. 
--bytes Display swap size in bytes in --show output instead of in user-friendly units. -U uuid Use the partition that has the specified uuid. -v, --verbose Be verbose. -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top swapoff has the following exit status values since v2.36: 0 success 2 system has insufficient memory to stop swapping (OOM) 4 swapoff(2) syscall failed for another reason 8 non-swapoff(2) syscall system error (out of memory, ...) 16 usage or syntax error 32 all swapoff failed on --all 64 some swapoff succeeded on --all The command swapoff --all returns 0 (all succeeded), 32 (all failed), or 64 (some failed, some succeeded). Versions before v2.36 have no documented exit status; 0 means success in all versions. ENVIRONMENT top LIBMOUNT_DEBUG=all enables libmount debug output. LIBBLKID_DEBUG=all enables libblkid debug output. FILES top /dev/sd?? standard paging devices /etc/fstab ascii filesystem description table NOTES top Files with holes The swap file implementation in the kernel expects to be able to write to the file directly, without the assistance of the filesystem. This is a problem on files with holes or on copy-on-write files on filesystems like Btrfs. Commands like cp(1) or truncate(1) create files with holes. These files will be rejected by swapon. Preallocated files created by fallocate(1) may be interpreted as files with holes too depending on the filesystem. Preallocated swap files are supported on XFS since Linux 4.18. The most portable solution to create a swap file is to use dd(1) and /dev/zero. Btrfs Swap files on Btrfs are supported since Linux 5.0 on files with nocow attribute. See the btrfs(5) manual page for more details. NFS Swap over NFS may not work. Suspend swapon automatically detects and rewrites a swap space signature with old software suspend data (e.g., S1SUSPEND, S2SUSPEND, ...). 
The problem is that if we don't do it, then we get data corruption the next time an attempt at unsuspending is made. HISTORY top The swapon command appeared in 4.0BSD. SEE ALSO top swapoff(2), swapon(2), fstab(5), init(8), fallocate(1), mkswap(8), mount(8), rc(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The swapon command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-08-25 SWAPON(8) Pages that refer to this page: swapon(2), fstab(5), org.freedesktop.systemd1(5), proc(5), systemd.swap(5), mkswap(8), mount(8), swaplabel(8)
# swapon\n\n> Enable devices and files for swapping.\n> Note: `path/to/file` can either point to a regular file or a swap partition.\n> More information: <https://manned.org/swapon>.\n\n- Show swap information:\n\n`swapon`\n\n- Enable a given swap area:\n\n`swapon {{path/to/file}}`\n\n- Enable all swap areas specified in `/etc/fstab` except those with the `noauto` option:\n\n`swapon --all`\n\n- Enable a swap partition by its label:\n\n`swapon -L {{label}}`\n
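For `swapon --all`, the entries live in `/etc/fstab`. A sketch of one such line, combining options named in the man page above (`pri=`, `discard=pages`, `nofail`); the `/swapfile` path is an assumption:

```
# /etc/fstab — swap file with priority 10, page-level discard, and
# nofail so boot continues if the file is missing
/swapfile  none  swap  sw,pri=10,discard=pages,nofail  0  0
```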
switch_root
switch_root(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training switch_root(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | NOTES | AUTHORS | SEE ALSO | REPORTING BUGS | AVAILABILITY SWITCH_ROOT(8) System Administration SWITCH_ROOT(8) NAME top switch_root - switch to another filesystem as the root of the mount tree SYNOPSIS top switch_root [-hV] switch_root newroot init [arg...] DESCRIPTION top switch_root moves already mounted /proc, /dev, /sys and /run to newroot and makes newroot the new root filesystem and starts init process. WARNING: switch_root removes recursively all files and directories on the current root filesystem. OPTIONS top -h, --help Display help text and exit. -V, --version Print version and exit. EXIT STATUS top switch_root returns 1 on failure, it never returns on success. NOTES top switch_root will fail to function if newroot is not the root of a mount. If you want to switch root into a directory that does not meet this requirement then you can first use a bind-mounting trick to turn any directory into a mount point: mount --bind $DIR $DIR AUTHORS top Peter Jones <pjones@redhat.com>, Jeremy Katz <katzj@redhat.com>, Karel Zak <kzak@redhat.com> SEE ALSO top chroot(2), init(8), mkinitrd(8), mount(8) REPORTING BUGS top For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY top The switch_root command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. 
This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org util-linux 2.39.594-1e0ad 2023-07-19 SWITCH_ROOT(8) Pages that refer to this page: chroot(2), pivot_root(2), namespaces(7), pid_namespaces(7), pivot_root(8)
# switch_root\n\n> Use a different filesystem as the root of the mount tree.\n> Note: switch_root will fail to function if the new root is not the root of a mount. Use bind-mounting as a workaround.\n> See also: `chroot`, `mount`.\n> More information: <https://manned.org/switch_root.8>.\n\n- Move `/proc`, `/dev`, `/sys` and `/run` to the specified filesystem, use this filesystem as the new root and start the specified init process:\n\n`switch_root {{new_root}} {{/sbin/init}}`\n\n- Display help:\n\n`switch_root -h`\n
sync
sync(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sync(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | BUGS | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON SYNC(1) User Commands SYNC(1) NAME top sync - Synchronize cached writes to persistent storage SYNOPSIS top sync [OPTION] [FILE]... DESCRIPTION top Synchronize cached writes to persistent storage If one or more files are specified, sync only them, or their containing file systems. -d, --data sync only file data, no unneeded metadata -f, --file-system sync the file systems that contain the files --help display this help and exit --version output version information and exit BUGS top Persistence guarantees vary per system. See the system calls below for more details. AUTHOR top Written by Jim Meyering and Giuseppe Scrivano. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top fdatasync(2), fsync(2), sync(2), syncfs(2) Full documentation <https://www.gnu.org/software/coreutils/sync> or available locally via: info '(coreutils) sync invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org GNU coreutils 9.4 August 2023 SYNC(1) Pages that refer to this page: bdflush(2), fsync(2), quotactl(2), sync(2), proc(5), btrfs-filesystem(8)
# sync\n\n> Flushes all pending write operations to the appropriate disks.\n> More information: <https://www.gnu.org/software/coreutils/sync>.\n\n- Flush all pending write operations on all disks:\n\n`sync`\n\n- Flush all pending write operations on a single file to disk:\n\n`sync {{path/to/file}}`\n
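A runnable sketch of the per-file form: write a file, then ask the kernel to flush just that file rather than every disk. The temporary path comes from `mktemp`, so no fixed filenames are assumed:

```sh
# Flush one file's cached writes instead of syncing every disk.
tmp=$(mktemp)
echo "important data" > "$tmp"
sync "$tmp" && echo "flushed"
rm -f "$tmp"
```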
sysctl
sysctl(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training sysctl(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | PARAMETERS | SYSTEM FILE PRECEDENCE | EXAMPLES | DEPRECATED PARAMETERS | FILES | SEE ALSO | AUTHOR | REPORTING BUGS | COLOPHON SYSCTL(8) System Administration SYSCTL(8) NAME top sysctl - configure kernel parameters at runtime SYNOPSIS top sysctl [options] [variable[=value]] [...] sysctl -p [file or regexp] [...] DESCRIPTION top sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs is required for sysctl support in Linux. You can use sysctl to both read and write sysctl data. PARAMETERS top variable The name of a key to read from. An example is kernel.ostype. The '/' separator is also accepted in place of a '.'. variable=value To set a key, use the form variable=value where variable is the key and value is the value to set it to. If the value contains quotes or characters which are parsed by the shell, you may need to enclose the value in double quotes. -n, --values Use this option to disable printing of the key name when printing values. -e, --ignore Use this option to ignore errors about unknown keys. -N, --names Use this option to only print the names. It may be useful with shells that have programmable completion. -q, --quiet Use this option to not display the values set to stdout. -w, --write Force all arguments to be write arguments and print an error if they cannot be parsed this way. -p[FILE], --load[=FILE] Load in sysctl settings from the file specified or /etc/sysctl.conf if none given. Specifying - as filename means reading data from standard input. Using this option will mean arguments to sysctl are files, which are read in the order they are specified. The file argument may be specified as regular expression. -a, --all Display all values currently available. --deprecated Include deprecated parameters to --all values listing. 
-b, --binary Print value without new line. --system Load settings from all system configuration files. See the SYSTEM FILE PRECEDENCE section below. -r, --pattern pattern Only apply settings that match pattern. The pattern uses extended regular expression syntax. -A Alias of -a -d Alias of -h -f Alias of -p -X Alias of -a -o Does nothing, exists for BSD compatibility. -x Does nothing, exists for BSD compatibility. -h, --help Display help text and exit. -V, --version Display version information and exit. SYSTEM FILE PRECEDENCE top When using the --system option, sysctl will read files from directories in the following list in given order from top to bottom. Once a file of a given filename is loaded, any file of the same name in subsequent directories is ignored. /etc/sysctl.d/*.conf /run/sysctl.d/*.conf /usr/local/lib/sysctl.d/*.conf /usr/lib/sysctl.d/*.conf /lib/sysctl.d/*.conf /etc/sysctl.conf All configuration files are sorted in lexicographic order, regardless of the directory they reside in. Configuration files can either be completely replaced (by having a new configuration file with the same name in a directory of higher priority) or partially replaced (by having a configuration file that is ordered later). EXAMPLES top /sbin/sysctl -a /sbin/sysctl -n kernel.hostname /sbin/sysctl -w kernel.domainname="example.com" /sbin/sysctl -p/etc/sysctl.conf /sbin/sysctl -a --pattern forward /sbin/sysctl -a --pattern forward$ /sbin/sysctl -a --pattern 'net.ipv4.conf.(eth|wlan)0.arp' /sbin/sysctl --pattern '^net.ipv6' --system DEPRECATED PARAMETERS top The base_reachable_time and retrans_time are deprecated. The sysctl command does not allow changing values of these parameters. Users who insist to use deprecated kernel interfaces should push values to /proc file system by other means. 
For example: echo 256 > /proc/sys/net/ipv6/neigh/eth0/base_reachable_time FILES top /proc/sys /etc/sysctl.d/*.conf /run/sysctl.d/*.conf /usr/local/lib/sysctl.d/*.conf /usr/lib/sysctl.d/*.conf /lib/sysctl.d/*.conf /etc/sysctl.conf SEE ALSO top proc(5), sysctl.conf(5), regex(7) AUTHOR top George Staikos staikos@0wned.org REPORTING BUGS top Please send bug reports to procps@freelists.org COLOPHON top This page is part of the procps-ng (/proc filesystem utilities) project. Information about the project can be found at https://gitlab.com/procps-ng/procps. If you have a bug report for this manual page, see https://gitlab.com/procps-ng/procps/blob/master/Documentation/bugs.md. This page was obtained from the project's upstream Git repository https://gitlab.com/procps-ng/procps.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-10-16.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org procps-ng 2023-08-19 SYSCTL(8) Pages that refer to this page: perfalloc(1), pmdaperfevent(1), coredump.conf(5), lxc.container.conf(5), proc(5), sysctl.conf(5), sysctl.d(5), flowtop(8), systemd-coredump(8), systemd-sysctl.service(8)
# sysctl\n\n> List and change kernel runtime variables.\n> More information: <https://manned.org/sysctl.8>.\n\n- Show all available variables and their values:\n\n`sysctl -a`\n\n- Set a changeable kernel state variable:\n\n`sysctl -w {{section.tunable}}={{value}}`\n\n- Get currently open file handles:\n\n`sysctl fs.file-nr`\n\n- Get limit for simultaneous open files:\n\n`sysctl fs.file-max`\n\n- Apply changes from `/etc/sysctl.conf`:\n\n`sysctl -p`\n
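As the man page notes, sysctl keys are just `/proc/sys` paths with dots in place of slashes, which is handy when the `sysctl` binary is unavailable. A sketch of the mapping (reading is root-free for world-readable keys; writing would need root):

```sh
# kernel.ostype <-> /proc/sys/kernel/ostype: translate dots to slashes
# and read the value directly, equivalent to `sysctl -n kernel.ostype`.
key="kernel.ostype"
path="/proc/sys/$(printf %s "$key" | tr . /)"
cat "$path"    # prints "Linux" on a Linux kernel
```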
systemctl
systemctl(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemctl(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | EXIT STATUS | ENVIRONMENT | SEE ALSO | NOTES | COLOPHON SYSTEMCTL(1) systemctl SYSTEMCTL(1) NAME top systemctl - Control the systemd system and service manager SYNOPSIS top systemctl [OPTIONS...] COMMAND [UNIT...] DESCRIPTION top systemctl may be used to introspect and control the state of the "systemd" system and service manager. Please refer to systemd(1) for an introduction into the basic concepts and functionality this tool manages. COMMANDS top The following commands are understood: Unit Commands (Introspection and Modification) list-units [PATTERN...] List units that systemd currently has in memory. This includes units that are either referenced directly or through a dependency, units that are pinned by applications programmatically, or units that were active in the past and have failed. By default only units which are active, have pending jobs, or have failed are shown; this can be changed with option --all. If one or more PATTERNs are specified, only units matching one of them are shown. The units that are shown are additionally filtered by --type= and --state= if those options are specified. Note that this command does not show unit templates, but only instances of unit templates. Units templates that aren't instantiated are not runnable, and will thus never show up in the output of this command. Specifically this means that foo@.service will never be shown in this list unless instantiated, e.g. as foo@bar.service. Use list-unit-files (see below) for listing installed unit template files. 
Produces output similar to UNIT LOAD ACTIVE SUB DESCRIPTION sys-module-fuse.device loaded active plugged /sys/module/fuse -.mount loaded active mounted Root Mount boot-efi.mount loaded active mounted /boot/efi systemd-journald.service loaded active running Journal Service systemd-logind.service loaded active running Login Service user@1000.service loaded failed failed User Manager for UID 1000 ... systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary Directories LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 123 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'. The header and the last unit of a given type are underlined if the terminal supports that. A colored dot is shown next to services which were masked, not found, or otherwise failed. The LOAD column shows the load state, one of loaded, not-found, bad-setting, error, masked. The ACTIVE columns shows the general unit state, one of active, reloading, inactive, failed, activating, deactivating. The SUB column shows the unit-type-specific detailed state of the unit, possible values vary by unit type. The list of possible LOAD, ACTIVE, and SUB states is not constant and new systemd releases may both add and remove values. systemctl --state=help command may be used to display the current set of possible values. This is the default command. list-automounts [PATTERN...] List automount units currently in memory, ordered by mount path. If one or more PATTERNs are specified, only automount units matching one of them are shown. Produces output similar to WHAT WHERE MOUNTED IDLE TIMEOUT UNIT /dev/sdb1 /mnt/test no 120s mnt-test.automount binfmt_misc /proc/sys/fs/binfmt_misc yes 0 proc-sys-fs-binfmt_misc.automount 2 automounts listed. 
Also see --show-types, --all, and --state=. Added in version 252.

list-paths [PATTERN...]
List path units currently in memory, ordered by path. If one or more PATTERNs are specified, only path units matching one of them are shown. Produces output similar to

  PATH                           CONDITION         UNIT                                ACTIVATES
  /run/systemd/ask-password      DirectoryNotEmpty systemd-ask-password-plymouth.path  systemd-ask-password-plymouth.service
  /run/systemd/ask-password      DirectoryNotEmpty systemd-ask-password-wall.path      systemd-ask-password-wall.service
  /var/cache/cups/org.cups.cupsd PathExists        cups.path                           cups.service

  3 paths listed.

Also see --show-types, --all, and --state=. Added in version 254.

list-sockets [PATTERN...]
List socket units currently in memory, ordered by listening address. If one or more PATTERNs are specified, only socket units matching one of them are shown. Produces output similar to

  LISTEN           UNIT                        ACTIVATES
  /dev/initctl     systemd-initctl.socket      systemd-initctl.service
  ...
  [::]:22          sshd.socket                 sshd.service
  kobject-uevent 1 systemd-udevd-kernel.socket systemd-udevd.service

  5 sockets listed.

Note: because the addresses might contain spaces, this output is not suitable for programmatic consumption. Also see --show-types, --all, and --state=. Added in version 202.

list-timers [PATTERN...]
List timer units currently in memory, ordered by the time they elapse next. If one or more PATTERNs are specified, only units matching one of them are shown.
Produces output similar to

  NEXT                        LEFT          LAST                        PASSED     UNIT                         ACTIVATES
  -                           -             Thu 2017-02-23 13:40:29 EST 3 days ago ureadahead-stop.timer        ureadahead-stop.service
  Sun 2017-02-26 18:55:42 EST 1min 14s left Thu 2017-02-23 13:54:44 EST 3 days ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
  Sun 2017-02-26 20:37:16 EST 1h 42min left Sun 2017-02-26 11:56:36 EST 6h ago     apt-daily.timer              apt-daily.service
  Sun 2017-02-26 20:57:49 EST 2h 3min left  Sun 2017-02-26 11:56:36 EST 6h ago     snapd.refresh.timer          snapd.refresh.service

NEXT shows the next time the timer will run. LEFT shows how long till the next time the timer runs. LAST shows the last time the timer ran. PASSED shows how long has passed since the timer last ran. UNIT shows the name of the timer. ACTIVATES shows the name of the service the timer activates when it runs. Also see --all and --state=. Added in version 209.

is-active PATTERN...
Check whether any of the specified units are active (i.e. running). Returns an exit code 0 if at least one is active, or non-zero otherwise. Unless --quiet is specified, this will also print the current unit state to standard output.

is-failed [PATTERN...]
Check whether any of the specified units is in the "failed" state. If no unit is specified, check whether there are any failed units, which corresponds to the "degraded" state returned by is-system-running. Returns an exit code 0 if at least one has failed, non-zero otherwise. Unless --quiet is specified, this will also print the current unit or system state to standard output. Added in version 197.

status [PATTERN...|PID...]
Show runtime status information about the whole system or about one or more units followed by most recent log data from the journal. If no positional arguments are specified, and no unit filter is given with --type=, --state=, or --failed, shows the status of the whole system. If combined with --all, follows that with the status of all units.
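The exit-code contract of is-active and is-failed makes both commands convenient in scripts. A sketch of the pattern (the unit name is hypothetical, and a stand-in function replaces the systemctl call so the sketch runs without systemd):

```shell
# Exit-code branching as used with `systemctl is-active --quiet UNIT`.
unit_is_active() {
  # On a real system this body would be: systemctl is-active --quiet "$1"
  # `true` (exit 0) stands in for an active unit here.
  true
}
if unit_is_active nginx.service; then
  echo "nginx.service: active"
else
  echo "nginx.service: not active"
fi
```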
If positional arguments are specified, each positional argument is treated as either a unit name to show, or a glob pattern to show units whose names match that pattern, or a PID to show the unit containing that PID. When --type=, --state=, or --failed are used, units are additionally filtered by the TYPE and ACTIVE state. This function is intended to generate human-readable output. If you are looking for computer-parsable output, use show instead. By default, this function only shows 10 lines of output and ellipsizes lines to fit in the terminal window. This can be changed with --lines and --full, see above. In addition, journalctl --unit=NAME or journalctl --user-unit=NAME use a similar filter for messages and might be more convenient. Note that this operation only displays runtime status, i.e. information about the current invocation of the unit (if it is running) or the most recent invocation (if it is not running anymore, and has not been released from memory). Information about earlier invocations, invocations from previous system boots, or prior invocations that have already been released from memory may be retrieved via journalctl --unit=. systemd implicitly loads units as necessary, so just running the status will attempt to load a file. The command is thus not useful for determining if something was already loaded or not. The units may possibly also be quickly unloaded after the operation is completed if there's no reason to keep it in memory thereafter. Example 1. 
Example output from systemctl status

  $ systemctl status bluetooth
  ● bluetooth.service - Bluetooth service
       Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; preset: enabled)
       Active: active (running) since Wed 2017-01-04 13:54:04 EST; 1 weeks 0 days ago
         Docs: man:bluetoothd(8)
     Main PID: 930 (bluetoothd)
       Status: "Running"
        Tasks: 1
       Memory: 648.0K
          CPU: 435ms
       CGroup: /system.slice/bluetooth.service
               └─930 /usr/lib/bluetooth/bluetoothd

  Jan 12 10:46:45 example.com bluetoothd[8900]: Not enough free handles to register service
  Jan 12 10:46:45 example.com bluetoothd[8900]: Current Time Service could not be registered
  Jan 12 10:46:45 example.com bluetoothd[8900]: gatt-time-server: Input/output error (5)

The dot ("●") uses color on supported terminals to summarize the unit state at a glance. Along with its color, its shape varies according to its state: "inactive" or "maintenance" is a white circle ("○"), "active" is a green dot ("●"), "deactivating" is a white dot, "failed" or "error" is a red cross ("×"), and "reloading" is a green clockwise circle arrow ("↻"). The "Loaded:" line in the output will show "loaded" if the unit has been loaded into memory. Other possible values for "Loaded:" include: "error" if there was a problem loading it, "not-found" if no unit file was found for this unit, "bad-setting" if an essential unit file setting could not be parsed and "masked" if the unit file has been masked. Along with showing the path to the unit file, this line will also show the enablement state. Enabled units are included in the dependency network between units, and thus are started at boot or via some other form of activation. See the full table of possible enablement states including the definition of "masked" in the documentation for the is-enabled command. The "Active:" line shows the active state. The value is usually "active" or "inactive". Active could mean started, bound, plugged in, etc., depending on the unit type.
The unit could also be in the process of changing states, reporting a state of "activating" or "deactivating". A special "failed" state is entered when the service failed in some way, such as a crash, exiting with an error code, or timing out. If the failed state is entered, the cause will be logged for later reference.

show [PATTERN...|JOB...]
Show properties of one or more units, jobs, or the manager itself. If no argument is specified, properties of the manager will be shown. If a unit name is specified, properties of the unit are shown, and if a job ID is specified, properties of the job are shown. By default, empty properties are suppressed. Use --all to show those too. To select specific properties to show, use --property=. This command is intended to be used whenever computer-parsable output is required. Use status if you are looking for formatted human-readable output. Many properties shown by systemctl show map directly to configuration settings of the system and service manager and its unit files. Note that the properties shown by the command are generally more low-level, normalized versions of the original configuration settings and expose runtime state in addition to configuration. For example, properties shown for service units include the service's current main process identifier as "MainPID" (which is runtime state), and time settings are always exposed as properties ending in the "...USec" suffix even if a matching configuration option ends in "...Sec", because microseconds is the normalized time unit used internally by the system and service manager. For details about many of these properties, see the documentation of the D-Bus interface backing these properties, org.freedesktop.systemd1(5).

cat PATTERN...
Show backing files of one or more units. Prints the "fragment" and "drop-ins" (source files) of units. Each file is preceded by a comment which includes the file name.
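The computer-parsable output of show described above is one PROPERTY=VALUE pair per line. A sketch of consuming that format (the sample values are hypothetical; on a real system they would come from systemctl show -p Id,ActiveState,MainPID bluetooth.service):

```shell
# Parse key=value output as emitted by `systemctl show`.
sample_show='Id=bluetooth.service
ActiveState=active
MainPID=930'
state=$(printf '%s\n' "$sample_show" | sed -n 's/^ActiveState=//p')
echo "$state"
# → active
```

Selecting a single property with -p and --value avoids the parsing step entirely on recent systemd versions.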
Note that this shows the contents of the backing files on disk, which may not match the system manager's understanding of these units if any unit files were updated on disk and the daemon-reload command wasn't issued since. Added in version 209.

help PATTERN...|PID...
Show manual pages for one or more units, if available. If a PID is given, the manual pages for the unit the process belongs to are shown. Added in version 185.

list-dependencies [UNIT...]
Shows units required and wanted by the specified units. This recursively lists units following the Requires=, Requisite=, Wants=, ConsistsOf=, BindsTo=, and Upholds= dependencies. If no units are specified, default.target is implied. The units that are shown are additionally filtered by --type= and --state= if those options are specified. Note that we won't be able to use a tree structure in this case, so --plain is implied. By default, only target units are recursively expanded. When --all is passed, all other units are recursively expanded as well. Options --reverse, --after, --before may be used to change what types of dependencies are shown. Note that this command only lists units currently loaded into memory by the service manager. In particular, this command is not suitable to get a comprehensive list of all reverse dependencies on a specific unit, as it won't list the dependencies declared by units currently not loaded. Added in version 198.

start PATTERN...
Start (activate) one or more units specified on the command line. Note that unit glob patterns expand to names of units currently in memory. Units which are not active and are not in a failed state usually are not in memory, and will not be matched by any pattern. In addition, in case of instantiated units, systemd is often unaware of the instance name until the instance has been started. Therefore, using glob patterns with start has limited usefulness. Also, secondary alias names of units are not considered.
Option --all may be used to also operate on inactive units which are referenced by other loaded units. Note that this is not the same as operating on "all" possible units, because as the previous paragraph describes, such a list is ill-defined. Nevertheless, systemctl start --all GLOB may be useful if all the units that should match the pattern are pulled in by some target which is known to be loaded. stop PATTERN... Stop (deactivate) one or more units specified on the command line. This command will fail if the unit does not exist or if stopping of the unit is prohibited (see RefuseManualStop= in systemd.unit(5)). It will not fail if any of the commands configured to stop the unit (ExecStop=, etc.) fail, because the manager will still forcibly terminate the unit. If a unit that gets stopped can still be triggered by other units, a warning containing the names of the triggering units is shown. --no-warn can be used to suppress the warning. reload PATTERN... Asks all units listed on the command line to reload their configuration. Note that this will reload the service-specific configuration, not the unit configuration file of systemd. If you want systemd to reload the configuration file of a unit, use the daemon-reload command. In other words: for the example case of Apache, this will reload Apache's httpd.conf in the web server, not the apache.service systemd unit file. This command should not be confused with the daemon-reload command. restart PATTERN... Stop and then start one or more units specified on the command line. If the units are not running yet, they will be started. Note that restarting a unit with this command does not necessarily flush out all of the unit's resources before it is started again. For example, the per-service file descriptor storage facility (see FileDescriptorStoreMax= in systemd.service(5)) will remain intact as long as the unit has a job pending, and is only cleared when the unit is fully stopped and no jobs are pending anymore. 
If it is intended that the file descriptor store is flushed out, too, during a restart operation an explicit systemctl stop command followed by systemctl start should be issued. try-restart PATTERN... Stop and then start one or more units specified on the command line if the units are running. This does nothing if units are not running. reload-or-restart PATTERN... Reload one or more units if they support it. If not, stop and then start them instead. If the units are not running yet, they will be started. try-reload-or-restart PATTERN... Reload one or more units if they support it. If not, stop and then start them instead. This does nothing if the units are not running. Added in version 229. isolate UNIT Start the unit specified on the command line and its dependencies and stop all others, unless they have IgnoreOnIsolate=yes (see systemd.unit(5)). If a unit name with no extension is given, an extension of ".target" will be assumed. This command is dangerous, since it will immediately stop processes that are not enabled in the new target, possibly including the graphical environment or terminal you are currently using. Note that this operation is allowed only on units where AllowIsolate= is enabled. See systemd.unit(5) for details. kill PATTERN... Send a UNIX process signal to one or more processes of the unit. Use --kill-whom= to select which process to send the signal to. Use --signal= to select the signal to send. Combine with --kill-value= to enqueue a POSIX Realtime Signal with an associated value. clean PATTERN... Remove the configuration, state, cache, logs or runtime data of the specified units. Use --what= to select which kind of resource to remove. For service units this may be used to remove the directories configured with ConfigurationDirectory=, StateDirectory=, CacheDirectory=, LogsDirectory= and RuntimeDirectory=, see systemd.exec(5) for details. 
It may also be used to clear the file descriptor store as enabled via FileDescriptorStoreMax=, see systemd.service(5) for details. For timer units this may be used to clear out the persistent timestamp data if Persistent= is used and --what=state is selected, see systemd.timer(5). This command only applies to units that use either of these settings. If --what= is not specified, the cache and runtime data as well as the file descriptor store are removed (as these three types of resources are generally redundant and reproducible on the next invocation of the unit). Note that the specified units must be stopped to invoke this operation. Added in version 243.

freeze PATTERN...
Freeze one or more units specified on the command line using the cgroup freezer. Freezing the unit will cause all processes contained within the cgroup corresponding to the unit to be suspended. Being suspended means that the unit's processes won't be scheduled to run on CPU until thawed. Note that this command is supported only on systems that use the unified cgroup hierarchy. The unit is automatically thawed just before we execute a job against the unit, e.g. before the unit is stopped. Added in version 246.

thaw PATTERN...
Thaw (unfreeze) one or more units specified on the command line. This is the inverse operation to the freeze command and resumes the execution of processes in the unit's cgroup. Added in version 246.

set-property UNIT PROPERTY=VALUE...
Set the specified unit properties at runtime where this is supported. This allows changing configuration parameter properties such as resource control settings at runtime. Not all properties may be changed at runtime, but many resource control settings (primarily those in systemd.resource-control(5)) may. The changes are applied immediately, and stored on disk for future boots, unless --runtime is passed, in which case the settings only apply until the next reboot. The syntax of the property assignment follows closely the syntax of assignments in unit files.
Example: systemctl set-property foobar.service CPUWeight=200

If the specified unit appears to be inactive, the changes will only be stored on disk as described previously, hence they will become effective when the unit is started. Note that this command allows changing multiple properties at the same time, which is preferable over setting them individually.

Example: systemctl set-property foobar.service CPUWeight=200 MemoryMax=2G IPAccounting=yes

Like with unit file configuration settings, assigning an empty setting usually resets a property to its defaults.

Example: systemctl set-property avahi-daemon.service IPAddressDeny=

Added in version 206.

bind UNIT PATH [PATH]
Bind-mounts a file or directory from the host into the specified unit's mount namespace. The first path argument is the source file or directory on the host, the second path argument is the destination file or directory in the unit's mount namespace. When the latter is omitted, the destination path in the unit's mount namespace is the same as the source path on the host. When combined with the --read-only switch, a read-only bind mount is created. When combined with the --mkdir switch, the destination path is first created before the mount is applied. Note that this option is currently only supported for units that run within a mount namespace (e.g.: with RootImage=, PrivateMounts=, etc.). This command supports bind-mounting directories, regular files, device nodes, AF_UNIX socket nodes, as well as FIFOs. The bind mount is ephemeral, and it is undone as soon as the current unit process exits. Note that the namespace mentioned here, where the bind mount will be added to, is the one where the main service process runs. Other processes (those executed by ExecReload=, ExecStartPre=, etc.) run in distinct namespaces. If supported by the kernel, any prior mount on the selected target will be replaced by the new mount.
If not supported, any prior mount will be over-mounted, but remain pinned and inaccessible. Added in version 248.

mount-image UNIT IMAGE [PATH [PARTITION_NAME:MOUNT_OPTIONS]]
Mounts an image from the host into the specified unit's mount namespace. The first path argument is the source image on the host, the second path argument is the destination directory in the unit's mount namespace (i.e. inside RootImage=/RootDirectory=). The following argument, if any, is interpreted as a colon-separated tuple of partition name and comma-separated list of mount options for that partition. The format is the same as the service MountImages= setting. When combined with the --read-only switch, a read-only mount is created. When combined with the --mkdir switch, the destination path is first created before the mount is applied. Note that this option is currently only supported for units that run within a mount namespace (i.e. with RootImage=, PrivateMounts=, etc.). Note that the namespace mentioned here, where the image mount will be added to, is the one where the main service process runs. Other processes (those executed by ExecReload=, ExecStartPre=, etc.) run in distinct namespaces. If supported by the kernel, any prior mount on the selected target will be replaced by the new mount. If not supported, any prior mount will be over-mounted, but remain pinned and inaccessible.

Example: systemctl mount-image foo.service /tmp/img.raw /var/lib/image root:ro,nosuid

Example: systemctl mount-image --mkdir bar.service /tmp/img.raw /var/lib/baz/img

Added in version 248.

service-log-level SERVICE [LEVEL]
If the LEVEL argument is not given, print the current log level as reported by service SERVICE. If the optional argument LEVEL is provided, then change the current log level of the service to LEVEL. The log level should be a typical syslog log level, i.e.
a value in the range 0...7 or one of the strings emerg, alert, crit, err, warning, notice, info, debug; see syslog(3) for details. The service must have the appropriate BusName=destination property and also implement the generic org.freedesktop.LogControl1(5) interface. (systemctl will use the generic D-Bus protocol to access the org.freedesktop.LogControl1.LogLevel interface for the D-Bus name destination.) Added in version 247. service-log-target SERVICE [TARGET] If the TARGET argument is not given, print the current log target as reported by service SERVICE. If the optional argument TARGET is provided, then change the current log target of the service to TARGET. The log target should be one of the strings console (for log output to the service's standard error stream), kmsg (for log output to the kernel log buffer), journal (for log output to systemd-journald.service(8) using the native journal protocol), syslog (for log output to the classic syslog socket /dev/log), null (for no log output whatsoever) or auto (for an automatically determined choice, typically equivalent to console if the service is invoked interactively, and journal or syslog otherwise). For most services, only a small subset of log targets make sense. In particular, most "normal" services should only implement console, journal, and null. Anything else is only appropriate for low-level services that are active in very early boot before proper logging is established. The service must have the appropriate BusName=destination property and also implement the generic org.freedesktop.LogControl1(5) interface. (systemctl will use the generic D-Bus protocol to access the org.freedesktop.LogControl1.LogLevel interface for the D-Bus name destination.) Added in version 247. reset-failed [PATTERN...] Reset the "failed" state of the specified units, or if no unit name is passed, reset the state of all units. When a unit fails in some way (i.e. 
process exiting with non-zero error code, terminating abnormally or timing out), it will automatically enter the "failed" state and its exit code and status is recorded for introspection by the administrator until the service is stopped/re-started or reset with this command. In addition to resetting the "failed" state of a unit it also resets various other per-unit properties: the start rate limit counter of all unit types is reset to zero, as is the restart counter of service units. Thus, if a unit's start limit (as configured with StartLimitIntervalSec=/StartLimitBurst=) is hit and the unit refuses to be started again, use this command to make it startable again. whoami [PID...] Returns the units the processes referenced by the given PIDs belong to (one per line). If no PID is specified returns the unit the systemctl command is invoked in. Added in version 254. Unit File Commands list-unit-files [PATTERN...] List unit files installed on the system, in combination with their enablement state (as reported by is-enabled). If one or more PATTERNs are specified, only unit files whose name matches one of them are shown (patterns matching unit file system paths are not supported). Unlike list-units this command will list template units in addition to explicitly instantiated units. Added in version 233. enable UNIT..., enable PATH... Enable one or more units or unit instances. This will create a set of symlinks, as encoded in the [Install] sections of the indicated unit files. After the symlinks have been created, the system manager configuration is reloaded (in a way equivalent to daemon-reload), in order to ensure the changes are taken into account immediately. Note that this does not have the effect of also starting any of the units being enabled. If this is desired, combine this command with the --now switch, or invoke start with appropriate arguments later. Note that in case of unit instance enablement (i.e. 
enablement of units of the form foo@bar.service), symlinks named the same as instances are created in the unit configuration directory, however they point to the single template unit file they are instantiated from. This command expects either valid unit names (in which case various unit file directories are automatically searched for unit files with appropriate names), or absolute paths to unit files (in which case these files are read directly). If a specified unit file is located outside of the usual unit file directories, an additional symlink is created, linking it into the unit configuration path, thus ensuring it is found when requested by commands such as start. The file system where the linked unit files are located must be accessible when systemd is started (e.g. anything underneath /home/ or /var/ is not allowed, unless those directories are located on the root file system). This command will print the file system operations executed. This output may be suppressed by passing --quiet. Note that this operation creates only the symlinks suggested in the [Install] section of the unit files. While this command is the recommended way to manipulate the unit configuration directory, the administrator is free to make additional changes manually by placing or removing symlinks below this directory. This is particularly useful to create configurations that deviate from the suggested default installation. In this case, the administrator must make sure to invoke daemon-reload manually as necessary, in order to ensure the changes are taken into account. When using this operation on units without install information, a warning about it is shown. --no-warn can be used to suppress the warning. Enabling units should not be confused with starting (activating) units, as done by the start command. Enabling and starting units is orthogonal: units may be enabled without being started and started without being enabled. 
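Mechanically, enabling a unit amounts to creating the symlinks suggested by its [Install] section, e.g. a link from a .wants/ directory to the unit file. A sketch of that file-system operation, simulated in a temporary directory instead of /etc/systemd/system (unit name and paths hypothetical):

```shell
# What `systemctl enable foo.service` does for WantedBy=multi-user.target:
# create multi-user.target.wants/foo.service -> the unit file.
root=$(mktemp -d)                       # stand-in for /etc/systemd/system
mkdir -p "$root/multi-user.target.wants"
ln -s /usr/lib/systemd/system/foo.service "$root/multi-user.target.wants/foo.service"
linked=$(readlink "$root/multi-user.target.wants/foo.service")
echo "$linked"
# → /usr/lib/systemd/system/foo.service
rm -rf "$root"
```

Starting the unit, by contrast, involves no file-system change at all, which is why the two operations are orthogonal.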
Enabling simply hooks the unit into various suggested places (for example, so that the unit is automatically started on boot or when a particular kind of hardware is plugged in). Starting actually spawns the daemon process (in case of service units), or binds the socket (in case of socket units), and so on. Depending on whether --system, --user, --runtime, or --global is specified, this enables the unit for the system, for the calling user only, for only this boot of the system, or for all future logins of all users. Note that in the last case, no systemd daemon configuration is reloaded. Using enable on masked units is not supported and results in an error. disable UNIT... Disables one or more units. This removes all symlinks to the unit files backing the specified units from the unit configuration directory, and hence undoes any changes made by enable or link. Note that this removes all symlinks to matching unit files, including manually created symlinks, and not just those actually created by enable or link. Note that while disable undoes the effect of enable, the two commands are otherwise not symmetric, as disable may remove more symlinks than a prior enable invocation of the same unit created. This command expects valid unit names only, it does not accept paths to unit files. In addition to the units specified as arguments, all units are disabled that are listed in the Also= setting contained in the [Install] section of any of the unit files being operated on. This command implicitly reloads the system manager configuration after completing the operation. Note that this command does not implicitly stop the units that are being disabled. If this is desired, either combine this command with the --now switch, or invoke the stop command with appropriate arguments later. This command will print information about the file system operations (symlink removals) executed. This output may be suppressed by passing --quiet. 
If a unit gets disabled but its triggering units are still active, a warning containing the names of the triggering units is shown. --no-warn can be used to suppress the warning. When this command is used with --user, the units being operated on might still be enabled in global scope, and thus get started automatically even after a successful disablement in user scope. In this case, a warning about it is shown, which can be suppressed using --no-warn. This command honors --system, --user, --runtime, --global and --no-warn in a similar way as enable. Added in version 238.

reenable UNIT...
Reenable one or more units, as specified on the command line. This is a combination of disable and enable and is useful to reset the symlinks a unit file is enabled with to the defaults configured in its [Install] section. This command expects a unit name only, it does not accept paths to unit files. Added in version 238.

preset UNIT...
Reset the enable/disable status of one or more unit files, as specified on the command line, to the defaults configured in the preset policy files. This has the same effect as disable or enable, depending on how the unit is listed in the preset files. Use --preset-mode= to control whether units shall be enabled and disabled, or only enabled, or only disabled. If the unit carries no install information, it will be silently ignored by this command. UNIT must be the real unit name, any alias names are ignored silently. For more information on the preset policy format, see systemd.preset(5). Added in version 238.

preset-all
Resets all installed unit files to the defaults configured in the preset policy file (see above). Use --preset-mode= to control whether units shall be enabled and disabled, or only enabled, or only disabled. Added in version 215.

is-enabled UNIT...
Checks whether any of the specified unit files are enabled (as with enable). Returns an exit code of 0 if at least one is enabled, non-zero otherwise.
Prints the current enable status (see table). To suppress this output, use --quiet. To show installation targets, use --full.

Table 1. is-enabled output

  Name                Description                                        Exit Code
  "enabled"           Enabled via .wants/, .requires/ or Alias=          0
  "enabled-runtime"   symlinks (permanently in /etc/systemd/system/,
                      or transiently in /run/systemd/system/).
  "linked"            Made available through one or more symlinks to     > 0
  "linked-runtime"    the unit file (permanently in /etc/systemd/system/
                      or transiently in /run/systemd/system/), even
                      though the unit file might reside outside of the
                      unit file search path.
  "alias"             The name is an alias (symlink to another unit      0
                      file).
  "masked"            Completely disabled, so that any start operation   > 0
  "masked-runtime"    on it fails (permanently in /etc/systemd/system/
                      or transiently in /run/systemd/system/).
  "static"            The unit file is not enabled, and has no           0
                      provisions for enabling in the [Install] unit
                      file section.
  "indirect"          The unit file itself is not enabled, but it has    0
                      a non-empty Also= setting in the [Install] unit
                      file section, listing other unit files that
                      might be enabled, or it has an alias under a
                      different name through a symlink that is not
                      specified in Also=. For template unit files, an
                      instance different than the one specified in
                      DefaultInstance= is enabled.
  "disabled"          The unit file is not enabled, but contains an      > 0
                      [Install] section with installation instructions.
  "generated"         The unit file was generated dynamically via a      0
                      generator tool. See systemd.generator(7).
                      Generated unit files may not be enabled, they
                      are enabled implicitly by their generator.
  "transient"         The unit file has been created dynamically with    0
                      the runtime API. Transient units may not be
                      enabled.
  "bad"               The unit file is invalid or another error          > 0
                      occurred. Note that is-enabled will not actually
                      return this state, but print an error message
                      instead. However the unit file listing printed
                      by list-unit-files might show it.
  "not-found"         The unit file doesn't exist.                       4
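The exit-code groups in the table above can be mirrored in a script. A sketch mapping the printed state string to its exit-code group (the table only guarantees "> 0" for the failing group; 1 is used here as a representative non-zero value, which is an assumption, not a documented constant):

```shell
# Map an is-enabled state string to its exit-code group from Table 1.
exit_group() {
  case "$1" in
    enabled|enabled-runtime|alias|static|indirect|generated|transient)
      echo 0 ;;                       # "enabled-ish" states: exit 0
    not-found)
      echo 4 ;;                       # unit file doesn't exist: exit 4
    *)
      echo 1 ;;                       # linked*, masked*, disabled, bad: > 0
  esac
}
exit_group enabled     # → 0
exit_group masked      # → 1
exit_group not-found   # → 4
```

In practice `systemctl is-enabled --quiet UNIT` in an if-condition is simpler when only the 0/non-zero distinction matters.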
Added in version 238.

mask UNIT...
    Mask one or more units, as specified on the command line. This will link these unit files to /dev/null, making it impossible to start them. This is a stronger version of disable, since it prohibits all kinds of activation of the unit, including enablement and manual activation. Use this option with care. This honors the --runtime option to only mask temporarily until the next reboot of the system. The --now option may be used to ensure that the units are also stopped. This command expects valid unit names only, it does not accept unit file paths.

    Note that this will create a symlink under the unit's name in /etc/systemd/system/ (in case --runtime is not specified) or /run/systemd/system/ (in case --runtime is specified). If a matching unit file already exists under these directories this operation will hence fail. This means that the operation is primarily useful to mask units shipped by the vendor (as those are shipped in /usr/lib/systemd/system/ and not the aforementioned two directories), but typically doesn't work for units created locally (as those are typically placed precisely in the two aforementioned directories). Similar restrictions apply for --user mode, in which case the directories are below the user's home directory however.

    If a unit gets masked but its triggering units are still active, a warning containing the names of the triggering units is shown. --no-warn can be used to suppress the warning.

    Added in version 238.

unmask UNIT...
    Unmask one or more unit files, as specified on the command line. This will undo the effect of mask. This command expects valid unit names only, it does not accept unit file paths.

    Added in version 238.

link PATH...
    Link a unit file that is not in the unit file search path into the unit file search path. This command expects an absolute path to a unit file. The effect of this may be undone with disable.
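A sketch of the mask/unmask lifecycle described above (foo.service is a placeholder; --now and --runtime are optional):

```shell
# Mask a vendor unit so it cannot be started, and stop it now.
systemctl mask --now foo.service

# A masked unit reports "masked" and refuses all activation.
systemctl is-enabled foo.service

# Undo the mask; the symlink to /dev/null is removed.
systemctl unmask foo.service
```

Adding --runtime to the mask command places the symlink below /run/systemd/system/ instead, so the mask disappears on the next reboot.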
The effect of this command is that a unit file is made available for commands such as start, even though it is not installed directly in the unit search path. The file system where the linked unit files are located must be accessible when systemd is started (e.g. anything underneath /home/ or /var/ is not allowed, unless those directories are located on the root file system).

Added in version 233.

revert UNIT...
    Revert one or more unit files to their vendor versions. This command removes drop-in configuration files that modify the specified units, as well as any user-configured unit file that overrides a matching vendor supplied unit file. Specifically, for a unit "foo.service" the matching directories "foo.service.d/" with all their contained files are removed, both below the persistent and runtime configuration directories (i.e. below /etc/systemd/system and /run/systemd/system); if the unit file has a vendor-supplied version (i.e. a unit file located below /usr/) any matching persistent or runtime unit file that overrides it is removed, too. Note that if a unit file has no vendor-supplied version (i.e. is only defined below /etc/systemd/system or /run/systemd/system, but not in a unit file stored below /usr/), then it is not removed. Also, if a unit is masked, it is unmasked.

    Effectively, this command may be used to undo all changes made with systemctl edit, systemctl set-property and systemctl mask and puts the original unit file with its settings back in effect.

    Added in version 230.

add-wants TARGET UNIT..., add-requires TARGET UNIT...
    Adds "Wants=" or "Requires=" dependencies, respectively, to the specified TARGET for one or more units. This command honors --system, --user, --runtime and --global in a way similar to enable.

    Added in version 217.

edit UNIT...
    Edit or replace a drop-in snippet or the main unit file, to extend or override the definition of the specified unit.
Depending on whether --system (the default), --user, or --global is specified, this command will operate on the system unit files, unit files for the calling user, or the unit files shared between all users.

The editor (see the "Environment" section below) is invoked on temporary files which will be written to the real location if the editor exits successfully. After the editing is finished, configuration is reloaded, equivalent to systemctl daemon-reload --system or systemctl daemon-reload --user. For edit --global, the reload is not performed and the edits will take effect only for subsequent logins (or after a reload is requested in a different way).

If --full is specified, a replacement for the main unit file will be created or edited. Otherwise, a drop-in file will be created or edited. If --drop-in= is specified, the given drop-in file name will be used instead of the default override.conf.

The unit must exist, i.e. its main unit file must be present. If --force is specified, this requirement is ignored and a new unit may be created (with --full), or a drop-in for a nonexistent unit may be created.

If --runtime is specified, the changes will be made temporarily in /run/ and they will be lost on the next reboot.

If --stdin is specified, the new contents will be read from standard input. In this mode, the old contents of the file are discarded.

If the temporary file is empty upon exit, the modification of the related unit is canceled. Note that this command cannot be used to remotely edit units and that you cannot temporarily edit units which are in /etc/, since they take precedence over /run/.

Added in version 218.

get-default
    Return the default target to boot into. This returns the target unit name default.target is aliased (symlinked) to.

    Added in version 205.

set-default TARGET
    Set the default target to boot into. This sets (symlinks) the default.target alias to the given target unit.

    Added in version 205.

Machine Commands

list-machines [PATTERN...]
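As a sketch of non-interactive drop-in editing with --stdin (requires a systemctl version that supports --stdin; the unit name foo.service, the drop-in name, and the override values are placeholders):

```shell
# Create a named drop-in for a unit non-interactively.
# The drop-in lands in /etc/systemd/system/foo.service.d/limits.conf
# and configuration is reloaded automatically afterwards.
systemctl edit --drop-in=limits.conf --stdin foo.service <<'EOF'
[Service]
MemoryMax=1G
EOF

# Inspect and change the default boot target.
systemctl get-default
systemctl set-default multi-user.target
```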
List the host and all running local containers with their state. If one or more PATTERNs are specified, only containers matching one of them are shown.

Added in version 212.

Job Commands

list-jobs [PATTERN...]
    List jobs that are in progress. If one or more PATTERNs are specified, only jobs for units matching one of them are shown. When combined with --after or --before the list is augmented with information on which other job each job is waiting for, and which other jobs are waiting for it, see above.

    Added in version 233.

cancel [JOB...]
    Cancel one or more jobs specified on the command line by their numeric job IDs. If no job ID is specified, cancel all pending jobs.

    Added in version 233.

Environment Commands

systemd supports an environment block that is passed to processes the manager spawns. The names of the variables can contain ASCII letters, digits, and the underscore character. Variable names cannot be empty or start with a digit. In variable values, most characters are allowed, but the whole sequence must be valid UTF-8. (Note that control characters like newline (NL), tab (TAB), or the escape character (ESC), are valid ASCII and thus valid UTF-8.) The total length of the environment block is limited to the _SC_ARG_MAX value defined by sysconf(3).

show-environment
    Dump the systemd manager environment block. This is the environment block that is passed to all processes the manager spawns. The environment block will be dumped in straightforward form suitable for sourcing into most shells. If no special characters or whitespace is present in the variable values, no escaping is performed, and the assignments have the form "VARIABLE=value". If whitespace or characters which have special meaning to the shell are present, dollar-single-quote escaping is used, and assignments have the form "VARIABLE=$'value'". This syntax is known to be supported by bash(1), zsh(1), ksh(1), and busybox(1)'s ash(1), but not dash(1) or fish(1).

set-environment VARIABLE=VALUE...
Set one or more systemd manager environment variables, as specified on the command line. This command will fail if variable names and values do not conform to the rules listed above.

Added in version 233.

unset-environment VARIABLE...
    Unset one or more systemd manager environment variables. If only a variable name is specified, it will be removed regardless of its value. If a variable and a value are specified, the variable is only removed if it has the specified value.

    Added in version 233.

import-environment VARIABLE...
    Import one or more environment variables set on the client into the systemd manager environment block. If a list of environment variable names is passed, client-side values are then imported into the manager's environment block. If any names are not valid environment variable names or have invalid values according to the rules described above, an error is raised. If no arguments are passed, the entire environment block inherited by the systemctl process is imported. In this mode, any inherited invalid environment variables are quietly ignored.

    Importing of the full inherited environment block (calling this command without any arguments) is deprecated. A shell will set dozens of variables which only make sense locally and are only meant for processes which are descendants of the shell. Such variables in the global environment block are confusing to other processes.

    Added in version 209.

Manager State Commands

daemon-reload
    Reload the systemd manager configuration. This will rerun all generators (see systemd.generator(7)), reload all unit files, and recreate the entire dependency tree. While the daemon is being reloaded, all sockets on which systemd listens on behalf of user configuration will stay accessible. This command should not be confused with the reload command.

daemon-reexec
    Reexecute the systemd manager. This will serialize the manager state, reexecute the process and deserialize the state again.
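A sketch of the environment commands in combination (the variable names and values are illustrative only):

```shell
# Set a manager environment variable and verify it took effect.
systemctl set-environment HTTP_PROXY=http://proxy.example:3128
systemctl show-environment | grep '^HTTP_PROXY='

# Import a specific variable from the calling shell's environment
# (importing the full environment without arguments is deprecated).
export DEPLOY_COLOR=blue
systemctl import-environment DEPLOY_COLOR

# Remove it again; with NAME=VALUE form the variable is only
# removed if it currently has that exact value.
systemctl unset-environment DEPLOY_COLOR
```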
This command is of little use except for debugging and package upgrades. Sometimes, it might be helpful as a heavy-weight daemon-reload. While the daemon is being reexecuted, all sockets on which systemd is listening on behalf of user configuration will stay accessible.

log-level [LEVEL]
    If no argument is given, print the current log level of the manager. If an optional argument LEVEL is provided, then the command changes the current log level of the manager to LEVEL (accepts the same values as --log-level= described in systemd(1)).

    Added in version 244.

log-target [TARGET]
    If no argument is given, print the current log target of the manager. If an optional argument TARGET is provided, then the command changes the current log target of the manager to TARGET (accepts the same values as --log-target=, described in systemd(1)).

    Added in version 244.

service-watchdogs [yes|no]
    If no argument is given, print the current state of service runtime watchdogs of the manager. If an optional boolean argument is provided, then globally enables or disables the service runtime watchdogs (WatchdogSec=) and emergency actions (e.g. OnFailure= or StartLimitAction=); see systemd.service(5). The hardware watchdog is not affected by this setting.

    Added in version 244.

System Commands

is-system-running
    Checks whether the system is operational. This returns success (exit code 0) when the system is fully up and running, specifically not in startup, shutdown or maintenance mode, and with no failed services. Failure is returned otherwise (exit code non-zero). In addition, the current state is printed in a short string to standard output, see the table below. Use --quiet to suppress this output.

    Use --wait to wait until the boot process is completed before printing the current state and returning the appropriate error status. If --wait is in use, states initializing or starting will not be reported, instead the command will block until a later state (such as running or degraded) is reached.

Table 2.
is-system-running output

    Name           Description                                       Exit Code
    initializing   Early bootup, before basic.target is reached      > 0
                   or the maintenance state entered.
    starting       Late bootup, before the job queue becomes idle    > 0
                   for the first time, or one of the rescue
                   targets is reached.
    running        The system is fully operational.                  0
    degraded       The system is operational but one or more         > 0
                   units failed.
    maintenance    The rescue or emergency target is active.         > 0
    stopping       The manager is shutting down.                     > 0
    offline        The manager is not running. Specifically, this    > 0
                   is the operational state if an incompatible
                   program is running as system manager (PID 1).
    unknown        The operational state could not be determined,    > 0
                   due to lack of resources or another error
                   cause.

Added in version 215.

default
    Enter default mode. This is equivalent to systemctl isolate default.target. This operation is blocking by default, use --no-block to request asynchronous behavior.

rescue
    Enter rescue mode. This is equivalent to systemctl isolate rescue.target. This operation is blocking by default, use --no-block to request asynchronous behavior.

emergency
    Enter emergency mode. This is equivalent to systemctl isolate emergency.target. This operation is blocking by default, use --no-block to request asynchronous behavior.

halt
    Shut down and halt the system. This is mostly equivalent to systemctl start halt.target --job-mode=replace-irreversibly --no-block, but also prints a wall message to all users. This command is asynchronous; it will return after the halt operation is enqueued, without waiting for it to complete.

    Note that this operation will simply halt the OS kernel after shutting down, leaving the hardware powered on. Use systemctl poweroff for powering off the system (see below).

    If combined with --force, shutdown of all running services is skipped, however all processes are killed and all file systems are unmounted or mounted read-only, immediately followed by the system halt.
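The combination of --wait and the state string makes is-system-running useful in provisioning or health-check scripts; a minimal sketch:

```shell
# Block until boot finishes, then branch on the reported state.
# With --wait, transient states like "initializing" and "starting"
# are never printed; the command waits for a later state instead.
state=$(systemctl is-system-running --wait)
case "$state" in
    running)  echo "boot completed cleanly" ;;
    degraded) echo "boot completed, but some units failed:"
              systemctl --failed ;;
    *)        echo "unexpected state: $state" ;;
esac
```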
If --force is specified twice, the operation is immediately executed without terminating any processes or unmounting any file systems. This may result in data loss. Note that when --force is specified twice the halt operation is executed by systemctl itself, and the system manager is not contacted. This means the command should succeed even when the system manager has crashed.

If combined with --when=, shutdown will be scheduled after the given timestamp. And --when=cancel will cancel the shutdown.

poweroff
    Shut down and power-off the system. This is mostly equivalent to systemctl start poweroff.target --job-mode=replace-irreversibly --no-block, but also prints a wall message to all users. This command is asynchronous; it will return after the power-off operation is enqueued, without waiting for it to complete.

    This command honors --force and --when= in a similar way as halt.

reboot
    Shut down and reboot the system. This command is mostly equivalent to systemctl start reboot.target --job-mode=replace-irreversibly --no-block, but also prints a wall message to all users. This command is asynchronous; it will return after the reboot operation is enqueued, without waiting for it to complete.

    If the switch --reboot-argument= is given, it will be passed as the optional argument to the reboot(2) system call.

    Options --boot-loader-entry=, --boot-loader-menu=, and --firmware-setup can be used to select what to do after the reboot. See the descriptions of those options for details.

    This command honors --force and --when= in a similar way as halt.

    If a new kernel has been loaded via kexec --load, a kexec will be performed instead of a reboot, unless "SYSTEMCTL_SKIP_AUTO_KEXEC=1" has been set. If a new root file system has been set up on "/run/nextroot/", a soft-reboot will be performed instead of a reboot, unless "SYSTEMCTL_SKIP_AUTO_SOFT_REBOOT=1" has been set.

    Added in version 246.

kexec
    Shut down and reboot the system via kexec.
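A sketch of scheduled shutdown via --when= (the timestamp is illustrative; its format follows systemd timestamp syntax, and --when= is honored by halt, poweroff, and reboot alike):

```shell
# Schedule a reboot for a specific time; users receive a wall
# message and the command returns immediately.
systemctl reboot --when="2024-01-01 03:00:00"

# Cancel a previously scheduled shutdown/reboot.
systemctl reboot --when=cancel
```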
This command will load a kexec kernel if one wasn't loaded yet, or fail. A kernel may be loaded earlier by a separate step, this is particularly useful if a custom initrd or additional kernel command line options are desired. The --force option can be used to continue without a kexec kernel, i.e. to perform a normal reboot. The final reboot step is equivalent to systemctl start kexec.target --job-mode=replace-irreversibly --no-block.

To load a kernel, an enumeration is performed following the Boot Loader Specification[1], and the default boot entry is loaded. For this step to succeed, the system must be using UEFI and the boot loader entries must be configured appropriately. bootctl list may be used to list boot entries, see bootctl(1).

This command is asynchronous; it will return after the reboot operation is enqueued, without waiting for it to complete.

This command honors --force and --when= similarly to halt.

If a new kernel has been loaded via kexec --load, a kexec will be performed when reboot is invoked, unless "SYSTEMCTL_SKIP_AUTO_KEXEC=1" has been set.

soft-reboot
    Shut down and reboot userspace. This is equivalent to systemctl start soft-reboot.target --job-mode=replace-irreversibly --no-block. This command is asynchronous; it will return after the reboot operation is enqueued, without waiting for it to complete.

    This command honors --force and --when= in a similar way as halt. This operation only reboots userspace, leaving the kernel running. See systemd-soft-reboot.service(8) for details.

    If a new root file system has been set up on "/run/nextroot/", a soft-reboot will be performed when reboot is invoked, unless "SYSTEMCTL_SKIP_AUTO_SOFT_REBOOT=1" has been set.

    Added in version 254.

exit [EXIT_CODE]
    Ask the service manager to quit. This is only supported for user service managers (i.e. in conjunction with the --user option) or in containers and is equivalent to poweroff otherwise.
This command is asynchronous; it will return after the exit operation is enqueued, without waiting for it to complete. The service manager will exit with the specified exit code, if EXIT_CODE is passed.

Added in version 227.

switch-root [ROOT [INIT]]
    Switches to a different root directory and executes a new system manager process below it. This is intended for use in the initrd, and will transition from the initrd's system manager process (a.k.a. "init" process, PID 1) to the main system manager process which is loaded from the actual host root file system. This call takes two arguments: the directory that is to become the new root directory, and the path to the new system manager binary below it to execute as PID 1. If both are omitted or the former is an empty string it defaults to /sysroot/. If the latter is omitted or is an empty string, a systemd binary will automatically be searched for and used as service manager.

    If the system manager path is omitted, equal to the empty string or identical to the path to the systemd binary, the state of the initrd's system manager process is passed to the main system manager, which allows later introspection of the state of the services involved in the initrd boot phase.

    Added in version 209.

sleep
    Put the system to sleep, through suspend, hibernate, hybrid-sleep, or suspend-then-hibernate. The sleep operation to use is automatically selected by systemd-logind.service(8). By default, suspend-then-hibernate is used, and falls back to suspend and then hibernate if not supported. Refer to the SleepOperation= setting in logind.conf(5) for more details. This command is asynchronous, and will return after the sleep operation is successfully enqueued. It will not wait for the sleep/resume cycle to complete.

    Added in version 256.

suspend
    Suspend the system. This will trigger activation of the special target unit suspend.target. This command is asynchronous, and will return after the suspend operation is successfully enqueued.
It will not wait for the suspend/resume cycle to complete.

hibernate
    Hibernate the system. This will trigger activation of the special target unit hibernate.target. This command is asynchronous, and will return after the hibernation operation is successfully enqueued. It will not wait for the hibernate/thaw cycle to complete.

hybrid-sleep
    Hibernate and suspend the system. This will trigger activation of the special target unit hybrid-sleep.target. This command is asynchronous, and will return after the hybrid sleep operation is successfully enqueued. It will not wait for the sleep/wake-up cycle to complete.

    Added in version 196.

suspend-then-hibernate
    Suspend the system and hibernate it after the delay specified in systemd-sleep.conf. This will trigger activation of the special target unit suspend-then-hibernate.target. This command is asynchronous, and will return after the hybrid sleep operation is successfully enqueued. It will not wait for the sleep/wake-up or hibernate/thaw cycle to complete.

    Added in version 240.

Parameter Syntax

Unit commands listed above take either a single unit name (designated as UNIT), or multiple unit specifications (designated as PATTERN...). In the first case, the unit name with or without a suffix must be given. If the suffix is not specified (unit name is "abbreviated"), systemctl will append a suitable suffix, ".service" by default, and a type-specific suffix in case of commands which operate only on specific unit types. For example,

    # systemctl start sshd

and

    # systemctl start sshd.service

are equivalent, as are

    # systemctl isolate default

and

    # systemctl isolate default.target

Note that (absolute) paths to device nodes are automatically converted to device unit names, and other (absolute) paths to mount unit names.
    # systemctl status /dev/sda
    # systemctl status /home

are equivalent to:

    # systemctl status dev-sda.device
    # systemctl status home.mount

In the second case, shell-style globs will be matched against the primary names of all units currently in memory; literal unit names, with or without a suffix, will be treated as in the first case. This means that literal unit names always refer to exactly one unit, but globs may match zero units and this is not considered an error.

Glob patterns use fnmatch(3), so normal shell-style globbing rules are used, and "*", "?", "[]" may be used. See glob(7) for more details. The patterns are matched against the primary names of units currently in memory, and patterns which do not match anything are silently skipped. For example:

    # systemctl stop "sshd@*.service"

will stop all sshd@.service instances. Note that alias names of units, and units that aren't in memory are not considered for glob expansion.

For unit file commands, the specified UNIT should be the name of the unit file (possibly abbreviated, see above), or the absolute path to the unit file:

    # systemctl enable foo.service

or

    # systemctl link /path/to/foo.service

OPTIONS

The following options are understood:

-t, --type=
    The argument is a comma-separated list of unit types such as service and socket. When units are listed with list-units, list-dependencies, show, or status, only units of the specified types will be shown. By default, units of all types are shown.

    As a special case, if one of the arguments is help, a list of allowed values will be printed and the program will exit.

--state=
    The argument is a comma-separated list of unit LOAD, SUB, or ACTIVE states. When listing units with list-units, list-dependencies, show or status, show only those in the specified states. Use --state=failed or --failed to show only failed units.

    As a special case, if one of the arguments is help, a list of allowed values will be printed and the program will exit.

    Added in version 206.
-p, --property=
    When showing unit/job/manager properties with the show command, limit display to properties specified in the argument. The argument should be a comma-separated list of property names, such as "MainPID". Unless specified, all known properties are shown. If specified more than once, all properties with the specified names are shown. Shell completion is implemented for property names.

    For the manager itself, systemctl show will show all available properties, most of which are derived or closely match the options described in systemd-system.conf(5).

    Properties for units vary by unit type, so showing any unit (even a non-existent one) is a way to list properties pertaining to this type. Similarly, showing any job will list properties pertaining to all jobs. Properties for units are documented in systemd.unit(5), and the pages for individual unit types systemd.service(5), systemd.socket(5), etc.

-P
    Equivalent to --value --property=, i.e. shows the value of the property without the property name or "=". Note that using -P once will also affect all properties listed with -p/--property=.

    Added in version 246.

-a, --all
    When listing units with list-units, also show inactive units and units which are following other units. When showing unit/job/manager properties, show all properties regardless whether they are set or not.

    To list all units installed in the file system, use the list-unit-files command instead.

    When listing units with list-dependencies, recursively show dependencies of all dependent units (by default only dependencies of target units are shown).

    When used with status, show journal messages in full, even if they include unprintable characters or are very long. By default, fields with unprintable characters are abbreviated as "blob data". (Note that the pager may escape unprintable characters again.)

-r, --recursive
    When listing units, also show units of local containers.
Units of local containers will be prefixed with the container name, separated by a single colon character (":").

Added in version 212.

--reverse
    Show reverse dependencies between units with list-dependencies, i.e. follow dependencies of type WantedBy=, RequiredBy=, UpheldBy=, PartOf=, BoundBy=, instead of Wants= and similar.

    Added in version 203.

--after
    With list-dependencies, show the units that are ordered before the specified unit. In other words, recursively list units following the After= dependency. Note that any After= dependency is automatically mirrored to create a Before= dependency. Temporal dependencies may be specified explicitly, but are also created implicitly for units which are WantedBy= targets (see systemd.target(5)), and as a result of other directives (for example RequiresMountsFor=). Both explicitly and implicitly introduced dependencies are shown with list-dependencies.

    When passed to the list-jobs command, for each printed job show which other jobs are waiting for it. May be combined with --before to show both the jobs waiting for each job as well as all jobs each job is waiting for.

    Added in version 203.

--before
    With list-dependencies, show the units that are ordered after the specified unit. In other words, recursively list units following the Before= dependency.

    When passed to the list-jobs command, for each printed job show which other jobs it is waiting for. May be combined with --after to show both the jobs waiting for each job as well as all jobs each job is waiting for.

    Added in version 212.

--with-dependencies
    When used with status, cat, list-units, and list-unit-files, those commands print all specified units and the dependencies of those units.

    Options --reverse, --after, --before may be used to change what types of dependencies are shown.

    Added in version 245.
-l, --full
    Do not ellipsize unit names, process tree entries, journal output, or truncate unit descriptions in the output of status, list-units, list-jobs, and list-timers.

    Also, show installation targets in the output of is-enabled.

--value
    When printing properties with show, only print the value, and skip the property name and "=". Also see option -P above.

    Added in version 230.

--show-types
    When showing sockets, show the type of the socket.

    Added in version 202.

--job-mode=
    When queuing a new job, this option controls how to deal with already queued jobs. It takes one of "fail", "replace", "replace-irreversibly", "isolate", "ignore-dependencies", "ignore-requirements", "flush", "triggering", or "restart-dependencies". Defaults to "replace", except when the isolate command is used which implies the "isolate" job mode.

    If "fail" is specified and a requested operation conflicts with a pending job (more specifically: causes an already pending start job to be reversed into a stop job or vice versa), cause the operation to fail.

    If "replace" (the default) is specified, any conflicting pending job will be replaced, as necessary.

    If "replace-irreversibly" is specified, operate like "replace", but also mark the new jobs as irreversible. This prevents future conflicting transactions from replacing these jobs (or even being enqueued while the irreversible jobs are still pending). Irreversible jobs can still be cancelled using the cancel command. This job mode should be used on any transaction which pulls in shutdown.target.

    "isolate" is only valid for start operations and causes all other units to be stopped when the specified unit is started. This mode is always used when the isolate command is used.

    "flush" will cause all queued jobs to be canceled when the new job is enqueued.

    If "ignore-dependencies" is specified, then all unit dependencies are ignored for this new job and the operation is executed immediately.
If passed, no required units of the unit passed will be pulled in, and no ordering dependencies will be honored. This is mostly a debugging and rescue tool for the administrator and should not be used by applications.

"ignore-requirements" is similar to "ignore-dependencies", but only causes the requirement dependencies to be ignored, the ordering dependencies will still be honored.

"triggering" may only be used with systemctl stop. In this mode, the specified unit and any active units that trigger it are stopped. See the discussion of Triggers= in systemd.unit(5) for more information about triggering units.

"restart-dependencies" may only be used with systemctl start. In this mode, dependencies of the specified unit will receive restart propagation, as if a restart job had been enqueued for the unit.

Added in version 209.

-T, --show-transaction
    When enqueuing a unit job (for example as effect of a systemctl start invocation or similar), show brief information about all jobs enqueued, covering both the requested job and any added because of unit dependencies. Note that the output will only include jobs immediately part of the transaction requested. It is possible that service start-up program code run as effect of the enqueued jobs might request further jobs to be pulled in. This means that completion of the listed jobs might ultimately entail more jobs than the listed ones.

    Added in version 242.

--fail
    Shorthand for --job-mode=fail.

    When used with the kill command, if no units were killed, the operation results in an error.

    Added in version 227.

--check-inhibitors=
    When system shutdown or sleep state is requested, this option controls checking of inhibitor locks. It takes one of "auto", "yes" or "no". Defaults to "auto", which will behave like "yes" for interactive invocations (i.e. from a TTY) and "no" for non-interactive invocations. "yes" lets the request respect inhibitor locks. "no" lets the request ignore inhibitor locks.
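The job-mode values above can be sketched with two common cases (foo.service is a placeholder unit):

```shell
# Fail instead of replacing a conflicting queued job, e.g. if a
# stop job for the same unit is already pending.
systemctl start --job-mode=fail foo.service

# Stop a unit together with the active units that trigger it
# (e.g. an associated socket or timer unit), so it is not
# immediately re-activated.
systemctl stop --job-mode=triggering foo.service
```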
Applications can establish inhibitor locks to prevent certain important operations (such as CD burning) from being interrupted by system shutdown or sleep. Any user may take these locks and privileged users may override these locks. If any locks are taken, shutdown and sleep state requests will normally fail (unless privileged). However, if "no" is specified or "auto" is specified for a non-interactive request, the operation will be attempted. If locks are present, the operation may require additional privileges.

Option --force provides another way to override inhibitors.

Added in version 248.

-i
    Shortcut for --check-inhibitors=no.

    Added in version 198.

--dry-run
    Just print what would be done. Currently supported by verbs halt, poweroff, reboot, kexec, suspend, hibernate, hybrid-sleep, suspend-then-hibernate, default, rescue, emergency, and exit.

    Added in version 236.

-q, --quiet
    Suppress printing of the results of various commands and also the hints about truncated log lines. This does not suppress output of commands for which the printed output is the only result (like show). Errors are always printed.

--no-warn
    Don't generate the warnings shown by default in the following cases: when systemctl is invoked without procfs mounted on /proc/, when using enable or disable on units without install information (i.e. they don't have or have an empty [Install] section), when using disable combined with --user on units that are enabled in global scope, and when a stopped, disabled, or masked unit still has active triggering units.

    Added in version 253.

--no-block
    Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemctl will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued. This option may not be combined with --wait.

--wait
    Synchronously wait for started units to terminate again. This option may not be combined with --no-block.
Note that this will wait forever if any given unit never terminates (by itself or by getting stopped explicitly); particularly services which use "RemainAfterExit=yes". When used with is-system-running, wait until the boot process is completed before returning. Added in version 232. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. --failed List units in failed state. This is equivalent to --state=failed. Added in version 233. --no-wall Do not send wall message before halt, power-off and reboot. --global When used with enable and disable, operate on the global user configuration directory, thus enabling or disabling a unit file globally for all future logins of all users. --no-reload When used with enable and disable, do not implicitly reload daemon configuration after executing the changes. --no-ask-password When used with start and related commands, disables asking for passwords. Background services may require input of a password or passphrase string, for example to unlock system hard disks or cryptographic certificates. Unless this option is specified and the command is invoked from a terminal, systemctl will query the user on the terminal for the necessary secrets. Use this option to switch this behavior off. In this case, the password must be supplied by some other means (for example graphical password agents) or the service might fail. This also disables querying the user for authentication for privileged operations. --kill-whom= When used with kill, choose which processes to send a UNIX process signal to. Must be one of main, control or all to select whether to kill only the main process, the control process or all processes of the unit. The main process of the unit is the one that defines the life-time of it. A control process of a unit is one that is invoked by the manager to induce state changes of it. 
For example, all processes started due to the ExecStartPre=, ExecStop= or ExecReload= settings of service units are control processes. Note that there is only one control process per unit at a time, as only one state change is executed at a time. For services of type Type=forking, the initial process started by the manager for ExecStart= is a control process, while the process ultimately forked off by that one is then considered the main process of the unit (if it can be determined). This is different for service units of other types, where the process forked off by the manager for ExecStart= is always the main process itself. A service unit consists of zero or one main process, zero or one control process plus any number of additional processes. Not all unit types manage processes of these types however. For example, for mount units, control processes are defined (which are the invocations of /usr/bin/mount and /usr/bin/umount), but no main process is defined. If omitted, defaults to all. Added in version 252. --kill-value=INT If used with the kill command, enqueues a signal along with the specified integer value parameter to the specified process(es). This operation is only available for POSIX Realtime Signals (i.e. --signal=SIGRTMIN+... or --signal=SIGRTMAX-...), and ensures the signals are generated via the sigqueue(3) system call, rather than kill(3). The specified value must be a 32-bit signed integer, and may be specified either in decimal, hexadecimal (if prefixed with "0x"), octal (if prefixed with "0o"), or binary (if prefixed with "0b"). If this option is used, the signal will only be enqueued on the control or main process of the unit, never on other processes belonging to the unit, i.e. --kill-whom=all will only affect main and control processes but no other processes. Added in version 254. -s, --signal= When used with kill, choose which signal to send to selected processes.
Must be one of the well-known signal specifiers such as SIGTERM, SIGINT or SIGSTOP. If omitted, defaults to SIGTERM. The special value "help" will list the known values and the program will exit immediately, and the special value "list" will list known values along with the numerical signal numbers and the program will exit immediately. --what= Select what type of per-unit resources to remove when the clean command is invoked, see above. Takes one of configuration, state, cache, logs, runtime, fdstore to select the type of resource. This option may be specified more than once, in which case all specified resource types are removed. Also accepts the special value all as a shortcut for specifying all six resource types. If this option is not specified, it defaults to the combination of cache, runtime and fdstore, i.e. the three kinds of resources that are generally considered to be redundant and can be reconstructed on next invocation. Note that the explicit removal of the fdstore resource type is only useful if the FileDescriptorStorePreserve= option is enabled, since the file descriptor store is otherwise cleaned automatically when the unit is stopped. Added in version 243. -f, --force When used with enable, overwrite any existing conflicting symlinks. When used with edit, create all of the specified units which do not already exist. When used with halt, poweroff, reboot or kexec, execute the selected operation without shutting down all units. However, all processes will be killed forcibly and all file systems are unmounted or remounted read-only. This is hence a drastic but relatively safe option to request an immediate reboot. If --force is specified twice for these operations (with the exception of kexec), they will be executed immediately, without terminating any processes or unmounting any file systems. Warning: specifying --force twice with any of these operations might result in data loss.
Note that when --force is specified twice the selected operation is executed by systemctl itself, and the system manager is not contacted. This means the command should succeed even when the system manager has crashed. --message= When used with halt, poweroff or reboot, set a short message explaining the reason for the operation. The message will be logged together with the default shutdown message. Added in version 225. --now When used with enable, the units will also be started. When used with disable or mask, the units will also be stopped. The start or stop operation is only carried out when the respective enable or disable operation has been successful. Added in version 220. --root= When used with enable/disable/is-enabled (and related commands), use the specified root path when looking for unit files. If this option is present, systemctl will operate on the file system directly, instead of communicating with the systemd daemon to carry out changes. --image=image Takes a path to a disk image file or block device node. If specified, all operations are applied to the file system in the indicated disk image. This option is similar to --root=, but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[2]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. Added in version 252. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified, defaults to the "*" policy, i.e. all recognized file systems in the image are used. --runtime When used with enable, disable, edit (and related commands), make changes only temporarily, so that they are lost on the next reboot.
This will have the effect that changes are not made in subdirectories of /etc/ but in /run/, with identical immediate effects, however, since the latter is lost on reboot, the changes are lost too. Similarly, when used with set-property, make changes only temporarily, so that they are lost on the next reboot. --preset-mode= Takes one of "full" (the default), "enable-only", "disable-only". When used with the preset or preset-all commands, controls whether units shall be disabled and enabled according to the preset rules, or only enabled, or only disabled. Added in version 215. -n, --lines= When used with status, controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument, or 0 to disable journal output. Defaults to 10. -o, --output= When used with status, controls the formatting of the journal entries that are shown. For the available choices, see journalctl(1). Defaults to "short". --firmware-setup When used with the reboot, poweroff, or halt command, indicate to the system's firmware to reboot into the firmware setup interface for the next boot. Note that this functionality is not available on all systems. Added in version 220. --boot-loader-menu=timeout When used with the reboot, poweroff, or halt command, indicate to the system's boot loader to show the boot loader menu on the following boot. Takes a time value as parameter indicating the menu timeout. Pass zero in order to disable the menu timeout. Note that not all boot loaders support this functionality. Added in version 242. --boot-loader-entry=ID When used with the reboot, poweroff, or halt command, indicate to the system's boot loader to boot into a specific boot loader entry on the following boot. Takes a boot loader entry identifier as argument, or "help" in order to list available entries. Note that not all boot loaders support this functionality. Added in version 242. --reboot-argument= This switch is used with reboot. 
The value is architecture and firmware specific. As an example, "recovery" might be used to trigger system recovery, and "fota" might be used to trigger a firmware over the air update. Added in version 246. --plain When used with list-dependencies, list-units or list-machines, the output is printed as a list instead of a tree, and the bullet circles are omitted. Added in version 203. --timestamp= Change the format of printed timestamps. The following values may be used:

pretty (this is the default): "Day YYYY-MM-DD HH:MM:SS TZ". Added in version 248.

unix: "@seconds-since-the-epoch". Added in version 251.

us, μs: "Day YYYY-MM-DD HH:MM:SS.UUUUUU TZ". Added in version 248.

utc: "Day YYYY-MM-DD HH:MM:SS UTC". Added in version 248.

us+utc, μs+utc: "Day YYYY-MM-DD HH:MM:SS.UUUUUU UTC". Added in version 248.

Added in version 247. --mkdir When used with bind, creates the destination file or directory before applying the bind mount. Note that even though the name of this option suggests that it is suitable only for directories, this option also creates the destination file node to mount over if the object to mount is not a directory, but a regular file, device node, socket or FIFO. Added in version 248. --marked Only allowed with reload-or-restart. Enqueues restart jobs for all units that have the "needs-restart" mark, and reload jobs for units that have the "needs-reload" mark. When a unit marked for reload does not support reload, restart will be queued. Those properties can be set using set-property Markers=.... Unless --no-block is used, systemctl will wait for the queued jobs to finish. Added in version 248. --read-only When used with bind, creates a read-only bind mount. Added in version 248. --drop-in=NAME When used with edit, use NAME as the drop-in file name instead of override.conf. Added in version 253.
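For comparison, the "unix" and "utc" timestamp styles above can be approximated with GNU date(1); this is a sketch for illustration, not part of systemctl itself:

```shell
# Render a fixed epoch second in two of the --timestamp= styles (GNU date assumed)
t=1700000000
echo "@$t"                                                 # --timestamp=unix style
LC_ALL=C date -u -d "@$t" +"%a %Y-%m-%d %H:%M:%S UTC"      # --timestamp=utc style
# prints "Tue 2023-11-14 22:13:20 UTC"
```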
--when= When used with halt, poweroff, reboot or kexec, schedule the action to be performed at the given timestamp, which should adhere to the syntax documented in systemd.time(7) section "PARSING TIMESTAMPS". Specially, if "show" is given, the currently scheduled action will be shown, which can be canceled by passing an empty string or "cancel". Added in version 254. --stdin When used with edit, the contents of the file will be read from standard input and the editor will not be launched. In this mode, the old contents of the file are completely replaced. This is useful to "edit" unit files from scripts:

$ systemctl edit --drop-in=limits.conf --stdin some-service.service <<EOF
[Service]
AllowedCPUs=7,11
EOF

Multiple drop-ins may be "edited" in this mode; the same contents will be written to all of them. Added in version 256. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. --no-pager Do not pipe output into a pager.
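A hypothetical transcript of the -H and -M options described above (the host, unit, and container names are assumptions):

```shell
$ systemctl -H admin@server.example.com status example.service   # remote host via SSH
$ systemctl -M mycontainer status example.service                # local container
$ systemctl --user -M lennart@.host status                       # a specific user's user manager
```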
--legend=BOOL Enable or disable printing of the legend, i.e. column headers and the footer with hints. The legend is printed by default, unless disabled with --quiet or similar. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. systemctl uses the return codes defined by LSB, as defined in LSB 3.0.0[3].

Table 3. LSB return codes

Value  Description in LSB                                Use in systemd
0      "program is running or service is OK"             unit is active
1      "program is dead and /var/run pid file exists"    unit not failed (used by is-failed)
2      "program is dead and /var/lock lock file exists"  unused
3      "program is not running"                          unit is not active
4      "program or service status is unknown"            no such unit

The mapping of LSB service states to systemd unit states is imperfect, so it is better to not rely on those return values but to look for specific unit states and substates instead. ENVIRONMENT top $SYSTEMD_EDITOR Editor to use when editing units; overrides $EDITOR and $VISUAL. If neither $SYSTEMD_EDITOR nor $EDITOR nor $VISUAL are present or if it is set to an empty string or if their execution failed, systemctl will try to execute well known editors in this order: editor(1), nano(1), vim(1), vi(1). Added in version 218. $SYSTEMD_LOG_LEVEL The maximum log level of emitted messages (messages with a higher log level, i.e. less important ones, will be suppressed). Either one of (in order of decreasing importance) emerg, alert, crit, err, warning, notice, info, debug, or an integer in the range 0...7. See syslog(3) for more information. $SYSTEMD_LOG_COLOR A boolean. If true, messages written to the tty will be colored according to priority. This setting is only useful when messages are written directly to the terminal, because journalctl(1) and other tools that display logs will color messages based on the log level on their own. $SYSTEMD_LOG_TIME A boolean.
If true, console log messages will be prefixed with a timestamp. This setting is only useful when messages are written directly to the terminal or a file, because journalctl(1) and other tools that display logs will attach timestamps based on the entry metadata on their own. $SYSTEMD_LOG_LOCATION A boolean. If true, messages will be prefixed with a filename and line number in the source code where the message originates. Note that the log location is often attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TARGET The destination for log messages. One of console (log to the attached tty), console-prefixed (log to the attached tty but with prefixes encoding the log level and "facility", see syslog(3)), kmsg (log to the kernel circular log buffer), journal (log to the journal), journal-or-kmsg (log to the journal if available, and to kmsg otherwise), auto (determine the appropriate log target automatically, the default), null (disable log output). $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER. If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less(1) and more(1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value "cat" is equivalent to passing --no-pager. Note: if $SYSTEMD_PAGERSECURE is not set, $SYSTEMD_PAGER (as well as $PAGER) will be silently ignored. $SYSTEMD_LESS Override the options passed to less (by default "FRSXMK"). Users might want to change two options in particular: K This option instructs the pager to exit immediately when Ctrl+C is pressed. To allow less to handle Ctrl+C itself to switch back to the pager command prompt, unset this option.
If the value of $SYSTEMD_LESS does not include "K", and the pager that is invoked is less, Ctrl+C will be ignored by the executable, and needs to be handled by the pager. X This option instructs the pager to not send termcap initialization and deinitialization strings to the terminal. It is set by default to allow command output to remain visible in the terminal even after the pager exits. Nevertheless, this prevents some pager functionality from working, in particular paged output cannot be scrolled with the mouse. See less(1) for more discussion. $SYSTEMD_LESSCHARSET Override the charset passed to less (by default "utf-8", if the invoking terminal is determined to be UTF-8 compatible). $SYSTEMD_PAGERSECURE Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if false, disabled. If $SYSTEMD_PAGERSECURE is not set at all, secure mode is enabled if the effective UID is not the same as the owner of the login session, see geteuid(2) and sd_pid_get_owner_uid(3). In secure mode, LESSSECURE=1 will be set when invoking the pager, and the pager shall disable commands that open or create new files or start new subprocesses. When $SYSTEMD_PAGERSECURE is not set at all, pagers which are not known to implement secure mode will not be used. (Currently only less(1) implements secure mode.) Note: when commands are invoked with elevated privileges, for example under sudo(8) or pkexec(1), care must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the pager may be enabled automatically as described above. Setting SYSTEMD_PAGERSECURE=0 or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note that if the $SYSTEMD_PAGER or $PAGER variables are to be honoured, $SYSTEMD_PAGERSECURE must be set too. It might be reasonable to completely disable the pager using --no-pager instead. $SYSTEMD_COLORS Takes a boolean argument.
When true, systemd and related utilities will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can take one of the following special values: "16", "256" to restrict the use of colors to the base 16 or 256 ANSI colors, respectively. This can be specified to override the automatic decision based on $TERM and what the console is connected to. $SYSTEMD_URLIFY The value must be a boolean. Controls whether clickable links should be generated in the output for terminal emulators supporting this. This can be specified to override the decision that systemd makes based on $TERM and other conditions. SEE ALSO top systemd(1), journalctl(1), loginctl(1), machinectl(1), systemd.unit(5), systemd.resource-control(5), systemd.special(7), wall(1), systemd.preset(5), systemd.generator(7), glob(7) NOTES top 1. Boot Loader Specification https://uapi-group.org/specifications/specs/boot_loader_specification 2. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification 3. LSB 3.0.0 http://refspecs.linuxbase.org/LSB_3.0.0/LSB-PDA/LSB-PDA/iniscrptact.html COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMCTL(1) Pages that refer to this page: hostnamectl(1), htop(1), journalctl(1), localectl(1), loginctl(1), pcpintro(1), pmie(1), pmlogger(1), systemd(1), systemd-analyze(1), systemd-ask-password(1), systemd-cat(1), systemd-cgls(1), systemd-cgtop(1), systemd-escape(1), systemd-mount(1), systemd-notify(1), systemd-run(1), systemd-tty-ask-password-agent(1), timedatectl(1), uid0(1), reboot(2), sd_notify(3), org.freedesktop.LogControl1(5), org.freedesktop.login1(5), srp_daemon_port@.service(5), srp_daemon.service(5), systemd.automount(5), systemd.device(5), systemd.exec(5), systemd.kill(5), systemd.mount(5), systemd.path(5), systemd.preset(5), systemd.service(5), systemd.socket(5), systemd.swap(5), systemd.target(5), systemd.timer(5), systemd.unit(5), daemon(7), systemd.directives(7), systemd.environment-generator(7), systemd.generator(7), systemd.index(7), systemd.special(7), systemd.time(7), autofs(8), poweroff(8), runlevel(8), shutdown(8), systemd-debug-generator(8), systemd-environment-d-generator(8), systemd-machined.service(8), systemd-poweroff.service(8), systemd-rc-local-generator(8), systemd-run-generator(8), systemd-socket-proxyd(8), systemd-soft-reboot.service(8), systemd-suspend.service(8), telinit(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
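As a sketch, the LSB exit codes from the table in EXIT STATUS can be mapped to text in a wrapper script; note the manual's caveat that the mapping is imperfect, and the unit name in the comment is hypothetical:

```shell
# Describe systemctl's LSB-style exit codes (see EXIT STATUS above).
lsb_describe() {
  case "$1" in
    0) echo "unit is active" ;;
    1) echo "unit not failed" ;;
    2) echo "unused (dead, /var/lock lock file exists)" ;;
    3) echo "unit is not active" ;;
    4) echo "no such unit" ;;
    *) echo "unknown code $1" ;;
  esac
}
# Typical use: systemctl is-active --quiet example.service; lsb_describe "$?"
lsb_describe 3   # prints "unit is not active"
```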
# systemctl\n\n> Control the systemd system and service manager.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemctl.html>.\n\n- Show all running services:\n\n`systemctl status`\n\n- List failed units:\n\n`systemctl --failed`\n\n- Start/Stop/Restart/Reload a service:\n\n`systemctl {{start|stop|restart|reload}} {{unit}}`\n\n- Show the status of a unit:\n\n`systemctl status {{unit}}`\n\n- Enable/Disable a unit to be started on bootup:\n\n`systemctl {{enable|disable}} {{unit}}`\n\n- Mask/Unmask a unit to prevent enablement and manual activation:\n\n`systemctl {{mask|unmask}} {{unit}}`\n\n- Reload systemd, scanning for new or changed units:\n\n`systemctl daemon-reload`\n\n- Check if a unit is enabled:\n\n`systemctl is-enabled {{unit}}`\n
systemd-ac-power
systemd-ac-power(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO SYSTEMD-AC-POWER(1) systemd-ac-power SYSTEMD-AC-POWER(1) NAME top systemd-ac-power - Report whether we are connected to an external power source SYNOPSIS top systemd-ac-power [OPTIONS...] DESCRIPTION top systemd-ac-power may be used to check whether the system is running on AC power or not. By default it will simply return success (if we can detect that we are running on AC power) or failure, with no output. This can be useful for example to debug ConditionACPower= (see systemd.unit(5)). OPTIONS top The following options are understood: -v, --verbose Show result as text instead of just returning success or failure. Added in version 253. --low Instead of showing AC power state, show low battery state. In this case it will return zero if all batteries are currently discharging and below 5% of maximum charge. Returns non-zero otherwise. Added in version 254. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success (running on AC power), 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1)
systemd 255 SYSTEMD-AC-POWER(1)
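A sketch of scripting against the exit status described above; the helper function is an assumption for illustration, not part of the tool:

```shell
# systemd-ac-power exits 0 when on AC power, non-zero otherwise.
describe_power() {
  if [ "$1" -eq 0 ]; then
    echo "on AC power"
  else
    echo "on battery (or unknown)"
  fi
}
# Typical use: systemd-ac-power; describe_power "$?"
describe_power 0   # prints "on AC power"
```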
# systemd-ac-power\n\n> Report whether the computer is connected to an external power source.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-ac-power.html>.\n\n- Silently check and return a 0 status code when running on AC power, and a non-zero code otherwise:\n\n`systemd-ac-power`\n\n- Additionally print `yes` or `no` to `stdout`:\n\n`systemd-ac-power --verbose`\n
systemd-analyze
systemd-analyze(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | ENVIRONMENT | EXAMPLES | SEE ALSO | NOTES | COLOPHON SYSTEMD-ANALYZE(1) systemd-analyze SYSTEMD-ANALYZE(1) NAME top systemd-analyze - Analyze and debug system manager SYNOPSIS top systemd-analyze [OPTIONS...] [time] systemd-analyze [OPTIONS...] blame systemd-analyze [OPTIONS...] critical-chain [UNIT...] systemd-analyze [OPTIONS...] dump [PATTERN...] systemd-analyze [OPTIONS...] plot [>file.svg] systemd-analyze [OPTIONS...] dot [PATTERN...] [>file.dot] systemd-analyze [OPTIONS...] unit-files systemd-analyze [OPTIONS...] unit-paths systemd-analyze [OPTIONS...] exit-status [STATUS...] systemd-analyze [OPTIONS...] capability [CAPABILITY...] systemd-analyze [OPTIONS...] condition CONDITION... systemd-analyze [OPTIONS...] syscall-filter [SET...] systemd-analyze [OPTIONS...] filesystems [SET...] systemd-analyze [OPTIONS...] calendar SPEC... systemd-analyze [OPTIONS...] timestamp TIMESTAMP... systemd-analyze [OPTIONS...] timespan SPAN... systemd-analyze [OPTIONS...] cat-config NAME|PATH... systemd-analyze [OPTIONS...] compare-versions VERSION1 [OP] VERSION2 systemd-analyze [OPTIONS...] verify [FILE...] systemd-analyze [OPTIONS...] security UNIT... systemd-analyze [OPTIONS...] inspect-elf FILE... systemd-analyze [OPTIONS...] malloc [D-BUS SERVICE...] systemd-analyze [OPTIONS...] fdstore [UNIT...] systemd-analyze [OPTIONS...] image-policy POLICY... systemd-analyze [OPTIONS...] pcrs [PCR...] systemd-analyze [OPTIONS...] srk > FILE systemd-analyze [OPTIONS...] architectures [NAME...] DESCRIPTION top systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager, and to verify the correctness of unit files.
It is also used to access special functions useful for advanced system manager debugging. If no command is passed, systemd-analyze time is implied. systemd-analyze time This command prints the time spent in the kernel before userspace has been reached, the time spent in the initrd before normal system userspace has been reached, and the time normal system userspace took to initialize. Note that these measurements simply measure the time passed up to the point where all system services have been spawned, but not necessarily until they fully finished initialization or the disk is idle. Example 1. Show how long the boot took # in a container $ systemd-analyze time Startup finished in 296ms (userspace) multi-user.target reached after 275ms in userspace # on a real machine $ systemd-analyze time Startup finished in 2.584s (kernel) + 19.176s (initrd) + 47.847s (userspace) = 1min 9.608s multi-user.target reached after 47.820s in userspace systemd-analyze blame This command prints a list of all running units, ordered by the time they took to initialize. This information may be used to optimize boot-up times. Note that the output might be misleading as the initialization of one service might be slow simply because it waits for the initialization of another service to complete. Also note: systemd-analyze blame doesn't display results for services with Type=simple, because systemd considers such services to be started immediately, hence no measurement of the initialization delays can be done. Also note that this command only shows the time units took for starting up, it does not show how long unit jobs spent in the execution queue. In particular it shows the time units spent in "activating" state, which is not defined for units such as device units that transition directly from "inactive" to "active". This command hence gives an impression of the performance of program code, but cannot accurately reflect latency introduced by waiting for hardware and similar events. 
Example 2. Show which units took the most time during boot $ systemd-analyze blame 32.875s pmlogger.service 20.905s systemd-networkd-wait-online.service 13.299s dev-vda1.device ... 23ms sysroot.mount 11ms initrd-udevadm-cleanup-db.service 3ms sys-kernel-config.mount systemd-analyze critical-chain [UNIT...] This command prints a tree of the time-critical chain of units (for each of the specified UNITs or for the default target otherwise). The time after the unit is active or started is printed after the "@" character. The time the unit takes to start is printed after the "+" character. Note that the output might be misleading as the initialization of services might depend on socket activation and because of the parallel execution of units. Also, similarly to the blame command, this only takes into account the time units spent in "activating" state, and hence does not cover units that never went through an "activating" state (such as device units that transition directly from "inactive" to "active"). Moreover it does not show information on jobs (and in particular not jobs that timed out). Example 3. systemd-analyze critical-chain $ systemd-analyze critical-chain multi-user.target @47.820s pmie.service @35.968s +548ms pmcd.service @33.715s +2.247s network-online.target @33.712s systemd-networkd-wait-online.service @12.804s +20.905s systemd-networkd.service @11.109s +1.690s systemd-udevd.service @9.201s +1.904s systemd-tmpfiles-setup-dev.service @7.306s +1.776s kmod-static-nodes.service @6.976s +177ms systemd-journald.socket system.slice -.slice systemd-analyze dump [pattern...] Without any parameter, this command outputs a (usually very long) human-readable serialization of the complete service manager state. Optional glob pattern may be specified, causing the output to be limited to units whose names match one of the patterns. The output format is subject to change without notice and should not be parsed by applications. 
This command is rate limited for unprivileged users.

    Example 4. Show the internal state of user manager

        $ systemd-analyze --user dump
        Timestamp userspace: Thu 2019-03-14 23:28:07 CET
        Timestamp finish: Thu 2019-03-14 23:28:07 CET
        Timestamp generators-start: Thu 2019-03-14 23:28:07 CET
        Timestamp generators-finish: Thu 2019-03-14 23:28:07 CET
        Timestamp units-load-start: Thu 2019-03-14 23:28:07 CET
        Timestamp units-load-finish: Thu 2019-03-14 23:28:07 CET
        -> Unit proc-timer_list.mount:
                Description: /proc/timer_list
        ...
        -> Unit default.target:
                Description: Main user target
        ...

systemd-analyze malloc [D-Bus service...]
    This command can be used to request the output of the internal memory state (as returned by malloc_info(3)) of a D-Bus service. If no service is specified, the query will be sent to org.freedesktop.systemd1 (the system or user service manager). The output format is not guaranteed to be stable and should not be parsed by applications.

    The service must implement the org.freedesktop.MemoryAllocation1 interface. In the systemd suite, it is currently only implemented by the manager.

systemd-analyze plot
    This command prints either an SVG graphic, detailing which system services have been started at what time, highlighting the time they spent on initialization, or the raw time data in JSON or table format.

    Example 5. Plot a bootchart

        $ systemd-analyze plot >bootup.svg
        $ eog bootup.svg&

    Note that this plot is based on the most recent per-unit timing data of loaded units. This means that if a unit gets started, then stopped and then started again the information shown will cover the most recent start cycle, not the first one. Thus it's recommended to consult this information only shortly after boot, so that this distinction doesn't matter. Moreover, units that are not referenced by any other unit through a dependency might be unloaded by the service manager once they terminate (and did not fail). Such units will not show up in the plot.
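To give a feel for what the SVG chart encodes, here is a toy Python sketch (not systemd's actual renderer; the unit names and times below are made up) that draws one horizontal bar per unit from its activating and activated timestamps, in milliseconds since startup:

```python
def plot_svg(units):
    """units: list of (name, activating_ms, activated_ms) tuples.
    Returns a minimal SVG with one horizontal bar per unit -- a toy
    stand-in for the chart 'systemd-analyze plot' produces."""
    rows = []
    for i, (name, start, end) in enumerate(units):
        y = 20 * i
        # Bar spanning the activation interval, 1 pixel per millisecond.
        rows.append(f'<rect x="{start}" y="{y}" width="{end - start}" height="16"/>')
        rows.append(f'<text x="{end + 4}" y="{y + 12}">{name}</text>')
    return '<svg xmlns="http://www.w3.org/2000/svg">' + "".join(rows) + "</svg>"

# Hypothetical timing data for two units:
svg = plot_svg([("systemd-journald.service", 120, 480),
                ("systemd-udevd.service", 150, 900)])
```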
systemd-analyze dot [pattern...]
    This command generates a textual dependency graph description in dot format for further processing with the GraphViz dot(1) tool. Use a command line like systemd-analyze dot | dot -Tsvg >systemd.svg to generate a graphical dependency tree. Unless --order or --require is passed, the generated graph will show both ordering and requirement dependencies. Optional glob-style patterns (e.g. *.target) may be given at the end. A unit dependency is included in the graph if any of these patterns match either the origin or destination node.

    Example 6. Plot all dependencies of any unit whose name starts with "avahi-daemon"

        $ systemd-analyze dot 'avahi-daemon.*' | dot -Tsvg >avahi.svg
        $ eog avahi.svg

    Example 7. Plot the dependencies between all known target units

        $ systemd-analyze dot --to-pattern='*.target' --from-pattern='*.target' \
            | dot -Tsvg >targets.svg
        $ eog targets.svg

systemd-analyze unit-paths
    This command outputs a list of all directories from which unit files, .d overrides, and .wants, .requires symlinks may be loaded. Combine with --user to retrieve the list for the user manager instance, and --global for the global configuration of user manager instances.

    Example 8. Show all paths for generated units

        $ systemd-analyze unit-paths | grep '^/run'
        /run/systemd/system.control
        /run/systemd/transient
        /run/systemd/generator.early
        /run/systemd/system
        /run/systemd/system.attached
        /run/systemd/generator
        /run/systemd/generator.late

    Note that this verb prints the list that is compiled into systemd-analyze itself, and does not communicate with the running manager. Use systemctl [--user] [--global] show -p UnitPath --value to retrieve the actual list that the manager uses, with any empty directories omitted.

systemd-analyze exit-status [STATUS...]
    This command prints a list of exit statuses along with their "class", i.e.
the source of the definition (one of "glibc", "systemd", "LSB", or "BSD"), see the Process Exit Codes section in systemd.exec(5). If no additional arguments are specified, all known statuses are shown. Otherwise, only the definitions for the specified codes are shown.

    Example 9. Show some example exit status names

        $ systemd-analyze exit-status 0 1 {63..65}
        NAME    STATUS CLASS
        SUCCESS 0      glibc
        FAILURE 1      glibc
        -       63     -
        USAGE   64     BSD
        DATAERR 65     BSD

systemd-analyze capability [CAPABILITY...]
    This command prints a list of Linux capabilities along with their numeric IDs. See capabilities(7) for details. If no argument is specified the full list of capabilities known to the service manager and the kernel is shown. Capabilities defined by the kernel but not known to the service manager are shown as "cap_???". Optionally, if arguments are specified they may refer to specific capabilities by name or numeric ID, in which case only the indicated capabilities are shown in the table.

    Example 10. Show some example capability names

        $ systemd-analyze capability 0 1 {30..32}
        NAME              NUMBER
        cap_chown         0
        cap_dac_override  1
        cap_audit_control 30
        cap_setfcap       31
        cap_mac_override  32

systemd-analyze condition CONDITION...
    This command will evaluate Condition*=... and Assert*=... assignments, and print their values, and the resulting value of the combined condition set. See systemd.unit(5) for a list of available conditions and asserts.

    Example 11. Evaluate conditions that check kernel versions

        $ systemd-analyze condition 'ConditionKernelVersion = ! <4.0' \
            'ConditionKernelVersion = >=5.1' \
            'ConditionACPower=|false' \
            'ConditionArchitecture=|!arm' \
            'AssertPathExists=/etc/os-release'
        test.service: AssertPathExists=/etc/os-release succeeded.
        Asserts succeeded.
        test.service: ConditionArchitecture=|!arm succeeded.
        test.service: ConditionACPower=|false failed.
        test.service: ConditionKernelVersion=>=5.1 succeeded.
        test.service: ConditionKernelVersion=!<4.0 succeeded.
        Conditions succeeded.
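In the example above, ConditionACPower=|false fails and yet "Conditions succeeded." is printed. That follows from how systemd combines a condition set, per systemd.unit(5): all regular conditions are AND-ed, while triggering conditions (those prefixed with "|") are OR-ed together. A minimal Python sketch of that rule:

```python
def conditions_met(results):
    """results: list of (triggering, passed) pairs, one per condition.
    Regular conditions are AND-ed; if any triggering ('|') conditions
    exist, at least one of them must pass as well."""
    regular = [passed for triggering, passed in results if not triggering]
    triggered = [passed for triggering, passed in results if triggering]
    return all(regular) and (not triggered or any(triggered))

# The condition set from Example 11:
outcome = conditions_met([
    (False, True),   # ConditionKernelVersion=!<4.0 succeeded
    (False, True),   # ConditionKernelVersion=>=5.1 succeeded
    (True,  False),  # ConditionACPower=|false failed
    (True,  True),   # ConditionArchitecture=|!arm succeeded
])
# outcome is True, matching "Conditions succeeded." above
```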
systemd-analyze syscall-filter [SET...]
    This command will list system calls contained in the specified system call set SET, or all known sets if no sets are specified. Argument SET must include the "@" prefix.

systemd-analyze filesystems [SET...]
    This command will list filesystems in the specified filesystem set SET, or all known sets if no sets are specified. Argument SET must include the "@" prefix.

systemd-analyze calendar EXPRESSION...
    This command will parse and normalize repetitive calendar time events, and will calculate when they elapse next. This takes the same input as the OnCalendar= setting in systemd.timer(5), following the syntax described in systemd.time(7). By default, only the next time the calendar expression will elapse is shown; use --iterations= to show the specified number of next times the expression elapses. Each time the expression elapses forms a timestamp, see the timestamp verb below.

    Example 12. Show leap days in the near future

        $ systemd-analyze calendar --iterations=5 '*-2-29 0:0:0'
          Original form: *-2-29 0:0:0
        Normalized form: *-02-29 00:00:00
            Next elapse: Sat 2020-02-29 00:00:00 UTC
               From now: 11 months 15 days left
               Iter. #2: Thu 2024-02-29 00:00:00 UTC
               From now: 4 years 11 months left
               Iter. #3: Tue 2028-02-29 00:00:00 UTC
               From now: 8 years 11 months left
               Iter. #4: Sun 2032-02-29 00:00:00 UTC
               From now: 12 years 11 months left
               Iter. #5: Fri 2036-02-29 00:00:00 UTC
               From now: 16 years 11 months left

systemd-analyze timestamp TIMESTAMP...
    This command parses a timestamp (i.e. a single point in time) and outputs the normalized form and the difference between this timestamp and now. The timestamp should adhere to the syntax documented in systemd.time(7), section "PARSING TIMESTAMPS".

    Example 13.
Show parsing of timestamps

        $ systemd-analyze timestamp yesterday now tomorrow
          Original form: yesterday
        Normalized form: Mon 2019-05-20 00:00:00 CEST
               (in UTC): Sun 2019-05-19 22:00:00 UTC
           UNIX seconds: @1558303200
               From now: 1 day 9h ago

          Original form: now
        Normalized form: Tue 2019-05-21 09:48:39 CEST
               (in UTC): Tue 2019-05-21 07:48:39 UTC
           UNIX seconds: @1558424919.659757
               From now: 43us ago

          Original form: tomorrow
        Normalized form: Wed 2019-05-22 00:00:00 CEST
               (in UTC): Tue 2019-05-21 22:00:00 UTC
           UNIX seconds: @1558476000
               From now: 14h left

systemd-analyze timespan EXPRESSION...
    This command parses a time span (i.e. a difference between two timestamps) and outputs the normalized form and the equivalent value in microseconds. The time span should adhere to the syntax documented in systemd.time(7), section "PARSING TIME SPANS". Values without units are parsed as seconds.

    Example 14. Show parsing of timespans

        $ systemd-analyze timespan 1s 300s '1year 0.000001s'
        Original: 1s
              μs: 1000000
           Human: 1s

        Original: 300s
              μs: 300000000
           Human: 5min

        Original: 1year 0.000001s
              μs: 31557600000001
           Human: 1y 1us

systemd-analyze cat-config NAME|PATH...
    This command is similar to systemctl cat, but operates on config files. It will copy the contents of a config file and any drop-ins to standard output, using the usual systemd set of directories and rules for precedence. Each argument must be either an absolute path including the prefix (such as /etc/systemd/logind.conf or /usr/lib/systemd/logind.conf), or a name relative to the prefix (such as systemd/logind.conf).

    Example 15. Showing logind configuration

        $ systemd-analyze cat-config systemd/logind.conf
        # /etc/systemd/logind.conf
        ...
        [Login]
        NAutoVTs=8
        ...

        # /usr/lib/systemd/logind.conf.d/20-test.conf
        ... some override from another package

        # /etc/systemd/logind.conf.d/50-override.conf
        ...
some administrator override

systemd-analyze compare-versions VERSION1 [OP] VERSION2
    This command has two distinct modes of operation, depending on whether the operator OP is specified.

    In the first mode, when OP is not specified, it will compare the two version strings and print either "VERSION1 < VERSION2", or "VERSION1 == VERSION2", or "VERSION1 > VERSION2" as appropriate. The exit status is 0 if the versions are equal, 11 if the version on the right is smaller, and 12 if the version on the left is smaller. (This matches the convention used by rpmdev-vercmp.)

    In the second mode, when OP is specified, it will compare the two version strings using the operation OP and return 0 (success) if the condition is satisfied, and 1 (failure) otherwise. OP may be lt, le, eq, ne, ge, gt. In this mode, no output is printed. (This matches the convention used by dpkg(1) --compare-versions.)

    Example 16. Compare versions of a package

        $ systemd-analyze compare-versions systemd-250~rc1.fc36.aarch64 systemd-251.fc36.aarch64
        systemd-250~rc1.fc36.aarch64 < systemd-251.fc36.aarch64
        $ echo $?
        12

        $ systemd-analyze compare-versions 1 lt 2; echo $?
        0
        $ systemd-analyze compare-versions 1 ge 2; echo $?
        1

systemd-analyze verify FILE...
    This command will load unit files and print warnings if any errors are detected. Files specified on the command line will be loaded, but also any other units referenced by them. A unit's name on disk can be overridden by specifying an alias after a colon; see below for an example. The full unit search path is formed by combining the directories for all command line arguments, and the usual unit load paths. The variable $SYSTEMD_UNIT_PATH is supported, and may be used to replace or augment the compiled in set of unit load paths; see systemd.unit(5). All unit files present in the directories containing the command line arguments will be used in preference to the other paths.
The following errors are currently detected: unknown sections and directives; missing dependencies which are required to start the given unit; man pages listed in Documentation= which are not found in the system; commands listed in ExecStart= and similar which are not found in the system or not executable.

    Example 17. Misspelt directives

        $ cat ./user.slice
        [Unit]
        WhatIsThis=11
        Documentation=man:nosuchfile(1)
        Requires=different.service

        [Service]
        Description=x
        $ systemd-analyze verify ./user.slice
        [./user.slice:9] Unknown lvalue 'WhatIsThis' in section 'Unit'
        [./user.slice:13] Unknown section 'Service'. Ignoring.
        Error: org.freedesktop.systemd1.LoadFailed: Unit different.service failed to load: No such file or directory.
        Failed to create user.slice/start: Invalid argument
        user.slice: man nosuchfile(1) command failed with code 16

    Example 18. Missing service units

        $ tail ./a.socket ./b.socket
        ==> ./a.socket <==
        [Socket]
        ListenStream=100

        ==> ./b.socket <==
        [Socket]
        ListenStream=100
        Accept=yes
        $ systemd-analyze verify ./a.socket ./b.socket
        Service a.service not loaded, a.socket cannot be started.
        Service b@0.service not loaded, b.socket cannot be started.

    Example 19. Aliasing a unit

        $ cat /tmp/source
        [Unit]
        Description=Hostname printer

        [Service]
        Type=simple
        ExecStart=/usr/bin/echo %H
        MysteryKey=true
        $ systemd-analyze verify /tmp/source
        Failed to prepare filename /tmp/source: Invalid argument
        $ systemd-analyze verify /tmp/source:alias.service
        alias.service:7: Unknown key name 'MysteryKey' in section 'Service', ignoring.

systemd-analyze security [UNIT...]
    This command analyzes the security and sandboxing settings of one or more specified service units. If at least one unit name is specified the security settings of the specified service units are inspected and a detailed analysis is shown. If no unit name is specified, all currently loaded, long-running service units are inspected and a terse table with results shown.
The command checks for various security-related service settings, assigning each a numeric "exposure level" value, depending on how important a setting is. It then calculates an overall exposure level for the whole unit, which is an estimation in the range 0.0...10.0 indicating how exposed a service is security-wise. High exposure levels indicate very little applied sandboxing. Low exposure levels indicate tight sandboxing and strongest security restrictions. Note that this only analyzes the per-service security features systemd itself implements. This means that any additional security mechanisms applied by the service code itself are not accounted for. The exposure level determined this way should not be misunderstood: a high exposure level neither means that there is no effective sandboxing applied by the service code itself, nor that the service is actually vulnerable to remote or local attacks. High exposure levels do indicate however that most likely the service might benefit from additional settings applied to it.

Please note that many of the security and sandboxing settings individually can be circumvented unless combined with others. For example, if a service retains the privilege to establish or undo mount points many of the sandboxing options can be undone by the service code itself. Because of that, it is essential that each service uses the most comprehensive and strict sandboxing and security settings possible. The tool will take into account some of these combinations and relationships between the settings, but not all.

Also note that the security and sandboxing settings analyzed here only apply to the operations executed by the service code itself. If a service has access to an IPC system (such as D-Bus) it might request operations from other services that are not subject to the same restrictions. Any comprehensive security and sandboxing analysis is hence incomplete if the IPC access policy is not validated too.

Example 20.
Analyze systemd-logind.service

        $ systemd-analyze security --no-pager systemd-logind.service
        NAME               DESCRIPTION                              EXPOSURE
        PrivateNetwork=    Service has access to the host's network 0.5
        User=/DynamicUser= Service runs as root user                0.4
        DeviceAllow=       Service has no device ACL                0.2
        IPAddressDeny=     Service blocks all IP address ranges
        ...
        Overall exposure level for systemd-logind.service: 4.1 OK

systemd-analyze inspect-elf FILE...
    This command will load the specified files, and if they are ELF objects (executables, libraries, core files, etc.) it will parse the embedded packaging metadata, if any, and print it in a table or json format. See the Packaging Metadata[1] documentation for more information.

    Example 21. Print information about a core file as JSON

        $ systemd-analyze inspect-elf --json=pretty \
            core.fsverity.1000.f77dac5dc161402aa44e15b7dd9dcf97.58561.1637106137000000
        {
                "elfType" : "coredump",
                "elfArchitecture" : "AMD x86-64",
                "/home/bluca/git/fsverity-utils/fsverity" : {
                        "type" : "deb",
                        "name" : "fsverity-utils",
                        "version" : "1.3-1",
                        "buildId" : "7c895ecd2a271f93e96268f479fdc3c64a2ec4ee"
                },
                "/home/bluca/git/fsverity-utils/libfsverity.so.0" : {
                        "type" : "deb",
                        "name" : "fsverity-utils",
                        "version" : "1.3-1",
                        "buildId" : "b5e428254abf14237b0ae70ed85fffbb98a78f88"
                }
        }

systemd-analyze fdstore [UNIT...]
    Lists the current contents of the specified service unit's file descriptor store. This shows names, inode types, device numbers, inode numbers, paths and open modes of the open file descriptors. The specified units must have FileDescriptorStoreMax= enabled, see systemd.service(5) for details.

    Example 22. Table output

        $ systemd-analyze fdstore systemd-journald.service
        FDNAME TYPE DEVNO INODE   RDEVNO PATH             FLAGS
        stored sock 0:8   4218620 -      socket:[4218620] ro
        stored sock 0:8   4213198 -      socket:[4213198] ro
        stored sock 0:8   4213190 -      socket:[4213190] ro
        ...
Note: the "DEVNO" column refers to the major/minor numbers of the device node backing the file system the file descriptor's inode is on. The "RDEVNO" column refers to the major/minor numbers of the device node itself if the file descriptor refers to one. Compare with the corresponding .st_dev and .st_rdev fields in struct stat (see stat(2) for details). The listed inode numbers in the "INODE" column are on the file system indicated by "DEVNO".

systemd-analyze image-policy [POLICY...]
    This command analyzes the specified image policy string, as per systemd.image-policy(7). The policy is normalized and simplified. For each currently defined partition identifier (as per the Discoverable Partitions Specification[2]) the effect of the image policy string is shown in tabular form.

    Example 23. Example Output

        $ systemd-analyze image-policy swap=encrypted:usr=read-only-on+verity:root=encrypted
        Analyzing policy: root=encrypted:usr=verity+read-only-on:swap=encrypted
        Long form: root=encrypted:usr=verity+read-only-on:swap=encrypted:=unused+absent

        PARTITION       MODE        READ-ONLY GROWFS
        root            encrypted   -         -
        usr             verity      yes       -
        home            ignore      -         -
        srv             ignore      -         -
        esp             ignore      -         -
        xbootldr        ignore      -         -
        swap            encrypted   -         -
        root-verity     ignore      -         -
        usr-verity      unprotected yes       -
        root-verity-sig ignore      -         -
        usr-verity-sig  ignore      -         -
        tmp             ignore      -         -
        var             ignore      -         -
        default         ignore      -         -

systemd-analyze pcrs [PCR...]
    This command shows the known TPM2 PCRs along with their identifying names and current values.

    Example 24.
Example Output

        $ systemd-analyze pcrs
        NR NAME                SHA256
         0 platform-code       bcd2eb527108bbb1f5528409bcbe310aa9b74f687854cc5857605993f3d9eb11
         1 platform-config     b60622856eb7ce52637b80f30a520e6e87c347daa679f3335f4f1a600681bb01
         2 external-code       1471262403e9a62f9c392941300b4807fbdb6f0bfdd50abfab752732087017dd
         3 external-config     3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969
         4 boot-loader-code    939f7fa1458e1f7ce968874d908e524fc0debf890383d355e4ce347b7b78a95c
         5 boot-loader-config  864c61c5ea5ecbdb6951e6cb6d9c1f4b4eac79772f7fe13b8bece569d83d3768
         6 -                   3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969
         7 secure-boot-policy  9c905bd9b9891bfb889b90a54c4b537b889cfa817c4389cc25754823a9443255
         8 -                   0000000000000000000000000000000000000000000000000000000000000000
         9 kernel-initrd       9caa29b128113ef42aa53d421f03437be57211e5ebafc0fa8b5d4514ee37ff0c
        10 ima                 5ea9e3dab53eb6b483b6ec9e3b2c712bea66bca1b155637841216e0094387400
        11 kernel-boot         0000000000000000000000000000000000000000000000000000000000000000
        12 kernel-config       627ffa4b405e911902fe1f1a8b0164693b31acab04f805f15bccfe2209c7eace
        13 sysexts             0000000000000000000000000000000000000000000000000000000000000000
        14 shim-policy         0000000000000000000000000000000000000000000000000000000000000000
        15 system-identity     0000000000000000000000000000000000000000000000000000000000000000
        16 debug               0000000000000000000000000000000000000000000000000000000000000000
        17 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        18 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        19 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        20 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        21 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        22 -                   ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
        23 application-support 0000000000000000000000000000000000000000000000000000000000000000

systemd-analyze srk > FILE
    This command reads the Storage Root Key (SRK) from the TPM2
device, and writes it in marshalled TPM2B_PUBLIC format to stdout.

    Example:

        systemd-analyze srk > srk.tpm2b_public

systemd-analyze architectures [NAME...]
    Lists all known CPU architectures, and which ones are native. The listed architecture names are those ConditionArchitecture= supports, see systemd.unit(5) for details. If architecture names are specified, only those are listed.

    Example 25. Table output

        $ systemd-analyze architectures
        NAME    SUPPORT
        alpha   foreign
        arc     foreign
        arc-be  foreign
        arm     foreign
        arm64   foreign
        ...
        sparc   foreign
        sparc64 foreign
        tilegx  foreign
        x86     secondary
        x86-64  native

OPTIONS
    The following options are understood:

    --system
        Operates on the system systemd instance. This is the implied default.

        Added in version 209.

    --user
        Operates on the user systemd instance.

        Added in version 186.

    --global
        Operates on the system-wide configuration for user systemd instances.

        Added in version 238.

    --order, --require
        When used in conjunction with the dot command (see above), selects which dependencies are shown in the dependency graph. If --order is passed, only dependencies of type After= or Before= are shown. If --require is passed, only dependencies of type Requires=, Requisite=, Wants= and Conflicts= are shown. If neither is passed, this shows dependencies of all these types.

        Added in version 198.

    --from-pattern=, --to-pattern=
        When used in conjunction with the dot command (see above), this selects which relationships are shown in the dependency graph. Both options require a glob(7) pattern as an argument, which will be matched against the left-hand and the right-hand, respectively, nodes of a relationship. Each of these can be used more than once, in which case the unit name must match one of the values. When tests for both sides of the relation are present, a relation must pass both tests to be shown. When patterns are also specified as positional arguments, they must match at least one side of the relation.
In other words, patterns specified with those two options will trim the list of edges matched by the positional arguments, if any are given, and fully determine the list of edges shown otherwise.

        Added in version 201.

    --fuzz=timespan
        When used in conjunction with the critical-chain command (see above), also show units which finished timespan earlier than the latest unit in the same level. The unit of timespan is seconds unless specified with a different unit, e.g. "50ms".

        Added in version 203.

    --man=no
        Do not invoke man(1) to verify the existence of man pages listed in Documentation=.

        Added in version 235.

    --generators
        Invoke unit generators, see systemd.generator(7). Some generators require root privileges. Under a normal user, running with generators enabled will generally result in some warnings.

        Added in version 235.

    --recursive-errors=MODE
        Control verification of units and their dependencies and whether systemd-analyze verify exits with a non-zero process exit status or not. With yes, return a non-zero process exit status when warnings arise during verification of either the specified unit or any of its associated dependencies. With no, return a non-zero process exit status when warnings arise during verification of only the specified unit. With one, return a non-zero process exit status when warnings arise during verification of either the specified unit or its immediate dependencies. If this option is not specified, zero is returned as the exit status regardless of whether warnings arise during verification or not.

        Added in version 250.

    --root=PATH
        With cat-files and verify, operate on files underneath the specified root path PATH.

        Added in version 239.

    --image=PATH
        With cat-files and verify, operate on files inside the specified image path PATH.

        Added in version 250.

    --image-policy=policy
        Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above.
If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used.

    --offline=BOOL
        With security, perform an offline security review of the specified unit files, i.e. does not have to rely on PID 1 to acquire security information for the files like the security verb when used by itself does. This means that --offline= can be used with --root= and --image= as well. If a unit's overall exposure level is above that set by --threshold= (default value is 100), --offline= will return an error.

        Added in version 250.

    --profile=PATH
        With security --offline=, takes into consideration the specified portable profile when assessing unit settings. The profile can be passed by name, in which case the well-known system locations will be searched, or it can be the full path to a specific drop-in file.

        Added in version 250.

    --threshold=NUMBER
        With security, allow the user to set a custom value to compare the overall exposure level with, for the specified unit files. If a unit's overall exposure level is greater than that set by the user, security will return an error. --threshold= can be used with --offline= as well and its default value is 100.

        Added in version 250.

    --security-policy=PATH
        With security, allow the user to define a custom set of requirements formatted as a JSON file against which to compare the specified unit file(s) and determine their overall exposure level to security threats.

        Table 1.
Accepted Assessment Test Identifiers: UserOrDynamicUser, SupplementaryGroups, PrivateMounts, PrivateDevices, PrivateTmp, PrivateNetwork, PrivateUsers, ProtectControlGroups, ProtectKernelModules, ProtectKernelTunables, ProtectKernelLogs, ProtectClock, ProtectHome, ProtectHostname, ProtectSystem, RootDirectoryOrRootImage, LockPersonality, MemoryDenyWriteExecute, NoNewPrivileges, CapabilityBoundingSet_CAP_SYS_ADMIN, CapabilityBoundingSet_CAP_SET_UID_GID_PCAP, CapabilityBoundingSet_CAP_SYS_PTRACE, CapabilityBoundingSet_CAP_SYS_TIME, CapabilityBoundingSet_CAP_NET_ADMIN, CapabilityBoundingSet_CAP_SYS_RAWIO, CapabilityBoundingSet_CAP_SYS_MODULE, CapabilityBoundingSet_CAP_AUDIT, CapabilityBoundingSet_CAP_SYSLOG, CapabilityBoundingSet_CAP_SYS_NICE_RESOURCE, CapabilityBoundingSet_CAP_MKNOD, CapabilityBoundingSet_CAP_CHOWN_FSETID_SETFCAP, CapabilityBoundingSet_CAP_DAC_FOWNER_IPC_OWNER, CapabilityBoundingSet_CAP_KILL, CapabilityBoundingSet_CAP_NET_BIND_SERVICE_BROADCAST_RAW, CapabilityBoundingSet_CAP_SYS_BOOT, CapabilityBoundingSet_CAP_MAC, CapabilityBoundingSet_CAP_LINUX_IMMUTABLE, CapabilityBoundingSet_CAP_IPC_LOCK, CapabilityBoundingSet_CAP_SYS_CHROOT, CapabilityBoundingSet_CAP_BLOCK_SUSPEND, CapabilityBoundingSet_CAP_WAKE_ALARM, CapabilityBoundingSet_CAP_LEASE, CapabilityBoundingSet_CAP_SYS_TTY_CONFIG, CapabilityBoundingSet_CAP_BPF, UMask, KeyringMode, ProtectProc, ProcSubset, NotifyAccess, RemoveIPC, Delegate, RestrictRealtime, RestrictSUIDSGID, RestrictNamespaces_user, RestrictNamespaces_mnt, RestrictNamespaces_ipc, RestrictNamespaces_pid, RestrictNamespaces_cgroup, RestrictNamespaces_uts, RestrictNamespaces_net, RestrictAddressFamilies_AF_INET_INET6, RestrictAddressFamilies_AF_UNIX, RestrictAddressFamilies_AF_NETLINK, RestrictAddressFamilies_AF_PACKET, RestrictAddressFamilies_OTHER, SystemCallArchitectures, SystemCallFilter_swap, SystemCallFilter_obsolete, SystemCallFilter_clock, SystemCallFilter_cpu_emulation, SystemCallFilter_debug, SystemCallFilter_mount, SystemCallFilter_module, SystemCallFilter_raw_io,
SystemCallFilter_reboot, SystemCallFilter_privileged, SystemCallFilter_resources, IPAddressDeny, DeviceAllow, AmbientCapabilities.

        See example "JSON Policy" below.

        Added in version 250.

    --json=MODE
        With the security command, generate a JSON formatted output of the security analysis table. The format is a JSON array with objects containing the following fields: set which indicates if the setting has been enabled or not, name which is what is used to refer to the setting, json_field which is the JSON compatible identifier of the setting, description which is an outline of the setting state, and exposure which is a number in the range 0.0...10.0, where a higher value corresponds to a higher security threat. The JSON version of the table is printed to standard output. The MODE passed to the option can be one of three: off which is the default, pretty and short which respectively output a prettified or shortened JSON version of the security table.

        With the plot command, generate a JSON formatted output of the raw time data. The format is a JSON array with objects containing the following fields: name which is the unit name, activated which is the time after startup the service was activated, activating which is how long after startup the service was initially started, time which is how long the service took to activate from when it was initially started, deactivated which is the time after startup that the service was deactivated, deactivating which is the time after startup that the service was initially told to deactivate.

        Added in version 250.

    --iterations=NUMBER
        When used with the calendar command, show the specified number of iterations the specified calendar expression will elapse next. Defaults to 1.

        Added in version 242.

    --base-time=TIMESTAMP
        When used with the calendar command, show next iterations relative to the specified point in time. If not specified defaults to the current time.

        Added in version 244.
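The plot JSON data can be post-processed with ordinary tools. For instance, a blame-style listing (longest activation first) can be derived from the time field. The records below are made-up sample data, not real systemd output, using the documented field names:

```python
import json

# Hypothetical sample of plot records, using the documented fields
# (times here are in seconds for readability).
sample = '''[
  {"name": "systemd-networkd-wait-online.service",
   "activating": 12.804, "activated": 33.709, "time": 20.905},
  {"name": "pmcd.service",
   "activating": 33.715, "activated": 35.962, "time": 2.247},
  {"name": "kmod-static-nodes.service",
   "activating": 6.976, "activated": 7.153, "time": 0.177}
]'''

def blame_order(records_json):
    """Sort plot records by how long each unit took to activate, longest first."""
    records = json.loads(records_json)
    return [r["name"] for r in sorted(records, key=lambda r: r["time"], reverse=True)]
```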
--unit=UNIT
        When used with the condition command, evaluate all the Condition*=... and Assert*=... assignments in the specified unit file. The full unit search path is formed by combining the directories for the specified unit with the usual unit load paths. The variable $SYSTEMD_UNIT_PATH is supported, and may be used to replace or augment the compiled in set of unit load paths; see systemd.unit(5). All unit files present in the directory containing the specified unit will be used in preference to the other paths.

        Added in version 250.

    --table
        When used with the plot command, the raw time data is output in a table.

        Added in version 253.

    --no-legend
        When used with the plot command in combination with either --table or --json=, no legends or hints are included in the output.

        Added in version 253.

    -H, --host=
        Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets.

    -M, --machine=
        Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied.

    --quiet
        Suppress hints and other non-essential output.

        Added in version 250.
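As a small illustration of the -H/--host argument syntax described above, here is a simplified Python sketch. It ignores the bracketed-IPv6 case, and all the host, user, and container names in the usage example are hypothetical:

```python
def parse_host(spec):
    """Split a [USER@]HOST[:PORT][/CONTAINER] specification into its parts.
    Simplified sketch: does not handle IPv6 addresses in brackets."""
    user = port = container = None
    if "@" in spec:
        user, spec = spec.split("@", 1)
    if "/" in spec:
        spec, container = spec.split("/", 1)
    if ":" in spec:
        spec, port = spec.rsplit(":", 1)
    return user, spec, port, container

# Hypothetical example: connect as "admin" to port 2222 on "buildhost",
# then into the container "web1":
parts = parse_host("admin@buildhost:2222/web1")
# parts == ("admin", "buildhost", "2222", "web1")
```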
--tldr
        With cat-config, only print the "interesting" parts of the configuration files, skipping comments and empty lines and section headers followed only by comments and empty lines.

        Added in version 255.

    -h, --help
        Print a short help text and exit.

    --version
        Print a short version string and exit.

    --no-pager
        Do not pipe output into a pager.

EXIT STATUS
    For most commands, 0 is returned on success, and a non-zero failure code otherwise.

    With the verb compare-versions, in the two-argument form, 12, 0, or 11 is returned if the second version string is respectively larger than, equal to, or smaller than the first. In the three-argument form, 0 or 1 if the condition is respectively true or false.

ENVIRONMENT
    $SYSTEMD_LOG_LEVEL
        The maximum log level of emitted messages (messages with a higher log level, i.e. less important ones, will be suppressed). Either one of (in order of decreasing importance) emerg, alert, crit, err, warning, notice, info, debug, or an integer in the range 0...7. See syslog(3) for more information.

    $SYSTEMD_LOG_COLOR
        A boolean. If true, messages written to the tty will be colored according to priority. This setting is only useful when messages are written directly to the terminal, because journalctl(1) and other tools that display logs will color messages based on the log level on their own.

    $SYSTEMD_LOG_TIME
        A boolean. If true, console log messages will be prefixed with a timestamp. This setting is only useful when messages are written directly to the terminal or a file, because journalctl(1) and other tools that display logs will attach timestamps based on the entry metadata on their own.

    $SYSTEMD_LOG_LOCATION
        A boolean. If true, messages will be prefixed with a filename and line number in the source code where the message originates. Note that the log location is often attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs.

    $SYSTEMD_LOG_TID
        A boolean.
If true, messages will be prefixed with the current numerical thread ID (TID). Note that this information is attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TARGET The destination for log messages. One of console (log to the attached tty), console-prefixed (log to the attached tty but with prefixes encoding the log level and "facility", see syslog(3)), kmsg (log to the kernel circular log buffer), journal (log to the journal), journal-or-kmsg (log to the journal if available, and to kmsg otherwise), auto (determine the appropriate log target automatically, the default), null (disable log output). $SYSTEMD_LOG_RATELIMIT_KMSG Whether to ratelimit kmsg or not. Takes a boolean. Defaults to "true". If disabled, systemd will not ratelimit messages written to kmsg. $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER. If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less(1) and more(1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value "cat" is equivalent to passing --no-pager. Note: if $SYSTEMD_PAGERSECURE is not set, $SYSTEMD_PAGER (as well as $PAGER) will be silently ignored. $SYSTEMD_LESS Override the options passed to less (by default "FRSXMK"). Users might want to change two options in particular: K This option instructs the pager to exit immediately when Ctrl+C is pressed. To allow less to handle Ctrl+C itself to switch back to the pager command prompt, unset this option. If the value of $SYSTEMD_LESS does not include "K", and the pager that is invoked is less, Ctrl+C will be ignored by the executable, and needs to be handled by the pager. X This option instructs the pager to not send termcap initialization and deinitialization strings to the terminal. 
It is set by default to allow command output to remain visible in the terminal even after the pager exits. Nevertheless, this prevents some pager functionality from working, in particular paged output cannot be scrolled with the mouse. See less(1) for more discussion. $SYSTEMD_LESSCHARSET Override the charset passed to less (by default "utf-8", if the invoking terminal is determined to be UTF-8 compatible). $SYSTEMD_PAGERSECURE Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if false, disabled. If $SYSTEMD_PAGERSECURE is not set at all, secure mode is enabled if the effective UID is not the same as the owner of the login session, see geteuid(2) and sd_pid_get_owner_uid(3). In secure mode, LESSSECURE=1 will be set when invoking the pager, and the pager shall disable commands that open or create new files or start new subprocesses. When $SYSTEMD_PAGERSECURE is not set at all, pagers which are not known to implement secure mode will not be used. (Currently only less(1) implements secure mode.) Note: when commands are invoked with elevated privileges, for example under sudo(8) or pkexec(1), care must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the pager may be enabled automatically as described above. Setting SYSTEMD_PAGERSECURE=0 or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note that if the $SYSTEMD_PAGER or $PAGER variables are to be honoured, $SYSTEMD_PAGERSECURE must be set too. It might be reasonable to completely disable the pager using --no-pager instead. $SYSTEMD_COLORS Takes a boolean argument. When true, systemd and related utilities will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can take one of the following special values: "16", "256" to restrict the use of colors to the base 16 or 256 ANSI colors, respectively. 
This can be specified to override the automatic decision based on $TERM and what the console is connected to. $SYSTEMD_URLIFY The value must be a boolean. Controls whether clickable links should be generated in the output for terminal emulators supporting this. This can be specified to override the decision that systemd makes based on $TERM and other conditions. EXAMPLES top Example 26. JSON Policy The JSON file passed as a path parameter to --security-policy= has a top-level JSON object, with keys being the assessment test identifiers mentioned above. The values in the file should be JSON objects with one or more of the following fields: description_na (string), description_good (string), description_bad (string), weight (unsigned integer), and range (unsigned integer). If any of these fields corresponding to a specific id of the unit file is missing from the JSON object, the default built-in field value corresponding to that same id is used for security analysis. The weight and range fields are used in determining the overall exposure level of the unit files: the value of each setting is assigned a badness score, which is multiplied by the policy weight and divided by the policy range to determine the overall exposure that the setting implies. The computed badness is summed across all settings in the unit file, normalized to the 1...100 range, and used to determine the overall exposure level of the unit. By allowing users to manipulate these fields, the 'security' verb gives them the option to decide for themselves which ids are more important and hence should have a greater effect on the exposure level. A weight of "0" means the setting will not be checked. 
{ "PrivateDevices": { "description_good": "Service has no access to hardware devices", "description_bad": "Service potentially has access to hardware devices", "weight": 1000, "range": 1 }, "PrivateMounts": { "description_good": "Service cannot install system mounts", "description_bad": "Service may install system mounts", "weight": 1000, "range": 1 }, "PrivateNetwork": { "description_good": "Service has no access to the host's network", "description_bad": "Service has access to the host's network", "weight": 2500, "range": 1 }, "PrivateTmp": { "description_good": "Service has no access to other software's temporary files", "description_bad": "Service has access to other software's temporary files", "weight": 1000, "range": 1 }, "PrivateUsers": { "description_good": "Service does not have access to other users", "description_bad": "Service has access to other users", "weight": 1000, "range": 1 } } SEE ALSO top systemd(1), systemctl(1) NOTES top 1. Packaging Metadata https://systemd.io/COREDUMP_PACKAGE_METADATA/ 2. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-ANALYZE(1) Pages that refer to this page: systemd-cryptenroll(1), systemd-nspawn(1), org.freedesktop.systemd1(5), systemd.exec(5), systemd.service(5), systemd.unit(5), systemd.directives(7), systemd.index(7), systemd.time(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
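The compare-versions exit-code convention documented under EXIT STATUS (12, 0, or 11 in the two-argument form) can be sketched as follows. This is an illustration only: the dotted-numeric comparison below is a simplification, while the real tool applies systemd's full version-string ordering rules.

```python
def compare_versions_exit_code(a: str, b: str) -> int:
    """Return 12 if the second version is larger than the first,
    0 if they are equal, and 11 if the second is smaller, mirroring
    the documented two-argument `systemd-analyze compare-versions`
    exit codes. Simplified dotted-numeric comparison for illustration."""
    def key(v: str) -> list[int]:
        return [int(part) for part in v.split(".")]
    if key(b) > key(a):
        return 12
    if key(b) < key(a):
        return 11
    return 0
```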
# systemd-analyze\n\n> Analyze and debug system manager.\n> Show timing details about the boot process of units (services, mount points, devices, sockets).\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-analyze.html>.\n\n- List all running units, ordered by the time they took to initialize:\n\n`systemd-analyze blame`\n\n- Print a tree of the time-critical chain of units:\n\n`systemd-analyze critical-chain`\n\n- Create an SVG file showing when each system service started, highlighting the time that they spent on initialization:\n\n`systemd-analyze plot > {{path/to/file.svg}}`\n\n- Plot a dependency graph and convert it to an SVG file:\n\n`systemd-analyze dot | dot -T{{svg}} > {{path/to/file.svg}}`\n\n- Show security scores of running units:\n\n`systemd-analyze security`\n
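The exposure score reported by the `security` verb follows the weighting scheme described under "JSON Policy" above: each setting's badness is multiplied by the policy weight, divided by the policy range, summed, and normalized to the 1...100 range. A minimal sketch of that arithmetic, noting that the normalization step (relative to the worst possible score) is an assumption, since the man page does not give systemd's exact formula:

```python
def exposure(settings: dict, policy: dict) -> float:
    """Sketch of the documented exposure computation: badness * weight
    / range per setting, summed and scaled to a 0..100 range. The
    scaling against the worst possible score is an assumed detail."""
    total = 0.0
    worst = 0.0
    for name, badness in settings.items():
        weight = policy[name]["weight"]
        rng = policy[name]["range"]
        if weight == 0:  # a weight of 0 means the setting is not checked
            continue
        total += badness * weight / rng
        worst += weight  # maximum badness for a setting equals its range
    return round(100 * total / worst, 1) if worst else 0.0
```

With the example policy above, a unit that fails only the PrivateDevices check (badness 1, weight 1000) against PrivateDevices plus PrivateNetwork (weight 2500) scores 1000 out of a worst case of 3500.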
systemd-ask-password
systemd-ask-password(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-ASK-PASSWORD(1) systemd-ask-password SYSTEMD-ASK-PASSWORD(1) NAME top systemd-ask-password - Query the user for a system password SYNOPSIS top systemd-ask-password [OPTIONS...] [MESSAGE] DESCRIPTION top systemd-ask-password may be used to query a system password or passphrase from the user, using a question message specified on the command line. When run from a TTY it will query a password on the TTY and print it to standard output. When run with no TTY or with --no-tty it will use the system-wide query mechanism, which allows active users to respond via several agents, listed below. The purpose of this tool is to query system-wide passwords, that is, passwords not attached to a specific user account. Examples include: unlocking encrypted hard disks when they are plugged in or at boot, entering an SSL certificate passphrase for web and VPN servers. Existing agents are: A boot-time password agent asking the user for passwords using plymouth(8), A boot-time password agent querying the user directly on the console systemd-ask-password-console.service(8), An agent requesting password input via a wall(1) message systemd-ask-password-wall.service(8), A TTY agent that is temporarily spawned during systemctl(1) invocations, A command line agent which can be started temporarily to process queued password requests systemd-tty-ask-password-agent --query. Answering system-wide password queries is a privileged operation, hence all the agents listed above (except for the last one) run as privileged system services. The last one also needs elevated privileges, so should be run through sudo(8) or similar. Additional password agents may be implemented according to the systemd Password Agent Specification[1]. 
If a password is queried on a TTY, the user may press TAB to hide the asterisks normally shown for each character typed. Pressing Backspace as first key achieves the same effect. OPTIONS top The following options are understood: --icon= Specify an icon name alongside the password query, which may be used in all agents supporting graphical display. The icon name should follow the XDG Icon Naming Specification[2]. --id= Specify an identifier for this password query. This identifier is freely choosable and allows recognition of queries by involved agents. It should include the subsystem doing the query and the specific object the query is done for. Example: "--id=cryptsetup:/dev/sda5". Added in version 227. --keyname= Configure a kernel keyring key name to use as cache for the password. If set, then the tool will try to push any collected passwords into the kernel keyring of the root user, as a key of the specified name. If combined with --accept-cached, it will also try to retrieve such cached passwords from the key in the kernel keyring instead of querying the user right away. By using this option, the kernel keyring may be used as effective cache to avoid repeatedly asking users for passwords, if there are multiple objects that may be unlocked with the same password. The cached key will have a timeout of 2.5min set, after which it will be purged from the kernel keyring. Note that it is possible to cache multiple passwords under the same keyname, in which case they will be stored as NUL-separated list of passwords. Use keyctl(1) to access the cached key via the kernel keyring directly. Example: "--keyname=cryptsetup" Added in version 227. --credential= Configure a credential to read the password from if it exists. This may be used in conjunction with the ImportCredential=, LoadCredential= and SetCredential= settings in unit files. See systemd.exec(5) for details. If not specified, defaults to "password". 
This option has no effect if no credentials directory is passed to the program (i.e. $CREDENTIALS_DIRECTORY is not set) or if no credential of the specified name exists. Added in version 249. --timeout= Specify the query timeout in seconds. Defaults to 90s. A timeout of 0 waits indefinitely. --echo=yes|no|masked Controls whether to echo user input. Takes a boolean or the special string "masked", the default being the latter. If enabled the typed characters are echoed literally, which is useful for prompting for usernames and other non-protected data. If disabled the typed characters are not echoed in any form, and the user will not get feedback on their input. If set to "masked", an asterisk ("*") is echoed for each character typed. In this mode, if the user hits the tabulator key, echo is turned off. (Alternatively, if the user hits the backspace key while no data has been entered otherwise, echo is turned off, too.) Added in version 249. --echo, -e Equivalent to --echo=yes, see above. Added in version 217. --emoji=yes|no|auto Controls whether or not to prefix the query with a lock and key emoji, if the TTY settings permit this. The default is "auto", which defaults to "yes", unless --echo=yes is given. Added in version 249. --no-tty Never ask for password on current TTY even if one is available. Always use agent system. --accept-cached If passed, accept cached passwords, i.e. passwords previously entered. --multiple When used in conjunction with --accept-cached accept multiple passwords. This will output one password per line. --no-output Do not print passwords to standard output. This is useful if you want to store a password in kernel keyring with --keyname= but do not want it to show up on screen or in logs. Added in version 230. -n By default, when the acquired password is written to standard output it is suffixed by a newline character. This may be turned off with the -n switch, similarly to the switch of the same name of the echo(1) command. 
Added in version 249. -h, --help Print a short help text and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), systemd-ask-password-console.service(8), systemd-tty-ask-password-agent(1), keyctl(1), plymouth(8), wall(1) NOTES top 1. systemd Password Agent Specification https://systemd.io/PASSWORD_AGENTS/ 2. XDG Icon Naming Specification https://standards.freedesktop.org/icon-naming-spec/icon-naming-spec-latest.html systemd 255 SYSTEMD-ASK-PASSWORD(1) Pages that refer to this page: systemd-tty-ask-password-agent(1), systemd.directives(7), systemd.index(7), pam_systemd_loadkey(8)
# systemd-ask-password\n\n> Query the user for a system password.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-ask-password.html>.\n\n- Query a system password with a specific message:\n\n`systemd-ask-password "{{message}}"`\n\n- Specify an identifier for the password query:\n\n`systemd-ask-password --id={{identifier}} "{{message}}"`\n\n- Use a kernel keyring key name as a cache for the password:\n\n`systemd-ask-password --keyname={{key_name}} "{{message}}"`\n\n- Set a custom timeout for the password query:\n\n`systemd-ask-password --timeout={{seconds}} "{{message}}"`\n\n- Force the use of an agent system and never ask on current TTY:\n\n`systemd-ask-password --no-tty "{{message}}"`\n\n- Store a password in the kernel keyring without displaying it:\n\n`systemd-ask-password --no-output --keyname={{key_name}} "{{message}}"`\n
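The --keyname= description above notes that multiple passwords cached under the same key name are stored as a NUL-separated list in the kernel keyring. A minimal sketch of that encoding (the helper names here are hypothetical, not part of systemd):

```python
def pack_passwords(passwords: list[str]) -> bytes:
    """Join passwords into the NUL-separated list format the man page
    describes for cached keyring entries (hypothetical helper)."""
    return b"\0".join(p.encode("utf-8") for p in passwords)

def unpack_passwords(blob: bytes) -> list[str]:
    """Split a NUL-separated blob back into individual passwords,
    ignoring empty segments (hypothetical helper)."""
    return [p.decode("utf-8") for p in blob.split(b"\0") if p]
```

In practice the cached key would be read and written with keyctl(1) rather than handled directly, as the man page suggests.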
systemd-cat
systemd-cat(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | EXAMPLES | SEE ALSO | COLOPHON SYSTEMD-CAT(1) systemd-cat SYSTEMD-CAT(1) NAME top systemd-cat - Connect a pipeline or program's output with the journal SYNOPSIS top systemd-cat [OPTIONS...] [COMMAND] [ARGUMENTS...] systemd-cat [OPTIONS...] DESCRIPTION top systemd-cat may be used to connect the standard input and output of a process to the journal, or as a filter tool in a shell pipeline to pass the output the previous pipeline element generates to the journal. If no parameter is passed, systemd-cat will write everything it reads from standard input (stdin) to the journal. If parameters are passed, they are executed as command line with standard output (stdout) and standard error output (stderr) connected to the journal, so that all it writes is stored in the journal. OPTIONS top The following options are understood: -h, --help Print a short help text and exit. --version Print a short version string and exit. -t, --identifier= Specify a short string that is used to identify the logging tool. If not specified, no identification string is written to the journal. -p, --priority= Specify the default priority level for the logged messages. Pass one of "emerg", "alert", "crit", "err", "warning", "notice", "info", "debug", or a value between 0 and 7 (corresponding to the same named levels). These priority values are the same as defined by syslog(3). Defaults to "info". Note that this simply controls the default, individual lines may be logged with different levels if they are prefixed accordingly. For details, see --level-prefix= below. --stderr-priority= Specifies the default priority level for messages from the process's standard error output (stderr). Usage of this option is the same as the --priority= option, above, and both can be used at once. 
When both are used, --priority= will specify the default priority for standard output (stdout). If --stderr-priority= is not specified, messages from stderr will still be logged, with the same default priority level as stdout. Also, note that when stdout and stderr use the same default priority, the messages will be strictly ordered, because one channel is used for both. When the default priority differs, two channels are used, and so stdout messages will not be strictly ordered with respect to stderr messages - though they will tend to be approximately ordered. Added in version 241. --level-prefix= Controls whether lines read are parsed for syslog priority level prefixes. If enabled (the default), a line prefixed with a priority prefix such as "<5>" is logged at priority 5 ("notice"), and similarly for the other priority levels. Takes a boolean argument. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. EXAMPLES top Example 1. Invoke a program This calls /bin/ls with standard output and error connected to the journal: # systemd-cat ls Example 2. Usage in a shell pipeline This builds a shell pipeline also invoking /bin/ls and writes the output it generates to the journal: # ls | systemd-cat Even though the two examples have very similar effects, the first is preferable, since only one process is running at a time and both stdout and stderr are captured, while in the second example, only stdout is captured. SEE ALSO top systemd(1), systemctl(1), logger(1) systemd 255 SYSTEMD-CAT(1) Pages that refer to this page: journalctl(1), sd-journal(3), systemd.directives(7), systemd.index(7), systemd-journald.service(8)
# systemd-cat\n\n> Connect a pipeline or program's output streams with the systemd journal.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-cat.html>.\n\n- Write the output of the specified command to the journal (both output streams are captured):\n\n`systemd-cat {{command}}`\n\n- Write the output of a pipeline to the journal (`stderr` stays connected to the terminal):\n\n`{{command}} | systemd-cat`\n
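The --level-prefix= behavior described in the man page above (a leading "<N>" selects the syslog(3) priority for that line, otherwise the default applies) can be sketched as follows. This is illustrative only, not systemd's actual implementation:

```python
import re

# A line beginning with "<N>" (N in 0..7, the syslog(3) levels) is
# logged at priority N; any other line gets the default priority.
PREFIX = re.compile(r"^<([0-7])>")

def line_priority(line: str, default: int = 6) -> tuple[int, str]:
    """Return (priority, message) for one input line; 6 corresponds
    to "info", systemd-cat's documented default."""
    m = PREFIX.match(line)
    if m:
        return int(m.group(1)), line[m.end():]
    return default, line
```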
systemd-cgls
systemd-cgls(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO | COLOPHON SYSTEMD-CGLS(1) systemd-cgls SYSTEMD-CGLS(1) NAME top systemd-cgls - Recursively show control group contents SYNOPSIS top systemd-cgls [OPTIONS...] [CGROUP...] systemd-cgls [OPTIONS...] --unit|--user-unit [UNIT...] DESCRIPTION top systemd-cgls recursively shows the contents of the selected Linux control group hierarchy in a tree. If arguments are specified, shows all member processes of the specified control groups plus all their subgroups and their members. The control groups may either be specified by their full file paths or are assumed in the systemd control group hierarchy. If no argument is specified and the current working directory is beneath the control group mount point /sys/fs/cgroup/, shows the contents of the control group the working directory refers to. Otherwise, the full systemd control group hierarchy is shown. By default, empty control groups are not shown. OPTIONS top The following options are understood: --all Do not hide empty control groups in the output. -l, --full Do not ellipsize process tree members. Added in version 198. -u, --unit Show cgroup subtrees for the specified units. Added in version 233. --user-unit Show cgroup subtrees for the specified user units. Added in version 233. -k Include kernel threads in output. -M MACHINE, --machine=MACHINE Limit control groups shown to the part corresponding to the container MACHINE. Added in version 203. -x, --xattr= Controls whether to include information about extended attributes of the listed control groups in the output. With the long option, expects a boolean value. Defaults to no. Added in version 250. -c, --cgroup-id= Controls whether to include the numeric ID of the listed control groups in the output. With the long option, expects a boolean value. Defaults to no. 
Added in version 250. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), systemctl(1), systemd-cgtop(1), systemd-nspawn(1), ps(1) systemd 255 SYSTEMD-CGLS(1) Pages that refer to this page: systemd(1), systemd-cgtop(1), cgroups(7), systemd.directives(7), systemd.index(7), systemd-machined.service(8)
# systemd-cgls\n\n> Show the contents of the selected Linux control group hierarchy in a tree.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-cgls.html>.\n\n- Display the whole control group hierarchy on your system:\n\n`systemd-cgls`\n\n- Display a control group tree of a specific resource controller:\n\n`systemd-cgls {{cpu|memory|io}}`\n\n- Display the control group hierarchy of one or more systemd units:\n\n`systemd-cgls --unit {{unit1 unit2 ...}}`\n
systemd-cgtop
systemd-cgtop(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | KEYS | EXIT STATUS | SEE ALSO | COLOPHON SYSTEMD-CGTOP(1) systemd-cgtop SYSTEMD-CGTOP(1) NAME top systemd-cgtop - Show top control groups by their resource usage SYNOPSIS top systemd-cgtop [OPTIONS...] [GROUP] DESCRIPTION top systemd-cgtop shows the top control groups of the local Linux control group hierarchy, ordered by their CPU, memory, or disk I/O load. The display is refreshed in regular intervals (by default every 1s), similar in style to top(1). If a control group path is specified, shows only the services of the specified control group. If systemd-cgtop is not connected to a tty, no column headers are printed and the default is to only run one iteration. The --iterations= argument, if given, is honored. This mode is suitable for scripting. Resource usage is only accounted for control groups with the appropriate controllers turned on: "cpu" controller for CPU usage, "memory" controller for memory usage, and "io" controller for disk I/O consumption. If resource monitoring for these resources is required, it is recommended to add the CPUAccounting=1, MemoryAccounting=1 and IOAccounting=1 settings in the unit files in question. See systemd.resource-control(5) for details. The CPU load value can be between 0 and 100 times the number of processors the system has. For example, if the system has 8 processors, the CPU load value is going to be between 0% and 800%. The number of processors can be found in "/proc/cpuinfo". To emphasize: unless "CPUAccounting=1", "MemoryAccounting=1", and "IOAccounting=1" are enabled for the services in question, no resource accounting will be available for system services and the data shown by systemd-cgtop will be incomplete. OPTIONS top The following options are understood: -p, --order=path Order by control group path name. 
-t, --order=tasks Order by number of tasks/processes in the control group. -c, --order=cpu Order by CPU load. -m, --order=memory Order by memory usage. -i, --order=io Order by disk I/O load. -b, --batch Run in "batch" mode: do not accept input and run until the iteration limit set with --iterations= is exhausted or until killed. This mode could be useful for sending output from systemd-cgtop to other programs or to a file. Added in version 188. -r, --raw Format byte counts (as in memory usage and I/O metrics) and CPU time with raw numeric values rather than human-readable numbers. Added in version 221. --cpu=percentage, --cpu=time Controls whether the CPU usage is shown as percentage or time. By default, the CPU usage is shown as percentage. This setting may also be toggled at runtime by pressing the % key. Added in version 226. -P Count only userspace processes instead of all tasks. By default, all tasks are counted: each kernel thread and each userspace thread individually. With this setting, kernel threads are excluded from the count and each userspace process only counts as one task, regardless of how many threads it consists of. This setting may also be toggled at runtime by pressing the P key. This option may not be combined with -k. Added in version 227. -k Count only userspace processes and kernel threads instead of all tasks. By default, all tasks are counted: each kernel thread and each userspace thread individually. With this setting, kernel threads are included in the count and each userspace process only counts as one task, regardless of how many threads it consists of. This setting may also be toggled at runtime by pressing the k key. This option may not be combined with -P. Added in version 226. --recursive= Controls whether the number of processes shown for a control group shall include all processes that are contained in any of the child control groups as well. Takes a boolean argument, which defaults to "yes". 
If enabled, the processes in child control groups are included; if disabled, only the processes in the control group itself are counted. This setting may also be toggled at runtime by pressing the r key. Note that this setting only applies to process counting, i.e. when the -P or -k options are used. It has no effect if all tasks are counted, in which case the counting is always recursive. Added in version 226. -n, --iterations= Perform only this many iterations. A value of 0 indicates that the program should run indefinitely. Added in version 188. -1 A shortcut for --iterations=1. Added in version 238. -d, --delay= Specify refresh delay in seconds (or if one of "ms", "us", "min" is specified as unit in this time unit). This setting may also be increased and decreased at runtime by pressing the + and - keys. --depth= Maximum control group tree traversal depth. Specifies how deep systemd-cgtop shall traverse the control group hierarchies. If 0 is specified, only the root group is monitored. For 1, only the first level of control groups is monitored, and so on. Defaults to 3. -M MACHINE, --machine=MACHINE Limit control groups shown to the part corresponding to the container MACHINE. This option may not be used when a control group path is specified. Added in version 227. -h, --help Print a short help text and exit. --version Print a short version string and exit. KEYS top systemd-cgtop is an interactive tool and may be controlled via user input using the following keys: h Shows a short help text. Space Immediately refresh output. Added in version 226. q Terminate the program. p, t, c, m, i Sort the control groups by path, number of tasks, CPU load, memory usage, or I/O load, respectively. This setting may also be controlled using the --order= command line switch. % Toggle between showing CPU time as time or percentage. This setting may also be controlled using the --cpu= command line switch. Added in version 201. 
+, - Increase or decrease refresh delay, respectively. This setting may also be controlled using the --delay= command line switch. P Toggle between counting all tasks, or only userspace processes. This setting may also be controlled using the -P command line switch (see above). Added in version 227. k Toggle between counting all tasks, or only userspace processes and kernel threads. This setting may also be controlled using the -k command line switch (see above). Added in version 226. r Toggle between recursively including or excluding processes in child control groups in control group process counts. This setting may also be controlled using the --recursive= command line switch. This key is not available if all tasks are counted, it is only available if processes are counted, as enabled with the P or k keys. Added in version 226. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), systemctl(1), systemd-cgls(1), systemd.resource-control(5), top(1) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-CGTOP(1) Pages that refer to this page: systemd-cgls(1), cgroups(7), systemd.directives(7), systemd.index(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# systemd-cgtop\n\n> Show the top control groups of the local Linux control group hierarchy, ordered by their CPU, memory, or disk I/O load.\n> See also: `top`.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-cgtop.html>.\n\n- Start an interactive view:\n\n`systemd-cgtop`\n\n- Change the sort order:\n\n`systemd-cgtop --order={{cpu|memory|path|tasks|io}}`\n\n- Show the CPU usage by time instead of percentage:\n\n`systemd-cgtop --cpu=time`\n\n- Change the update interval in seconds (or one of these time units: `ms`, `us`, `min`):\n\n`systemd-cgtop --delay={{interval}}`\n\n- Only count userspace processes (without kernel threads):\n\n`systemd-cgtop -P`\n
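The --batch, --raw, --iterations= and --order= switches described above combine into a script-friendly one-shot invocation. A minimal sketch (the `command -v` guard and the fallback message are additions so the snippet degrades gracefully on systems without systemd-cgtop):

```shell
# Take one non-interactive snapshot of cgroup resource usage, sorted by
# memory, limited to two levels of the hierarchy, with raw numbers suitable
# for parsing by other tools.
if command -v systemd-cgtop >/dev/null 2>&1; then
  snapshot=$(systemd-cgtop --batch --raw --iterations=1 --order=memory --depth=2 2>/dev/null) || snapshot=""
fi
# Fall back to a notice on systems without systemd (e.g. minimal containers),
# so the sketch stays self-contained.
snapshot=${snapshot:-"systemd-cgtop unavailable on this system"}
printf '%s\n' "$snapshot"
```

Redirecting this output to a file (or piping it to awk) is exactly the use case --batch is documented for above.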
systemd-confext
systemd-sysext(8) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | USES | COMMANDS | OPTIONS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-SYSEXT(8) systemd-sysext SYSTEMD-SYSEXT(8) NAME top systemd-sysext, systemd-sysext.service, systemd-confext, systemd-confext.service - Activates System Extension Images SYNOPSIS top systemd-sysext [OPTIONS...] COMMAND systemd-sysext.service systemd-confext [OPTIONS...] COMMAND systemd-confext.service DESCRIPTION top systemd-sysext activates/deactivates system extension images. System extension images may dynamically at runtime extend the /usr/ and /opt/ directory hierarchies with additional files. This is particularly useful on immutable system images where a /usr/ and/or /opt/ hierarchy residing on a read-only file system shall be extended temporarily at runtime without making any persistent modifications. System extension images should contain files and directories similar in fashion to a regular operating system tree. When one or more system extension images are activated, their /usr/ and /opt/ hierarchies are combined via "overlayfs" with the same hierarchies of the host OS, and the host /usr/ and /opt/ overmounted with it ("merging"). When they are deactivated, the mount point is disassembled again revealing the unmodified original host version of the hierarchy ("unmerging"). Merging thus makes the extension's resources suddenly appear below the /usr/ and /opt/ hierarchies as if they were included in the base OS image itself. Unmerging makes them disappear again, leaving in place only the files that were shipped with the base OS image itself. Files and directories contained in the extension images outside of the /usr/ and /opt/ hierarchies are not merged, and hence have no effect when included in a system extension image. 
In particular, files in /etc/ and /var/ included in a system extension image will not appear in the respective hierarchies after activation. System extension images are strictly read-only, and the host /usr/ and /opt/ hierarchies become read-only too while they are activated. System extensions are supposed to be purely additive, i.e. they are supposed to include only files that do not exist in the underlying basic OS image. However, the underlying mechanism (overlayfs) also allows overlaying or removing files, but it is recommended not to make use of this. System extension images may be provided in the following formats: 1. Plain directories or btrfs subvolumes containing the OS tree 2. Disk images with a GPT disk label, following the Discoverable Partitions Specification[1] 3. Disk images lacking a partition table, with a naked Linux file system (e.g. erofs, squashfs or ext4) These image formats are the same ones that systemd-nspawn(1) supports via its --directory=/--image= switches and those that the service manager supports via RootDirectory=/RootImage=. Similar to them they may optionally carry Verity authentication information. System extensions are searched for in the directories /etc/extensions/, /run/extensions/ and /var/lib/extensions/. The first two listed directories are not suitable for carrying large binary images, however are still useful for carrying symlinks to them. The primary place for installing system extensions is /var/lib/extensions/. Any directories found in these search directories are considered directory based extension images; any files with the .raw suffix are considered disk image based extension images. When invoked in the initrd, the additional directory /.extra/sysext/ is included in the directories that are searched for extension images. Note, however, that by default a tighter image policy applies to images found there; see below. 
This directory is populated by systemd-stub(7) with extension images found in the system's EFI System Partition. During boot OS extension images are activated automatically, if the systemd-sysext.service is enabled. Note that this service runs only after the underlying file systems where system extensions may be located have been mounted. This means they are not suitable for shipping resources that are processed by subsystems running in earliest boot. Specifically, OS extension images are not suitable for shipping system services or systemd-sysusers(8) definitions. See the Portable Services[2] page for a simple mechanism for shipping system services in disk images, in a similar fashion to OS extensions. Note the different isolation of these two mechanisms: while system extensions directly extend the underlying OS image with additional files that appear very similar to files shipped in the OS image itself and thus imply no security isolation, portable services imply service level sandboxing in one way or another. The systemd-sysext.service service is guaranteed to finish start-up before basic.target is reached; i.e. at the time regular services initialize (those which do not use DefaultDependencies=no), the files and directories system extensions provide are available in /usr/ and /opt/ and may be accessed. Note that there is no concept of enabling/disabling installed system extension images: all installed extension images are automatically activated at boot. However, you can place an empty directory named like the extension (no .raw) in /etc/extensions/ to "mask" an extension with the same name in a system folder with lower precedence. A simple mechanism for version compatibility is enforced: a system extension image must carry a /usr/lib/extension-release.d/extension-release.NAME file, which must match its image name, that is compared with the host os-release file: the contained ID= fields have to match unless "_any" is set for the extension. 
If the extension ID= is not "_any", the SYSEXT_LEVEL= field (if defined) has to match. If the latter is not defined, the VERSION_ID= field has to match instead. If the extension defines the ARCHITECTURE= field and the value is not "_any" it has to match the kernel's architecture reported by uname(2) but the used architecture identifiers are the same as for ConditionArchitecture= described in systemd.unit(5). EXTENSION_RELOAD_MANAGER= can be set to 1 if the extension requires a service manager reload after application of the extension. Note that, for the reasons mentioned earlier, Portable Services[2] remain the recommended way to ship system services. System extensions should not ship a /usr/lib/os-release file (as that would be merged into the host /usr/ tree, overriding the host OS version data, which is not desirable). The extension-release file follows the same format and semantics, and carries the same content, as the os-release file of the OS, but it describes the resources carried in the extension image. The systemd-confext concept follows the same principle as the systemd-sysext(1) functionality but instead of working on /usr and /opt, confext will extend only /etc. Files and directories contained in the confext images outside of the /etc/ hierarchy are not merged, and hence have no effect when included in the image. Formats for these images are the same as for sysext images. The merged hierarchy will be mounted with "nosuid" and (if not disabled via --noexec=false) "noexec". Confexts are looked for in the directories /run/confexts/, /var/lib/confexts/, /usr/lib/confexts/ and /usr/local/lib/confexts/. The first listed directory is not suitable for carrying large binary images, however is still useful for carrying symlinks to them. The primary place for installing configuration extensions is /var/lib/confexts/. 
Any directories found in these search directories are considered directory based confext images; any files with the .raw suffix are considered disk image based confext images. Again, just like sysext images, the confext images will contain a /etc/extension-release.d/extension-release.NAME file, which must match the image name (with the usual escape hatch of the user.extension-release.strict xattr(7)), and again with content being one or more of ID=, VERSION_ID=, and CONFEXT_LEVEL=. Confext images will then be checked and matched against the base OS layer. USES top The primary use case for system extension images is immutable environments where debugging and development tools shall optionally be made available, but not included in the immutable base OS image itself (e.g. strace(1) and gdb(1) shall be an optionally installable addition in order to make debugging/development easier). System extension images should not be misunderstood as a generic software packaging framework, as no dependency scheme is available: system extensions should carry all files they need themselves, except for those already shipped in the underlying host system image. Typically, system extension images are built at the same time as the base OS image within the same build system. Another use case for the system extension concept is temporarily overriding OS supplied resources with newer ones, for example to install a locally compiled development version of some low-level component over the immutable OS image without doing a full OS rebuild or modifying the nominally immutable image. (e.g. "install" a locally built package with DESTDIR=/var/lib/extensions/mytest make install && systemd-sysext refresh, making it available in /usr/ as if it was installed in the OS image itself.) This case works regardless of whether the underlying host /usr/ is managed as an immutable disk image or as a traditional package-manager-controlled (i.e. writable) tree. 
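The DESTDIR= trick above presupposes the small directory layout a sysext needs. Below is a sketch of a minimal directory-based extension; the name "mytest" and the payload script are illustrative, and it is built under a scratch directory so it runs unprivileged, whereas a real extension would live in /var/lib/extensions/ and be activated with "systemd-sysext refresh" as root:

```shell
# Build a minimal directory-based system extension named "mytest".
root=$(mktemp -d)
ext="$root/mytest"
mkdir -p "$ext/usr/bin" "$ext/usr/lib/extension-release.d"

# The release file name must match the extension name; ID=_any skips the
# os-release ID match described above.
printf 'ID=_any\n' > "$ext/usr/lib/extension-release.d/extension-release.mytest"

# Ship one payload file below /usr/ (only /usr/ and /opt/ are merged):
printf '#!/bin/sh\necho hello from mytest\n' > "$ext/usr/bin/mytest-tool"
chmod +x "$ext/usr/bin/mytest-tool"

# With this directory placed in /var/lib/extensions/, "systemd-sysext refresh"
# (as root) would make /usr/bin/mytest-tool appear in the merged /usr/.
"$ext/usr/bin/mytest-tool"
```

A confext is analogous, except the payload and the extension-release.d/ directory live below /etc/ instead of /usr/.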
For the confext case, the OSConfig project aims to perform runtime reconfiguration of OS services. Sometimes, there is a need to swap certain configuration parameter values or restart only a specific service without deployment of new code or a complete OS deployment. In other words, we want to be able to tie the most frequently configured options to runtime updateable flags that can be changed without a system reboot. This will help reduce servicing times when there is a need for changing the OS configuration. COMMANDS top The following commands are understood by both the sysext and confext concepts: status When invoked without any command verb, or when status is specified the current merge status is shown, separately (for both /usr/ and /opt/ of sysext and for /etc/ of confext). Added in version 248. merge Merges all currently installed system extension images into /usr/ and /opt/, by overmounting these hierarchies with an "overlayfs" file system combining the underlying hierarchies with those included in the extension images. This command will fail if the hierarchies are already merged. For confext, the merge happens into the /etc/ directory instead. Added in version 248. unmerge Unmerges all currently installed system extension images from /usr/ and /opt/ for sysext and /etc/, for confext, by unmounting the "overlayfs" file systems created by merge prior. Added in version 248. refresh A combination of unmerge and merge: if already mounted the existing "overlayfs" instance is unmounted temporarily, and then replaced by a new version. This command is useful after installing/removing system extension images, in order to update the "overlayfs" file system accordingly. If no system extensions are installed when this command is executed, the equivalent of unmerge is executed, without establishing any new "overlayfs" instance. Note that currently there's a brief moment where neither the old nor the new "overlayfs" file system is mounted. 
This implies that all resources supplied by a system extension will briefly disappear even if they exist continuously during the refresh operation. Added in version 248. list A brief list of installed extension images is shown. Added in version 248. -h, --help Print a short help text and exit. --version Print a short version string and exit. OPTIONS top --root= Operate relative to the specified root directory, i.e. establish the "overlayfs" mount not on the top-level host /usr/ and /opt/ hierarchies for sysext or /etc/ for confext, but below some specified root directory. Added in version 248. --force When merging system extensions into /usr/ and /opt/ for sysext and /etc/ for confext, ignore version incompatibilities, i.e. force merging regardless of whether the version information included in the images matches the host or not. Added in version 248. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on system extension disk images. If not specified defaults to "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent" for system extensions, i.e. only the root and /usr/ file systems in the image are used. For configuration extensions defaults to "root=verity+signed+encrypted+unprotected+absent". When run in the initrd and operating on a system extension image stored in the /.extra/sysext/ directory a slightly stricter policy is used by default: "root=signed+absent:usr=signed+absent", see above for details. Added in version 254. --noexec=BOOL When merging configuration extensions into /etc/ the "MS_NOEXEC" mount flag is used by default. This option can be used to disable it. Added in version 254. --no-reload When used with merge, unmerge or refresh, do not reload the daemon after executing the changes even if an extension that is applied requires a reload via EXTENSION_RELOAD_MANAGER= set to 1. Added in version 255. 
--no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). EXIT STATUS top On success, 0 is returned. SEE ALSO top systemd(1), systemd-nspawn(1), systemd-stub(7) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification 2. Portable Services https://systemd.io/PORTABLE_SERVICES systemd 255 SYSTEMD-SYSEXT(8) Pages that refer to this page: portablectl(1), systemd-cryptenroll(1), org.freedesktop.portable1(5), os-release(5), systemd.directives(7), systemd.image-policy(7), systemd.index(7), systemd-repart(8)
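The --json= switch pairs naturally with the status verb for machine consumption. A minimal sketch (the guard and fallback are additions so the snippet also runs on systems without systemd-sysext):

```shell
# Query the current sysext merge status as compact JSON, suitable for
# feeding into jq or another JSON consumer.
if command -v systemd-sysext >/dev/null 2>&1; then
  status=$(systemd-sysext --json=short status 2>/dev/null) || status=""
fi
# Fallback notice keeps the sketch self-contained on non-systemd systems.
status=${status:-"systemd-sysext unavailable on this system"}
printf '%s\n' "$status"
```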
# systemd-confext\n\n> This command is an alias of `systemd-sysext`.\n> It follows the same principle as `systemd-sysext`, but instead of working on `/usr` and `/opt`, `confext` will extend only `/etc`.\n> More information: <https://www.freedesktop.org/software/systemd/man/latest/systemd-sysext.html>.\n\n- View documentation for the original command:\n\n`tldr systemd-sysext`\n
systemd-creds
systemd-creds(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | COMMANDS | OPTIONS | EXIT STATUS | EXAMPLES | SEE ALSO | NOTES | COLOPHON SYSTEMD-CREDS(1) systemd-creds SYSTEMD-CREDS(1) NAME top systemd-creds - Lists, shows, encrypts and decrypts service credentials SYNOPSIS top systemd-creds [OPTIONS...] COMMAND [ARGS...] DESCRIPTION top systemd-creds is a tool for listing, showing, encrypting and decrypting unit credentials. Credentials are limited-size binary or textual objects that may be passed to unit processes. They are primarily used for passing cryptographic keys (both public and private) or certificates, user account information or identity information from the host to services. Credentials are configured in unit files via the ImportCredential=, LoadCredential=, SetCredential=, LoadCredentialEncrypted=, and SetCredentialEncrypted= settings, see systemd.exec(5) for details. For further information see System and Service Credentials[1] documentation. COMMANDS top The following commands are understood: list Show a list of credentials passed into the current execution context. This command shows the files in the directory referenced by the $CREDENTIALS_DIRECTORY environment variable, and is intended to be executed from within service context. Along with each credential name, the size and security state is shown. The latter is one of "secure" (in case the credential is backed by unswappable memory, i.e. "ramfs"), "weak" (in case it is backed by any other type of memory), or "insecure" (if having any access mode that is not 0400, i.e. if readable by anyone but the owner). Added in version 250. cat credential... Show contents of specified credentials passed into the current execution context. Takes one or more credential names, whose contents shall be written to standard output. 
When combined with --json= or --transcode= the output is transcoded in simple ways before outputting. Added in version 250. setup Generates a host encryption key for credentials, if one has not been generated already. This ensures the /var/lib/systemd/credential.secret file is initialized with a random secret key if it doesn't exist yet. This secret key is used when encrypting/decrypting credentials with encrypt or decrypt, and is only accessible to the root user. Note that there's typically no need to invoke this command explicitly as it is implicitly called when encrypt is invoked, and credential host key encryption selected. Added in version 250. encrypt input|- output|- Loads the specified (unencrypted plaintext) input credential file, encrypts it and writes the (encrypted ciphertext) output to the specified target credential file. The resulting file may be referenced in the LoadCredentialEncrypted= setting in unit files, or its contents used literally in SetCredentialEncrypted= settings. Takes two file system paths. The file name part of the output path is embedded as name in the encrypted credential, to ensure encrypted credentials cannot be renamed and reused for different purposes without this being noticed. The credential name to embed may be overridden with the --name= setting. The input or output paths may be specified as "-", in which case the credential data is read from/written to standard input and standard output. If the output path is specified as "-" the credential name cannot be derived from the file system path, and thus should be specified explicitly via the --name= switch. The credential data is encrypted and authenticated symmetrically with one of the following encryption keys: 1. A secret key automatically derived from the system's TPM2 chip. This encryption key is not stored on the host system and thus decryption is only possible with access to the original TPM2 chip. 
Or in other words, the credential secured in this way can only be decrypted again by the local machine. 2. A secret key stored in the /var/lib/systemd/credential.secret file which is only accessible to the root user. This "host" encryption key is stored on the host file system, and thus decryption is possible with access to the host file system and sufficient privileges. The key is automatically generated when needed, but can also be created explicitly with the setup command, see above. 3. A combination of the above: an encryption key derived from both the TPM2 chip and the host file system. This means decryption requires both access to the original TPM2 chip and the OS installation. This is the default mode of operation if a TPM2 chip is available and /var/lib/systemd/ resides on persistent media. Which of the three keys shall be used for encryption may be configured with the --with-key= switch. Depending on the use-case for the encrypted credential the key to use may differ. For example, for credentials that shall be accessible from the initrd, encryption with the host key is not appropriate, since access to the host key is typically not available from the initrd. Thus, for such credentials only the TPM2 key should be used. Encrypted credentials are always encoded in Base64. Use decrypt (see below) to undo the encryption operation, and acquire the decrypted plaintext credential from the encrypted ciphertext credential. The credential data is encrypted using AES256-GCM, i.e. providing both confidentiality and integrity, keyed by a SHA256 hash of one or both of the secret keys described above. Added in version 250. decrypt input|- [output|-] Undoes the effect of the encrypt operation: loads the specified (encrypted ciphertext) input credential file, decrypts and authenticates it and writes the (decrypted plaintext) output to the specified target credential file. Takes one or two file system paths. 
The file name part of the input path is compared with the credential name embedded in the encrypted file. If it does not match decryption fails. This is done in order to ensure that encrypted credentials are not re-purposed without this being detected. The credential name to compare with the embedded credential name may also be overridden with the --name= switch. If the input path is specified as "-", the encrypted credential is read from standard input. If only one path is specified or the output path specified as "-", the decrypted credential is written to standard output. In this mode, the expected name embedded in the credential cannot be derived from the path and should be specified explicitly with --name=. Decrypting credentials requires access to the original TPM2 chip and/or credentials host key, see above. Information about which keys are required is embedded in the encrypted credential data, and thus decryption is entirely automatic. Added in version 250. has-tpm2 Reports whether the system is equipped with a TPM2 device usable for protecting credentials. If a TPM2 device has been discovered, is supported, and is being used by firmware, by the OS kernel drivers and by userspace (i.e. systemd) this prints "yes" and exits with exit status zero. If no such device is discovered/supported/used, prints "no". Otherwise prints "partial". In either of these two cases exits with non-zero exit status. It also shows four lines indicating separately whether firmware, drivers, the system and the kernel discovered/support/use TPM2. Combine with --quiet to suppress the output. Added in version 251. -h, --help Print a short help text and exit. --version Print a short version string and exit. OPTIONS top --system When specified with the list and cat commands operates on the credentials passed to system as a whole instead of on those passed to the current execution context. 
This is useful in container environments where credentials may be passed in from the container manager. Added in version 250. --transcode= When specified with the cat or decrypt commands, transcodes the output before showing it. Takes one of "base64", "unbase64", "hex" or "unhex" as argument, in order to encode/decode the credential data with Base64 or as a series of hexadecimal values. Note that this has no effect on the encrypt command, as encrypted credentials are unconditionally encoded in Base64. Added in version 250. --newline= When specified with cat or decrypt controls whether to add a trailing newline character to the end of the output if it doesn't end in one anyway. Takes one of "auto", "yes" or "no". The default mode of "auto" will suffix the output with a single newline character only when writing credential data to a TTY. Added in version 250. --pretty, -p When specified with encrypt controls whether to show the encrypted credential as SetCredentialEncrypted= setting that may be pasted directly into a unit file. Has effect only when used together with --name= and "-" as the output file. Added in version 250. --name=name When specified with the encrypt command controls the credential name to embed in the encrypted credential data. If not specified the name is chosen automatically from the filename component of the specified output path. If specified as empty string no credential name is embedded in the encrypted credential, and no verification of credential name is done when the credential is decrypted. When specified with the decrypt command controls the credential name to validate the credential name embedded in the encrypted credential with. If not specified the name is chosen automatically from the filename component of the specified input path. If no credential name is embedded in the encrypted credential file (i.e. --name= with an empty string was used when encrypting) the specified name has no effect as no credential name validation is done. 
Embedding the credential name in the encrypted credential is done in order to protect against reuse of credentials for purposes they weren't originally intended for, under the assumption the credential name is chosen carefully to encode its intended purpose. Added in version 250. --timestamp=timestamp When specified with the encrypt command controls the timestamp to embed into the encrypted credential. Defaults to the current time. Takes a timestamp specification in the format described in systemd.time(7). When specified with the decrypt command controls the timestamp to use to validate the "not-after" timestamp that was configured with --not-after= during encryption. If not specified defaults to the current system time. Added in version 250. --not-after=timestamp When specified with the encrypt command controls the time when the credential shall not be used anymore. This embeds the specified timestamp in the encrypted credential. During decryption the timestamp is checked against the current system clock, and if the timestamp is in the past the decryption will fail. By default no such timestamp is set. Takes a timestamp specification in the format described in systemd.time(7). Added in version 250. --with-key=, -H, -T When specified with the encrypt command controls the encryption/signature key to use. Takes one of "host", "tpm2", "host+tpm2", "tpm2-absent", "auto", "auto-initrd". See above for details on the three key types. If set to "auto" (which is the default) the TPM2 key is used if a TPM2 device is found and not running in a container. The host key is used if /var/lib/systemd/ is on persistent media. This means on typical systems the encryption is by default bound to both the TPM2 chip and the OS installation, and both need to be available to decrypt the credential again. If "auto" is selected but neither TPM2 is available (or running in container) nor /var/lib/systemd/ is on persistent media, encryption will fail. 
If set to "tpm2-absent" a fixed zero-length key is used (thus, in this mode neither confidentiality nor authenticity is provided!). This logic is useful to cover for systems that lack a TPM2 chip but where credentials shall be generated. Note that decryption of such credentials is refused on systems that have a TPM2 chip and where UEFI SecureBoot is enabled (this is done so that such a locked down system cannot be tricked into loading a credential generated this way that lacks authentication information). If set to "auto-initrd" a TPM2 key is used if a TPM2 is found. If not, a fixed zero-length key is used, equivalent to "tpm2-absent" mode. This option is particularly useful to generate credentials files that are encrypted/authenticated against TPM2 where available but still work on systems lacking support for this. The -H switch is a shortcut for --with-key=host. Similarly, -T is a shortcut for --with-key=tpm2. When encrypting credentials that shall be used in the initrd (where /var/lib/systemd/ is typically not available) make sure to use --with-key=auto-initrd mode, to disable binding against the host secret. This switch has no effect on the decrypt command, as information on which key to use for decryption is included in the encrypted credential already. Added in version 250. --tpm2-device=PATH Controls the TPM2 device to use. Expects a device node path referring to the TPM2 chip (e.g. /dev/tpmrm0). Alternatively the special value "auto" may be specified, in order to automatically determine the device node of a suitable TPM2 device (of which there must be exactly one). The special value "list" may be used to enumerate all suitable TPM2 devices currently discovered. Added in version 250. --tpm2-pcrs= [PCR...] Configures the TPM2 PCRs (Platform Configuration Registers) to bind the encryption key to. Takes a "+" separated list of numeric PCR indexes in the range 0...23. If not used, defaults to PCR 7 only. 
If an empty string is specified, binds the encryption key to no PCRs at all. For details about the PCRs available, see the documentation of the switch of the same name for systemd-cryptenroll(1). Added in version 250. --tpm2-public-key= [PATH], --tpm2-public-key-pcrs= [PCR...] Configures a TPM2 signed PCR policy to bind encryption to, for use with the encrypt command. The --tpm2-public-key= option accepts a path to a PEM encoded RSA public key, to bind the encryption to. If this is not specified explicitly, but a file tpm2-pcr-public-key.pem exists in one of the directories /etc/systemd/, /run/systemd/, /usr/lib/systemd/ (searched in this order), it is automatically used. The --tpm2-public-key-pcrs= option takes a list of TPM2 PCR indexes to bind to (same syntax as --tpm2-pcrs= described above). If not specified, defaults to 11 (i.e. this binds the policy to any unified kernel image for which a PCR signature can be provided). Note the difference between --tpm2-pcrs= and --tpm2-public-key-pcrs=: the former binds decryption to the current, specific PCR values; the latter binds decryption to any set of PCR values for which a signature by the specified public key can be provided. The latter is hence more useful in scenarios where software updates shall be possible without losing access to all previously encrypted secrets. Added in version 252. --tpm2-signature= [PATH] Takes a path to a TPM2 PCR signature file as generated by the systemd-measure(1) tool, which may be used to allow the decrypt command to decrypt credentials that are bound to specific signed PCR values. If this is not specified explicitly, and an attempt is made to decrypt a credential with a signed PCR policy, a suitable signature file tpm2-pcr-signature.json is searched for in /etc/systemd/, /run/systemd/, /usr/lib/systemd/ (in this order) and used. Added in version 252. --quiet, -q When used with has-tpm2, suppresses the output and only returns an exit status indicating support for TPM2.
Added in version 251. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). EXIT STATUS top On success, 0 is returned. For the has-tpm2 command, 0 is returned if a TPM2 device is discovered, supported and used by firmware, driver, and userspace (i.e. systemd). Otherwise the OR combination of the values 1 (in case firmware support is missing), 2 (in case driver support is missing) and 4 (in case userspace support is missing) is returned. If no TPM2 support is available at all, the value 7 is hence returned. EXAMPLES top Example 1. Encrypt a password for use as a credential The following command line encrypts the specified password "hunter2", writing the result to a file password.cred. # echo -n hunter2 | systemd-creds encrypt - password.cred This decrypts the file password.cred again, revealing the literal password: # systemd-creds decrypt password.cred hunter2 Example 2. Encrypt a password and include it in a unit file The following command line prompts the user for a password and generates a SetCredentialEncrypted= line from it for a credential named "mysql-password", suitable for inclusion in a unit file.
# systemd-ask-password -n | systemd-creds encrypt --name=mysql-password -p - - Password: **** SetCredentialEncrypted=mysql-password: \ k6iUCUh0RJCQyvL8k8q1UyAAAAABAAAADAAAABAAAAASfFsBoPLIm/dlDoGAAAAAAAAAA \ NAAAAAgAAAAAH4AILIOZ3w6rTzYsBy9G7liaCAd4i+Kpvs8mAgArzwuKxd0ABDjgSeO5k \ mKQc58zM94ZffyRmuNeX1lVHE+9e2YD87KfRFNoDLS7F3YmCb347gCiSk2an9egZ7Y0Xs \ 700Kr6heqQswQEemNEc62k9RJnEl2q7SbcEYguegnPQUATgAIAAsAAAASACA/B90W7E+6 \ yAR9NgiIJvxr9bpElztwzB5lUJAxtMBHIgAQACCaSV9DradOZz4EvO/LSaRyRSq2Hj0ym \ gVJk/dVzE8Uxj8H3RbsT7rIBH02CIgm/Gv1ukSXO3DMHmVQkDG0wEciyageTfrVEer8z5 \ 9cUQfM5ynSaV2UjeUWEHuz4fwDsXGLB9eELXLztzUU9nsAyLvs3ZRR+eEK/A== The generated line can be pasted 1:1 into a unit file, and will ensure the acquired password will be made available in the $CREDENTIALS_DIRECTORY/mysql-password credential file for the started service. Utilizing the unit file drop-in logic this can be used to securely pass a password credential to a unit. A similar, more comprehensive set of commands to insert a password into a service xyz.service: # mkdir -p /etc/systemd/system/xyz.service.d # systemd-ask-password -n | ( echo "[Service]" && systemd-creds encrypt --name=mysql-password -p - - ) >/etc/systemd/system/xyz.service.d/50-password.conf # systemctl daemon-reload # systemctl restart xyz.service SEE ALSO top systemd(1), systemd.exec(5), systemd-measure(1) NOTES top 1. System and Service Credentials https://systemd.io/CREDENTIALS COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-CREDS(1) Pages that refer to this page: systemd(1), systemd.exec(5), systemd.directives(7), systemd.generator(7), systemd.index(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
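The EXIT STATUS section above describes the has-tpm2 status as an OR combination of the bits 1 (firmware), 2 (driver) and 4 (userspace). A hedged POSIX-sh sketch of decoding such a status; the value is passed as an argument here rather than taken from a live `systemd-creds has-tpm2 -q; echo $?` run, so the logic can be shown without a TPM2 device:

```shell
# Decode the OR-combined has-tpm2 exit status (sketch; argument stands in
# for the real exit status of `systemd-creds has-tpm2 -q`).
decode_tpm2_status() {
    status=$1
    if [ "$status" -eq 0 ]; then
        echo "TPM2 fully supported"
        return 0
    fi
    [ $((status & 1)) -ne 0 ] && echo "firmware support missing"
    [ $((status & 2)) -ne 0 ] && echo "driver support missing"
    [ $((status & 4)) -ne 0 ] && echo "userspace support missing"
    return 0
}
```

With status 7 (no TPM2 support at all) the function reports all three missing components.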
# systemd-creds

> List, show, encrypt and decrypt service credentials.
> More information: <https://www.freedesktop.org/software/systemd/man/systemd-creds.html>.

- Encrypt a file and set a specific name:

`systemd-creds encrypt --name={{name}} {{path/to/input_file}} {{path/to/output}}`

- Decrypt the file again:

`systemd-creds decrypt {{path/to/input_file}} {{path/to/output_file}}`

- Encrypt text from `stdin`:

`echo -n {{text}} | systemd-creds encrypt --name={{name}} - {{path/to/output}}`

- Encrypt the text and append it to the service file (the credentials will be available in `$CREDENTIALS_DIRECTORY`):

`echo -n {{text}} | systemd-creds encrypt --name={{name}} --pretty - - >> {{service}}`

- Create a credential that is only valid until the given timestamp:

`systemd-creds encrypt --not-after="{{timestamp}}" {{path/to/input_file}} {{path/to/output_file}}`
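The `{{timestamp}}` in the last example takes the systemd.time(7) format. A hedged sketch of building an absolute expiry with GNU date(1) (the file names input.txt/output.cred are placeholders, and the systemd-creds invocation is only printed, not executed):

```shell
# Build an absolute "not after" timestamp two days from now (GNU date assumed)
expiry=$(date -d "+2 days" "+%Y-%m-%d %H:%M:%S")
# Print the command that would embed this expiry into the credential
cmd="systemd-creds encrypt --not-after=\"$expiry\" input.txt output.cred"
echo "$cmd"
```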
systemd-cryptenroll
systemd-cryptenroll(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-cryptenroll(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | LIMITATIONS | COMPATIBILITY | OPTIONS | EXIT STATUS | EXAMPLES | SEE ALSO | NOTES | COLOPHON SYSTEMD-CRYPTENROLL(1) systemd-cryptenroll SYSTEMD-CRYPTENROLL(1) NAME top systemd-cryptenroll - Enroll PKCS#11, FIDO2, TPM2 token/devices to LUKS2 encrypted volumes SYNOPSIS top systemd-cryptenroll [OPTIONS...] [DEVICE] DESCRIPTION top systemd-cryptenroll is a tool for enrolling hardware security tokens and devices into a LUKS2 encrypted volume, which may then be used to unlock the volume during boot. Specifically, it supports tokens and credentials of the following kind to be enrolled: 1. PKCS#11 security tokens and smartcards that may carry an RSA or EC key pair (e.g. various YubiKeys) 2. FIDO2 security tokens that implement the "hmac-secret" extension (most FIDO2 keys, including YubiKeys) 3. TPM2 security devices 4. Regular passphrases 5. Recovery keys. These are similar to regular passphrases, however are randomly generated on the computer and thus generally have higher entropy than user-chosen passphrases. Their character set has been designed to ensure they are easy to type in, while having high entropy. They may also be scanned off screen using QR codes. Recovery keys may be used for unlocking LUKS2 volumes wherever passphrases are accepted. They are intended to be used in combination with an enrolled hardware security token, as a recovery option when the token is lost. In addition, the tool may be used to enumerate currently enrolled security tokens and wipe a subset of them. The latter may be combined with the enrollment operation of a new security token, in order to update or replace enrollments. The tool supports only LUKS2 volumes, as it stores token meta-information in the LUKS2 JSON token area, which is not available in other encryption formats. 
TPM2 PCRs and policies PCRs allow binding of the encryption of secrets to specific software versions and system state, so that the enrolled key is only accessible (may be "unsealed") if specific trusted software and/or configuration is used. Such bindings may be created with the option --tpm2-pcrs= described below. Secrets may also be bound indirectly: a signed policy for a state of some combination of PCR values is provided, and the secret is bound to the public part of the key used to sign this policy. This means that the owner of a key can generate a sequence of signed policies, for specific software versions and system states, and the secret can be decrypted as long as the machine state matches one of those policies. For example, a vendor may provide such a policy for each kernel+initrd update, allowing users to encrypt secrets so that they can be decrypted when running any kernel+initrd signed by the vendor. Such bindings may be created with the options --tpm2-public-key=, --tpm2-public-key-pcrs=, --tpm2-signature= described below. See Linux TPM PCR Registry[1] for an authoritative list of PCRs and how they are updated. The table below contains a quick reference, describing in particular the PCRs modified by systemd.

Table 1. Well-known PCR Definitions

PCR  Name                 Explanation
0    platform-code        Core system firmware executable code; changes on firmware updates
1    platform-config      Core system firmware data/host platform configuration; typically contains serial and model numbers, changes on basic hardware/CPU/RAM replacements
2    external-code        Extended or pluggable executable code; includes option ROMs on pluggable hardware
3    external-config      Extended or pluggable firmware data; includes information about pluggable hardware
4    boot-loader-code     Boot loader and additional drivers, PE binaries invoked by the boot loader; changes on boot loader updates. sd-stub(7) measures system extension images read from the ESP here too (see systemd-sysext(8)).
5    boot-loader-config   GPT/Partition table; changes when the partitions are added, modified, or removed
7    secure-boot-policy   Secure Boot state; changes when UEFI SecureBoot mode is enabled/disabled, or when firmware certificates (PK, KEK, db, dbx, ...) change.
9    kernel-initrd        The Linux kernel measures all initrds it receives into this PCR.
10   ima                  The IMA project measures its runtime state into this PCR.
11   kernel-boot          systemd-stub(7) measures the ELF kernel image, embedded initrd and other payload of the PE image it is placed in into this PCR. systemd-pcrphase.service(8) measures boot phase strings into this PCR at various milestones of the boot process.
12   kernel-config        systemd-boot(7) measures the kernel command line into this PCR. systemd-stub(7) measures any manually specified kernel command line (i.e. a kernel command line that overrides the one embedded in the unified PE image) and loaded credentials into this PCR.
13   sysexts              systemd-stub(7) measures any systemd-sysext(8) images it passes to the booted kernel into this PCR.
14   shim-policy          The shim project measures its "MOK" certificates and hashes into this PCR.
15   system-identity      systemd-cryptsetup(8) optionally measures the volume key of activated LUKS volumes into this PCR. systemd-pcrmachine.service(8) measures the machine-id(5) into this PCR. systemd-pcrfs@.service(8) measures mount points, file system UUIDs, labels, partition UUIDs of the root and /var/ filesystems into this PCR.
16   debug                Debug
23   application-support  Application Support

In general, encrypted volumes would be bound to some combination of PCRs 7, 11, and 14 (if shim/MOK is used). In order to allow firmware and OS version updates, it is typically not advisable to use PCRs such as 0 and 2, since the program code they cover should already be covered indirectly through the certificates measured into PCR 7.
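The names in Table 1 can be used wherever numeric PCR indexes are accepted. A small POSIX-sh sketch of the mapping for a subset of the rows (only a few entries shown; the full table is above):

```shell
# Map a well-known PCR name from Table 1 to its numeric index (subset sketch)
pcr_index() {
    case "$1" in
        platform-code)      echo 0 ;;
        platform-config)    echo 1 ;;
        boot-loader-code)   echo 4 ;;
        secure-boot-policy) echo 7 ;;
        kernel-boot)        echo 11 ;;
        shim-policy)        echo 14 ;;
        *)                  return 1 ;;   # name not in this subset
    esac
}
```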
Validation through certificate hashes is typically preferable over validation through direct measurements as it is less brittle in the context of OS/firmware updates: the measurements will change on every update, but signatures should remain unchanged. See the Linux TPM PCR Registry[1] for more discussion. LIMITATIONS top Note that currently when enrolling a new key of one of the five supported types listed above, it is required to first provide a passphrase, a recovery key or a FIDO2 token. It's currently not supported to unlock a device with a TPM2/PKCS#11 key in order to enroll a new TPM2/PKCS#11 key. Thus, if future key roll-over is desired, it's generally recommended to ensure a passphrase, a recovery key or a FIDO2 token is always enrolled. Also note that support for enrolling multiple FIDO2 tokens is currently limited. When multiple FIDO2 tokens are enrolled, systemd-cryptsetup will perform pre-flight requests to attempt to identify which of the enrolled tokens are currently plugged in. However, this is not possible for FIDO2 tokens with user verification (UV, usually via biometrics), in which case it will fall back to attempting each enrolled token one by one. This will result in multiple prompts for PIN and user verification. This limitation does not apply to PKCS#11 tokens. COMPATIBILITY top Security technology both in systemd and in the general industry constantly evolves. In order to provide best security guarantees, the way TPM2, FIDO2, PKCS#11 devices are enrolled is regularly updated in newer versions of systemd. Whenever this happens the following compatibility guarantees are given: Old enrollments continue to be supported and may be unlocked with newer versions of systemd-cryptsetup@.service(8). The opposite is not guaranteed however: it might not be possible to unlock volumes with enrollments done with a newer version of systemd-cryptenroll with an older version of systemd-cryptsetup.
That said, it is generally recommended to use matching versions of systemd-cryptenroll and systemd-cryptsetup, since this is best tested and supported. It might be advisable to re-enroll existing enrollments to take benefit of newer security features, as they are added to systemd. OPTIONS top The following options are understood: --password Enroll a regular password/passphrase. This command is mostly equivalent to cryptsetup luksAddKey, however may be combined with --wipe-slot= in one call, see below. Added in version 248. --recovery-key Enroll a recovery key. Recovery keys are mostly identical to passphrases, but are computer-generated instead of being chosen by a human, and thus have a guaranteed high entropy. The key uses a character set that is easy to type in, and may be scanned off screen via a QR code. Added in version 248. --unlock-key-file=PATH Use a file instead of a password/passphrase read from stdin to unlock the volume. Expects the PATH to the file containing your key to unlock the volume. Currently there is nothing like --key-file-offset= or --key-file-size=, so this file must contain the full key and nothing else. Added in version 252. --unlock-fido2-device=PATH Use a FIDO2 device instead of a password/passphrase read from stdin to unlock the volume. Expects a hidraw device referring to the FIDO2 device (e.g. /dev/hidraw1). Alternatively the special value "auto" may be specified, in order to automatically determine the device node of a currently plugged in security token (of which there must be exactly one). This automatic discovery is unsupported if the --fido2-device= option is also specified. Added in version 253. --pkcs11-token-uri=URI Enroll a PKCS#11 security token or smartcard (e.g. a YubiKey). Expects a PKCS#11 smartcard URI referring to the token. Alternatively the special value "auto" may be specified, in order to automatically determine the URI of a currently plugged in security token (of which there must be exactly one).
The special value "list" may be used to enumerate all suitable PKCS#11 tokens currently plugged in. The PKCS#11 token must contain an RSA or EC key pair which will be used to unlock a LUKS2 volume. For RSA, a randomly generated volume key is encrypted with a public key in the token, and stored in the LUKS2 JSON token header area. To unlock a volume, the stored encrypted volume key will be decrypted with a private key in the token. For ECC, ECDH algorithm is used: we generate a pair of EC keys in the same EC group, then derive a shared secret using the generated private key and the public key in the token. The derived shared secret is used as a volume key. The generated public key is stored in the LUKS2 JSON token header area. The generated private key is erased. To unlock a volume, we derive the shared secret with the stored public key and a private key in the token. In order to unlock a LUKS2 volume with an enrolled PKCS#11 security token, specify the pkcs11-uri= option in the respective /etc/crypttab line: myvolume /dev/sda1 - pkcs11-uri=auto See crypttab(5) for a more comprehensive example of a systemd-cryptenroll invocation and its matching /etc/crypttab line. Added in version 248. --fido2-credential-algorithm=STRING Specify COSE algorithm used in credential generation. The default value is "es256". Supported values are "es256", "rs256" and "eddsa". "es256" denotes ECDSA over NIST P-256 with SHA-256. "rs256" denotes 2048-bit RSA with PKCS#1.5 padding and SHA-256. "eddsa" denotes EDDSA over Curve25519 with SHA-512. Note that your authenticator may not support some algorithms. Added in version 251. --fido2-device=PATH Enroll a FIDO2 security token that implements the "hmac-secret" extension (e.g. a YubiKey). Expects a hidraw device referring to the FIDO2 device (e.g. /dev/hidraw1). 
Alternatively the special value "auto" may be specified, in order to automatically determine the device node of a currently plugged in security token (of which there must be exactly one). This automatic discovery is unsupported if --unlock-fido2-device= option is also specified. The special value "list" may be used to enumerate all suitable FIDO2 tokens currently plugged in. Note that many hardware security tokens that implement FIDO2 also implement the older PKCS#11 standard. Typically FIDO2 is preferable, given it's simpler to use and more modern. In order to unlock a LUKS2 volume with an enrolled FIDO2 security token, specify the fido2-device= option in the respective /etc/crypttab line: myvolume /dev/sda1 - fido2-device=auto See crypttab(5) for a more comprehensive example of a systemd-cryptenroll invocation and its matching /etc/crypttab line. Added in version 248. --fido2-with-client-pin=BOOL When enrolling a FIDO2 security token, controls whether to require the user to enter a PIN when unlocking the volume (the FIDO2 "clientPin" feature). Defaults to "yes". (Note: this setting is without effect if the security token does not support the "clientPin" feature at all, or does not allow enabling or disabling it.) Added in version 249. --fido2-with-user-presence=BOOL When enrolling a FIDO2 security token, controls whether to require the user to verify presence (tap the token, the FIDO2 "up" feature) when unlocking the volume. Defaults to "yes". (Note: this setting is without effect if the security token does not support the "up" feature at all, or does not allow enabling or disabling it.) Added in version 249. --fido2-with-user-verification=BOOL When enrolling a FIDO2 security token, controls whether to require user verification when unlocking the volume (the FIDO2 "uv" feature). Defaults to "no". (Note: this setting is without effect if the security token does not support the "uv" feature at all, or does not allow enabling or disabling it.) Added in version 249. 
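The ECDH scheme described above for EC-based PKCS#11 enrollment (an ephemeral key pair is generated, a shared secret is derived, and only the ephemeral public key is stored) can be demonstrated with openssl(1) in place of a real token. This is a hedged sketch: the file names and the use of openssl are illustrative, not what systemd-cryptenroll itself does internally, but both sides derive the same shared secret, which is the property the volume key relies on:

```shell
# Demonstrate the ECDH property: both derivations yield the same secret.
tmp=$(mktemp -d)
# Key pair standing in for the one on the PKCS#11 token:
openssl ecparam -name prime256v1 -genkey -noout -out "$tmp/token.pem"
# Ephemeral key pair generated at enrollment time:
openssl ecparam -name prime256v1 -genkey -noout -out "$tmp/ephemeral.pem"
openssl ec -in "$tmp/token.pem" -pubout -out "$tmp/token.pub" 2>/dev/null
openssl ec -in "$tmp/ephemeral.pem" -pubout -out "$tmp/ephemeral.pub" 2>/dev/null
# Enrollment: volume key = ECDH(ephemeral private, token public)
openssl pkeyutl -derive -inkey "$tmp/ephemeral.pem" -peerkey "$tmp/token.pub" -out "$tmp/secret1.bin"
# Unlock: volume key = ECDH(token private, stored ephemeral public)
openssl pkeyutl -derive -inkey "$tmp/token.pem" -peerkey "$tmp/ephemeral.pub" -out "$tmp/secret2.bin"
cmp -s "$tmp/secret1.bin" "$tmp/secret2.bin" && echo "shared secrets match"
```

Because the ephemeral private key can be erased after enrollment, only the token's private key can re-derive the volume key later.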
--tpm2-device=PATH Enroll a TPM2 security chip. Expects a device node path referring to the TPM2 chip (e.g. /dev/tpmrm0). Alternatively the special value "auto" may be specified, in order to automatically determine the device node of a currently discovered TPM2 device (of which there must be exactly one). The special value "list" may be used to enumerate all suitable TPM2 devices currently discovered. In order to unlock a LUKS2 volume with an enrolled TPM2 security chip, specify the tpm2-device= option in the respective /etc/crypttab line: myvolume /dev/sda1 - tpm2-device=auto See crypttab(5) for a more comprehensive example of a systemd-cryptenroll invocation and its matching /etc/crypttab line. Use --tpm2-pcrs= (see below) to configure which TPM2 PCR indexes to bind the enrollment to. Added in version 248. --tpm2-device-key=PATH Enroll a TPM2 security chip using its public key. Expects a path referring to the TPM2 public key in TPM2B_PUBLIC format. This cannot be used with --tpm2-device=, as it performs the same operation, but without connecting to the TPM2 security chip; instead the enrollment is calculated using the provided TPM2 key. This is useful in situations where the TPM2 security chip is not available at the time of enrollment. The key, in most cases, should be the Storage Root Key (SRK) from a local TPM2 security chip. If a key from a different handle (not the SRK) is used, you must specify its handle index using --tpm2-seal-key-handle=. The systemd-tpm2-setup.service(8) service writes the SRK to /run/systemd/tpm2-srk-public-key.tpm2b_public automatically during boot, in the correct format. Alternatively, you may use systemd-analyze srk to retrieve the SRK from the TPM2 security chip explicitly. See systemd-analyze(1) for details. Example: systemd-analyze srk > srk.tpm2b_public Added in version 255. --tpm2-seal-key-handle=HANDLE Configures which parent key to use for sealing, using the TPM handle (index) of the key. 
This is used to "seal" (encrypt) a secret and must be used later to "unseal" (decrypt) the secret. Expects a hexadecimal 32bit integer, optionally prefixed with "0x". Allowable values are any handle index in the persistent ("0x81000000"-"0x81ffffff") or transient ("0x80000000"-"0x80ffffff") ranges. Since transient handles are lost after a TPM reset, and may be flushed during TPM context switching, they should not be used except for very specific use cases, e.g. testing. The default is the Storage Root Key (SRK) handle index "0x81000001". A value of 0 will use the default. For the SRK handle, a new key will be created and stored in the TPM if one does not already exist; for any other handle, the key must already exist in the TPM at the specified handle index. This should not be changed unless you know what you are doing. Added in version 255. --tpm2-pcrs= [PCR...] Configures the TPM2 PCRs (Platform Configuration Registers) to bind to when enrollment is requested via --tpm2-device=. Takes a list of PCR entries, where each entry starts with a name or numeric index in the range 0...23, optionally followed by ":" and a hash algorithm name (specifying the PCR bank), optionally followed by "=" and a hash digest value. Multiple PCR entries are separated by "+". If not specified, the default is to use PCR 7 only. If an empty string is specified, binds the enrollment to no PCRs at all. See the table above for a list of available PCRs. Example: --tpm2-pcrs=boot-loader-code+platform-config+boot-loader-config specifies that PCR registers 4, 1, and 5 should be used. Example: --tpm2-pcrs=7:sha256 specifies that PCR register 7 from the SHA256 bank should be used. Example: --tpm2-pcrs=4:sha1=3a3f780f11a4b49969fcaa80cd6e3957c33b2275 specifies that PCR register 4 from the SHA1 bank should be used, and a hash digest value of 3a3f780f11a4b49969fcaa80cd6e3957c33b2275 will be used instead of reading the current PCR value. Added in version 248. 
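The --tpm2-pcrs= entry syntax just described ("+"-separated entries, each an index or name optionally followed by ":" and a bank and/or "=" and a digest) can be illustrated with a small parser sketch. This is a hedged POSIX-sh illustration of the grammar, not code from the tool; the `parse_tpm2_pcrs` helper name is made up here:

```shell
# Split a --tpm2-pcrs= value into entries and print each entry's fields.
# Grammar per the manual: ENTRY = PCR[:BANK][=DIGEST], entries joined by "+".
parse_tpm2_pcrs() {
    IFS='+'
    for entry in $1; do
        pcr=${entry%%[:=]*}          # name or index before the first : or =
        rest=${entry#"$pcr"}         # remainder: optional :BANK and/or =DIGEST
        bank=""; digest=""
        case "$rest" in
            :*=*) bank=${rest#:}; digest=${bank#*=}; bank=${bank%%=*} ;;
            :*)   bank=${rest#:} ;;
            =*)   digest=${rest#=} ;;
        esac
        echo "pcr=$pcr bank=${bank:--} digest=${digest:--}"
    done
    unset IFS
}
```

For example, `parse_tpm2_pcrs "4:sha1=3a3f780f11a4b49969fcaa80cd6e3957c33b2275"` separates the index, the SHA1 bank, and the pinned digest, mirroring the third example in the option description above.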
--tpm2-with-pin=BOOL When enrolling a TPM2 device, controls whether to require the user to enter a PIN when unlocking the volume in addition to PCR binding, based on TPM2 policy authentication. Defaults to "no". Despite being called PIN, any character can be used, not just numbers. Note that incorrect PIN entry when unlocking increments the TPM dictionary attack lockout mechanism, and may lock out users for a prolonged time, depending on its configuration. The lockout mechanism is a global property of the TPM; systemd-cryptenroll does not control or configure it. You may use tpm2-tss tools to inspect or configure the dictionary attack lockout, with the tpm2_getcap(1) and tpm2_dictionarylockout(1) commands, respectively. Added in version 251. --tpm2-public-key= [PATH], --tpm2-public-key-pcrs= [PCR...], --tpm2-signature= [PATH] Configures a TPM2 signed PCR policy to bind encryption to. The --tpm2-public-key= option accepts a path to a PEM encoded RSA public key, to bind the encryption to. If this is not specified explicitly, but a file tpm2-pcr-public-key.pem exists in one of the directories /etc/systemd/, /run/systemd/, /usr/lib/systemd/ (searched in this order), it is automatically used. The --tpm2-public-key-pcrs= option takes a list of TPM2 PCR indexes to bind to (same syntax as --tpm2-pcrs= described above). If not specified, defaults to 11 (i.e. this binds the policy to any unified kernel image for which a PCR signature can be provided). Note the difference between --tpm2-pcrs= and --tpm2-public-key-pcrs=: the former binds decryption to the current, specific PCR values; the latter binds decryption to any set of PCR values for which a signature by the specified public key can be provided. The latter is hence more useful in scenarios where software updates shall be possible without losing access to all previously encrypted LUKS2 volumes.
Like with --tpm2-pcrs=, names defined in the table above can also be used to specify the registers, for instance --tpm2-public-key-pcrs=boot-loader-code+system-identity. The --tpm2-signature= option takes a path to a TPM2 PCR signature file as generated by the systemd-measure(1) tool. If this is not specified explicitly, a suitable signature file tpm2-pcr-signature.json is searched for in /etc/systemd/, /run/systemd/, /usr/lib/systemd/ (in this order) and used. If a signature file is specified or found, it is used to verify whether the volume can be unlocked with it given the current PCR state, before the new slot is written to disk. This is intended as a safety net to ensure that access to a volume is not lost if a public key is enrolled for which no valid signature for the current PCR state is available. If the supplied signature does not unlock the current PCR state and public key combination, no slot is enrolled and the operation will fail. If no signature file is specified or found, no such safety verification is done. Added in version 252. --tpm2-pcrlock= [PATH] Configures a TPM2 pcrlock policy to bind encryption to. Expects a path to a pcrlock policy file as generated by the systemd-pcrlock(1) tool. If a TPM2 device is enrolled and this option is not used but a file pcrlock.json is found in /run/systemd/ or /var/lib/systemd/ it is automatically used. Assign an empty string to turn this behaviour off. Added in version 255. --wipe-slot= [SLOT...] Wipes one or more LUKS2 key slots.
Takes a comma-separated list of numeric slot indexes, or the special strings "all" (for wiping all key slots), "empty" (for wiping all key slots that are unlocked by an empty passphrase), "password" (for wiping all key slots that are unlocked by a traditional passphrase), "recovery" (for wiping all key slots that are unlocked by a recovery key), "pkcs11" (for wiping all key slots that are unlocked by a PKCS#11 token), "fido2" (for wiping all key slots that are unlocked by a FIDO2 token), "tpm2" (for wiping all key slots that are unlocked by a TPM2 chip), or any combination of these strings or numeric indexes, in which case all slots matching either are wiped. As a safety precaution, an operation that wipes all slots without exception (so that the volume cannot be unlocked at all anymore, unless the volume key is known) is refused. This switch may be used alone, in which case only the requested wipe operation is executed. It may also be used in combination with any of the enrollment options listed above, in which case the enrollment is completed first, and only when successful is the wipe operation executed, with the newly added slot always excluded from the wiping. Combining enrollment and slot wiping may thus be used to update existing enrollments: systemd-cryptenroll /dev/sda1 --wipe-slot=tpm2 --tpm2-device=auto The above command will enroll the TPM2 chip, and then wipe all previously created TPM2 enrollments on the LUKS2 volume, leaving only the newly created one. Combining wiping and enrollment may also be used to replace enrollments of different types, for example for changing from a PKCS#11 enrollment to a FIDO2 one: systemd-cryptenroll /dev/sda1 --wipe-slot=pkcs11 --fido2-device=auto Or for replacing an enrolled empty password by TPM2: systemd-cryptenroll /dev/sda1 --wipe-slot=empty --tpm2-device=auto Added in version 248. -h, --help Print a short help text and exit. --version Print a short version string and exit.
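The --wipe-slot= list syntax above (numeric indexes mixed with the documented special strings) can be sketched as a small validator. This is a hedged illustration, not the tool's own parsing; the `valid_wipe_slot_list` helper name is made up here:

```shell
# Return 0 if every comma-separated entry is a documented special string
# or a plain numeric slot index, 1 otherwise.
valid_wipe_slot_list() {
    IFS=','
    for slot in $1; do
        case "$slot" in
            all|empty|password|recovery|pkcs11|fido2|tpm2) : ;;  # special strings
            ''|*[!0-9]*) unset IFS; return 1 ;;                  # not a plain number
            *) : ;;                                              # numeric index
        esac
    done
    unset IFS
    return 0
}
```

Note that "pkcs11" (without the "#") is the accepted special string.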
EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. EXAMPLES top crypttab(5) and systemd-measure(1) contain various examples employing systemd-cryptenroll. SEE ALSO top systemd(1), systemd-cryptsetup@.service(8), crypttab(5), cryptsetup(8), systemd-measure(1) NOTES top 1. Linux TPM PCR Registry https://uapi-group.org/specifications/specs/linux_tpm_pcr_registry/ systemd 255 SYSTEMD-CRYPTENROLL(1) Pages that refer to this page: systemd-creds(1), crypttab(5), repart.d(5), systemd.directives(7), systemd.index(7), systemd-cryptsetup(8), systemd-cryptsetup-generator(8), systemd-repart(8)
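Several options above (--tpm2-public-key=, --tpm2-signature=) fall back to searching fixed directories in a documented order: /etc/systemd/, then /run/systemd/, then /usr/lib/systemd/. A hedged sketch of that search logic for tpm2-pcr-public-key.pem; the optional root-prefix parameter is added here purely so the sketch can be exercised against a scratch directory, while the real tools look at the absolute paths:

```shell
# Return the first tpm2-pcr-public-key.pem found in the documented search
# order (etc > run > usr/lib), optionally under a test root prefix.
find_pcr_public_key() {
    root="${1:-}"
    for dir in /etc/systemd /run/systemd /usr/lib/systemd; do
        if [ -f "$root$dir/tpm2-pcr-public-key.pem" ]; then
            echo "$root$dir/tpm2-pcr-public-key.pem"
            return 0
        fi
    done
    return 1
}
```

A file dropped into /run/systemd/ thus overrides one shipped in /usr/lib/systemd/, matching the usual systemd configuration precedence.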
# systemd-cryptenroll\n\n> Interactively enroll or remove methods used to unlock LUKS2-encrypted devices. Uses a password to unlock the device unless otherwise specified.\n> In order to allow a partition to be unlocked during system boot, update the `/etc/crypttab` file or the initramfs.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-cryptenroll.html>.\n\n- Enroll a new password (similar to `cryptsetup luksAddKey`):\n\n`systemd-cryptenroll --password {{path/to/luks2_block_device}}`\n\n- Enroll a new recovery key (i.e. a randomly generated passphrase that can be used as a fallback):\n\n`systemd-cryptenroll --recovery-key {{path/to/luks2_block_device}}`\n\n- List available tokens, or enroll a new PKCS#11 token:\n\n`systemd-cryptenroll --pkcs11-token-uri {{list|auto|pkcs11_token_uri}} {{path/to/luks2_block_device}}`\n\n- List available FIDO2 devices, or enroll a new FIDO2 device (`auto` can be used as the device name when there is only one token plugged in):\n\n`systemd-cryptenroll --fido2-device {{list|auto|path/to/fido2_hidraw_device}} {{path/to/luks2_block_device}}`\n\n- Enroll a new FIDO2 device with user verification (biometrics):\n\n`systemd-cryptenroll --fido2-device {{auto|path/to/fido2_hidraw_device}} --fido2-with-user-verification yes {{path/to/luks2_block_device}}`\n\n- Unlock using a FIDO2 device, and enroll a new FIDO2 device:\n\n`systemd-cryptenroll --unlock-fido2-device {{path/to/fido2_hidraw_unlock_device}} --fido2-device {{path/to/fido2_hidraw_enroll_device}} {{path/to/luks2_block_device}}`\n\n- Enroll a TPM2 security chip (only secure-boot-policy PCR) and require an additional alphanumeric PIN:\n\n`systemd-cryptenroll --tpm2-device {{auto|path/to/tpm2_block_device}} --tpm2-with-pin yes {{path/to/luks2_block_device}}`\n\n- Remove all empty passwords/all passwords/all FIDO2 devices/all PKCS#11 tokens/all TPM2 security chips/all recovery keys/all methods:\n\n`systemd-cryptenroll --wipe-slot 
{{empty|password|fido2|pkcs11|tpm2|recovery|all}} {{path/to/luks2_block_device}}`\n
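The summary above mentions updating `/etc/crypttab` so that an enrolled method is actually used at boot. A minimal, hypothetical entry (the volume name and UUID are placeholders, not values from this page) that unlocks a LUKS2 volume via an enrolled TPM2 chip might look like this; see crypttab(5) for the full field syntax:

```
# /etc/crypttab: <volume-name> <encrypted-device> <key-file> <options>
cryptdata  UUID=00000000-0000-0000-0000-000000000000  -  tpm2-device=auto
```

For FIDO2 or PKCS#11 enrollments, the option field would instead carry `fido2-device=auto` or `pkcs11-uri=auto`, respectively.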
systemd-delta
systemd-delta(1) - Linux manual page SYSTEMD-DELTA(1) systemd-delta SYSTEMD-DELTA(1) NAME top systemd-delta - Find overridden configuration files SYNOPSIS top systemd-delta [OPTIONS...] [PREFIX[/SUFFIX]|SUFFIX...] DESCRIPTION top systemd-delta may be used to identify and compare configuration files that override other configuration files. Files in /etc/ have highest priority, files in /run/ have the second highest priority, ..., files in /usr/lib/ have lowest priority. Files in a directory with higher priority override files with the same name in directories of lower priority. In addition, certain configuration files can have ".d" directories which contain "drop-in" files with configuration snippets which augment the main configuration file. "Drop-in" files can be overridden in the same way by placing files with the same name in a directory of higher priority (except that, in case of "drop-in" files, both the "drop-in" file name and the name of the containing directory, which corresponds to the name of the main configuration file, must match). For a fuller explanation, see systemd.unit(5). The command line argument will be split into a prefix and a suffix. Either is optional. The prefix must be one of the directories containing configuration files (/etc/, /run/, /usr/lib/, ...). If it is given, only overriding files contained in this directory will be shown. Otherwise, all overriding files will be shown. The suffix must be the name of a subdirectory containing configuration files like tmpfiles.d, sysctl.d or systemd/system. If it is given, only configuration files in this subdirectory (across all configuration paths) will be analyzed. Otherwise, all configuration files will be analyzed. If the command line argument is not given at all, all configuration files will be analyzed.
See below for some examples. OPTIONS top The following options are understood: -t, --type= When listing the differences, only list those that are asked for. The list itself is a comma-separated list of desired difference types. Recognized types are: masked Show masked files. equivalent Show overridden files that do not differ in content. redirected Show files that are redirected to another file. overridden Show overridden and changed files. extended Show *.conf files in drop-in directories for units. Added in version 205. unchanged Show unmodified files too. --diff= When showing modified files, also show a diff of the changes. This option takes a boolean argument. If omitted, it defaults to true. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. EXAMPLES top To see all local configuration: systemd-delta To see all runtime configuration: systemd-delta /run To see all system unit configuration changes: systemd-delta systemd/system To see all runtime "drop-in" changes for system units: systemd-delta --type=extended /run/systemd/system EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), systemd.unit(5)
systemd 255 SYSTEMD-DELTA(1) Pages that refer to this page: binfmt.d(5), modules-load.d(5), sysctl.d(5), systemd.preset(5), tmpfiles.d(5), systemd.directives(7), systemd.index(7)
# systemd-delta\n\n> Find overridden systemd-related configuration files.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-delta.html>.\n\n- Show all overridden configuration files:\n\n`systemd-delta`\n\n- Show only files of specific types (comma-separated list):\n\n`systemd-delta --type {{masked|equivalent|redirected|overridden|extended|unchanged}}`\n\n- Show only files whose path starts with the specified prefix (Note: a prefix is a directory containing subdirectories with systemd configuration files):\n\n`systemd-delta {{/etc|/run|/usr/lib|...}}`\n\n- Further restrict the search path by adding a suffix (the prefix is optional):\n\n`systemd-delta {{prefix}}/{{tmpfiles.d|sysctl.d|systemd/system|...}}`\n
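The search path described above has a fixed priority order, with the copy of a file in the highest-priority directory winning. As a rough illustration only (a sketch of the rule systemd-delta reports on, not systemd's actual implementation; the function name is ours):

```python
# Sketch of the highest-priority-wins lookup rule behind systemd-delta.
SEARCH_PATH = ["/etc", "/run", "/usr/lib"]  # highest to lowest priority

def winning_files(files_by_dir):
    """Map each configuration file name to the directory whose copy wins.

    files_by_dir: dict mapping a search-path directory to the file
    names present there (e.g. the contents of each tmpfiles.d/)."""
    winner = {}
    for directory in reversed(SEARCH_PATH):   # lowest priority first...
        for name in files_by_dir.get(directory, ()):
            winner[name] = directory          # ...so higher priority overwrites
    return winner
```

Names that resolve to a directory other than /usr/lib are exactly the "overriding" entries systemd-delta would list.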
systemd-detect-virt
systemd-detect-virt(1) - Linux manual page SYSTEMD-DETECT-VIRT(1) systemd-detect-virt SYSTEMD-DETECT-VIRT(1) NAME top systemd-detect-virt - Detect execution in a virtualized environment SYNOPSIS top systemd-detect-virt [OPTIONS...] DESCRIPTION top systemd-detect-virt detects execution in a virtualized environment. It identifies the virtualization technology and can distinguish full machine virtualization from container virtualization. systemd-detect-virt exits with a return value of 0 (success) if a virtualization technology is detected, and non-zero (error) otherwise. By default, any type of virtualization is detected, and the options --container and --vm can be used to limit what types of virtualization are detected. When executed without --quiet, it prints a short identifier for the detected virtualization technology. The following technologies are currently identified: Table 1. Known virtualization technologies (both VM, i.e. full hardware virtualization, and container, i.e. shared kernel virtualization) Type ID Product VM qemu QEMU software virtualization, without KVM kvm Linux KVM kernel virtual machine, in combination with QEMU. Not used for other virtualizers using the KVM interfaces, such as Oracle VirtualBox or Amazon EC2 Nitro, see below.
amazon Amazon EC2 Nitro using Linux KVM zvm s390 z/VM vmware VMware Workstation or Server, and related products microsoft Hyper-V, also known as Viridian or Windows Server Virtualization oracle Oracle VM VirtualBox (historically marketed by innotek and Sun Microsystems), for legacy and KVM hypervisor powervm IBM PowerVM hypervisor comes as firmware with some IBM POWER servers xen Xen hypervisor (only domU, not dom0) bochs Bochs Emulator uml User-mode Linux parallels Parallels Desktop, Parallels Server bhyve bhyve, FreeBSD hypervisor qnx QNX hypervisor acrn ACRN hypervisor[1] apple Apple Virtualization.framework[2] sre LMHS SRE hypervisor[3] Container openvz OpenVZ/Virtuozzo lxc Linux container implementation by LXC lxc-libvirt Linux container implementation by libvirt systemd-nspawn systemd's minimal container implementation, see systemd-nspawn(1) docker Docker container manager podman Podman[4] container manager rkt rkt app container runtime wsl Windows Subsystem for Linux[5] proot proot[6] userspace chroot/bind mount emulation pouch Pouch[7] Container Engine If multiple virtualization solutions are used, only the "innermost" is detected and identified. That means if both machine and container virtualization are used in conjunction, only the latter will be identified (unless --vm is passed). Windows Subsystem for Linux is not a Linux container, but an environment for running Linux userspace applications on top of the Windows kernel using a Linux-compatible interface. WSL is categorized as a container for practical purposes. Multiple WSL environments share the same kernel, and services should generally behave as if run in a container. OPTIONS top The following options are understood: -c, --container Only detects container virtualization (i.e. shared kernel virtualization). -v, --vm Only detects hardware virtualization. -r, --chroot Detect whether invoked in a chroot(2) environment.
In this mode, no output is written, but the return value indicates whether the process was invoked in a chroot() environment or not. Added in version 228. --private-users Detect whether invoked in a user namespace. In this mode, no output is written, but the return value indicates whether the process was invoked inside of a user namespace or not. See user_namespaces(7) for more information. Added in version 232. --cvm Detect whether invoked in a confidential virtual machine. The result of this detection may be used to disable features that should not be used in confidential VMs. It must not be used to release security sensitive information. The latter must only be released after attestation of the confidential environment. Added in version 254. -q, --quiet Suppress output of the virtualization technology identifier. --list Output all currently known and detectable container and VM environments. Added in version 239. --list-cvm Output all currently known and detectable confidential virtualization technologies. Added in version 254. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top If a virtualization technology is detected, 0 is returned, a non-zero code otherwise. SEE ALSO top systemd(1), systemd-nspawn(1), chroot(2), namespaces(7) NOTES top 1. ACRN hypervisor https://projectacrn.org 2. Apple Virtualization.framework https://developer.apple.com/documentation/virtualization 3. LMHS SRE hypervisor https://www.lockheedmartin.com/en-us/products/Hardened-Security-for-Intel-Processors.html 4. Podman https://podman.io 5. Windows Subsystem for Linux https://docs.microsoft.com/en-us/windows/wsl/about 6. proot https://proot-me.github.io/ 7. Pouch https://github.com/alibaba/pouch
systemd 255 SYSTEMD-DETECT-VIRT(1) Pages that refer to this page: org.freedesktop.systemd1(5), systemd.unit(5), systemd.directives(7), systemd.generator(7), systemd.index(7), udev(7)
# systemd-detect-virt\n\n> Detect execution in a virtualized environment.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-detect-virt.html>.\n\n- List detectable virtualization technologies:\n\n`systemd-detect-virt --list`\n\n- Detect virtualization, print the result and return a zero status code when running in a VM or a container, and a non-zero code otherwise:\n\n`systemd-detect-virt`\n\n- Silently check without printing anything:\n\n`systemd-detect-virt --quiet`\n\n- Only detect container virtualization:\n\n`systemd-detect-virt --container`\n\n- Only detect hardware virtualization:\n\n`systemd-detect-virt --vm`\n
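Because detection is reported both on stdout and through the exit status, the command composes well with scripts. A small illustrative Python wrapper (the helper name is ours, not part of systemd), relying only on the documented contract of exit status 0 plus an identifier on stdout when virtualization is detected:

```python
import shutil
import subprocess

def virtualization():
    """Return the technology ID printed by systemd-detect-virt (e.g. "kvm",
    "docker"), "none" on bare metal, or None if the tool is not installed."""
    if shutil.which("systemd-detect-virt") is None:
        return None
    result = subprocess.run(["systemd-detect-virt"],
                            capture_output=True, text=True)
    # Exit status 0 means some virtualization technology was detected.
    return result.stdout.strip() if result.returncode == 0 else "none"
```

Passing `--container` or `--vm` in the argument list would narrow the check the same way it does on the command line.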
systemd-dissect
systemd-dissect(1) - Linux manual page SYSTEMD-DISSECT(1) systemd-dissect SYSTEMD-DISSECT(1) NAME top systemd-dissect, mount.ddi - Dissect Discoverable Disk Images (DDIs) SYNOPSIS top systemd-dissect [OPTIONS...] IMAGE systemd-dissect [OPTIONS...] --mount IMAGE PATH systemd-dissect [OPTIONS...] --umount PATH systemd-dissect [OPTIONS...] --attach IMAGE systemd-dissect [OPTIONS...] --detach PATH systemd-dissect [OPTIONS...] --list IMAGE systemd-dissect [OPTIONS...] --mtree IMAGE systemd-dissect [OPTIONS...] --with IMAGE [COMMAND...] systemd-dissect [OPTIONS...] --copy-from IMAGE PATH [TARGET] systemd-dissect [OPTIONS...] --copy-to IMAGE [SOURCE] PATH systemd-dissect [OPTIONS...] --discover systemd-dissect [OPTIONS...] --validate IMAGE DESCRIPTION top systemd-dissect is a tool for introspecting and interacting with file system OS disk images, specifically Discoverable Disk Images (DDIs). It supports four different operations: 1. Show general OS image information, including the image's os-release(5) data, machine ID, partition information and more. 2. Mount an OS image to a local directory. In this mode it will dissect the OS image and mount the included partitions according to their designation onto a directory and possibly sub-directories. 3. Unmount an OS image from a local directory. In this mode it will recursively unmount the mounted partitions and remove the underlying loop device, including all the partition sub-devices. 4. Copy files and directories in and out of an OS image. The tool may operate on three types of OS images: 1. OS disk images containing a GPT partition table envelope, with partitions marked according to the Discoverable Partitions Specification[1]. 2.
OS disk images containing just a plain file-system without an enveloping partition table. (This file system is assumed to be the root file system of the OS.) 3. OS disk images containing a GPT or MBR partition table, with a single partition only. (This partition is assumed to contain the root file system of the OS.) OS images may use any kind of Linux-supported file systems. In addition they may make use of LUKS disk encryption, and contain Verity integrity information. Note that qualifying OS images may be booted with systemd-nspawn(1)'s --image= switch, and be used as root file system for system service using the RootImage= unit file setting, see systemd.exec(5). Note that the partition table shown when invoked without command switch (as listed below) does not necessarily show all partitions included in the image, but just the partitions that are understood and considered part of an OS disk image. Specifically, partitions of unknown types are ignored, as well as duplicate partitions (i.e. more than one per partition type), as are root and /usr/ partitions of architectures not compatible with the local system. In other words: this tool will display what it operates with when mounting the image. To display the complete list of partitions use a tool such as fdisk(8). The systemd-dissect command may be invoked as mount.ddi in which case it implements the mount(8) "external helper" interface. This ensures disk images compatible with systemd-dissect can be mounted directly by mount and fstab(5). For details see below. COMMANDS top If neither of the command switches listed below are passed the specified disk image is opened and general information about the image and the contained partitions and their use is shown. --mount, -m Mount the specified OS image to the specified directory. This will dissect the image, determine the OS root file system as well as possibly other partitions and mount them to the specified directory. 
If the OS image contains multiple partitions marked with the Discoverable Partitions Specification[1] multiple nested mounts are established. This command expects two arguments: a path to an image file and a path to a directory where to mount the image. To unmount an OS image mounted like this use the --umount operation. When the OS image contains LUKS encrypted or Verity integrity protected file systems appropriate volumes are automatically set up and marked for automatic disassembly when the image is unmounted. The OS image may either be specified as path to an OS image stored in a regular file or may refer to block device node (in the latter case the block device must be the "whole" device, i.e. not a partition device). (The other supported commands described here support this, too.) All mounted file systems are checked with the appropriate fsck(8) implementation in automatic fixing mode, unless explicitly turned off (--fsck=no) or read-only operation is requested (--read-only). Note that this functionality is also available in mount(8) via a command such as mount -t ddi myimage.raw targetdir/, as well as in fstab(5). For details, see below. Added in version 247. -M This is a shortcut for --mount --mkdir. Added in version 247. --umount, -u Unmount an OS image from the specified directory. This command expects one argument: a directory where an OS image was mounted. All mounted partitions will be recursively unmounted, and the underlying loop device will be removed, along with all its partition sub-devices. Added in version 252. -U This is a shortcut for --umount --rmdir. Added in version 252. --attach Attach the specified disk image to an automatically allocated loopback block device, and print the path to the loopback block device to standard output. This is similar to an invocation of losetup --find --show, but will validate the image as DDI before attaching, and derive the correct sector size to use automatically. 
Moreover, it ensures the per-partition block devices are created before returning. Takes a path to a disk image file. Added in version 254. --detach Detach the specified disk image from a loopback block device. This undoes the effect of --attach above. This expects either a path to a loopback block device as an argument, or the path to the backing image file. In the latter case it will automatically determine the right device to detach. Added in version 254. --list, -l Prints the paths of all the files and directories in the specified OS image or directory to standard output. Added in version 253. --mtree Generates a BSD mtree(8) compatible file manifest of the specified disk image or directory. This is useful for comparing image contents in detail, including inode information and other metadata. While the generated manifest will contain detailed inode information, it currently excludes extended attributes, file system capabilities, MAC labels, chattr(1) file flags, btrfs(5) subvolume information, and various other file metadata. File content information is shown via a SHA256 digest. Additional fields might be added in future. Note that inode information such as link counts, inode numbers and timestamps is excluded from the output on purpose, as it typically complicates reproducibility. Added in version 253. --with Runs the specified command with the specified OS image mounted. This will mount the image to a temporary directory, switch the current working directory to it, and invoke the specified command line as child process. Once the process ends it will unmount the image again, and remove the temporary directory. If no command is specified a shell is invoked. The image is mounted writable, use --read-only to switch to read-only operation. The invoked process will have the $SYSTEMD_DISSECT_ROOT environment variable set, containing the absolute path name of the temporary mount point, i.e. the same directory that is set as the current working directory. 
It will also have the $SYSTEMD_DISSECT_DEVICE environment variable set, containing the absolute path name of the loop device the image was attached to. Added in version 253. --copy-from, -x Copies a file or directory from the specified OS image or directory into the specified location on the host file system. Expects three arguments: a path to an image file or directory, a source path (relative to the image's root directory) and a destination path (relative to the current working directory, or an absolute path, both outside of the image). If the destination path is omitted or specified as dash ("-"), the specified file is written to standard output. If the source path in the image file system refers to a regular file it is copied to the destination path. In this case access mode, extended attributes and timestamps are copied as well, but file ownership is not. If the source path in the image refers to a directory, it is copied to the destination path, recursively with all containing files and directories. In this case the file ownership is copied too. Added in version 247. --copy-to, -a Copies a file or directory from the specified location in the host file system into the specified OS image or directory. Expects three arguments: a path to an image file or directory, a source path (relative to the current working directory, or an absolute path, both outside of the image) and a destination path (relative to the image's root directory). If the source path is omitted or specified as dash ("-"), the data to write is read from standard input. If the source path in the host file system refers to a regular file, it is copied to the destination path. In this case access mode, extended attributes and timestamps are copied as well, but file ownership is not. If the source path in the host file system refers to a directory it is copied to the destination path, recursively with all containing files and directories. In this case the file ownership is copied too. 
As with --mount file system checks are implicitly run before the copy operation begins. Added in version 247. --discover Show a list of DDIs in well-known directories. This will show machine, portable service and system/configuration extension disk images in the usual directories /usr/lib/machines/, /usr/lib/portables/, /usr/lib/confexts/, /var/lib/machines/, /var/lib/portables/, /var/lib/extensions/ and so on. Added in version 253. --validate Validates the partition arrangement of a disk image (DDI), and ensures it matches the image policy specified via --image-policy=, if one is specified. This parses the partition table and probes the file systems in the image, but does not attempt to mount them (nor to set up disk encryption/authentication via LUKS/Verity). It does this taking the configured image dissection policy into account. Since this operation does not mount file systems, this command unlike all other commands implemented by this tool requires no privileges other than the ability to access the specified file. Prints "OK" and returns zero if the image appears to be in order and matches the specified image dissection policy. Otherwise prints an error message and returns non-zero. Added in version 254. -h, --help Print a short help text and exit. --version Print a short version string and exit. OPTIONS top The following options are understood: --read-only, -r Operate in read-only mode. By default --mount will establish writable mount points. If this option is specified they are established in read-only mode instead. Added in version 247. --fsck=no Turn off automatic file system checking. By default when an image is accessed for writing (by --mount or --copy-to) the file systems contained in the OS image are automatically checked using the appropriate fsck(8) command, in automatic fixing mode. This behavior may be switched off using --fsck=no. Added in version 247. 
--growfs=no Turn off automatic growing of accessed file systems to their partition size, if marked for that in the GPT partition table. By default when an image is accessed for writing (by --mount or --copy-to) the file systems contained in the OS image are automatically grown to their partition sizes, if bit 59 in the GPT partition flags is set for partition types that are defined by the Discoverable Partitions Specification[1]. This behavior may be switched off using --growfs=no. File systems are grown automatically on access if all of the following conditions are met: 1. The file system is mounted writable 2. The file system currently is smaller than the partition it is contained in (and thus can be grown) 3. The image contains a GPT partition table 4. The file system is stored on a partition defined by the Discoverable Partitions Specification 5. Bit 59 of the GPT partition flags for this partition is set, as per specification 6. The --growfs=no option is not passed. Added in version 249. --mkdir If combined with --mount the directory to mount the OS image to is created if it is missing. Note that the directory is not automatically removed when the disk image is unmounted again. Added in version 247. --rmdir If combined with --umount the specified directory where the OS image is mounted is removed after unmounting the OS image. Added in version 252. --discard= Takes one of "disabled", "loop", "all", "crypt". If "disabled", the image is accessed with empty block discarding turned off. If "loop", discarding is enabled when operating on a regular file. If "crypt", discarding is enabled even on encrypted file systems. If "all", discarding is unconditionally enabled. Added in version 247. --in-memory If specified, an in-memory copy of the specified disk image is used. This may be used to operate with write-access on a (possibly read-only) image, without actually modifying the original file.
This may also be used in order to operate on a disk image without keeping the originating file system busy, in order to allow it to be unmounted. Added in version 253. --root-hash=, --root-hash-sig=, --verity-data= Configure various aspects of Verity data integrity for the OS image. Option --root-hash= specifies a hex-encoded top-level Verity hash to use for setting up the Verity integrity protection. Option --root-hash-sig= specifies the path to a file containing a PKCS#7 signature for the hash. This signature is passed to the kernel during activation, which will match it against signature keys available in the kernel keyring. Option --verity-data= specifies a path to a file with the Verity data to use for the OS image, in case it is stored in a detached file. It is recommended to embed the Verity data directly in the image, using the Verity mechanisms in the Discoverable Partitions Specification[1]. Added in version 247. --loop-ref= Configures the "reference" string the kernel shall report as backing file for the loopback block device. While this is supposed to be a path or filename referencing the backing file, this is not enforced and the kernel accepts arbitrary free-form strings, chosen by the user. Accepts arbitrary strings up to a length of 63 characters. This sets the kernel's ".lo_file_name" field for the block device. Note this is distinct from the /sys/class/block/loopX/loop/backing_file attribute file that always reports a path referring to the actual backing file. The latter is subject to mount namespace translation, the former is not. This setting is particularly useful in combination with the --attach command, as it allows later referencing the allocated loop device via /dev/disk/by-loop-ref/... symlinks. Example: first, set up the loopback device via systemd-dissect attach --loop-ref=quux foo.raw, and then reference it in a command via the specified filename: cfdisk /dev/disk/by-loop-ref/quux. Added in version 254. 
--mtree-hash=no If combined with --mtree, turns off inclusion of file hashes in the mtree output. This makes the --mtree faster when operating on large images. Added in version 254. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. If the --with command is used the exit status of the invoked command is propagated. INVOCATION AS /SBIN/MOUNT.DDI top The systemd-dissect executable may be symlinked to /sbin/mount.ddi. If invoked through that it implements mount(8)'s "external helper" interface for the (pseudo) file system type "ddi". This means conformant disk images may be mounted directly via # mount -t ddi myimage.raw targetdir/ in a fashion mostly equivalent to: # systemd-dissect --mount myimage.raw targetdir/ Note that since a single DDI may contain multiple file systems it should later be unmounted with umount -R targetdir/, for recursive operation. This functionality is particularly useful to mount DDIs automatically at boot via simple /etc/fstab entries. For example: /path/to/myimage.raw /images/myimage/ ddi defaults 0 0 When invoked this way the mount options "ro", "rw", "discard", "nodiscard" map to the corresponding options listed above (i.e. --read-only, --discard=all, --discard=disabled). 
Mount options are not generically passed on to the file systems inside the images. EXAMPLES top Example 1. Generate a tarball from an OS disk image # systemd-dissect --with foo.raw tar cz . >foo.tar.gz SEE ALSO top systemd(1), systemd-nspawn(1), systemd.exec(5), Discoverable Partitions Specification[1], mount(8), umount(8), fdisk(8) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification systemd 255 SYSTEMD-DISSECT(1) Pages that refer to this page: systemd.directives(7), systemd.image-policy(7), systemd.index(7)
# systemd-dissect\n\n> Introspect and interact with file system OS disk images, specifically Discoverable Disk Images (DDIs).\n> More information: <https://www.freedesktop.org/software/systemd/man/latest/systemd-dissect.html>.\n\n- Show general image information about the OS image:\n\n`systemd-dissect {{path/to/image.raw}}`\n\n- Mount an OS image:\n\n`systemd-dissect --mount {{path/to/image.raw}} {{/mnt/image}}`\n\n- Unmount an OS image:\n\n`systemd-dissect --umount {{/mnt/image}}`\n\n- List files in an image:\n\n`systemd-dissect --list {{path/to/image.raw}}`\n\n- Attach an OS image to an automatically allocated loopback block device and print its path:\n\n`systemd-dissect --attach {{path/to/image.raw}}`\n\n- Detach an OS image from a loopback block device:\n\n`systemd-dissect --detach {{path/to/device}}`\n
systemd-escape
SYSTEMD-ESCAPE(1) systemd-escape SYSTEMD-ESCAPE(1) NAME systemd-escape - Escape strings for usage in systemd unit names SYNOPSIS systemd-escape [OPTIONS...] [STRING...] DESCRIPTION systemd-escape may be used to escape strings for inclusion in systemd unit names. The command may be used to escape and to undo escaping of strings. The command takes any number of strings on the command line, and will process them individually, one after another. It will output them separated by spaces to stdout. By default, this command will escape the strings passed, unless --unescape is passed which results in the inverse operation being applied. If --mangle is given, a special mode of escaping is applied instead, which assumes the string is already escaped but will escape everything that appears obviously non-escaped. For details on the escaping and unescaping algorithms see the relevant section in systemd.unit(5). OPTIONS The following options are understood: --suffix= Appends the specified unit type suffix to the escaped string. Takes one of the unit types supported by systemd, such as "service" or "mount". May not be used in conjunction with --template=, --unescape or --mangle. Added in version 216. --template= Inserts the escaped strings in a unit name template. Takes a unit name template such as foobar@.service. With --unescape, expects instantiated unit names for this template and extracts and unescapes just the instance part. May not be used in conjunction with --suffix=, --instance or --mangle. Added in version 216. --path, -p When escaping or unescaping a string, assume it refers to a file system path. This simplifies the path (leading, trailing, and duplicate "/" characters are removed, no-op path "." 
components are removed, and for absolute paths, leading ".." components are removed). After the simplification, the path must not contain "..". This is particularly useful for generating strings suitable for unescaping with the "%f" specifier in unit files, see systemd.unit(5). Added in version 216. --unescape, -u Instead of escaping the specified strings, undo the escaping, reversing the operation. May not be used in conjunction with --suffix= or --mangle. Added in version 216. --mangle, -m Like --escape, but only escape characters that are obviously not escaped yet, and possibly automatically append an appropriate unit type suffix to the string. May not be used in conjunction with --suffix=, --template= or --unescape. Added in version 216. --instance With --unescape, unescape and print only the instance part of an instantiated unit name template. Results in an error for an uninstantiated template like ssh@.service or a non-template name like ssh.service. Must be used in conjunction with --unescape and may not be used in conjunction with --template. Added in version 240. -h, --help Print a short help text and exit. --version Print a short version string and exit. 
EXAMPLES To escape a single string: $ systemd-escape 'Hallöchen, Meister' Hall\xc3\xb6chen\x2c\x20Meister To undo escaping on a single string: $ systemd-escape -u 'Hall\xc3\xb6chen\x2c\x20Meister' Hallöchen, Meister To generate the mount unit for a path: $ systemd-escape -p --suffix=mount "/tmp//waldi/foobar/" tmp-waldi-foobar.mount To generate instance names of three strings: $ systemd-escape --template=systemd-nspawn@.service 'My Container 1' 'containerb' 'container/III' systemd-nspawn@My\x20Container\x201.service systemd-nspawn@containerb.service systemd-nspawn@container-III.service To extract the instance part of an instantiated unit: $ systemd-escape -u --instance 'systemd-nspawn@My\x20Container\x201.service' My Container 1 To extract the instance part of an instance of a particular template: $ systemd-escape -u --template=systemd-nspawn@.service 'systemd-nspawn@My\x20Container\x201.service' My Container 1 EXIT STATUS On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO systemd(1), systemd.unit(5), systemctl(1)
systemd 255 SYSTEMD-ESCAPE(1) Pages that refer to this page: systemd.unit(5), systemd.directives(7), systemd.index(7)
# systemd-escape\n\n> Escape strings for usage in systemd unit names.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-escape.html>.\n\n- Escape the given text:\n\n`systemd-escape {{text}}`\n\n- Reverse the escaping process:\n\n`systemd-escape --unescape {{text}}`\n\n- Treat the given text as a path:\n\n`systemd-escape --path {{text}}`\n\n- Append the given suffix to the escaped text:\n\n`systemd-escape --suffix {{suffix}} {{text}}`\n\n- Use a template and inject the escaped text:\n\n`systemd-escape --template {{template}} {{text}}`\n
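The escaping scheme itself is documented in systemd.unit(5) and is simple enough to sketch in pure Python: "/" becomes "-", every other byte outside ASCII alphanumerics, ":", "_" and "." becomes a C-style \xXX escape, and a leading "." is escaped too. The helper names below are our own, not a systemd API:

```python
def unit_escape(s: str) -> str:
    """Escape a string per systemd.unit(5) unit-name escaping."""
    out = []
    for i, ch in enumerate(s):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in ":_.") and not (ch == "." and i == 0):
            out.append(ch)
        else:
            # Escape each UTF-8 byte of the character as \xXX.
            out.extend(f"\\x{b:02x}" for b in ch.encode("utf-8"))
    return "".join(out)

def unit_unescape(s: str) -> str:
    """Reverse the escaping: '-' back to '/', \\xXX back to raw bytes."""
    buf = bytearray()
    i = 0
    while i < len(s):
        if s[i] == "-":
            buf.append(ord("/"))
            i += 1
        elif s.startswith("\\x", i) and i + 4 <= len(s):
            buf.append(int(s[i + 2:i + 4], 16))
            i += 4
        else:
            buf.extend(s[i].encode("utf-8"))
            i += 1
    return buf.decode("utf-8")

def path_escape(p: str) -> str:
    """Rough equivalent of -p/--path: simplify the path (collapse '//',
    drop '.' components, strip leading/trailing '/'), then escape."""
    parts = [c for c in p.split("/") if c not in ("", ".")]
    return unit_escape("/".join(parts)) or "-"
```

Running these against the manual's own examples reproduces the outputs shown there, e.g. `path_escape("/tmp//waldi/foobar/")` yields `tmp-waldi-foobar`.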
systemd-firstboot
SYSTEMD-FIRSTBOOT(1) systemd-firstboot SYSTEMD-FIRSTBOOT(1) NAME systemd-firstboot, systemd-firstboot.service - Initialize basic system settings on or before the first boot-up of a system SYNOPSIS systemd-firstboot [OPTIONS...] systemd-firstboot.service DESCRIPTION systemd-firstboot initializes basic system settings interactively during the first boot, or non-interactively on an offline system image. The service is started during boot if ConditionFirstBoot=yes is met, which essentially means that /etc/ is unpopulated; see systemd.unit(5) for details. The following settings may be configured: the machine ID of the system; the system locale, more specifically the two locale variables LANG= and LC_MESSAGES; the system keyboard map; the system time zone; the system hostname; the kernel command line used when installing kernel images; and the root user's password and shell. Each of the fields may either be queried interactively by users, set non-interactively on the tool's command line, or be copied from a host system that is used to set up the system image. If a setting is already initialized, it will not be overwritten and the user will not be prompted for the setting. Note that this tool operates directly on the file system and does not involve any running system services, unlike localectl(1), timedatectl(1) or hostnamectl(1). This allows systemd-firstboot to operate on mounted but not booted disk images and in early boot. It is not recommended to use systemd-firstboot on the running system after it has been set up. OPTIONS The following options are understood: --root=root Takes a directory path as an argument. 
All paths will be prefixed with the given alternate root path, including config search paths. This is useful to operate on a system image mounted to the specified directory instead of the host system itself. Added in version 216. --image=path Takes a path to a disk image file or block device node. If specified, all operations are applied to the file system in the indicated disk image. This is similar to --root= but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. Added in version 246. --locale=LOCALE, --locale-messages=LOCALE Sets the system locale, more specifically the LANG= and LC_MESSAGES settings. The argument should be a valid locale identifier, such as "de_DE.UTF-8". This controls the locale.conf(5) configuration file. Added in version 216. --keymap=KEYMAP Sets the system keyboard layout. The argument should be a valid keyboard map, such as "de-latin1". This controls the "KEYMAP" entry in the vconsole.conf(5) configuration file. Added in version 236. --timezone=TIMEZONE Sets the system time zone. The argument should be a valid time zone identifier, such as "Europe/Berlin". This controls the localtime(5) symlink. Added in version 216. --hostname=HOSTNAME Sets the system hostname. The argument should be a hostname, compatible with DNS. This controls the hostname(5) configuration file. Added in version 216. --setup-machine-id Initialize the system's machine ID to a random ID. This controls the machine-id(5) file. This option only works in combination with --root= or --image=. On a running system, machine-id is written by the manager with help from systemd-machine-id-commit.service(8). Added in version 216. --machine-id=ID Set the system's machine ID to the specified value. 
The same restrictions apply as to --setup-machine-id. Added in version 216. --root-password=PASSWORD, --root-password-file=PATH, --root-password-hashed=HASHED_PASSWORD Sets the password of the system's root user. This creates/modifies the passwd(5) and shadow(5) files. This setting exists in three forms: --root-password= accepts the password to set directly on the command line, --root-password-file= reads it from a file and --root-password-hashed= accepts an already hashed password on the command line. See shadow(5) for more information on the format of the hashed password. Note that it is not recommended to specify plaintext passwords on the command line, as other users might be able to see them simply by invoking ps(1). Added in version 216. --root-shell=SHELL Sets the shell of the system's root user. This creates/modifies the passwd(5) file. Added in version 246. --kernel-command-line=CMDLINE Sets the system's kernel command line. This controls the /etc/kernel/cmdline file which is used by kernel-install(8). Added in version 246. --prompt-locale, --prompt-keymap, --prompt-timezone, --prompt-hostname, --prompt-root-password, --prompt-root-shell Prompt the user interactively for a specific basic setting. Note that any explicit configuration settings specified on the command line take precedence, and the user is not prompted for it. Added in version 216. --prompt Query the user for locale, keymap, timezone, hostname, root's password, and root's shell. This is equivalent to specifying --prompt-locale, --prompt-keymap, --prompt-timezone, --prompt-hostname, --prompt-root-password, --prompt-root-shell in combination. Added in version 216. --copy-locale, --copy-keymap, --copy-timezone, --copy-root-password, --copy-root-shell Copy a specific basic setting from the host. This only works in combination with --root= or --image=. Added in version 216. --copy Copy locale, keymap, time zone, root password and shell from the host. 
This is equivalent to specifying --copy-locale, --copy-keymap, --copy-timezone, --copy-root-password, --copy-root-shell in combination. Added in version 216. --force Write configuration even if the relevant files already exist. Without this option, systemd-firstboot doesn't modify or replace existing files. Note that when configuring the root account, even with this option, systemd-firstboot only modifies the entry of the "root" user, leaving other entries in /etc/passwd and /etc/shadow intact. Added in version 246. --reset If specified, all existing files that are configured by systemd-firstboot are removed. Note that the files are removed regardless of whether they'll be configured with a new value or not. This operation ensures that the next boot of the image will be considered a first boot, and systemd-firstboot will prompt again to configure each of the removed files. Added in version 254. --delete-root-password Removes the password of the system's root user, enabling login as root without a password unless the root account is locked. Note that this is extremely insecure and hence this option should not be used lightly. Added in version 246. --welcome= Takes a boolean argument. By default when prompting the user for configuration options a brief welcome text is shown before the first question is asked. Pass false to this option to turn off the welcome text. Added in version 246. -h, --help Print a short help text and exit. --version Print a short version string and exit. CREDENTIALS systemd-firstboot supports the service credentials logic as implemented by ImportCredential=/LoadCredential=/SetCredential= (see systemd.exec(5) for details). The following credentials are used when passed in: passwd.hashed-password.root, passwd.plaintext-password.root A hashed or plaintext version of the root password to use, in place of prompting the user. These credentials are equivalent to the same ones defined for the systemd-sysusers.service(8) service. 
Added in version 249. passwd.shell.root Specifies the shell binary to use for the specified account. Equivalent to the credential of the same name defined for the systemd-sysusers.service(8) service. Added in version 249. firstboot.locale, firstboot.locale-messages These credentials specify the locale settings to set during first boot, in place of prompting the user. Added in version 249. firstboot.keymap This credential specifies the keyboard setting to set during first boot, in place of prompting the user. Note the relationship to the vconsole.keymap credential understood by systemd-vconsole-setup.service(8): both ultimately affect the same setting, but firstboot.keymap is written into /etc/vconsole.conf on first boot (if not already configured), and then read from there by systemd-vconsole-setup, while vconsole.keymap is read on every boot, and is not persisted to disk (but any configuration in vconsole.conf will take precedence if present). Added in version 249. firstboot.timezone This credential specifies the system timezone setting to set during first boot, in place of prompting the user. Added in version 249. Note that by default the systemd-firstboot.service unit file is set up to inherit the listed credentials from the service manager. Thus, when invoking a container with an unpopulated /etc/ for the first time it is possible to preconfigure settings such as the locale like this: # systemd-nspawn --image=... --set-credential=firstboot.locale:de_DE.UTF-8 ... Note that these credentials are only read and applied during the first boot process. Once they are applied they remain applied for subsequent boots, and the credentials are not considered anymore. EXIT STATUS On success, 0 is returned, a non-zero failure code otherwise. KERNEL COMMAND LINE systemd.firstboot= Takes a boolean argument, defaults to on. 
If off, systemd-firstboot.service won't interactively query the user for basic settings at first boot, even if those settings are not initialized yet. Added in version 233. SEE ALSO systemd(1), locale.conf(5), vconsole.conf(5), localtime(5), hostname(5), machine-id(5), shadow(5), systemd-machine-id-setup(1), localectl(1), timedatectl(1), hostnamectl(1) NOTES 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification systemd 255 SYSTEMD-FIRSTBOOT(1) Pages that refer to this page: homectl(1), hostnamectl(1), localectl(1), systemd-machine-id-setup(1), systemd-nspawn(1), timedatectl(1), hostname(5), locale.conf(5), localtime(5), machine-id(5), systemd.directives(7), systemd.index(7), systemd.system-credentials(7), systemd-machine-id-commit.service(8)
# systemd-firstboot\n\n> Initialize basic system settings on or before the first boot-up of a system.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-firstboot.html>.\n\n- Operate on the specified directory instead of the root directory of the host system:\n\n`sudo systemd-firstboot --root={{path/to/root_directory}}`\n\n- Set the system keyboard layout:\n\n`sudo systemd-firstboot --keymap={{keymap}}`\n\n- Set the system hostname:\n\n`sudo systemd-firstboot --hostname={{hostname}}`\n\n- Set the root user's password:\n\n`sudo systemd-firstboot --root-password={{password}}`\n\n- Prompt the user interactively for a specific basic setting:\n\n`sudo systemd-firstboot --prompt-{{setting}}`\n\n- Force writing configuration even if the relevant files already exist:\n\n`sudo systemd-firstboot --force`\n\n- Remove all existing files that are configured by `systemd-firstboot`:\n\n`sudo systemd-firstboot --reset`\n\n- Remove the password of the system's root user:\n\n`sudo systemd-firstboot --delete-root-password`\n
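The --machine-id= option expects the ID in the 32-character lowercase-hex form described in machine-id(5), which also notes that randomly generated machine IDs are v4 UUIDs formatted without hyphens. A value suitable for offline provisioning can be sketched in Python (generating it yourself is only one option; --setup-machine-id does the same with a random ID):

```python
import secrets

def new_machine_id() -> str:
    """Return a random 128-bit machine ID as 32 lowercase hex characters,
    with the UUID v4 version/variant bits set as machine-id(5) describes."""
    b = bytearray(secrets.token_bytes(16))
    b[6] = (b[6] & 0x0F) | 0x40   # version 4
    b[8] = (b[8] & 0x3F) | 0x80   # RFC 4122 variant
    return b.hex()

# Usage sketch (the image path is hypothetical):
#   sudo systemd-firstboot --root=/mnt/image --machine-id=<output of new_machine_id()>
```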
systemd-hwdb
SYSTEMD-HWDB(8) systemd-hwdb SYSTEMD-HWDB(8) NAME systemd-hwdb - hardware database management tool SYNOPSIS systemd-hwdb [options] update systemd-hwdb [options] query modalias DESCRIPTION systemd-hwdb expects a command and command specific arguments. It manages the binary hardware database. OPTIONS --usr Generate in /usr/lib/udev instead of /etc/udev. Added in version 219. -r, --root=PATH Alternate root path in the filesystem. Added in version 219. -s, --strict When updating, return non-zero exit value on any parsing error. Added in version 239. -h, --help Print a short help text and exit. systemd-hwdb [options] update Update the binary database. systemd-hwdb [options] query [MODALIAS] Query database and print result. SEE ALSO hwdb(7)
systemd 255 SYSTEMD-HWDB(8) Pages that refer to this page: sd-hwdb(3), sd_hwdb_get(3), sd_hwdb_new(3), hwdb(7), systemd.directives(7), systemd.index(7)
# systemd-hwdb\n\n> Hardware database management tool.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-hwdb.html>.\n\n- Update the binary hardware database in `/etc/udev`:\n\n`systemd-hwdb update`\n\n- Query the hardware database and print the result for a specific modalias:\n\n`systemd-hwdb query {{modalias}}`\n\n- Update the binary hardware database, returning a non-zero exit value on any parsing error:\n\n`systemd-hwdb --strict update`\n\n- Update the binary hardware database in `/usr/lib/udev`:\n\n`systemd-hwdb --usr update`\n\n- Update the binary hardware database in the specified root path:\n\n`systemd-hwdb --root={{path/to/root}} update`\n
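hwdb(7) match lines are shell-style glob patterns compared against a modalias string, and a query merges the properties of every matching entry. The lookup that `systemd-hwdb query` performs can be approximated in a few lines of Python; the sample database entries below are made up for illustration (shaped like the `evdev:` entries shipped in /usr/lib/udev/hwdb.d), and real hwdb resolution additionally orders entries by file name:

```python
import fnmatch

# Toy hardware database: glob pattern -> properties (hypothetical entries).
HWDB = {
    "evdev:input:b0003v046DpC52B*": {"KEYBOARD_KEY_c1005": "presentation"},
    "evdev:input:b0003*":           {"ID_INPUT_KEYBOARD": "1"},
}

def query(modalias: str) -> dict:
    """Merge the properties of every pattern matching the modalias,
    roughly what 'systemd-hwdb query MODALIAS' prints."""
    props = {}
    for pattern, entries in HWDB.items():
        if fnmatch.fnmatchcase(modalias, pattern):
            props.update(entries)
    return props
```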
systemd-id128
SYSTEMD-ID128(1) systemd-id128 SYSTEMD-ID128(1) NAME systemd-id128 - Generate and print sd-128 identifiers SYNOPSIS systemd-id128 [OPTIONS...] new systemd-id128 [OPTIONS...] machine-id systemd-id128 [OPTIONS...] boot-id systemd-id128 [OPTIONS...] invocation-id systemd-id128 [OPTIONS...] show [NAME|UUID...] DESCRIPTION systemd-id128 may be used to conveniently print sd-id128(3) UUIDs. What identifier is printed depends on the specific verb. With new, a new random identifier will be generated. With machine-id, the identifier of the current machine will be printed. See machine-id(5). With boot-id, the identifier of the current boot will be printed. With invocation-id, the identifier of the current service invocation will be printed. This is available in systemd services. See systemd.exec(5). With show, well-known IDs are printed (for now, only GPT partition type UUIDs), along with brief identifier strings. When no arguments are specified, all known IDs are shown. When arguments are specified, they may be the identifiers or ID values of one or more known IDs, which are then printed with their name, or arbitrary IDs, which are then printed with a placeholder name. Combine with --uuid to list the IDs in UUID style, i.e. the way GPT partition type UUIDs are usually shown. machine-id, boot-id, and show may be combined with the --app-specific=app-id switch to generate application-specific IDs. See sd_id128_get_machine(3) for the discussion when this is useful. Support for show --app-specific= was added in version 255. OPTIONS The following options are understood: -p, --pretty Generate output as programming language snippets. Added in version 240. -P, --value Only print the value. May be combined with -u/--uuid. Added in version 255. 
-a app-id, --app-specific=app-id With this option, identifiers will be printed that are the result of hashing the application identifier app-id and another ID. The app-id argument must be a valid sd-id128 string identifying the application. When used with machine-id, the other ID will be the machine ID as described in machine-id(5), when used with boot-id, the other ID will be the boot ID, and when used with show, the other ID or IDs should be specified via the positional arguments. Added in version 240. -u, --uuid Generate output as a UUID formatted in the "canonical representation", with five groups of digits separated by hyphens. See the Wikipedia entry for Universally Unique Identifiers[1] for more discussion. Added in version 244. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS On success 0 is returned, and a non-zero failure code otherwise. EXAMPLES Example 1. Show a well-known UUID $ systemd-id128 show --value user-home 773f91ef66d449b5bd83d683bf40ad16 $ systemd-id128 show --value --uuid user-home 773f91ef-66d4-49b5-bd83-d683bf40ad16 $ systemd-id128 show 773f91ef-66d4-49b5-bd83-d683bf40ad16 NAME ID user-home 773f91ef66d449b5bd83d683bf40ad16 Example 2. Generate an application-specific UUID $ systemd-id128 machine-id -u 3a9d668b-4db7-4939-8a4a-5e78a03bffb7 $ systemd-id128 new -u 1fb8f24b-02df-458d-9659-cc8ace68e28a $ systemd-id128 machine-id -u -a 1fb8f24b-02df-458d-9659-cc8ace68e28a 47b82cb1-5339-43da-b2a6-1c350aef1bd1 $ systemd-id128 -Pu show 3a9d668b-4db7-4939-8a4a-5e78a03bffb7 \ -a 1fb8f24b-02df-458d-9659-cc8ace68e28a 47b82cb1-5339-43da-b2a6-1c350aef1bd1 On a given machine with the ID 3a9d668b-4db7-4939-8a4a-5e78a03bffb7, for the application 1fb8f24b-02df-458d-9659-cc8ace68e28a, we generate an application-specific machine ID (47b82cb1-5339-43da-b2a6-1c350aef1bd1). 
If we want to later recreate the same calculation on a different machine, we need to specify both IDs explicitly as parameters to show. SEE ALSO systemd(1), sd-id128(3), sd_id128_get_machine(3) NOTES 1. Universally Unique Identifiers https://en.wikipedia.org/wiki/Universally_unique_identifier#Format systemd 255 SYSTEMD-ID128(1) Pages that refer to this page: sd-id128(3), sd_id128_get_machine(3), sd_id128_randomize(3), systemd.directives(7), systemd.index(7)
# systemd-id128\n\n> Generate and print sd-128 identifiers.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-id128.html>.\n\n- Generate a new random identifier:\n\n`systemd-id128 new`\n\n- Print the identifier of the current machine:\n\n`systemd-id128 machine-id`\n\n- Print the identifier of the current boot:\n\n`systemd-id128 boot-id`\n\n- Print the identifier of the current service invocation (this is available in systemd services):\n\n`systemd-id128 invocation-id`\n\n- Generate a new random identifier and print it as a UUID (five groups of digits separated by hyphens):\n\n`systemd-id128 new --uuid`\n
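The plain and --uuid output styles shown in the manual differ only in formatting: the plain form is 32 hex characters, while the --uuid form inserts hyphens in the 8-4-4-4-12 pattern. A sketch of the conversion (function names are ours, not a systemd API):

```python
def to_uuid(id128: str) -> str:
    """Format a 32-character sd-id128 string in the canonical 8-4-4-4-12
    UUID representation, as 'systemd-id128 --uuid' prints it."""
    s = id128.lower()
    if len(s) != 32:
        raise ValueError("sd-id128 values are exactly 32 hex characters")
    return f"{s[:8]}-{s[8:12]}-{s[12:16]}-{s[16:20]}-{s[20:]}"

def from_uuid(u: str) -> str:
    """Inverse: strip the hyphens again to get the compact form."""
    return u.replace("-", "")
```

Applied to the manual's user-home example, `to_uuid("773f91ef66d449b5bd83d683bf40ad16")` reproduces the hyphenated form shown there.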
systemd-inhibit
SYSTEMD-INHIBIT(1) systemd-inhibit SYSTEMD-INHIBIT(1) NAME systemd-inhibit - Execute a program with an inhibition lock taken SYNOPSIS systemd-inhibit [OPTIONS...] [COMMAND] [ARGUMENTS...] systemd-inhibit [OPTIONS...] --list DESCRIPTION systemd-inhibit may be used to execute a program with a shutdown, sleep, or idle inhibitor lock taken. The lock will be acquired before the specified command line is executed and released afterwards. Inhibitor locks may be used to block or delay system sleep and shutdown requests from the user, as well as automatic idle handling of the OS. This is useful to avoid system suspends while an optical disc is being recorded, or similar operations that should not be interrupted. For more information see the Inhibitor Lock Developer Documentation[1]. OPTIONS The following options are understood: --what= Takes a colon-separated list of one or more operations to inhibit: "shutdown", "sleep", "idle", "handle-power-key", "handle-suspend-key", "handle-hibernate-key", "handle-lid-switch", for inhibiting reboot/power-off/halt/kexec/soft-reboot, suspending/hibernating, the automatic idle detection, or the low-level handling of the power/sleep key and the lid switch, respectively. If omitted, defaults to "idle:sleep:shutdown". --who= Takes a short, human-readable descriptive string for the program taking the lock. If not passed, defaults to the command line string. --why= Takes a short, human-readable descriptive string for the reason for taking the lock. Defaults to "Unknown reason". --mode= Takes either "block" or "delay" and describes how the lock is applied. 
If "block" is used (the default), the lock prohibits any of the requested operations without time limit, and only privileged users may override it. If "delay" is used, the lock can only delay the requested operations for a limited time. If the time elapses, the lock is ignored and the operation executed. The time limit may be specified in logind.conf(5). Note that "delay" is only available for "sleep" and "shutdown". --list Lists all active inhibition locks instead of acquiring one. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS Returns the exit status of the executed program. EXAMPLE # systemd-inhibit wodim foobar.iso This burns the ISO image foobar.iso on a CD using wodim(1), and inhibits system sleeping, shutdown and idle while doing so. ENVIRONMENT $SYSTEMD_LOG_LEVEL The maximum log level of emitted messages (messages with a higher log level, i.e. less important ones, will be suppressed). Either one of (in order of decreasing importance) emerg, alert, crit, err, warning, notice, info, debug, or an integer in the range 0...7. See syslog(3) for more information. $SYSTEMD_LOG_COLOR A boolean. If true, messages written to the tty will be colored according to priority. This setting is only useful when messages are written directly to the terminal, because journalctl(1) and other tools that display logs will color messages based on the log level on their own. $SYSTEMD_LOG_TIME A boolean. If true, console log messages will be prefixed with a timestamp. This setting is only useful when messages are written directly to the terminal or a file, because journalctl(1) and other tools that display logs will attach timestamps based on the entry metadata on their own. $SYSTEMD_LOG_LOCATION A boolean. 
If true, messages will be prefixed with a filename and line number in the source code where the message originates. Note that the log location is often attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TID A boolean. If true, messages will be prefixed with the current numerical thread ID (TID). Note that this information is attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TARGET The destination for log messages. One of console (log to the attached tty), console-prefixed (log to the attached tty but with prefixes encoding the log level and "facility", see syslog(3)), kmsg (log to the kernel circular log buffer), journal (log to the journal), journal-or-kmsg (log to the journal if available, and to kmsg otherwise), auto (determine the appropriate log target automatically, the default), null (disable log output). $SYSTEMD_LOG_RATELIMIT_KMSG Whether to ratelimit kmsg or not. Takes a boolean. Defaults to "true". If disabled, systemd will not ratelimit messages written to kmsg. $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER. If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less(1) and more(1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value "cat" is equivalent to passing --no-pager. Note: if $SYSTEMD_PAGERSECURE is not set, $SYSTEMD_PAGER (as well as $PAGER) will be silently ignored. $SYSTEMD_LESS Override the options passed to less (by default "FRSXMK"). Users might want to change two options in particular: K This option instructs the pager to exit immediately when Ctrl+C is pressed. 
To allow less to handle Ctrl+C itself to switch back to the pager command prompt, unset this option. If the value of $SYSTEMD_LESS does not include "K", and the pager that is invoked is less, Ctrl+C will be ignored by the executable, and needs to be handled by the pager. X This option instructs the pager to not send termcap initialization and deinitialization strings to the terminal. It is set by default to allow command output to remain visible in the terminal even after the pager exits. Nevertheless, this prevents some pager functionality from working, in particular paged output cannot be scrolled with the mouse. See less(1) for more discussion. $SYSTEMD_LESSCHARSET Override the charset passed to less (by default "utf-8", if the invoking terminal is determined to be UTF-8 compatible). $SYSTEMD_PAGERSECURE Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if false, disabled. If $SYSTEMD_PAGERSECURE is not set at all, secure mode is enabled if the effective UID is not the same as the owner of the login session, see geteuid(2) and sd_pid_get_owner_uid(3). In secure mode, LESSSECURE=1 will be set when invoking the pager, and the pager shall disable commands that open or create new files or start new subprocesses. When $SYSTEMD_PAGERSECURE is not set at all, pagers which are not known to implement secure mode will not be used. (Currently only less(1) implements secure mode.) Note: when commands are invoked with elevated privileges, for example under sudo(8) or pkexec(1), care must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the pager may be enabled automatically as described above. Setting SYSTEMD_PAGERSECURE=0 or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note that if the $SYSTEMD_PAGER or $PAGER variables are to be honoured, $SYSTEMD_PAGERSECURE must be set too. It might be reasonable to completely disable the pager using --no-pager instead. 
$SYSTEMD_COLORS Takes a boolean argument. When true, systemd and related utilities will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can take one of the following special values: "16", "256" to restrict the use of colors to the base 16 or 256 ANSI colors, respectively. This can be specified to override the automatic decision based on $TERM and what the console is connected to. $SYSTEMD_URLIFY The value must be a boolean. Controls whether clickable links should be generated in the output for terminal emulators supporting this. This can be specified to override the decision that systemd makes based on $TERM and other conditions. SEE ALSO top systemd(1), logind.conf(5) NOTES top 1. Inhibitor Lock Developer Documentation https://www.freedesktop.org/wiki/Software/systemd/inhibit COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-INHIBIT(1) Pages that refer to this page: systemd.directives(7), systemd.index(7), rpm-plugin-systemd-inhibit(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. 
Hosting by jambit GmbH.
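As a hedged illustration of the logging variables described above, they are ordinary process environment variables and can be set per invocation; the target command below is an arbitrary example.

```shell
# Sketch only: enable debug-level messages on the terminal and disable
# colorized output for a single systemd-inhibit invocation.
SYSTEMD_LOG_LEVEL=debug SYSTEMD_LOG_COLOR=false systemd-inhibit --list
```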
# systemd-inhibit\n\n> Prohibit the system from entering certain power states.\n> Inhibitor locks may be used to block or delay system sleep and shutdown requests as well as automatic idle handling.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-inhibit.html>.\n\n- List all active inhibition locks and the reasons for their creation:\n\n`systemd-inhibit --list`\n\n- Block system shutdown for a specified number of seconds with the `sleep` command:\n\n`systemd-inhibit --what shutdown sleep {{5}}`\n\n- Keep the system from sleeping or idling until the download is complete:\n\n`systemd-inhibit --what sleep:idle wget {{https://example.com/file}}`\n\n- Ignore the lid close switch until the script exits:\n\n`systemd-inhibit --what sleep:handle-lid-switch {{path/to/script}}`\n\n- Ignore power button presses while a command is running:\n\n`systemd-inhibit --what handle-power-key {{command}}`\n\n- Specify who created the inhibitor and why (defaults: the command and its arguments for `--who`, and `Unknown reason` for `--why`):\n\n`systemd-inhibit --who {{$USER}} --why {{reason}} --what {{operation}} {{command}}`\n
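The examples above all take block-mode locks. A delay-mode lock, described earlier in the man page, merely postpones sleep or shutdown for the grace period configured via InhibitDelayMaxSec= in logind.conf(5); a minimal sketch, where the script path and the --who/--why strings are assumptions:

```shell
# Sketch: take a delay-mode sleep lock so the (hypothetical) script
# gets a short grace period to finish before the system suspends.
systemd-inhibit --what=sleep --mode=delay \
    --who="sync-job" --why="Flushing state to disk" \
    /usr/local/bin/sync-state.sh
```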
systemd-machine-id-setup
systemd-machine-id-setup(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-machine-id-setup(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-...ID-SETUP(1) systemd-machine-id-setup SYSTEMD-...ID-SETUP(1) NAME top systemd-machine-id-setup - Initialize the machine ID in /etc/machine-id SYNOPSIS top systemd-machine-id-setup DESCRIPTION top systemd-machine-id-setup may be used by system installer tools to initialize the machine ID stored in /etc/machine-id at install time, with a provisioned or randomly generated ID. See machine-id(5) for more information about this file. If the tool is invoked without the --commit switch, /etc/machine-id is initialized with a valid, new machine ID if it is missing or empty. The new machine ID will be acquired in the following fashion: 1. If a valid D-Bus machine ID is already configured for the system, the D-Bus machine ID is copied and used to initialize the machine ID in /etc/machine-id. 2. If run inside a KVM virtual machine and a UUID is configured (via the -uuid option), this UUID is used to initialize the machine ID. The caller must ensure that the UUID passed is sufficiently unique and is different for every booted instance of the VM. 3. Similarly, if run inside a Linux container environment and a UUID is configured for the container, this is used to initialize the machine ID. For details, see the documentation of the Container Interface[1]. 4. Otherwise, a new ID is randomly generated. The --commit switch may be used to commit a transient machine ID to disk, making it persistent. For details, see below. Use systemd-firstboot(1) to initialize the machine ID on mounted (but not booted) system images. OPTIONS top The following options are understood: --root=path Takes a directory path as argument. All paths operated on will be prefixed with the given alternate root path, including the path for /etc/machine-id itself. 
Added in version 212. --image=path Takes a path to a device node or regular file as argument. This is similar to --root= as described above, but operates on a disk image instead of a directory tree. Added in version 249. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. --commit Commit a transient machine ID to disk. This command may be used to convert a transient machine ID into a persistent one. A transient machine ID file is one that was bind mounted from a memory file system (usually "tmpfs") to /etc/machine-id during the early phase of the boot process. This may happen because /etc/ is initially read-only and was missing a valid machine ID file at that point. This command will execute no operation if /etc/machine-id is not mounted from a memory file system, or if /etc/ is read-only. The command will write the current transient machine ID to disk and unmount the /etc/machine-id mount point in a race-free manner to ensure that this file is always valid and accessible for other processes. This command is primarily used by the systemd-machine-id-commit.service(8) early boot service. Added in version 227. --print Print the machine ID generated or committed after the operation is complete. Added in version 231. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), machine-id(5), systemd-machine-id-commit.service(8), dbus-uuidgen(1), systemd-firstboot(1) NOTES top 1. Container Interface https://systemd.io/CONTAINER_INTERFACE COLOPHON top This page is part of the systemd (systemd system and service manager) project. 
Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-...ID-SETUP(1) Pages that refer to this page: systemd-firstboot(1), machine-id(5), lvmsystemid(7), systemd.directives(7), systemd.index(7), systemd-machine-id-commit.service(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
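As a sketch of the --root= option described above (the /mnt mount point is an assumption), an installer can provision an ID for a system image mounted in a chroot-like tree rather than for the running host:

```shell
# Sketch: initialize /mnt/etc/machine-id for an image mounted at /mnt
# and print the ID that was generated or copied.
systemd-machine-id-setup --root=/mnt --print
```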
# systemd-machine-id-setup\n\n> Initialize the machine ID stored in `/etc/machine-id` at install time with a provisioned or randomly generated ID.\n> Note: Always use `sudo` to execute these commands as they require elevated privileges.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-machine-id-setup.html>.\n\n- Print the generated or committed machine ID:\n\n`systemd-machine-id-setup --print`\n\n- Specify an image policy:\n\n`systemd-machine-id-setup --image-policy={{your_policy}}`\n\n- Display the output as JSON:\n\n`sudo systemd-machine-id-setup --json=pretty`\n\n- Operate on a disk image instead of a directory tree:\n\n`systemd-machine-id-setup --image={{/path/to/image}}`\n
systemd-mount
systemd-mount(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-mount(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | THE UDEV DATABASE | EXAMPLE | SEE ALSO | COLOPHON SYSTEMD-MOUNT(1) systemd-mount SYSTEMD-MOUNT(1) NAME top systemd-mount, systemd-umount - Establish and destroy transient mount or auto-mount points SYNOPSIS top systemd-mount [OPTIONS...] WHAT [WHERE] systemd-mount [OPTIONS...] --tmpfs [NAME] WHERE systemd-mount [OPTIONS...] --list systemd-mount [OPTIONS...] --umount WHAT|WHERE... DESCRIPTION top systemd-mount may be used to create and start a transient .mount or .automount unit of the file system WHAT on the mount point WHERE. In many ways, systemd-mount is similar to the lower-level mount(8) command, however instead of executing the mount operation directly and immediately, systemd-mount schedules it through the service manager job queue, so that it may pull in further dependencies (such as parent mounts, or a file system checker to execute a priori), and may make use of the auto-mounting logic. The command takes either one or two arguments. If only one argument is specified it should refer to a block device or regular file containing a file system (e.g. "/dev/sdb1" or "/path/to/disk.img"). The block device or image file is then probed for a file system label and other metadata, and is mounted to a directory below /run/media/system/ whose name is generated from the file system label. In this mode the block device or image file must exist at the time of invocation of the command, so that it may be probed. If the device is found to be a removable block device (e.g. a USB stick), an automount point is created instead of a regular mount point (i.e. the --automount= option is implied, see below). If the option --tmpfs is specified, then the argument is interpreted as the path where the new temporary file system shall be mounted. 
If two arguments are specified, the first indicates the mount source (the WHAT) and the second indicates the path to mount it on (the WHERE). In this mode no probing of the source is attempted, and a backing device node doesn't have to exist. However, if this mode is combined with --discover, device node probing for additional metadata is enabled, and much like in the single-argument case discussed above the specified device has to exist at the time of invocation of the command. Use the --list command to show a terse table of all local, known block devices with file systems that may be mounted with this command. systemd-umount can be used to unmount a mount or automount point. It is the same as systemd-mount --umount. OPTIONS top The following options are understood: --no-block Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemd-mount will wait until the mount or automount unit's start-up is completed. By passing this argument, it is only verified and enqueued. Added in version 232. -l, --full Do not ellipsize the output when --list is specified. Added in version 245. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --no-ask-password Do not query the user for authentication for privileged operations. --quiet, -q Suppresses additional informational output while running. Added in version 232. --discover Enable probing of the mount source. This switch is implied if a single argument is specified on the command line. If passed, additional metadata is read from the device to enhance the unit to create. For example, a descriptive string for the transient units is generated from the file system label and device model. Moreover if a removable block device (e.g. 
USB stick) is detected an automount unit instead of a regular mount unit is created, with a short idle timeout, in order to ensure the file-system is placed in a clean state quickly after each access. Added in version 232. --type=, -t Specifies the file system type to mount (e.g. "vfat" or "ext4"). If omitted or set to "auto", the file system type is determined automatically. Added in version 232. --options=, -o Additional mount options for the mount point. Added in version 232. --owner=USER Let the specified user USER own the mounted file system. This is done by appending uid= and gid= options to the list of mount options. Only certain file systems support this option. Added in version 237. --fsck= Takes a boolean argument, defaults to on. Controls whether to run a file system check immediately before the mount operation. In the automount case (see --automount= below) the check will be run the moment the first access to the device is made, which might slightly delay the access. Added in version 232. --description= Provide a description for the mount or automount unit. See Description= in systemd.unit(5). Added in version 232. --property=, -p Sets a unit property for the mount unit that is created. This takes an assignment in the same format as systemctl(1)'s set-property command. Added in version 232. --automount= Takes a boolean argument. Controls whether to create an automount point or a regular mount point. If true an automount point is created that is backed by the actual file system at the time of first access. If false a plain mount point is created that is backed by the actual file system immediately. Automount points have the benefit that the file system stays unmounted and hence in clean state until it is first accessed. In automount mode the --timeout-idle-sec= switch (see below) may be used to ensure the mount point is unmounted automatically after the last access and an idle period passed. If this switch is not specified it defaults to false. 
If not specified and --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, it is set to true, in order to increase the chance that the file system is in a fully clean state if the device is unplugged abruptly. Added in version 232. -A Equivalent to --automount=yes. Added in version 232. --timeout-idle-sec= Takes a time value that controls the idle timeout in automount mode. If set to "infinity" (the default) no automatic unmounts are done. Otherwise the file system backing the automount point is detached after the last access and the idle timeout passed. See systemd.time(7) for details on the time syntax supported. This option has no effect if only a regular mount is established, and automounting is not used. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, --timeout-idle-sec=1s is implied. Added in version 232. --automount-property= Similar to --property=, but applies additional properties to the automount unit created, instead of the mount unit. Added in version 232. --bind-device This option only has an effect in automount mode, and controls whether the automount unit shall be bound to the backing device's lifetime. If set, the automount unit will be stopped automatically when the backing device vanishes. By default the automount unit stays around, and subsequent accesses will block until backing device is replugged. This option has no effect in case of non-device mounts, such as network or virtual file system mounts. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, this option is implied. Added in version 232. 
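The --type=, --options= and --owner= switches described above combine naturally; a hedged sketch, where the device path, mount point and user name are all assumptions:

```shell
# Sketch: mount a vfat partition so that files appear owned by "alice"
# (--owner appends uid=/gid= mount options), passing one extra option.
systemd-mount --type=vfat --options=noatime --owner=alice /dev/sdb1 /mnt/usb
```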
--list Instead of establishing a mount or automount point, print a terse list of block devices containing file systems that may be mounted with "systemd-mount", along with useful metadata such as labels, etc. Added in version 232. -u, --umount Stop the mount and automount units corresponding to the specified mount points WHERE or the devices WHAT. systemd-mount with this option or systemd-umount can take multiple arguments which can be mount points, devices, /etc/fstab style node names, or backing files corresponding to loop devices, like systemd-mount --umount /path/to/umount /dev/sda1 UUID=xxxxxx-xxxx LABEL=xxxxx /path/to/disk.img. Note that when -H or -M is specified, only absolute paths to mount points are supported. Added in version 233. -G, --collect Unload the transient unit after it completed, even if it failed. Normally, without this option, all mount units that were mounted and have failed are kept in memory until the user explicitly resets their failure state with systemctl reset-failed or an equivalent command. On the other hand, units that stopped successfully are unloaded immediately. If this option is turned on, the "garbage collection" of units is more aggressive, and unloads units regardless of whether they exited successfully or failed. This option is a shortcut for --property=CollectMode=inactive-or-failed, see the explanation for CollectMode= in systemd.unit(5) for further information. Added in version 236. -T, --tmpfs Create and mount a new tmpfs file system on WHERE, with an optional NAME that defaults to "tmpfs". The file system is mounted with the top-level directory mode determined by the umask(2) setting of the caller, i.e. rwxrwxrwx masked by the umask of the caller. This matches what mkdir(1) does, but is different from the kernel default of "rwxrwxrwxt", i.e. a world-writable directory with the sticky bit set. Added in version 255. --user Talk to the service manager of the calling user, rather than the service manager of the system. 
--system Talk to the service manager of the system. This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. THE UDEV DATABASE top If --discover is used, systemd-mount honors a couple of additional udev properties of block devices: SYSTEMD_MOUNT_OPTIONS= The mount options to use, if --options= is not used. Added in version 232. SYSTEMD_MOUNT_WHERE= The file system path to place the mount point at, instead of the automatically generated one. Added in version 232. 
EXAMPLE top Use a udev rule like the following to automatically mount all USB storage plugged in: ACTION=="add", SUBSYSTEMS=="usb", SUBSYSTEM=="block", ENV{ID_FS_USAGE}=="filesystem", \ RUN{program}+="/usr/bin/systemd-mount --no-block --automount=yes --collect $devnode" SEE ALSO top systemd(1), mount(8), systemctl(1), systemd.unit(5), systemd.mount(5), systemd.automount(5), systemd-run(1) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-MOUNT(1) Pages that refer to this page: systemd-run(1), systemd.mount(5), systemd.directives(7), systemd.index(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# systemd-mount\n\n> Establish and destroy transient mount or auto-mount points.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-mount.html>.\n\n- Mount a file system (image or block device) at `/run/media/system/LABEL` where LABEL is the filesystem label or the device name if there is no label:\n\n`systemd-mount {{path/to/file_or_device}}`\n\n- Mount a file system (image or block device) at a specific location:\n\n`systemd-mount {{path/to/file_or_device}} {{path/to/mount_point}}`\n\n- List all local, known block devices with file systems that may be mounted:\n\n`systemd-mount --list`\n\n- Create an automount point that mounts the actual file system at the time of first access:\n\n`systemd-mount --automount=yes {{path/to/file_or_device}}`\n\n- Unmount one or more devices:\n\n`systemd-mount --umount {{path/to/mount_point_or_device1}} {{path/to/mount_point_or_device2}}`\n\n- Mount a file system (image or block device) with a specific file system type:\n\n`systemd-mount --type={{file_system_type}} {{path/to/file_or_device}} {{path/to/mount_point}}`\n\n- Mount a file system (image or block device) with additional mount options:\n\n`systemd-mount --options={{mount_options}} {{path/to/file_or_device}} {{path/to/mount_point}}`\n
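One case the summary above does not cover is the --tmpfs mode (systemd 255+); a minimal sketch, assuming the mount point path:

```shell
# Sketch: create a transient tmpfs at /tmp/scratch, then unmount it
# again once it is no longer needed.
systemd-mount --tmpfs /tmp/scratch
systemd-umount /tmp/scratch
```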
systemd-notify
systemd-notify(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-notify(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | EXAMPLE | SEE ALSO | COLOPHON SYSTEMD-NOTIFY(1) systemd-notify SYSTEMD-NOTIFY(1) NAME top systemd-notify - Notify service manager about start-up completion and other daemon status changes SYNOPSIS top systemd-notify [OPTIONS...] [VARIABLE=VALUE...] systemd-notify --exec [OPTIONS...] [VARIABLE=VALUE...] ; [CMDLINE...] DESCRIPTION top systemd-notify may be called by service scripts to notify the invoking service manager about status changes. It can be used to send arbitrary information, encoded in an environment-block-like list of strings. Most importantly, it can be used for start-up completion notification. This is mostly just a wrapper around sd_notify() and makes this functionality available to shell scripts. For details see sd_notify(3). The command line may carry a list of environment variables to send as part of the status update. Note that systemd will refuse reception of status updates from this command unless NotifyAccess= is appropriately set for the service unit this command is called from. See systemd.service(5) for details. Note that sd_notify() notifications may be attributed to units correctly only if either the sending process is still around at the time the service manager processes the message, or if the sending process is explicitly runtime-tracked by the service manager. The latter is the case if the service manager originally forked off the process, i.e. on all processes that match NotifyAccess=main or NotifyAccess=exec. Conversely, if an auxiliary process of the unit sends an sd_notify() message and immediately exits, the service manager might not be able to properly attribute the message to the unit, and thus will ignore it, even if NotifyAccess=all is set for it. 
To address this systemd-notify will wait until the notification message has been processed by the service manager. When --no-block is used, this synchronization for reception of notifications is disabled, and hence the aforementioned race may occur if the invoking process is not the service manager or spawned by the service manager. systemd-notify will first attempt to invoke sd_notify() pretending to have the PID of the parent process of systemd-notify (i.e. the invoking process). This will only succeed when invoked with sufficient privileges. On failure, it will then fall back to invoking it under its own PID. This behaviour is useful in order that when the tool is invoked from a shell script the shell process and not the systemd-notify process appears as sender of the message, which in turn is helpful if the shell process is the main process of a service, due to the limitations of NotifyAccess=all. Use the --pid= switch to tweak this behaviour. OPTIONS top The following options are understood: --ready Inform the invoking service manager about service start-up or configuration reload completion. This is equivalent to systemd-notify READY=1. For details about the semantics of this option see sd_notify(3). --reloading Inform the invoking service manager about the beginning of a configuration reload cycle. This is equivalent to systemd-notify RELOADING=1 (but implicitly also sets a MONOTONIC_USEC= field as required for Type=notify-reload services, see systemd.service(5) for details). For details about the semantics of this option see sd_notify(3). Added in version 253. --stopping Inform the invoking service manager about the beginning of the shutdown phase of the service. This is equivalent to systemd-notify STOPPING=1. For details about the semantics of this option see sd_notify(3). Added in version 253. --pid= Inform the service manager about the main PID of the service. Takes a PID as argument. 
If the argument is specified as "auto" or omitted, the PID of the process that invoked systemd-notify is used, except if that's the service manager. If the argument is specified as "self", the PID of the systemd-notify command itself is used, and if "parent" is specified the calling process' PID is used even if it is the service manager. The latter is equivalent to systemd-notify MAINPID=$PID. For details about the semantics of this option see sd_notify(3). If this switch is used in a systemd-notify invocation from a process that shall become the new main process of a service and which is not the process forked off by the service manager (or the current main process), then it is essential to set NotifyAccess=all in the service unit file, or otherwise the notification will be ignored for security reasons. See systemd.service(5) for details. --uid=USER Set the user ID to send the notification from. Takes a UNIX user name or numeric UID. When specified the notification message will be sent with the specified UID as sender, in place of the user the command was invoked as. This option requires sufficient privileges in order to be able to manipulate the user identity of the process. Added in version 237. --status= Send a free-form human readable status string for the daemon to the service manager. This option takes the status string as argument. This is equivalent to systemd-notify STATUS=.... For details about the semantics of this option see sd_notify(3). This information is shown in systemctl(1)'s status output, among other places. --booted Returns 0 if the system was booted up with systemd, non-zero otherwise. If this option is passed, no message is sent. This option is hence unrelated to the other options. For details about the semantics of this option, see sd_booted(3). An alternate way to check for this state is to call systemctl(1) with the is-system-running command. It will return "offline" if the system was not booted with systemd. 
--no-block Do not synchronously wait for the requested operation to finish. Use of this option is only recommended when systemd-notify is spawned by the service manager, or when the invoking process is directly spawned by the service manager and has enough privileges to allow systemd-notify to send the notification on its behalf. Sending notifications with this option set is prone to race conditions in all other cases. Added in version 246. --exec If specified systemd-notify will execute another command line after it completed its operation, replacing its own process. If used, the list of assignments to include in the message sent must be followed by a ";" character (as separate argument), followed by the command line to execute. This permits "chaining" of commands, i.e. issuing one operation, followed immediately by another, without changing PIDs. Note that many shells interpret ";" as their own separator for command lines, hence when systemd-notify is invoked from a shell the semicolon must usually be escaped as "\;". Added in version 254. --fd= Send a file descriptor along with the notification message. This is useful when invoked in services that have the FileDescriptorStoreMax= setting enabled, see systemd.service(5) for details. The specified file descriptor must be passed to systemd-notify when invoked. This option may be used multiple times to pass multiple file descriptors in a single notification message. To use this functionality from a bash(1) shell, use an expression like the following: systemd-notify --fd=4 --fd=5 4</some/file 5</some/other/file Added in version 254. --fdname= Set a name to assign to the file descriptors passed via --fd= (see above). This controls the "FDNAME=" field. This setting may only be specified once, and applies to all file descriptors passed. Invoke this tool multiple times in case multiple file descriptors with different file descriptor names shall be submitted. Added in version 254. 
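The --exec chaining described above can be sketched as follows (the daemon path and its flag are hypothetical); note that the ";" separator must usually be escaped when invoked from a shell:

```shell
# Sketch: send READY=1, then replace the systemd-notify process with
# the actual daemon so the sender PID does not change.
systemd-notify --exec --ready \; /usr/local/bin/mydaemon --foreground
```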
-h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. EXAMPLE top Example 1. Start-up Notification and Status Updates A simple shell daemon that sends start-up notifications after having set up its communication channel. During runtime it sends further status updates to the init system:

#!/bin/sh
mkfifo /tmp/waldo
systemd-notify --ready --status="Waiting for data..."
while : ; do
    read -r a < /tmp/waldo
    systemd-notify --status="Processing $a"
    # Do something with $a ...
    systemd-notify --status="Waiting for data..."
done

SEE ALSO top systemd(1), systemctl(1), systemd.unit(5), systemd.service(5), sd_notify(3), sd_booted(3) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. systemd 255 SYSTEMD-NOTIFY(1)
# systemd-notify\n\n> Notify the service manager about start-up completion and other daemon status changes.\n> This command is useless outside systemd service scripts.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-notify.html>.\n\n- Signal to systemd that the service has completed its initialization and is ready to handle incoming connections or perform its tasks:\n\n`systemd-notify --ready`\n\n- Check whether the system was booted with systemd (returns 0 if so, non-zero otherwise; no notification is sent):\n\n`systemd-notify --booted`\n\n- Provide a custom status message to systemd (this information is shown by `systemctl status`):\n\n`systemd-notify --status="{{Add custom status message here...}}"`\n
systemd-nspawn
systemd-nspawn(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT | EXAMPLES | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-NSPAWN(1) systemd-nspawn SYSTEMD-NSPAWN(1) NAME top systemd-nspawn - Spawn a command or OS in a light-weight container SYNOPSIS top systemd-nspawn [OPTIONS...] [COMMAND [ARGS...]] systemd-nspawn --boot [OPTIONS...] [ARGS...] DESCRIPTION top systemd-nspawn may be used to run a command or OS in a light-weight namespace container. In many ways it is similar to chroot(1), but more powerful since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name. systemd-nspawn may be invoked on any directory tree containing an operating system tree, using the --directory= command line option. By using the --machine= option an OS tree is automatically searched for in a couple of locations, most importantly in /var/lib/machines/, the suggested directory to place OS container images installed on the system. In contrast to chroot(1) systemd-nspawn may be used to boot full Linux-based operating systems in a container. systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys/, /proc/sys/ or /sys/fs/selinux/. The host's network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container. Use a tool like dnf(8), debootstrap(8), or pacman(8) to set up an OS directory tree suitable as file system hierarchy for systemd-nspawn containers. See the Examples section below for details on suitable invocation of these commands.
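As a sketch of the tree-preparation step above, the following composes a debootstrap invocation for a Debian tree; the target path and suite are assumptions, and actually running the command requires root:

```shell
#!/bin/sh
# Compose (but do not run) a debootstrap command producing an OS tree
# under the suggested image directory /var/lib/machines/.
target=/var/lib/machines/mycontainer
cmd="debootstrap --include=systemd,dbus stable $target"
printf '%s\n' "$cmd"
```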
As a safety check systemd-nspawn will verify the existence of /usr/lib/os-release or /etc/os-release in the container tree before booting a container (see os-release(5)). It might be necessary to add this file to the container tree manually if the OS of the container is too old to contain this file out-of-the-box. systemd-nspawn may be invoked directly from the interactive command line or run as system service in the background. In this mode each container instance runs as its own service instance; a default template unit file systemd-nspawn@.service is provided to make this easy, taking the container name as instance identifier. Note that different default options apply when systemd-nspawn is invoked by the template unit file than interactively on the command line. Most importantly the template unit file makes use of the --boot option which is not the default in case systemd-nspawn is invoked from the interactive command line. Further differences with the defaults are documented along with the various supported options below. The machinectl(1) tool may be used to execute a number of operations on containers. In particular it provides easy-to-use commands to run containers as system services using the systemd-nspawn@.service template unit file. Along with each container a settings file with the .nspawn suffix may exist, containing additional settings to apply when running the container. See systemd.nspawn(5) for details. Settings files override the default options used by the systemd-nspawn@.service template unit file, making it usually unnecessary to alter this template file directly. Note that systemd-nspawn will mount file systems private to the container to /dev/, /run/ and similar. These will not be visible outside of the container, and their contents will be lost when the container exits. Note that running two systemd-nspawn containers from the same directory tree will not make processes in them see each other. 
The PID namespace separation of the two containers is complete and the containers will share very few runtime objects except for the underlying file system. Rather use machinectl(1)'s login or shell commands to request an additional login session in a running container. systemd-nspawn implements the Container Interface[1] specification. While running, containers invoked with systemd-nspawn are registered with the systemd-machined(8) service that keeps track of running containers, and provides programming interfaces to interact with them. OPTIONS top If option --boot is specified, the arguments are used as arguments for the init program. Otherwise, COMMAND specifies the program to launch in the container, and the remaining arguments are used as arguments for this program. If --boot is not used and no arguments are specified, a shell is launched in the container. The following options are understood: -q, --quiet Turns off any status output by the tool itself. When this switch is used, the only output from nspawn will be the console output of the container OS itself. Added in version 209. --settings=MODE Controls whether systemd-nspawn shall search for and use additional per-container settings from .nspawn files. Takes a boolean or the special values override or trusted. If enabled (the default), a settings file named after the machine (as specified with the --machine= setting, or derived from the directory or image file name) with the suffix .nspawn is searched in /etc/systemd/nspawn/ and /run/systemd/nspawn/. If it is found there, its settings are read and used. If it is not found there, it is subsequently searched in the same directory as the image file or in the immediate parent of the root directory of the container. In this case, if the file is found, its settings will be also read and used, but potentially unsafe settings are ignored. 
Note that in both these cases, settings on the command line take precedence over the corresponding settings from loaded .nspawn files, if both are specified. Unsafe settings are considered all settings that elevate the container's privileges or grant access to additional resources such as files or directories of the host. For details about the format and contents of .nspawn files, consult systemd.nspawn(5). If this option is set to override, the file is searched, read and used the same way, however, the order of precedence is reversed: settings read from the .nspawn file will take precedence over the corresponding command line options, if both are specified. If this option is set to trusted, the file is searched, read and used the same way, but regardless of being found in /etc/systemd/nspawn/, /run/systemd/nspawn/ or next to the image file or container root directory, all settings will take effect, however, command line arguments still take precedence over corresponding settings. If disabled, no .nspawn file is read and no settings except the ones on the command line are in effect. Added in version 226. Image Options -D, --directory= Directory to use as file system root for the container. If neither --directory=, nor --image= is specified the directory is determined by searching for a directory named the same as the machine name specified with --machine=. See machinectl(1) section "Files and Directories" for the precise search path. If neither --directory=, --image=, nor --machine= are specified, the current directory will be used. May not be specified together with --image=. --template= Directory or "btrfs" subvolume to use as template for the container's root directory. If this is specified and the container's root directory (as configured by --directory=) does not yet exist it is created as "btrfs" snapshot (if supported) or plain directory (otherwise) and populated from this template tree. 
Ideally, the specified template path refers to the root of a "btrfs" subvolume, in which case a simple copy-on-write snapshot is taken, and populating the root directory is instant. If the specified template path does not refer to the root of a "btrfs" subvolume (or not even to a "btrfs" file system at all), the tree is copied (though possibly in a 'reflink' copy-on-write scheme if the file system supports that), which can be substantially more time-consuming. Note that the snapshot taken is of the specified directory or subvolume, including all subdirectories and subvolumes below it, but excluding any sub-mounts. May not be specified together with --image= or --ephemeral. Note that this switch leaves hostname, machine ID and all other settings that could identify the instance unmodified. Added in version 219. -x, --ephemeral If specified, the container is run with a temporary snapshot of its file system that is removed immediately when the container terminates. May not be specified together with --template=. Note that this switch leaves hostname, machine ID and all other settings that could identify the instance unmodified. Please note that as with --template= taking the temporary snapshot is more efficient on file systems that support subvolume snapshots or 'reflinks' natively ("btrfs" or new "xfs") than on more traditional file systems that do not ("ext4"). Note that the snapshot taken is of the specified directory or subvolume, including all subdirectories and subvolumes below it, but excluding any sub-mounts. With this option no modifications of the container image are retained. Use --volatile= (described below) for other mechanisms to restrict persistency of container images during runtime. Added in version 219. -i, --image= Disk image to mount the root directory for the container from. Takes a path to a regular file or to a block device node. 
The file or block device must contain either: An MBR partition table with a single partition of type 0x83 that is marked bootable. A GUID partition table (GPT) with a single partition of type 0fc63daf-8483-4772-8e79-3d69d8477de4. A GUID partition table (GPT) with a marked root partition which is mounted as the root directory of the container. Optionally, GPT images may contain a home and/or a server data partition which are mounted to the appropriate places in the container. All these partitions must be identified by the partition types defined by the Discoverable Partitions Specification[2]. No partition table, and a single file system spanning the whole image. On GPT images, if an EFI System Partition (ESP) is discovered, it is automatically mounted to /efi (or /boot as fallback) in case a directory by this name exists and is empty. Partitions encrypted with LUKS are automatically decrypted. Also, on GPT images dm-verity data integrity hash partitions are set up if the root hash for them is specified using the --root-hash= option. Single file system images (i.e. file systems without a surrounding partition table) can be opened using dm-verity if the integrity data is passed using the --root-hash= and --verity-data= (and optionally --root-hash-sig=) options. Any other partitions, such as foreign partitions or swap partitions are not mounted. May not be specified together with --directory=, --template=. Added in version 211. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:esp=unprotected+absent:xbootldr=unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", i.e. 
all recognized file systems in the image are used, but not the swap partition. Added in version 254. --oci-bundle= Takes the path to an OCI runtime bundle to invoke, as specified in the OCI Runtime Specification[3]. In this case no .nspawn file is loaded, and the root directory and various settings are read from the OCI runtime JSON data (but data passed on the command line takes precedence). Added in version 242. --read-only Mount the container's root file system (and any other file systems contained in the container image) read-only. This has no effect on additional mounts made with --bind=, --tmpfs= and similar options. This mode is implied if the container image file or directory is marked read-only itself. It is also implied if --volatile= is used. In this case the container image on disk is strictly read-only, while changes are permitted but kept non-persistently in memory only. For further details, see below. --volatile, --volatile=MODE Boots the container in volatile mode. When no mode parameter is passed or when mode is specified as yes, full volatile mode is enabled. This means the root directory is mounted as a mostly unpopulated "tmpfs" instance, and /usr/ from the OS tree is mounted into it in read-only mode (the system thus starts up with read-only OS image, but pristine state and configuration, any changes are lost on shutdown). When the mode parameter is specified as state, the OS tree is mounted read-only, but /var/ is mounted as a writable "tmpfs" instance into it (the system thus starts up with read-only OS resources and configuration, but pristine state, and any changes to the latter are lost on shutdown). When the mode parameter is specified as overlay the read-only root file system is combined with a writable tmpfs instance through "overlayfs", so that it appears as it normally would, but any changes are applied to the temporary file system only and lost when the container is terminated.
When the mode parameter is specified as no (the default), the whole OS tree is made available writable (unless --read-only is specified, see above). Note that if one of the volatile modes is chosen, its effect is limited to the root file system (or /var/ in case of state), and any other mounts placed in the hierarchy are unaffected regardless of whether they are established automatically (e.g. the EFI system partition that might be mounted to /efi/ or /boot/) or explicitly (e.g. through an additional command line option such as --bind=, see below). This means, even if --volatile=overlay is used changes to /efi/ or /boot/ are prohibited in case such a partition exists in the container image operated on, and even if --volatile=state is used the hypothetical file /etc/foobar is potentially writable if --bind=/etc/foobar is used to mount it from outside the read-only container /etc/ directory. The --ephemeral option is closely related to this setting, and provides similar behaviour by making a temporary, ephemeral copy of the whole OS image and executing that. For further details, see above. The --tmpfs= and --overlay= options provide similar functionality, but for specific sub-directories of the OS image only. For details, see below. This option provides similar functionality for containers as the "systemd.volatile=" kernel command line switch provides for host systems. See kernel-command-line(7) for details. Note that setting this option to yes or state will only work correctly with operating systems in the container that can boot up with only /usr/ mounted, and are able to automatically populate /var/ (and /etc/ in case of "--volatile=yes"). Specifically, this means that operating systems that follow the historic split of /bin/ and /lib/ (and related directories) from /usr/ (i.e. where the former are not symlinks into the latter) are not supported by "--volatile=yes" as container payload.
The overlay option does not require any particular preparations in the OS, but do note that "overlayfs" behaviour differs from regular file systems in a number of ways, and hence compatibility is limited. Added in version 216. --root-hash= Takes a data integrity (dm-verity) root hash specified in hexadecimal. This option enables data integrity checks using dm-verity, if the used image contains the appropriate integrity data (see above). The specified hash must match the root hash of integrity data, and is usually at least 256 bits (and hence 64 formatted hexadecimal characters) long (in case of SHA256 for example). If this option is not specified, but the image file carries the "user.verity.roothash" extended file attribute (see xattr(7)), then the root hash is read from it, also as formatted hexadecimal characters. If the extended file attribute is not found (or is not supported by the underlying file system), but a file with the .roothash suffix is found next to the image file, bearing otherwise the same name (except if the image has the .raw suffix, in which case the root hash file must not have it in its name), the root hash is read from it and automatically used, also as formatted hexadecimal characters. Note that this configures the root hash for the root file system. Disk images may also contain separate file systems for the /usr/ hierarchy, which may be Verity protected as well. The root hash for this protection may be configured via the "user.verity.usrhash" extended file attribute or via a .usrhash file adjacent to the disk image, following the same format and logic as for the root hash for the root file system described here. Note that there's currently no switch to configure the root hash for the /usr/ from the command line. Also see the RootHash= option in systemd.exec(5). Added in version 233. --root-hash-sig= Takes a PKCS7 signature of the --root-hash= option. The semantics are the same as for the RootHashSignature= option, see systemd.exec(5). 
Added in version 246. --verity-data= Takes the path to a data integrity (dm-verity) file. This option enables data integrity checks using dm-verity, if a root-hash is passed and if the used image itself does not contain the integrity data. The integrity data must be matched by the root hash. If this option is not specified, but a file with the .verity suffix is found next to the image file, bearing otherwise the same name (except if the image has the .raw suffix, in which case the verity data file must not have it in its name), the verity data is read from it and automatically used. Added in version 246. --pivot-root= Pivot the specified directory to / inside the container, and either unmount the container's old root, or pivot it to another specified directory. Takes one of: a path argument in which case the specified path will be pivoted to / and the old root will be unmounted; or a colon-separated pair of new root path and pivot destination for the old root. The new root path will be pivoted to /, and the old / will be pivoted to the other directory. Both paths must be absolute, and are resolved in the container's file system namespace. This is for containers which have several bootable directories in them; for example, several OSTree[4] deployments. It emulates the behavior of the boot loader and the initrd which normally select which directory to mount as the root and start the container's PID 1 in. Added in version 233. Execution Options -a, --as-pid2 Invoke the shell or specified program as process ID (PID) 2 instead of PID 1 (init). By default, if neither this option nor --boot is used, the selected program is run as the process with PID 1, a mode only suitable for programs that are aware of the special semantics that the process with PID 1 has on UNIX. 
For example, it needs to reap all processes reparented to it, and should implement sysvinit compatible signal handling (specifically: it needs to reboot on SIGINT, reexecute on SIGTERM, reload configuration on SIGHUP, and so on). With --as-pid2 a minimal stub init process is run as PID 1 and the selected program is executed as PID 2 (and hence does not need to implement any special semantics). The stub init process will reap processes as necessary and react appropriately to signals. It is recommended to use this mode to invoke arbitrary commands in containers, unless they have been modified to run correctly as PID 1. Or in other words: this switch should be used for pretty much all commands, except when the command refers to an init or shell implementation, as these are generally capable of running correctly as PID 1. This option may not be combined with --boot. Added in version 229. -b, --boot Automatically search for an init program and invoke it as PID 1, instead of a shell or a user supplied program. If this option is used, arguments specified on the command line are used as arguments for the init program. This option may not be combined with --as-pid2. The following table explains the different modes of invocation and relationship to --as-pid2 (see above):

Table 1. Invocation Mode

Neither --as-pid2 nor --boot specified: The passed parameters are interpreted as the command line, which is executed as PID 1 in the container.

--as-pid2 specified: The passed parameters are interpreted as the command line, which is executed as PID 2 in the container. A stub init process is run as PID 1.

--boot specified: An init program is automatically searched for and run as PID 1 in the container. The passed parameters are used as invocation parameters for this process.

Note that --boot is the default mode of operation if the systemd-nspawn@.service template unit file is used.
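The three invocation modes in the table above can be sketched as command strings; the container tree path is hypothetical, and actually running the commands requires root and a prepared OS tree:

```shell
#!/bin/sh
root=/var/lib/machines/mycontainer

# PID 1 mode: only for programs aware of the special PID 1 semantics.
pid1="systemd-nspawn -D $root /usr/lib/systemd/systemd"
# PID 2 mode: recommended for arbitrary commands; a stub init runs as PID 1.
pid2="systemd-nspawn -D $root --as-pid2 /bin/ls /"
# Boot mode: search for and run the container's own init program as PID 1.
boot="systemd-nspawn -D $root --boot"

printf '%s\n' "$pid1" "$pid2" "$boot"
```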
--chdir= Change to the specified working directory before invoking the process in the container. Expects an absolute path in the container's file system namespace. Added in version 229. -E NAME[=VALUE], --setenv=NAME[=VALUE] Specifies an environment variable to pass to the init process in the container. This may be used to override the default variables or to set additional variables. It may be used more than once to set multiple variables. When "=" and VALUE are omitted, the value of the variable with the same name in the program environment will be used. Added in version 209. -u, --user= After transitioning into the container, change to the specified user defined in the container's user database. Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only. --kill-signal= Specify the process signal to send to the container's PID 1 when nspawn itself receives SIGTERM, in order to trigger an orderly shutdown of the container. Defaults to SIGRTMIN+3 if --boot is used (on systemd-compatible init systems SIGRTMIN+3 triggers an orderly shutdown). If --boot is not used and this option is not specified the container's processes are terminated abruptly via SIGKILL. For a list of valid signals, see signal(7). Added in version 220. --notify-ready= Configures support for notifications from the container's init process. --notify-ready= takes a boolean (no and yes). With option no systemd-nspawn notifies systemd with a "READY=1" message when the init process is created. With option yes systemd-nspawn waits for the "READY=1" message from the init process in the container before sending its own to systemd. For more details about notifications see sd_notify(3). Added in version 231. --suppress-sync= Expects a boolean argument. If true, turns off any form of on-disk file system synchronization for the container payload. This means all system calls such as sync(2), fsync(), syncfs(), ... 
will execute no operation, and the O_SYNC/O_DSYNC flags to open(2) and related calls will be made unavailable. This is potentially dangerous, as assumed data integrity guarantees to the container payload are not actually enforced (i.e. data assumed to have been written to disk might be lost if the system is shut down abnormally). However, this can dramatically improve container runtime performance as long as these guarantees are not required or desirable, for example because any data written by the container is of temporary, redundant nature, or just an intermediary artifact that will be further processed and finalized by a later step in a pipeline. Defaults to false. Added in version 250. System Identity Options -M, --machine= Sets the machine name for this container. This name may be used to identify this container during its runtime (for example in tools like machinectl(1) and similar), and is used to initialize the container's hostname (which the container can choose to override, however). If not specified, the last component of the root directory path of the container is used, possibly suffixed with a random identifier in case --ephemeral mode is selected. If the root directory selected is the host's root directory the host's hostname is used as default instead. Added in version 202. --hostname= Controls the hostname to set within the container, if different from the machine name. Expects a valid hostname as argument. If this option is used, the kernel hostname of the container will be set to this value, otherwise it will be initialized to the machine name as controlled by the --machine= option described above. The machine name is used for various aspects of identification of the container from the outside, while the kernel hostname configurable with this option is useful for the container to identify itself from the inside. It is usually a good idea to keep both forms of identification synchronized, in order to avoid confusion.
It is hence recommended to avoid usage of this option, and use --machine= exclusively. Note that regardless whether the container's hostname is initialized from the name set with --hostname= or the one set with --machine=, the container can later override its kernel hostname freely on its own as well. Added in version 239. --uuid= Set the specified UUID for the container. The init system will initialize /etc/machine-id from this if this file is not set yet. Note that this option takes effect only if /etc/machine-id in the container is unpopulated. Property Options -S, --slice= Make the container part of the specified slice, instead of the default machine.slice. This applies only if the machine is run in its own scope unit, i.e. if --keep-unit isn't used. Added in version 206. --property= Set a unit property on the scope unit to register for the machine. This applies only if the machine is run in its own scope unit, i.e. if --keep-unit isn't used. Takes unit property assignments in the same format as systemctl set-property. This is useful to set memory limits and similar for the container. Added in version 220. --register= Controls whether the container is registered with systemd-machined(8). Takes a boolean argument, which defaults to "yes". This option should be enabled when the container runs a full Operating System (more specifically: a system and service manager as PID 1), and is useful to ensure that the container is accessible via machinectl(1) and shown by tools such as ps(1). If the container does not run a service manager, it is recommended to set this option to "no". Added in version 209. --keep-unit Instead of creating a transient scope unit to run the container in, simply use the service or scope unit systemd-nspawn has been invoked in. If --register=yes is set this unit is registered with systemd-machined(8). 
This switch should be used if systemd-nspawn is invoked from within a service unit, and the service unit's sole purpose is to run a single systemd-nspawn container. This option is not available if run from a user session. Note that passing --keep-unit disables the effect of --slice= and --property=. Use --keep-unit and --register=no in combination to disable any kind of unit allocation or registration with systemd-machined. Added in version 209. User Namespacing Options --private-users= Controls user namespacing. If enabled, the container will run with its own private set of UNIX user and group ids (UIDs and GIDs). This involves mapping the private UIDs/GIDs used in the container (starting with the container's root user 0 and up) to a range of UIDs/GIDs on the host that are not used for other purposes (usually in the range beyond the host's UID/GID 65536). The parameter may be specified as follows: 1. If one or two colon-separated numbers are specified, user namespacing is turned on. The first parameter specifies the first host UID/GID to assign to the container, the second parameter specifies the number of host UIDs/GIDs to assign to the container. If the second parameter is omitted, 65536 UIDs/GIDs are assigned. 2. If the parameter is "yes", user namespacing is turned on. The UID/GID range to use is determined automatically from the file ownership of the root directory of the container's directory tree. To use this option, make sure to prepare the directory tree in advance, and ensure that all files and directories in it are owned by UIDs/GIDs in the range you'd like to use. Also, make sure that used file ACLs exclusively reference UIDs/GIDs in the appropriate range. In this mode, the number of UIDs/GIDs assigned to the container is 65536, and the owner UID/GID of the root directory must be a multiple of 65536. 3. If the parameter is "no", user namespacing is turned off. This is the default. 4. 
If the parameter is "identity", user namespacing is employed with an identity mapping for the first 65536 UIDs/GIDs. This is mostly equivalent to --private-users=0:65536. While it does not provide UID/GID isolation, since all host and container UIDs/GIDs are chosen identically it does provide process capability isolation, and hence is often a good choice if proper user namespacing with distinct UID maps is not appropriate. 5. The special value "pick" turns on user namespacing. In this case the UID/GID range is automatically chosen. As first step, the file owner UID/GID of the root directory of the container's directory tree is read, and it is checked that no other container is currently using it. If this check is successful, the UID/GID range determined this way is used, similarly to the behavior if "yes" is specified. If the check is not successful (and thus the UID/GID range indicated in the root directory's file owner is already used elsewhere) a new currently unused UID/GID range of 65536 UIDs/GIDs is randomly chosen between the host UID/GIDs of 524288 and 1878982656, always starting at a multiple of 65536, and, if possible, consistently hashed from the machine name. This setting implies --private-users-ownership=auto (see below), which possibly has the effect that the files and directories in the container's directory tree will be owned by the appropriate users of the range picked. Using this option makes user namespace behavior fully automatic. Note that the first invocation of a previously unused container image might result in picking a new UID/GID range for it, and thus in the (possibly expensive) file ownership adjustment operation. However, subsequent invocations of the container will be cheap (unless of course the picked UID/GID range is assigned to a different use by then). It is recommended to assign at least 65536 UIDs/GIDs to each container, so that the usable UID/GID range in the container covers 16 bit. 
For best security, do not assign overlapping UID/GID ranges to multiple containers. It is hence a good idea to use the upper 16 bit of the host 32-bit UIDs/GIDs as container identifier, while the lower 16 bit encode the container UID/GID used. This is in fact the behavior enforced by the --private-users=pick option. When user namespaces are used, the GID range assigned to each container is always chosen identical to the UID range. In most cases, using --private-users=pick is the recommended option as it enhances container security massively and operates fully automatically in most cases. Note that the picked UID/GID range is not written to /etc/passwd or /etc/group. In fact, the allocation of the range is not stored persistently anywhere, except in the file ownership of the files and directories of the container. Note that when user namespacing is used file ownership on disk reflects this, and all of the container's files and directories are owned by the container's effective user and group IDs. This means that copying files from and to the container image requires correction of the numeric UID/GID values, according to the UID/GID shift applied. Added in version 220. --private-users-ownership= Controls how to adjust the container image's UIDs and GIDs to match the UID/GID range chosen with --private-users=, see above. Takes one of "off" (to leave the image as is), "chown" (to recursively chown() the container's directory tree as needed), "map" (in order to use transparent ID mapping mounts) or "auto" for automatically using "map" where available and "chown" where not. If "chown" is selected, all files and directories in the container's directory tree will be adjusted so that they are owned by the appropriate UIDs/GIDs selected for the container (see above). This operation is potentially expensive, as it involves iterating through the full directory tree of the container. Besides actual file ownership, file ACLs are adjusted as well. 
Typically "map" is the best choice, since it transparently maps UIDs/GIDs in memory as needed without modifying the image, and without requiring an expensive recursive adjustment operation. However, it is not available for all file systems, currently. The --private-users-ownership=auto option is implied if --private-users=pick is used. This option has no effect if user namespacing is not used. Added in version 230. -U If the kernel supports the user namespaces feature, equivalent to --private-users=pick --private-users-ownership=auto, otherwise equivalent to --private-users=no. Note that -U is the default if the systemd-nspawn@.service template unit file is used. Note: it is possible to undo the effect of --private-users-ownership=chown (or -U) on the file system by redoing the operation with the first UID of 0: systemd-nspawn ... --private-users=0 --private-users-ownership=chown Added in version 230. Networking Options --private-network Disconnect networking of the container from the host. This makes all network interfaces unavailable in the container, with the exception of the loopback device and those specified with --network-interface= and configured with --network-veth. If this option is specified, the CAP_NET_ADMIN capability will be added to the set of capabilities the container retains. The latter may be disabled by using --drop-capability=. If this option is not specified (or implied by one of the options listed below), the container will have full access to the host network. --network-interface= Assign the specified network interface to the container. Either takes a single interface name, referencing the name on the host, or a colon-separated pair of interfaces, in which case the first one references the name on the host, and the second one the name in the container. When the container terminates, the interface is moved back to the calling namespace and renamed to its original name. Note that --network-interface= implies --private-network. 
This option may be used more than once to add multiple network interfaces to the container. Note that any network interface specified this way must already exist at the time the container is started. If the container shall be started automatically at boot via a systemd-nspawn@.service unit file instance, it might hence make sense to add a unit file drop-in to the service instance (e.g. /etc/systemd/system/systemd-nspawn@foobar.service.d/50-network.conf) with contents like the following: [Unit] Wants=sys-subsystem-net-devices-ens1.device After=sys-subsystem-net-devices-ens1.device This will make sure that activation of the container service will be delayed until the "ens1" network interface has shown up. This is required since hardware probing is fully asynchronous, and network interfaces might be discovered only later during the boot process, after the container would normally be started without these explicit dependencies. Added in version 209. --network-macvlan= Create a "macvlan" interface of the specified Ethernet network interface and add it to the container. Either takes a single interface name, referencing the name on the host, or a colon-separated pair of interfaces, in which case the first one references the name on the host, and the second one the name in the container. A "macvlan" interface is a virtual interface that adds a second MAC address to an existing physical Ethernet link. If the container interface name is not defined, the interface in the container will be named after the interface on the host, prefixed with "mv-". Note that --network-macvlan= implies --private-network. This option may be used more than once to add multiple network interfaces to the container. As with --network-interface=, the underlying Ethernet network interface must already exist at the time the container is started, and thus similar unit file drop-ins as described above might be useful. Added in version 211. 
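A unit file drop-in such as the 50-network.conf shown above can be installed with plain shell commands. The following sketch writes below a temporary root so it can run unprivileged; a real install would target /etc/systemd/system directly (as root) and follow up with "systemctl daemon-reload":

```shell
# Sketch: install the 50-network.conf drop-in shown above. For the demo we
# write below a temporary root instead of /etc (which would require root).
root=$(mktemp -d)
dropin_dir="$root/etc/systemd/system/systemd-nspawn@foobar.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/50-network.conf" <<'EOF'
[Unit]
Wants=sys-subsystem-net-devices-ens1.device
After=sys-subsystem-net-devices-ens1.device
EOF
# After writing the real file, "systemctl daemon-reload" would pick it up.
cat "$dropin_dir/50-network.conf"
```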
--network-ipvlan= Create an "ipvlan" interface of the specified Ethernet network interface and add it to the container. Either takes a single interface name, referencing the name on the host, or a colon-separated pair of interfaces, in which case the first one references the name on the host, and the second one the name in the container. An "ipvlan" interface is a virtual interface, similar to a "macvlan" interface, which uses the same MAC address as the underlying interface. If the container interface name is not defined, the interface in the container will be named after the interface on the host, prefixed with "iv-". Note that --network-ipvlan= implies --private-network. This option may be used more than once to add multiple network interfaces to the container. As with --network-interface=, the underlying Ethernet network interface must already exist at the time the container is started, and thus similar unit file drop-ins as described above might be useful. Added in version 219. -n, --network-veth Create a virtual Ethernet link ("veth") between host and container. The host side of the Ethernet link will be available as a network interface named after the container's name (as specified with --machine=), prefixed with "ve-". The container side of the Ethernet link will be named "host0". The --network-veth option implies --private-network. Note that systemd-networkd.service(8) includes by default a network file /usr/lib/systemd/network/80-container-ve.network matching the host-side interfaces created this way, which contains settings to enable automatic address provisioning on the created virtual link via DHCP, as well as automatic IP routing onto the host's external network interfaces. It also contains /usr/lib/systemd/network/80-container-host0.network matching the container-side interface created this way, containing settings to enable client side address assignment via DHCP. 
In case systemd-networkd is running on both the host and inside the container, automatic IP communication from the container to the host is thus available, with further connectivity to the external network. Note that --network-veth is the default if the systemd-nspawn@.service template unit file is used. Note that on Linux network interface names may have a length of 15 characters at maximum, while container names may have a length up to 64 characters. As this option derives the host-side interface name from the container name, the name may be truncated. Thus, care needs to be taken to ensure that interface names remain unique in this case, or, better still, that container names are generally kept to at most 12 characters, to avoid the truncation. If the name is truncated, systemd-nspawn will automatically append a 4-digit hash value to the name to reduce the chance of collisions. However, the hash algorithm is not collision-free. (See systemd.net-naming-scheme(7) for details on older naming algorithms for this interface). Alternatively, the --network-veth-extra= option may be used, which allows free configuration of the host-side interface name independently of the container name but might require a bit more additional configuration in case bridging in a fashion similar to --network-bridge= is desired. Added in version 209. --network-veth-extra= Adds an additional virtual Ethernet link between host and container. Takes a colon-separated pair of host interface name and container interface name. The latter may be omitted in which case the container and host sides will be assigned the same name. This switch is independent of --network-veth, and in contrast may be used multiple times, and allows configuration of the network interface names. Note that --network-bridge= has no effect on interfaces created with --network-veth-extra=. Added in version 228.
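Whether a given container name fits within the 15-character interface name limit after the "ve-" prefix is added can be checked ahead of time; a small sketch (the container names are made up):

```shell
# Host-side veth names are "ve-" + container name, and Linux caps interface
# names at 15 characters, so container names over 12 characters get truncated.
check() {
    name="ve-$1"
    if [ "${#name}" -le 15 ]; then
        echo "ok: $name"
    else
        echo "too long: $name"   # systemd-nspawn would truncate and hash this
    fi
}
check shortname                  # 12 characters total: fits
check averylongcontainername     # exceeds 15: would be truncated
```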
--network-bridge= Adds the host side of the Ethernet link created with --network-veth to the specified Ethernet bridge interface. Expects a valid network interface name of a bridge device as argument. Note that --network-bridge= implies --network-veth. If this option is used, the host side of the Ethernet link will use the "vb-" prefix instead of "ve-". Regardless of the used naming prefix the same network interface name length limits imposed by Linux apply, along with the complications this creates (for details see above). As with --network-interface=, the underlying bridge network interface must already exist at the time the container is started, and thus similar unit file drop-ins as described above might be useful. Added in version 209. --network-zone= Creates a virtual Ethernet link ("veth") to the container and adds it to an automatically managed Ethernet bridge interface. The bridge interface is named after the passed argument, prefixed with "vz-". The bridge interface is automatically created when the first container configured for its name is started, and is automatically removed when the last container configured for its name exits. Hence, each bridge interface configured this way exists only as long as there's at least one container referencing it running. This option is very similar to --network-bridge=, besides this automatic creation/removal of the bridge device. This setting makes it easy to place multiple related containers on a common, virtual Ethernet-based broadcast domain, here called a "zone". Each container may only be part of one zone, but each zone may contain any number of containers. Each zone is referenced by its name. Names may be chosen freely (as long as they form valid network interface names when prefixed with "vz-"), and it is sufficient to pass the same name to the --network-zone= switch of the various concurrently running containers to join them in one zone. 
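As a sketch, joining two containers in a zone named "web" might look like the commented invocations below (machine paths are assumptions); the bridge device that exists while either container runs is named after the zone:

```shell
# The automatically managed bridge device is the zone name with a "vz-" prefix:
zone=web
echo "vz-$zone"   # bridge that appears while any member container is running
# Hypothetical invocations joining two containers in the same zone:
# systemd-nspawn -D /var/lib/machines/front --network-zone="$zone"
# systemd-nspawn -D /var/lib/machines/back  --network-zone="$zone"
```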
Note that systemd-networkd.service(8) includes by default a network file /usr/lib/systemd/network/80-container-vz.network matching the bridge interfaces created this way, which contains settings to enable automatic address provisioning on the created virtual network via DHCP, as well as automatic IP routing onto the host's external network interfaces. Using --network-zone= is hence in most cases fully automatic and sufficient to connect multiple local containers in a joined broadcast domain to the host, with further connectivity to the external network. Added in version 230. --network-namespace-path= Takes the path to a file representing a kernel network namespace that the container shall run in. The specified path should refer to a (possibly bind-mounted) network namespace file, as exposed by the kernel below /proc/$PID/ns/net. This makes the container enter the given network namespace. One of the typical use cases is to pass a network namespace under /run/netns created by ip-netns(8), for example, --network-namespace-path=/run/netns/foo. Note that this option cannot be used together with other network-related options, such as --private-network or --network-interface=. Added in version 236. -p, --port= If private networking is enabled, maps an IP port on the host onto an IP port on the container. Takes a protocol specifier (either "tcp" or "udp"), separated by a colon from a host port number in the range 1 to 65535, separated by a colon from a container port number in the range from 1 to 65535. The protocol specifier and its separating colon may be omitted, in which case "tcp" is assumed. The container port number and its colon may be omitted, in which case the same port as the host port is implied. This option is only supported if private networking is used, such as with --network-veth, --network-zone= or --network-bridge=. Added in version 219. Security Options --capability= List one or more additional capabilities to grant the container.
Takes a comma-separated list of capability names, see capabilities(7) for more information. Note that the following capabilities will be granted in any case: CAP_AUDIT_CONTROL, CAP_AUDIT_WRITE, CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH, CAP_FOWNER, CAP_FSETID, CAP_IPC_OWNER, CAP_KILL, CAP_LEASE, CAP_LINUX_IMMUTABLE, CAP_MKNOD, CAP_NET_BIND_SERVICE, CAP_NET_BROADCAST, CAP_NET_RAW, CAP_SETFCAP, CAP_SETGID, CAP_SETPCAP, CAP_SETUID, CAP_SYS_ADMIN, CAP_SYS_BOOT, CAP_SYS_CHROOT, CAP_SYS_NICE, CAP_SYS_PTRACE, CAP_SYS_RESOURCE, CAP_SYS_TTY_CONFIG. Also CAP_NET_ADMIN is retained if --private-network is specified. If the special value "all" is passed, all capabilities are retained. If the special value of "help" is passed, the program will print known capability names and exit. This option sets the bounding set of capabilities which also limits the ambient capabilities as given with the --ambient-capability= option. Added in version 186. --drop-capability= Specify one or more additional capabilities to drop for the container. This allows running the container with fewer capabilities than the default (see above). If the special value of "help" is passed, the program will print known capability names and exit. This option sets the bounding set of capabilities which also limits the ambient capabilities as given with the --ambient-capability= option. Added in version 209. --ambient-capability= Specify one or more additional capabilities to pass in the inheritable and ambient set to the program started within the container. The value "all" is not supported for this setting. All capabilities specified here must be in the set allowed with the --capability= and --drop-capability= options. Otherwise, an error message will be shown. This option cannot be combined with the boot mode of the container (as requested via --boot). If the special value of "help" is passed, the program will print known capability names and exit. Added in version 248. --no-new-privileges= Takes a boolean argument.
Specifies the value of the PR_SET_NO_NEW_PRIVS flag for the container payload. Defaults to off. When turned on the payload code of the container cannot acquire new privileges, i.e. the "setuid" file bit as well as file system capabilities will not have an effect anymore. See prctl(2) for details about this flag. Added in version 239. --system-call-filter= Alter the system call filter applied to containers. Takes a space-separated list of system call names or group names (the latter prefixed with "@", as listed by the syscall-filter command of systemd-analyze(1)). Passed system calls will be permitted. The list may optionally be prefixed by "~", in which case all listed system calls are prohibited. If this command line option is used multiple times the configured lists are combined. If both a positive and a negative list (that is one system call list without and one with the "~" prefix) are configured, the negative list takes precedence over the positive list. Note that systemd-nspawn always implements a system call allow list (as opposed to a deny list!), and this command line option hence adds or removes entries from the default allow list, depending on the "~" prefix. Note that the applied system call filter is also altered implicitly if additional capabilities are passed using the --capability= option. Added in version 235. -Z, --selinux-context= Sets the SELinux security context to be used to label processes in the container. Added in version 209. -L, --selinux-apifs-context= Sets the SELinux security context to be used to label files in the virtual API file systems in the container. Added in version 209. Resource Options --rlimit= Sets the specified POSIX resource limit for the container payload. Expects an assignment of the form "LIMIT=SOFT:HARD" or "LIMIT=VALUE", where LIMIT should refer to a resource limit type, such as RLIMIT_NOFILE or RLIMIT_NICE. The SOFT and HARD fields should refer to the numeric soft and hard resource limit values.
If the second form is used, VALUE may specify a value that is used both as soft and hard limit. In place of a numeric value the special string "infinity" may be used to turn off resource limiting for the specific type of resource. This command line option may be used multiple times to control limits on multiple limit types. If used multiple times for the same limit type, the last use wins. For details about resource limits see setrlimit(2). By default resource limits for the container's init process (PID 1) are set to the same values the Linux kernel originally passed to the host init system. Note that some resource limits are enforced on resources counted per user, in particular RLIMIT_NPROC. This means that unless user namespacing is deployed (i.e. --private-users= is used, see above), any limits set will be applied to the resource usage of the same user on all local containers as well as the host. This means particular care needs to be taken with these limits as they might be triggered by possibly less trusted code. Example: "--rlimit=RLIMIT_NOFILE=8192:16384". Added in version 239. --oom-score-adjust= Changes the OOM ("Out Of Memory") score adjustment value for the container payload. This controls /proc/self/oom_score_adj which influences the preference with which this container is terminated when memory becomes scarce. For details see proc(5). Takes an integer in the range -1000...1000. Added in version 239. --cpu-affinity= Controls the CPU affinity of the container payload. Takes a comma separated list of CPU numbers or number ranges (the latter's start and end value separated by dashes). See sched_setaffinity(2) for details. Added in version 239. --personality= Control the architecture ("personality") reported by uname(2) in the container. Currently, only "x86" and "x86-64" are supported. This is useful when running a 32-bit container on a 64-bit host. 
If this setting is not used, the personality reported in the container is the same as the one reported on the host. Added in version 209. Integration Options --resolv-conf= Configures how /etc/resolv.conf inside of the container shall be handled (i.e. DNS configuration synchronization from host to container). Takes one of "off", "copy-host", "copy-static", "copy-uplink", "copy-stub", "replace-host", "replace-static", "replace-uplink", "replace-stub", "bind-host", "bind-static", "bind-uplink", "bind-stub", "delete" or "auto". If set to "off" the /etc/resolv.conf file in the container is left as it is included in the image, and neither modified nor bind mounted over. If set to "copy-host", the /etc/resolv.conf file from the host is copied into the container, unless the file exists already and is not a regular file (e.g. a symlink). Similarly, if "replace-host" is used the file is copied, replacing any existing inode, including symlinks. Similarly, if "bind-host" is used, the file is bind mounted from the host into the container. If set to "copy-static", "replace-static" or "bind-static" the static resolv.conf file supplied with systemd-resolved.service(8) (specifically: /usr/lib/systemd/resolv.conf) is copied or bind mounted into the container. If set to "copy-uplink", "replace-uplink" or "bind-uplink" the uplink resolv.conf file managed by systemd-resolved.service (specifically: /run/systemd/resolve/resolv.conf) is copied or bind mounted into the container. If set to "copy-stub", "replace-stub" or "bind-stub" the stub resolv.conf file managed by systemd-resolved.service (specifically: /run/systemd/resolve/stub-resolv.conf) is copied or bind mounted into the container. If set to "delete" the /etc/resolv.conf file in the container is deleted if it exists. Finally, if set to "auto" the file is left as it is if private networking is turned on (see --private-network). 
Otherwise, if systemd-resolved.service is running its stub resolv.conf file is used, and if not the host's /etc/resolv.conf file. In the latter cases the file is copied if the image is writable, and bind mounted otherwise. It's recommended to use "copy-..." or "replace-..." if the container shall be able to make changes to the DNS configuration on its own, deviating from the host's settings. Otherwise "bind" is preferable, as it means direct changes to /etc/resolv.conf in the container are not allowed, as it is a read-only bind mount (but note that if the container has enough privileges, it might simply go ahead and unmount the bind mount anyway). Note that both if the file is bind mounted and if it is copied no further propagation of configuration is generally done after the one-time early initialization (this is because the file is usually updated through copying and renaming). Defaults to "auto". Added in version 239. --timezone= Configures how /etc/localtime inside of the container (i.e. local timezone synchronization from host to container) shall be handled. Takes one of "off", "copy", "bind", "symlink", "delete" or "auto". If set to "off" the /etc/localtime file in the container is left as it is included in the image, and neither modified nor bind mounted over. If set to "copy" the /etc/localtime file of the host is copied into the container. Similarly, if "bind" is used, the file is bind mounted from the host into the container. If set to "symlink", a symlink is created pointing from /etc/localtime in the container to the timezone file in the container that matches the timezone setting on the host. If set to "delete", the file in the container is deleted, should it exist. If set to "auto" and the /etc/localtime file of the host is a symlink, then "symlink" mode is used, and "copy" otherwise, except if the image is read-only in which case "bind" is used instead. Defaults to "auto". Added in version 239. 
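The "auto" decision for --timezone= just described can be sketched as a tiny helper function; this is an approximation of the documented behavior, not systemd's actual code, and the paths used in the demo are temporary stand-ins:

```shell
# choose_mode LOCALTIME_PATH RO_FLAG: prints the mode "auto" would pick.
choose_mode() {
    if [ -L "$1" ]; then
        echo symlink        # host /etc/localtime is a symlink
    elif [ "$2" = ro ]; then
        echo bind           # image is read-only, so copying is not possible
    else
        echo copy
    fi
}
tmp=$(mktemp -d)
ln -s /usr/share/zoneinfo/UTC "$tmp/localtime"   # symlinked, like most hosts
touch "$tmp/plain"                               # a regular file stand-in
choose_mode "$tmp/localtime" rw
choose_mode "$tmp/plain" ro
choose_mode "$tmp/plain" rw
```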
--link-journal= Control whether the container's journal shall be made visible to the host system. If enabled, allows viewing the container's journal files from the host (but not vice versa). Takes one of "no", "host", "try-host", "guest", "try-guest", "auto". If "no", the journal is not linked. If "host", the journal files are stored on the host file system (beneath /var/log/journal/machine-id) and the subdirectory is bind-mounted into the container at the same location. If "guest", the journal files are stored on the guest file system (beneath /var/log/journal/machine-id) and the subdirectory is symlinked into the host at the same location. "try-host" and "try-guest" do the same but do not fail if the host does not have persistent journaling enabled. If "auto" (the default), and the right subdirectory of /var/log/journal exists, it will be bind mounted into the container. If the subdirectory does not exist, no linking is performed. Effectively, booting a container once with "guest" or "host" will link the journal persistently if further on the default of "auto" is used. Note that --link-journal=try-guest is the default if the systemd-nspawn@.service template unit file is used. Added in version 187. -j Equivalent to --link-journal=try-guest. Added in version 187. Mount Options --bind=, --bind-ro= Bind mount a file or directory from the host into the container. Takes one of: a path argument in which case the specified path will be mounted from the host to the same path in the container, or a colon-separated pair of paths in which case the first specified path is the source in the host, and the second path is the destination in the container, or a colon-separated triple of source path, destination path and mount options. The source path may optionally be prefixed with a "+" character. If so, the source path is taken relative to the image's root directory. This permits setting up bind mounts within the container image. 
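The argument forms of --bind=/--bind-ro=, plus the colon-escaping rule, can be illustrated with hypothetical arguments (all paths here are made up):

```shell
# Same path on host and in container:
a='--bind=/srv/data'
# Host /srv/data mounted at /data in the container:
b='--bind=/srv/data:/data'
# Read-only, with an explicit mount option in the third colon-separated field:
c='--bind-ro=/srv/data:/data:norbind'
# A literal colon in a path is escaped with a backslash:
d='--bind=/srv/a\:b:/data'
printf '%s\n' "$a" "$b" "$c" "$d"
```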
The source path may be specified as an empty string, in which case a temporary directory below the host's /var/tmp/ directory is used. It is automatically removed when the container is shut down. If the source path is not absolute, it is resolved relative to the current working directory. The --bind-ro= option creates read-only bind mounts. Backslash escapes are interpreted, so "\:" may be used to embed colons in either path. This option may be specified multiple times for creating multiple independent bind mount points. Mount options are comma-separated. rbind and norbind control whether to create a recursive or a regular bind mount. Defaults to rbind. noidmap, idmap, and rootidmap control ID mapping. Using idmap or rootidmap requires support by the source filesystem for user/group ID mapped mounts. Defaults to noidmap. With x being the container's UID range offset, y being the length of the container's UID range, and p being the owner UID of the bind mount source inode on the host: If noidmap is used, any user z in the range 0 ... y seen from inside of the container is mapped to x + z in the x ... x + y range on the host. Other host users are mapped to nobody inside the container. If idmap is used, any user z in the UID range 0 ... y as seen from inside the container is mapped to the same z in the same 0 ... y range on the host. Other host users are mapped to nobody inside the container. If rootidmap is used, the user 0 seen from inside of the container is mapped to p on the host. Other host users are mapped to nobody inside the container. Whichever ID mapping option is used, the same mapping will be used for user and group IDs. If rootidmap is used, the group owning the bind mounted directory will have no effect. Note that when this option is used in combination with --private-users, the resulting mount points will be owned by the nobody user.
That's because the mount and its files and directories continue to be owned by the relevant host users and groups, which do not exist in the container, and thus show up under the wildcard UID 65534 (nobody). If such bind mounts are created, it is recommended to make them read-only, using --bind-ro=. Alternatively you can use the "idmap" mount option to map the filesystem IDs. Added in version 198. --bind-user= Binds the home directory of the specified user on the host into the container. Takes the name of an existing user on the host as argument. May be used multiple times to bind multiple users into the container. This does three things: 1. The user's home directory is bind mounted from the host into /run/host/home/. 2. An additional UID/GID mapping is added that maps the host user's UID/GID to a container UID/GID, allocated from the 60514...60577 range. 3. A JSON user and group record is generated in /run/userdb/ that describes the mapped user. It contains a minimized representation of the host's user record, adjusted to the UID/GID and home directory path assigned to the user in the container. The nss-systemd(8) glibc NSS module will pick up these records from there and make them available in the container's user/group databases. The combination of the three operations above ensures that it is possible to log into the container using the same account information as on the host. The user is only mapped transiently, while the container is running, and the mapping itself does not result in persistent changes to the container (except maybe for log messages generated at login time, and similar). Note that in particular the UID/GID assignment in the container is not made persistently. If the user is mapped transiently, it is best to not allow the user to make persistent changes to the container. 
If the user leaves files or directories owned by the user, and those UIDs/GIDs are reused during later container invocations (possibly with a different --bind-user= mapping), those files and directories will be accessible to the "new" user. The user/group record mapping only works if the container contains systemd 249 or newer, with nss-systemd properly configured in nsswitch.conf. See nss-systemd(8) for details. Note that the user record propagated from the host into the container will contain the UNIX password hash of the user, so that seamless logins in the container are possible. If the container is less trusted than the host it's hence important to use a strong UNIX password hash function (e.g. yescrypt or similar, with the "$y$" hash prefix). When binding a user from the host into the container checks are executed to ensure that the username is not yet known in the container. Moreover, it is checked that the UID/GID allocated for it is not currently defined in the user/group databases of the container. Both checks directly access the container's /etc/passwd and /etc/group, and thus might not detect existing accounts in other databases. This operation is only supported in combination with --private-users=/-U. Added in version 249. --inaccessible= Make the specified path inaccessible in the container. This over-mounts the specified path (which must exist in the container) with a file node of the same type that is empty and has the most restrictive access mode supported. This is an effective way to mask files, directories and other file system objects from the container payload. This option may be used more than once, in which case all specified paths are masked. Added in version 242. --tmpfs= Mount a tmpfs file system into the container.
Takes a single absolute path argument that specifies where to mount the tmpfs instance to (in which case the directory access mode will be chosen as 0755, owned by root/root), or optionally a colon-separated pair of path and mount option string that is used for mounting (in which case the kernel default for access mode and owner will be chosen, unless otherwise specified). Backslash escapes are interpreted in the path, so "\:" may be used to embed colons in the path. Note that this option cannot be used to replace the root file system of the container with a temporary file system. However, the --volatile= option described below provides similar functionality, with a focus on implementing stateless operating system images. Added in version 214. --overlay=, --overlay-ro= Combine multiple directory trees into one overlay file system and mount it into the container. Takes a list of colon-separated paths to the directory trees to combine and the destination mount point. Backslash escapes are interpreted in the paths, so "\:" may be used to embed colons in the paths. If three or more paths are specified, then the last specified path is the destination mount point in the container, all paths specified before refer to directory trees on the host and are combined in the specified order into one overlay file system. The left-most path is hence the lowest directory tree, the second-to-last path the highest directory tree in the stacking order. If --overlay-ro= is used instead of --overlay=, a read-only overlay file system is created. If a writable overlay file system is created, all changes made to it are written to the highest directory tree in the stacking order, i.e. the second-to-last specified. If only two paths are specified, then the second specified path is used both as the top-level directory tree in the stacking order as seen from the host, as well as the mount point for the overlay file system in the container. At least two paths have to be specified. 
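The stacking order can be sketched by translating a hypothetical --overlay= argument into the corresponding overlayfs terms. This is an approximation for illustration only (systemd-nspawn derives the real mount options internally), using made-up paths:

```shell
# "--overlay=/lower1:/lower2:/upper:/dest": /lower1 is the lowest layer,
# /upper the writable top layer, /dest the mount point in the container.
spec="/lower1:/lower2:/upper:/dest"
IFS=: read -r low1 low2 upper dest <<EOF
$spec
EOF
# overlayfs lists lowerdir= layers topmost-first, i.e. in the opposite
# order of the --overlay= switch:
echo "lowerdir=$low2:$low1,upperdir=$upper"
echo "container mount point: $dest"
```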
The source paths may optionally be prefixed with "+" character. If so they are taken relative to the image's root directory. The uppermost source path may also be specified as an empty string, in which case a temporary directory below the host's /var/tmp/ is used. The directory is removed automatically when the container is shut down. This behaviour is useful in order to make read-only container directories writable while the container is running. For example, use "--overlay=+/var::/var" in order to automatically overlay a writable temporary directory on a read-only /var/ directory. If a source path is not absolute, it is resolved relative to the current working directory. For details about overlay file systems, see Overlay Filesystem[5]. Note that the semantics of overlay file systems are substantially different from normal file systems, in particular regarding reported device and inode information. Device and inode information may change for a file while it is being written to, and processes might see out-of-date versions of files at times. Note that this switch automatically derives the "workdir=" mount option for the overlay file system from the top-level directory tree, making it a sibling of it. It is hence essential that the top-level directory tree is not a mount point itself (since the working directory must be on the same file system as the top-most directory tree). Also note that the "lowerdir=" mount option receives the paths to stack in the opposite order of this switch. Note that this option cannot be used to replace the root file system of the container with an overlay file system. However, the --volatile= option described above provides similar functionality, with a focus on implementing stateless operating system images. Added in version 220. Input/Output Options --console=MODE Configures how to set up standard input, output and error output for the container payload, as well as the /dev/console device for the container. 
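The overlay stacking order described above can be sketched as follows. This is an illustrative, non-executable sketch: the paths and container name are hypothetical, and running it requires root plus an existing container tree.

```shell
# Hypothetical trees: /srv/base (lowest), /srv/extra (middle), /srv/top
# (top-most, receives all writes); /combined is the mount point inside
# the container.
systemd-nspawn -D /var/lib/machines/mycontainer \
    --overlay=/srv/base:/srv/extra:/srv/top:/combined

# This roughly corresponds to overlayfs mount options of the form
#   lowerdir=/srv/extra:/srv/base   (note: opposite order of the switch)
#   upperdir=/srv/top
#   workdir=<automatically derived sibling of /srv/top>
```

Using --overlay-ro= with the same arguments would mount the combined tree read-only instead, so no upperdir/workdir writes occur.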
Takes one of interactive, read-only, passive, pipe or autopipe. If interactive, a pseudo-TTY is allocated and made available as /dev/console in the container. It is then bi-directionally connected to the standard input and output passed to systemd-nspawn. read-only is similar but only the output of the container is propagated and no input from the caller is read. If passive, a pseudo TTY is allocated, but it is not connected anywhere. In pipe mode no pseudo TTY is allocated, but the standard input, output and error output file descriptors passed to systemd-nspawn are passed on as they are to the container payload, see the following paragraph. Finally, autopipe mode operates like interactive when systemd-nspawn is invoked on a terminal, and like pipe otherwise. Defaults to interactive if systemd-nspawn is invoked from a terminal, and read-only otherwise. In pipe mode, /dev/console will not exist in the container. This means that the container payload generally cannot be a full init system as init systems tend to require /dev/console to be available. On the other hand, in this mode container invocations can be used within shell pipelines. This is because pipe mode avoids intermediary pseudo TTYs, which do not permit independent bidirectional propagation of the end-of-file (EOF) condition, a propagation that is necessary for shell pipelines to work correctly. Note that the pipe mode should be used carefully, as passing arbitrary file descriptors to less trusted container payloads might open up unwanted interfaces for access by the container payload. For example, if a passed file descriptor refers to a TTY of some form, APIs such as TIOCSTI may be used to synthesize input that might be used for escaping the container. Hence pipe mode should only be used if the payload is sufficiently trusted or when the standard input/output/error output file descriptors are known safe, for example pipes. Added in version 242. --pipe, -P Equivalent to --console=pipe. Added in version 242. 
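As a sketch of the pipe mode described above, a container payload can participate in an ordinary host-side shell pipeline. The container path is hypothetical, and the command requires root plus a prepared container tree, so this is illustrative only.

```shell
# Run `tr` inside the container as one stage of a host pipeline; EOF on
# stdin propagates cleanly because no pseudo TTY sits in between.
echo "hello" | systemd-nspawn --quiet --pipe \
    -D /var/lib/machines/mycontainer /usr/bin/tr a-z A-Z
```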
Credentials --load-credential=ID:PATH, --set-credential=ID:VALUE Pass a credential to the container. These two options correspond to the LoadCredential= and SetCredential= settings in unit files. See systemd.exec(5) for details about these concepts, as well as the syntax of the option's arguments. Note: when systemd-nspawn runs as a systemd system service it can propagate the credentials it received via LoadCredential=/SetCredential= to the container payload. A systemd service manager running as PID 1 in the container can further propagate them to the services it itself starts. It is thus possible to easily propagate credentials from a parent service manager to a container manager service and from there into its payload. This can even be done recursively. In order to embed binary data into the credential data for --set-credential=, use C-style escaping (i.e. "\n" to embed a newline, or "\x00" to embed a NUL byte). Note that the invoking shell might already apply unescaping once, hence this might require double escaping! The systemd-sysusers.service(8) and systemd-firstboot(1) services read credentials configured this way for the purpose of configuring the container's root user's password and shell, as well as system locale, keymap and timezone during the first boot process of the container. This is particularly useful in combination with --volatile=yes where every single boot appears as first boot, since configuration applied to /etc/ is lost on container reboot cycles. See the respective man pages for details. Example: # systemd-nspawn -i image.raw \ --volatile=yes \ --set-credential=firstboot.locale:de_DE.UTF-8 \ --set-credential=passwd.hashed-password.root:'$y$j9T$yAuRJu1o5HioZAGDYPU5d.$F64ni6J2y2nNQve90M/p0ZP0ECP/qqzipNyaY9fjGpC' \ -b The above command line will invoke the specified image file image.raw in volatile mode, i.e. with empty /etc/ and /var/. 
The container payload will recognize this as a first boot, and will invoke systemd-firstboot.service, which then reads the two passed credentials to configure the system's initial locale and root password. Added in version 247. Other --no-pager Do not pipe output into a pager. -h, --help Print a short help text and exit. --version Print a short version string and exit. ENVIRONMENT top $SYSTEMD_LOG_LEVEL The maximum log level of emitted messages (messages with a higher log level, i.e. less important ones, will be suppressed). Either one of (in order of decreasing importance) emerg, alert, crit, err, warning, notice, info, debug, or an integer in the range 0...7. See syslog(3) for more information. $SYSTEMD_LOG_COLOR A boolean. If true, messages written to the tty will be colored according to priority. This setting is only useful when messages are written directly to the terminal, because journalctl(1) and other tools that display logs will color messages based on the log level on their own. $SYSTEMD_LOG_TIME A boolean. If true, console log messages will be prefixed with a timestamp. This setting is only useful when messages are written directly to the terminal or a file, because journalctl(1) and other tools that display logs will attach timestamps based on the entry metadata on their own. $SYSTEMD_LOG_LOCATION A boolean. If true, messages will be prefixed with a filename and line number in the source code where the message originates. Note that the log location is often attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TID A boolean. If true, messages will be prefixed with the current numerical thread ID (TID). Note that this information is attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TARGET The destination for log messages. 
One of console (log to the attached tty), console-prefixed (log to the attached tty but with prefixes encoding the log level and "facility", see syslog(3)), kmsg (log to the kernel circular log buffer), journal (log to the journal), journal-or-kmsg (log to the journal if available, and to kmsg otherwise), auto (determine the appropriate log target automatically, the default), and null (disable log output). $SYSTEMD_LOG_RATELIMIT_KMSG Whether to ratelimit kmsg or not. Takes a boolean. Defaults to "true". If disabled, systemd will not ratelimit messages written to kmsg. $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER. If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less(1) and more(1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value "cat" is equivalent to passing --no-pager. Note: if $SYSTEMD_PAGERSECURE is not set, $SYSTEMD_PAGER (as well as $PAGER) will be silently ignored. $SYSTEMD_LESS Override the options passed to less (by default "FRSXMK"). Users might want to change two options in particular: K This option instructs the pager to exit immediately when Ctrl+C is pressed. To allow less to handle Ctrl+C itself to switch back to the pager command prompt, unset this option. If the value of $SYSTEMD_LESS does not include "K", and the pager that is invoked is less, Ctrl+C will be ignored by the executable, and needs to be handled by the pager. X This option instructs the pager to not send termcap initialization and deinitialization strings to the terminal. It is set by default to allow command output to remain visible in the terminal even after the pager exits. Nevertheless, this prevents some pager functionality from working, in particular paged output cannot be scrolled with the mouse. See less(1) for more discussion. 
$SYSTEMD_LESSCHARSET Override the charset passed to less (by default "utf-8", if the invoking terminal is determined to be UTF-8 compatible). $SYSTEMD_PAGERSECURE Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if false, disabled. If $SYSTEMD_PAGERSECURE is not set at all, secure mode is enabled if the effective UID is not the same as the owner of the login session, see geteuid(2) and sd_pid_get_owner_uid(3). In secure mode, LESSSECURE=1 will be set when invoking the pager, and the pager shall disable commands that open or create new files or start new subprocesses. When $SYSTEMD_PAGERSECURE is not set at all, pagers which are not known to implement secure mode will not be used. (Currently only less(1) implements secure mode.) Note: when commands are invoked with elevated privileges, for example under sudo(8) or pkexec(1), care must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the pager may be enabled automatically as described above. Setting SYSTEMD_PAGERSECURE=0 or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note that if the $SYSTEMD_PAGER or $PAGER variables are to be honoured, $SYSTEMD_PAGERSECURE must be set too. It might be reasonable to completely disable the pager using --no-pager instead. $SYSTEMD_COLORS Takes a boolean argument. When true, systemd and related utilities will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can take one of the following special values: "16", "256" to restrict the use of colors to the base 16 or 256 ANSI colors, respectively. This can be specified to override the automatic decision based on $TERM and what the console is connected to. $SYSTEMD_URLIFY The value must be a boolean. Controls whether clickable links should be generated in the output for terminal emulators supporting this. 
This can be specified to override the decision that systemd makes based on $TERM and other conditions. EXAMPLES top Example 1. Download a Fedora image and start a shell in it # machinectl pull-raw --verify=no \ https://download.fedoraproject.org/pub/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.raw.xz \ Fedora-Cloud-Base-38-1.6.x86_64 # systemd-nspawn -M Fedora-Cloud-Base-38-1.6.x86_64 This downloads an image using machinectl(1) and opens a shell in it. Example 2. Build and boot a minimal Fedora distribution in a container # dnf -y --releasever=38 --installroot=/var/lib/machines/f38 \ --repo=fedora --repo=updates --setopt=install_weak_deps=False install \ passwd dnf fedora-release vim-minimal util-linux systemd systemd-networkd # systemd-nspawn -bD /var/lib/machines/f38 This installs a minimal Fedora distribution into the directory /var/lib/machines/f38 and then boots that OS in a namespace container. Because the installation is located underneath the standard /var/lib/machines/ directory, it is also possible to start the machine using systemd-nspawn -M f38. Example 3. Spawn a shell in a container of a minimal Debian unstable distribution # debootstrap unstable ~/debian-tree/ # systemd-nspawn -D ~/debian-tree/ This installs a minimal Debian unstable distribution into the directory ~/debian-tree/ and then spawns a shell from this image in a namespace container. debootstrap supports Debian[7], Ubuntu[8], and Tanglu[9] out of the box, so the same command can be used to install any of those. For other distributions from the Debian family, a mirror has to be specified, see debootstrap(8). Example 4. Boot a minimal Arch Linux distribution in a container # pacstrap -c ~/arch-tree/ base # systemd-nspawn -bD ~/arch-tree/ This installs a minimal Arch Linux distribution into the directory ~/arch-tree/ and then boots an OS in a namespace container in it. Example 5. 
Install the OpenSUSE Tumbleweed rolling distribution # zypper --root=/var/lib/machines/tumbleweed ar -c \ https://download.opensuse.org/tumbleweed/repo/oss tumbleweed # zypper --root=/var/lib/machines/tumbleweed refresh # zypper --root=/var/lib/machines/tumbleweed install --no-recommends \ systemd shadow zypper openSUSE-release vim # systemd-nspawn -M tumbleweed passwd root # systemd-nspawn -M tumbleweed -b Example 6. Boot into an ephemeral snapshot of the host system # systemd-nspawn -D / -xb This runs a copy of the host system in a snapshot which is removed immediately when the container exits. Hence, all file system changes made during runtime will be lost on shutdown. Example 7. Run a container with SELinux sandbox security contexts # chcon system_u:object_r:svirt_sandbox_file_t:s0:c0,c1 -R /srv/container # systemd-nspawn -L system_u:object_r:svirt_sandbox_file_t:s0:c0,c1 \ -Z system_u:system_r:svirt_lxc_net_t:s0:c0,c1 -D /srv/container /bin/sh Example 8. Run a container with an OSTree deployment # systemd-nspawn -b -i ~/image.raw \ --pivot-root=/ostree/deploy/$OS/deploy/$CHECKSUM:/sysroot \ --bind=+/sysroot/ostree/deploy/$OS/var:/var EXIT STATUS top The exit code of the program executed in the container is returned. SEE ALSO top systemd(1), systemd.nspawn(5), chroot(1), dnf(8), debootstrap(8), pacman(8), zypper(8), systemd.slice(5), machinectl(1), btrfs(8) NOTES top 1. Container Interface https://systemd.io/CONTAINER_INTERFACE 2. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification 3. OCI Runtime Specification https://github.com/opencontainers/runtime-spec/blob/master/spec.md 4. OSTree https://ostree.readthedocs.io/en/latest/ 5. Overlay Filesystem https://docs.kernel.org/filesystems/overlayfs.html 6. Fedora https://getfedora.org 7. Debian https://www.debian.org 8. Ubuntu https://www.ubuntu.com 9. Tanglu https://www.tanglu.org 10. Arch Linux https://www.archlinux.org 11. 
OpenSUSE Tumbleweed https://software.opensuse.org/distributions/tumbleweed COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-NSPAWN(1) Pages that refer to this page: coredumpctl(1), journalctl(1), machinectl(1), systemctl(1), systemd-cgls(1), systemd-detect-virt(1), systemd-dissect(1), systemd-firstboot(1), systemd-vmspawn(1), org.freedesktop.import1(5), repart.d(5), systemd.network(5), systemd.nspawn(5), systemd.directives(7), systemd.image-policy(7), systemd.index(7), systemd.net-naming-scheme(7), kernel-install(8), nss-mymachines(8), nss-systemd(8), systemd-importd.service(8), systemd-machined.service(8), systemd-sysext(8), systemd-sysusers(8), systemd-tmpfiles(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
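Several of the user- and mount-related switches described above can be combined in a single invocation. The following sketch is illustrative only: the container path and user name are hypothetical, and running it requires root and an existing container tree.

```shell
# Boot a container with a host user mapped in, one path masked, and a
# tmpfs mounted with explicit options. Note that --bind-user= requires
# user namespacing (--private-users=/-U).
systemd-nspawn -b -U -D /var/lib/machines/mycontainer \
    --bind-user=alice \
    --inaccessible=/srv/secret \
    --tmpfs=/var/cache:mode=0755
```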
# systemd-nspawn\n\n> Spawn a command or OS in a lightweight container.\n> More information: <https://www.freedesktop.org/software/systemd/man/latest/systemd-nspawn.html>.\n\n- Run a command in a container:\n\n`systemd-nspawn --directory {{path/to/container_root}}`\n\n- Run a full Linux-based OS in a container:\n\n`systemd-nspawn --boot --directory {{path/to/container_root}}`\n\n- Run the specified command as PID 2 in the container (as opposed to PID 1) using a stub init process:\n\n`systemd-nspawn --directory {{path/to/container_root}} --as-pid2`\n\n- Specify the machine name and hostname:\n\n`systemd-nspawn --machine={{container_name}} --hostname={{container_host}} --directory {{path/to/container_root}}`\n
systemd-path
systemd-path(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-path(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO | COLOPHON SYSTEMD-PATH(1) systemd-path SYSTEMD-PATH(1) NAME top systemd-path - List and query system and user paths SYNOPSIS top systemd-path [OPTIONS...] [NAME...] DESCRIPTION top systemd-path may be used to query system and user paths. The tool makes many of the paths described in file-hierarchy(7) available for querying. When invoked without arguments, a list of known paths and their current values is shown. When at least one argument is passed, the path with this name is queried and its value shown. The variables whose name begins with "search-" do not refer to individual paths, but instead to a list of colon-separated search paths, in their order of precedence. OPTIONS top The following options are understood: --suffix= Printed paths are suffixed by the specified string. Added in version 215. --no-pager Do not pipe output into a pager. Added in version 255. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), file-hierarchy(7) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
systemd 255 SYSTEMD-PATH(1) Pages that refer to this page: sd_path_lookup(3), file-hierarchy(7), systemd.directives(7), systemd.index(7)
# systemd-path\n\n> List and query system and user paths.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-path.html>.\n\n- Display a list of known paths and their current values:\n\n`systemd-path`\n\n- Query the specified path and display its value:\n\n`systemd-path "{{path_name}}"`\n\n- Suffix printed paths with `suffix_string`:\n\n`systemd-path --suffix {{suffix_string}}`\n\n- Print a short version string and then exit:\n\n`systemd-path --version`\n
systemd-repart
systemd-repart(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training systemd-repart(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | EXAMPLE | SEE ALSO | NOTES | COLOPHON SYSTEMD-REPART(8) systemd-repart SYSTEMD-REPART(8) NAME top systemd-repart, systemd-repart.service - Automatically grow and add partitions SYNOPSIS top systemd-repart [OPTIONS...] [[BLOCKDEVICE]...] systemd-repart.service DESCRIPTION top systemd-repart grows and adds partitions to a partition table, based on the configuration files described in repart.d(5). If invoked with no arguments, it operates on the block device backing the root file system partition of the running OS, thus growing and adding partitions of the booted OS image itself. If --image= is used it will operate on the specified image file. When called in the initrd it operates on the block device backing /sysroot/ instead, i.e. on the block device the system will soon transition into. The systemd-repart.service service is generally run at boot in the initrd, in order to augment the partition table of the OS before its partitions are mounted. systemd-repart (mostly) operates in a purely incremental mode: it only grows existing and adds new partitions; it does not shrink, delete or move existing partitions. The service is intended to be run on every boot, but when it detects that the partition table already matches the installed repart.d/*.conf configuration files, it executes no operation. systemd-repart is intended to be used when deploying OS images, to automatically adjust them to the system they are running on, during first boot. This way the deployed image can be minimal in size and may be augmented automatically at boot when needed, taking possession of disk space available but not yet used. Specifically the following use cases are among those covered: The root partition may be grown to cover the whole available disk space. 
A /home/, swap or /srv/ partition can be added. A second (or third, ...) root partition may be added, to cover A/B style setups where a second version of the root file system is alternatingly used for implementing update schemes. The deployed image would carry only a single partition ("A") but on first boot a second partition ("B") for this purpose is automatically created. The algorithm executed by systemd-repart is roughly as follows: 1. The repart.d/*.conf configuration files are loaded and parsed, and ordered by filename (without the directory prefix). For each configuration file, drop-in files are looked for in directories with the same name as the configuration file with a suffix ".d" added. 2. The partition table already existing on the block device is loaded and parsed. 3. The existing partitions in the partition table are matched up with the repart.d/*.conf files by GPT partition type UUID. The first existing partition of a specific type is assigned the first configuration file declaring the same type. The second existing partition of a specific type is then assigned the second configuration file declaring the same type, and so on. After this iterative assigning is complete any left-over existing partitions that have no matching configuration file are considered "foreign" and left as they are. And any configuration files for which no partition currently exists are understood as a request to create such a partition. 4. Partitions that shall be created are now allocated on the disk, taking the size constraints and weights declared in the configuration files into account. Free space is used within the limits set by size and padding requests. In addition, existing partitions that should be grown are grown. New partitions are always appended to the end of the partition table, taking the first partition table slot whose index is greater than the indexes of all existing partitions. Partitions are never reordered and thus partition numbers remain stable. 
When partitions are created, they are placed in the smallest area of free space that is large enough to satisfy the size and padding limits. This means that partitions might have different order on disk than in the partition table. Note that this allocation happens in memory only, the partition table on disk is not updated yet. 5. All existing partitions for which configuration files exist and which currently have no GPT partition label set will be assigned a label, either explicitly configured in the configuration or if that's missing derived automatically from the partition type. The same is done for all partitions that are newly created. These assignments are done in memory only, too, the disk is not updated yet. 6. Similarly, all existing partitions for which configuration files exist and which currently have an all-zero identifying UUID will be assigned a new UUID. This UUID is cryptographically hashed from a common seed value together with the partition type UUID (and a counter in case multiple partitions of the same type are defined), see below. The same is done for all partitions that are created anew. These assignments are done in memory only, too, the disk is not updated yet. 7. Similarly, if the disk's volume UUID is all zeroes it is also initialized, also cryptographically hashed from the same common seed value. This is done in memory only too. 8. The disk space assigned to new partitions (i.e. what was previously free space) is now erased. Specifically, all file system signatures are removed, and if the device supports it, the BLKDISCARD I/O control command is issued to inform the hardware that the space is now empty. In addition any "padding" between partitions and at the end of the device is similarly erased. 9. The new partition table is finally written to disk. The kernel is asked to reread the partition table. 
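The matching and allocation steps above can be exercised with a minimal pair of drop-ins. The file names and the size constraint below are illustrative; see repart.d(5) for the full option set.

```ini
# /etc/repart.d/50-root.conf: matched against the existing root partition
# (by GPT partition type UUID), allowing it to grow into free space
[Partition]
Type=root

# /etc/repart.d/60-home.conf: if no partition of this type exists yet,
# one is created with at least the given size
[Partition]
Type=home
SizeMinBytes=512M
```

Running systemd-repart with the default --dry-run=yes then shows the planned changes without touching the partition table.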
As an exception to the normally strictly incremental operation, when called in a special "factory reset" mode, systemd-repart may also be used to erase existing partitions to reset an installation back to vendor defaults. This mode of operation is used when either the --factory-reset=yes switch is passed on the tool's command line, or the systemd.factory_reset=yes option is specified on the kernel command line, or the FactoryReset EFI variable (vendor UUID 8cf2644b-4b0b-428f-9387-6d876050dc67) is set to "yes". It alters the algorithm above slightly: between the 3rd and the 4th step above any partition marked explicitly via the FactoryReset= boolean is deleted, and the algorithm restarted, thus immediately re-creating these partitions anew empty. Note that systemd-repart by default only changes partition tables, it does not create or resize any file systems within these partitions, unless the Format= configuration option is specified. Also note that there are also separate mechanisms available for this purpose, for example systemd-growfs(8) and systemd-makefs. The UUIDs identifying the new partitions created (or assigned to existing partitions that have no UUID yet), as well as the disk as a whole are hashed cryptographically from a common seed value. This seed value is usually the machine-id(5) of the system, so that the machine ID reproducibly determines the UUIDs assigned to all partitions. If the machine ID cannot be read (or the user passes --seed=random, see below) the seed is generated randomly instead, so that the partition UUIDs are also effectively random. The seed value may also be set explicitly, formatted as UUID via the --seed= option. By hashing these UUIDs from a common seed images prepared with this tool become reproducible and the result of the algorithm above deterministic. The positional argument should specify the block device to operate on. 
Instead of a block device node path a regular file may be specified too, in which case the command operates on it like it would if a loopback block device node was specified with the file attached. If --empty=create is specified the specified path is created as a regular file, which is useful for generating disk images from scratch. OPTIONS top The following options are understood: --dry-run= Takes a boolean. If this switch is not specified --dry-run=yes is the implied default. Controls whether systemd-repart executes the requested re-partition operations or whether it should only show what it would do. Unless --dry-run=no is specified systemd-repart will not actually touch the device's partition table. Added in version 245. --empty= Takes one of "refuse", "allow", "require", "force" or "create". Controls how to operate on block devices that are entirely empty, i.e. carry no partition table/disk label yet. If this switch is not specified the implied default is "refuse". If "refuse" systemd-repart requires that the block device it shall operate on already carries a partition table and refuses operation if none is found. If "allow" the command will extend an existing partition table or create a new one if none exists. If "require" the command will create a new partition table if none exists so far, and refuse operation if one already exists. If "force" it will create a fresh partition table unconditionally, erasing the disk fully in effect. If "force" no existing partitions will be taken into account or survive the operation. Hence: use with care, this is a great way to lose all your data. If "create" a new loopback file is created under the path passed via the device node parameter, of the size indicated with --size=, see below. Added in version 245. --discard= Takes a boolean. If this switch is not specified --discard=yes is the implied default. 
Controls whether to issue the BLKDISCARD I/O control command on the space taken up by any added partitions or on the space in between them. Usually, it's a good idea to issue this request since it tells the underlying hardware that the covered blocks shall be considered empty, improving performance. If operating on a regular file instead of a block device node, a sparse file is generated. Added in version 245. --size= Takes a size in bytes, using the usual K, M, G, T suffixes, or the special value "auto". If used the specified device node path must refer to a regular file, which is then grown to the specified size if smaller, before any change is made to the partition table. If specified as "auto" the minimal size for the disk image is automatically determined (i.e. the minimal sizes of all partitions are summed up, taking space for additional metadata into account). This switch is not supported if the specified node is a block device. This switch has no effect if the file is already as large as the specified size or larger. The specified size is implicitly rounded up to multiples of 4096. When used with --empty=create this specifies the initial size of the loopback file to create. The --size=auto option takes the sizes of pre-existing partitions into account. However, it does not accommodate for partition tables that are not tightly packed: the configured partitions might still not fit into the backing device if empty space exists between pre-existing partitions (or before the first partition) that cannot be fully filled by partitions to grow or create. Also note that the automatic size determination does not take files or directories specified with CopyFiles= into account: operation might fail if the specified files or directories require more disk space than the configured per-partition minimal size limit. Added in version 246. --factory-reset= Takes a boolean. If this switch is not specified --factory-reset=no is the implied default. 
Controls whether to operate in "factory reset" mode, see above. If set to true this will remove all existing partitions marked with FactoryReset= set to yes early while executing the re-partitioning algorithm. Use with care, this is a great way to lose all your data. Note that partition files need to explicitly turn FactoryReset= on, as the option defaults to off. If no partitions are marked for factory reset this switch has no effect. Note that there are two other methods to request factory reset operation: via the kernel command line and via an EFI variable, see above. Added in version 245. --can-factory-reset If this switch is specified the disk is not re-partitioned. Instead, it is determined whether any existing partitions are marked with FactoryReset=. If there are, the tool will exit with exit status zero, otherwise non-zero. This switch may be used to quickly determine whether the running system supports a factory reset mechanism built on systemd-repart. Added in version 245. --root= Takes a path to a directory to use as root file system when searching for repart.d/*.conf files, for the machine ID file to use as seed and for the CopyFiles= and CopyBlocks= source files and directories. When invoked on the regular system this defaults to the host's root file system /. If invoked from the initrd this defaults to /sysroot/, so that the tool operates on the configuration and machine ID stored in the root file system later transitioned into itself. See --copy-source= for a more restricted option that only affects CopyFiles=. Added in version 245. --image= Takes a path to a disk image file or device to mount and use in a similar fashion to --root=, see above. Added in version 249. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. 
all recognized file systems in the image are used. --seed= Takes a UUID as argument or the special value random. If a UUID is specified the UUIDs to assign to partitions and the partition table itself are derived via cryptographic hashing from it. If not specified, an attempt is made to read the machine ID from the host (or more precisely, the root directory configured via --root=) and use it as seed instead, falling back to a randomized seed otherwise. Use --seed=random to force a randomized seed. Explicitly specifying the seed may be used to generate strictly reproducible partition tables. Added in version 245. --pretty= Takes a boolean argument. If this switch is not specified, it defaults to on when called from an interactive terminal and off otherwise. Controls whether to show a user friendly table and graphic illustrating the changes applied. Added in version 245. --definitions= Takes a file system path. If specified the *.conf files are read from the specified directory instead of searching in /usr/lib/repart.d/*.conf, /etc/repart.d/*.conf, /run/repart.d/*.conf. This parameter can be specified multiple times. Added in version 245. --key-file= Takes a file system path. Configures the encryption key to use when setting up LUKS2 volumes configured with the Encrypt=key-file setting in partition files. Should refer to a regular file containing the key, or an AF_UNIX stream socket in the file system. In the latter case a connection is made to it and the key read from it. If this switch is not specified the empty key (i.e. zero length key) is used. This behaviour is useful for setting up encrypted partitions during early first boot that receive their user-supplied password only in a later setup step. Added in version 247. --private-key= Takes a file system path. Configures the signing key to use when creating verity signature partitions with the Verity=signature setting in partition files. Added in version 252. --certificate= Takes a file system path. 
Configures the PEM encoded X.509 certificate to use when creating verity signature partitions with the Verity=signature setting in partition files. Added in version 252. --tpm2-device=, --tpm2-pcrs= Configures the TPM2 device and list of PCRs to use for LUKS2 volumes configured with the Encrypt=tpm2 option. These options take the same parameters as the identically named options to systemd-cryptenroll(1) and have the same effect on partitions where TPM2 enrollment is requested. Added in version 248. --tpm2-device-key= [PATH], --tpm2-seal-key-handle= [HANDLE] Configures a TPM2 SRK key to bind encryption to. See systemd-cryptenroll(1) for details on this option. Added in version 255. --tpm2-public-key= [PATH], --tpm2-public-key-pcrs= [PCR...] Configures a TPM2 signed PCR policy to bind encryption to. See systemd-cryptenroll(1) for details on these two options. Added in version 252. --tpm2-pcrlock= [PATH] Configures a TPM2 pcrlock policy to bind encryption to. See systemd-cryptenroll(1) for details on this option. Added in version 255. --split= [BOOL] Enables generation of split artifacts from partitions configured with SplitName=. If enabled, for each partition with SplitName= set, a separate output file containing just the contents of that partition is generated. The output filename consists of the loopback filename suffixed with the name configured with SplitName=. If the loopback filename ends with ".raw", the suffix is inserted before the ".raw" extension instead. Note that --split is independent from --dry-run. Even if --dry-run is enabled, split artifacts will still be generated from an existing image if --split is enabled. Added in version 252. --include-partitions= [PARTITION...], --exclude-partitions= [PARTITION...] These options specify which partition types systemd-repart should operate on. If --include-partitions= is used, all partitions that aren't specified are excluded. If --exclude-partitions= is used, all partitions that are specified are excluded. 
Both options take a comma separated list of GPT partition type UUIDs or identifiers (see Type= in repart.d(5)). Added in version 253. --defer-partitions= [PARTITION...] This option specifies for which partition types systemd-repart should defer. All partitions that are deferred using this option are still taken into account when calculating the sizes and offsets of other partitions, but aren't actually written to the disk image. The net effect of this option is that if you run systemd-repart again without this option, the missing partitions will be added as if they had not been deferred the first time systemd-repart was executed. Added in version 253. --sector-size= [BYTES] This option allows configuring the sector size of the image produced by systemd-repart. It takes a value that is a power of "2" between "512" and "4096". This option is useful when building images for disks that use a different sector size than the disk on which the image is produced. Added in version 253. --architecture= [ARCH] This option allows overriding the architecture used for architecture specific partition types. For example, if set to "arm64" a partition type of "root-x86-64" referenced in repart.d/ drop-ins will be patched dynamically to refer to "root-arm64" instead. Takes one of "alpha", "arc", "arm", "arm64", "ia64", "loongarch64", "mips-le", "mips64-le", "parisc", "ppc", "ppc64", "ppc64-le", "riscv32", "riscv64", "s390", "s390x", "tilegx", "x86" or "x86-64". Added in version 254. --offline= [BOOL] Instructs systemd-repart to build the image offline. Takes a boolean or "auto". Defaults to "auto". If enabled, the image is built without using loop devices. This is useful to build images unprivileged or when loop devices are not available. If disabled, the image is always built using loop devices. 
If "auto", systemd-repart will build the image online if possible and fall back to building the image offline if loop devices are not available or cannot be accessed due to missing permissions. Added in version 254. --copy-from= [IMAGE] Instructs systemd-repart to synthesize partition definitions from the partition table in the given image. This option can be specified multiple times to synthesize definitions from each of the given images. The generated definitions will copy the partitions into the destination partition table. The copied partitions will have the same size, metadata and contents but might have a different partition number and might be located at a different offset in the destination partition table. These definitions can be combined with partition definitions read from regular partition definition files. The synthesized definitions take precedence over the definitions read from partition definition files. Added in version 255. --copy-source=PATH, -s PATH Specifies a source directory all CopyFiles= source paths shall be considered relative to. This is similar to --root=, but exclusively applies to the CopyFiles= setting. If --root= and --copy-source= are used in combination the former applies as usual, except for CopyFiles= where the latter takes precedence. Added in version 255. --make-ddi=TYPE Takes one of "sysext", "confext" or "portable". Generates a Discoverable Disk Image (DDI) for a system extension (sysext, see systemd-sysext(8) for details), configuration extension (confext) or portable service[1]. The generated image will consist of a signed Verity "erofs" file system as root partition. In this mode of operation the partition definitions in /usr/lib/repart.d/*.conf and related directories are not read, and --definitions= is not supported, as appropriate definitions for the selected DDI class will be chosen automatically. Must be used in conjunction with --copy-source= to specify the file hierarchy to populate the DDI with. 
The specified directory should contain an etc/ subdirectory if "confext" is selected. If "sysext" is selected it should contain either a usr/ or opt/ directory, or both. If "portable" is used a full OS file hierarchy can be provided. This option implies --empty=create, --size=auto and --seed=random (the latter two can be overridden). The private key and certificate for signing the DDI must be specified via the --private-key= and --certificate= switches. Added in version 255. -S, -C, -P Shortcuts for --make-ddi=sysext, --make-ddi=confext, --make-ddi=portable, respectively. Added in version 255. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. EXAMPLE top Example 1. Generate a configuration extension image The following creates a configuration extension DDI (confext) for an /etc/motd update. mkdir tree tree/etc tree/etc/extension-release.d echo "Hello World" > tree/etc/motd cat > tree/etc/extension-release.d/extension-release.my-motd <<EOF ID=fedora VERSION_ID=38 IMAGE_ID=my-motd IMAGE_VERSION=7 EOF systemd-repart -C --private-key=privkey.pem --certificate=cert.crt -s tree/ /var/lib/confexts/my-motd.confext.raw systemd-confext refresh The DDI generated that way may be applied to the system with systemd-confext(1). SEE ALSO top systemd(1), repart.d(5), machine-id(5), systemd-cryptenroll(1), portablectl(1), systemd-sysext(8) NOTES top 1. 
portable service https://systemd.io/PORTABLE_SERVICES COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-REPART(8) Pages that refer to this page: repart.d(5), sysupdate.d(5), systemd.directives(7), systemd.index(7), systemd-makefs@.service(8), systemd-sysupdate(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
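The implicit rounding of --size= values to multiples of 4096 described in the OPTIONS section above can be reproduced with plain shell arithmetic. A minimal sketch (the function name is illustrative, not part of systemd-repart):

```shell
# Round a byte count up to the next multiple of 4096, as systemd-repart
# does implicitly for --size= (illustrative re-implementation).
round_up_4096() {
    echo $(( ($1 + 4095) / 4096 * 4096 ))
}

round_up_4096 1        # prints 4096
round_up_4096 4096     # prints 4096
round_up_4096 10000    # prints 12288
```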
# systemd-repart\n\n> Automatically grow and add partitions.\n> Grows and adds partitions based on the configuration files described in repart.d.\n> Does not automatically resize the file system on a partition. See systemd-growfs to extend the file system.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-repart.html>.\n\n- Grow the root partition (/) to all available disk space:\n\n`systemd-repart`\n\n- View changes without applying:\n\n`systemd-repart --dry-run=yes`\n\n- Grow root partition size to 10 gigabytes:\n\n`systemd-repart --size=10G --root /`\n
systemd-run
systemd-run(1) - Linux manual page SYSTEMD-RUN(1) systemd-run SYSTEMD-RUN(1) NAME top systemd-run - Run programs in transient scope units, service units, or path-, socket-, or timer-triggered service units SYNOPSIS top systemd-run [OPTIONS...] COMMAND [ARGS...] systemd-run [OPTIONS...] [PATH OPTIONS...] {COMMAND} [ARGS...] systemd-run [OPTIONS...] [SOCKET OPTIONS...] {COMMAND} [ARGS...] systemd-run [OPTIONS...] [TIMER OPTIONS...] {COMMAND} [ARGS...] DESCRIPTION top systemd-run may be used to create and start a transient .service or .scope unit and run the specified COMMAND in it. It may also be used to create and start a transient .path, .socket, or .timer unit that activates a .service unit when triggered (or, for a timer, when it elapses). If a command is run as a transient service unit, it will be started and managed by the service manager like any other service, and thus shows up in the output of systemctl list-units like any other unit. It will run in a clean and detached execution environment, with the service manager as its parent process. In this mode, systemd-run will start the service asynchronously in the background and return after the command has begun execution (unless --no-block or --wait are specified, see below). If a command is run as a transient scope unit, it will be executed by systemd-run itself as parent process and will thus inherit the execution environment of the caller. However, the processes of the command are managed by the service manager similarly to normal services, and will show up in the output of systemctl list-units. Execution in this case is synchronous, and will return only when the command finishes. This mode is enabled via the --scope switch (see below). 
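The environment difference between the two modes can be illustrated outside systemd: a transient service starts from a clean, detached environment, while a scope inherits the caller's. This sketch uses `env -i` as an analogy only; it does not involve systemd itself, and the variable name is illustrative:

```shell
# Illustrative analogy: a transient *service* runs in a clean, detached
# environment, while a *scope* inherits the caller's environment.
export DEMO_VAR="from-caller"

# Scope-like: the child shell inherits the caller's variables.
sh -c 'echo "scope-like:   ${DEMO_VAR:-unset}"'              # prints "scope-like:   from-caller"

# Service-like: `env -i` scrubs the environment before spawning the child.
env -i /bin/sh -c 'echo "service-like: ${DEMO_VAR:-unset}"'  # prints "service-like: unset"
```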
If a command is run with path, socket, or timer options such as --on-calendar= (see below), a transient path, socket, or timer unit is created alongside the service unit for the specified command. Only the transient path, socket, or timer unit is started immediately, the transient service unit will be triggered by the path, socket, or timer unit. If the --unit= option is specified, the COMMAND may be omitted. In this case, systemd-run creates only a .path, .socket, or .timer unit that triggers the specified unit. By default, services created with systemd-run default to the simple type, see the description of Type= in systemd.service(5) for details. Note that when this type is used, the service manager (and thus the systemd-run command) considers service start-up successful as soon as the fork() for the main service process succeeded, i.e. before the execve() is invoked, and thus even if the specified command cannot be started. Consider using the exec service type (i.e. --property=Type=exec) to ensure that systemd-run returns successfully only if the specified command line has been successfully started. After systemd-run passes the command to the service manager, the manager performs variable expansion. This means that dollar characters ("$") which should not be expanded need to be escaped as "$$". Expansion can also be disabled using --expand-environment=no. OPTIONS top The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. Added in version 226. --scope Create a transient .scope unit instead of the default transient .service unit (see above). Added in version 206. --unit=, -u Use this unit name instead of an automatically generated one. Added in version 206. --property=, -p Sets a property on the scope or service unit that is created. This option takes an assignment in the same format as systemctl(1)'s set-property command. Added in version 211. 
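The "$" escaping described above (doubling each dollar so the manager's expansion yields a literal "$") can be done mechanically before handing a command string to systemd-run. A small sketch; the command string is illustrative:

```shell
# Double every "$" so the service manager's variable expansion produces
# literal dollar signs (sketch only).
cmd='echo price: $5 for $USER'
escaped=$(printf '%s' "$cmd" | sed 's/\$/$$/g')
printf '%s\n' "$escaped"    # prints: echo price: $$5 for $$USER

# The escaped string could then be handed to the manager, e.g.:
#   systemd-run --wait sh -c "$escaped"
```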
--description= Provide a description for the service, scope, path, socket, or timer unit. If not specified, the command itself will be used as a description. See Description= in systemd.unit(5). Added in version 206. --slice= Make the new .service or .scope unit part of the specified slice, instead of system.slice (when running in --system mode) or the root slice (when running in --user mode). Added in version 206. --slice-inherit Make the new .service or .scope unit part of the slice the systemd-run itself has been invoked in. This option may be combined with --slice=, in which case the slice specified via --slice= is placed within the slice the systemd-run command is invoked in. Example: consider systemd-run being invoked in the slice foo.slice, and the --slice= argument is bar. The unit will then be placed under foo-bar.slice. Added in version 246. --expand-environment=BOOL Expand environment variables in command arguments. If enabled, environment variables specified as "${VARIABLE}" will be expanded in the same way as in commands specified via ExecStart= in units. With --scope, this expansion is performed by systemd-run itself, and in other cases by the service manager that spawns the command. Note that this is similar to, but not the same as variable expansion in bash(1) and other shells. The default is to enable this option in all cases, except for --scope where it is disabled by default, for backward compatibility reasons. Note that this will be changed in a future release, where it will be switched to enabled by default as well. See systemd.service(5) for a description of variable expansion. Disabling variable expansion is useful if the specified command includes or may include a "$" sign. Added in version 254. -r, --remain-after-exit After the service process has terminated, keep the service around until it is explicitly stopped. This is useful to collect runtime information about the service after it finished running. 
Also see RemainAfterExit= in systemd.service(5). Added in version 207. --send-sighup When terminating the scope or service unit, send a SIGHUP immediately after SIGTERM. This is useful to indicate to shells and shell-like processes that the connection has been severed. Also see SendSIGHUP= in systemd.kill(5). Added in version 207. --service-type= Sets the service type. Also see Type= in systemd.service(5). This option has no effect in conjunction with --scope. Defaults to simple. Added in version 211. --uid=, --gid= Runs the service process under the specified UNIX user and group. Also see User= and Group= in systemd.exec(5). Added in version 211. --nice= Runs the service process with the specified nice level. Also see Nice= in systemd.exec(5). Added in version 211. --working-directory= Runs the service process with the specified working directory. Also see WorkingDirectory= in systemd.exec(5). Added in version 240. --same-dir, -d Similar to --working-directory=, but uses the current working directory of the caller for the service to execute. Added in version 240. -E NAME[=VALUE], --setenv=NAME[=VALUE] Runs the service process with the specified environment variable set. This parameter may be used more than once to set multiple variables. When "=" and VALUE are omitted, the value of the variable with the same name in the program environment will be used. Also see Environment= in systemd.exec(5). Added in version 211. --pty, -t When invoking the command, the transient service connects its standard input, output and error to the terminal systemd-run is invoked on, via a pseudo TTY device. This allows running programs that expect interactive user input/output as services, such as interactive command shells. Note that machinectl(1)'s shell command is usually a better alternative for requesting a new, interactive login session on the local host or a local container. See below for details on how this switch combines with --pipe. Added in version 219. 
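A program started with --pty sees a terminal on its standard streams, while one started with --pipe typically does not. That distinction can be probed from shell with `test -t`; this is a sketch of the kind of check involved, not systemd's actual code:

```shell
# Report whether stdin is a terminal, as a --pty service would see,
# or not, as under --pipe (sketch only).
sh -c '[ -t 0 ] && echo tty || echo pipe'             # depends on how this is run

# With stdin explicitly redirected, the check reliably reports "pipe":
sh -c '[ -t 0 ] && echo tty || echo pipe' </dev/null  # prints "pipe"
```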
--pipe, -P If specified, standard input, output, and error of the transient service are inherited from the systemd-run command itself. This allows systemd-run to be used within shell pipelines. Note that this mode is not suitable for interactive command shells and similar, as the service process will not become a TTY controller when invoked on a terminal. Use --pty instead in that case. When both --pipe and --pty are used in combination the more appropriate option is automatically determined and used. Specifically, when invoked with standard input, output and error connected to a TTY --pty is used, and otherwise --pipe. When this option is used the original file descriptors systemd-run receives are passed to the service processes as-is. If the service runs with different privileges than systemd-run, this means the service might not be able to re-open the passed file descriptors, due to normal file descriptor access restrictions. If the invoked process is a shell script that uses the echo "hello" >/dev/stderr construct for writing messages to stderr, this might cause problems, as this only works if stderr can be re-opened. To mitigate this use the construct echo "hello" >&2 instead, which is mostly equivalent and avoids this pitfall. Added in version 235. --shell, -S A shortcut for "--pty --same-dir --wait --collect --service-type=exec $SHELL", i.e. requests an interactive shell in the current working directory, running in service context, accessible with a single switch. Added in version 240. --quiet, -q Suppresses additional informational output while running. This is particularly useful in combination with --pty when it will suppress the initial message explaining how to terminate the TTY connection. Added in version 219. --on-active=, --on-boot=, --on-startup=, --on-unit-active=, --on-unit-inactive= Defines a monotonic timer relative to different starting points for starting the specified command. 
See OnActiveSec=, OnBootSec=, OnStartupSec=, OnUnitActiveSec= and OnUnitInactiveSec= in systemd.timer(5) for details. These options are shortcuts for --timer-property= with the relevant properties. These options may not be combined with --scope or --pty. Added in version 218. --on-calendar= Defines a calendar timer for starting the specified command. See OnCalendar= in systemd.timer(5). This option is a shortcut for --timer-property=OnCalendar=. This option may not be combined with --scope or --pty. Added in version 218. --on-clock-change, --on-timezone-change Defines a trigger based on system clock jumps or timezone changes for starting the specified command. See OnClockChange= and OnTimezoneChange= in systemd.timer(5). These options are shortcuts for --timer-property=OnClockChange=yes and --timer-property=OnTimezoneChange=yes. These options may not be combined with --scope or --pty. Added in version 242. --path-property=, --socket-property=, --timer-property= Sets a property on the path, socket, or timer unit that is created. This option is similar to --property=, but applies to the transient path, socket, or timer unit rather than the transient service unit created. This option takes an assignment in the same format as systemctl(1)'s set-property command. These options may not be combined with --scope or --pty. Added in version 218. --no-block Do not synchronously wait for the unit start operation to finish. If this option is not specified, the start request for the transient unit will be verified, enqueued and systemd-run will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued. This option may not be combined with --wait. Added in version 220. --wait Synchronously wait for the transient service to terminate. If this option is specified, the start request for the transient unit is verified, enqueued, and waited for. 
Subsequently the invoked unit is monitored, and it is waited until it is deactivated again (most likely because the specified command completed). On exit, terse information about the unit's runtime is shown, including total runtime (as well as CPU usage, if --property=CPUAccounting=1 was set) and the exit code and status of the main process. This output may be suppressed with --quiet. This option may not be combined with --no-block, --scope or the various path, socket, or timer options. Added in version 232. -G, --collect Unload the transient unit after it completed, even if it failed. Normally, without this option, all units that ran and failed are kept in memory until the user explicitly resets their failure state with systemctl reset-failed or an equivalent command. On the other hand, units that ran successfully are unloaded immediately. If this option is turned on the "garbage collection" of units is more aggressive, and unloads units regardless if they exited successfully or failed. This option is a shortcut for --property=CollectMode=inactive-or-failed, see the explanation for CollectMode= in systemd.unit(5) for further information. Added in version 236. --ignore-failure By default, if the specified command fails the invoked unit will be marked failed (though possibly still unloaded, see --collect=, above), and this is reported in the logs. If this switch is specified this is suppressed and any non-success exit status/code of the command is treated as success. Added in version 256. --background=COLOR Change the terminal background color to the specified ANSI color as long as the session lasts. The color specified should be an ANSI X3.64 SGR background color, i.e. strings such as "40", "41", ..., "47", "48;2;...", "48;5;...". See ANSI Escape Code (Wikipedia)[1] for details. Added in version 256. --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. 
This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. All command line arguments after the first non-option argument become part of the command line of the launched process. EXIT STATUS top On success, 0 is returned. If systemd-run failed to start the service, a non-zero return value will be returned. If systemd-run waits for the service to terminate, the return value will be propagated from the service. 0 will be returned on success, including all the cases where systemd considers a service to have exited cleanly, see the discussion of SuccessExitStatus= in systemd.service(5). EXAMPLES top Example 1. Logging environment variables provided by systemd to services # systemd-run env Running as unit: run-19945.service # journalctl -u run-19945.service Sep 08 07:37:21 bupkis systemd[1]: Starting /usr/bin/env... 
Sep 08 07:37:21 bupkis systemd[1]: Started /usr/bin/env. Sep 08 07:37:21 bupkis env[19948]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin Sep 08 07:37:21 bupkis env[19948]: LANG=en_US.UTF-8 Sep 08 07:37:21 bupkis env[19948]: BOOT_IMAGE=/vmlinuz-3.11.0-0.rc5.git6.2.fc20.x86_64 Example 2. Limiting resources available to a command # systemd-run -p IOWeight=10 updatedb This command invokes the updatedb(8) tool, but lowers the block I/O weight for it to 10. See systemd.resource-control(5) for more information on the IOWeight= property. Example 3. Running commands at a specified time The following command will touch a file after 30 seconds. # date; systemd-run --on-active=30 --timer-property=AccuracySec=100ms /bin/touch /tmp/foo Mon Dec 8 20:44:24 KST 2014 Running as unit: run-71.timer Will run service as unit: run-71.service # journalctl -b -u run-71.timer -- Journal begins at Fri 2014-12-05 19:09:21 KST, ends at Mon 2014-12-08 20:44:54 KST. -- Dec 08 20:44:38 container systemd[1]: Starting /bin/touch /tmp/foo. Dec 08 20:44:38 container systemd[1]: Started /bin/touch /tmp/foo. # journalctl -b -u run-71.service -- Journal begins at Fri 2014-12-05 19:09:21 KST, ends at Mon 2014-12-08 20:44:54 KST. -- Dec 08 20:44:48 container systemd[1]: Starting /bin/touch /tmp/foo... Dec 08 20:44:48 container systemd[1]: Started /bin/touch /tmp/foo. Example 4. Allowing access to the tty The following command invokes bash(1) as a service passing its standard input, output and error to the calling TTY. # systemd-run -t --send-sighup bash Example 5. Start screen as a user service $ systemd-run --scope --user screen Running scope as unit run-r14b0047ab6df45bfb45e7786cc839e76.scope. $ screen -ls There is a screen on: 492..laptop (Detached) 1 Socket in /var/run/screen/S-fatima. This starts the screen process as a child of the systemd --user process that was started by user@.service, in a scope unit. 
A systemd.scope(5) unit is used instead of a systemd.service(5) unit, because screen will exit when detaching from the terminal, and a service unit would be terminated. Running screen as a user unit has the advantage that it is not part of the session scope. If KillUserProcesses=yes is configured in logind.conf(5), the default, the session scope will be terminated when the user logs out of that session. The user@.service is started automatically when the user first logs in, and stays around as long as at least one login session is open. After the user logs out of the last session, user@.service and all services underneath it are terminated. This behavior is the default, when "lingering" is not enabled for that user. Enabling lingering means that user@.service is started automatically during boot, even if the user is not logged in, and that the service is not terminated when the user logs out. Enabling lingering allows the user to run processes without being logged in, for example to allow screen to persist after the user logs out, even if the session scope is terminated. In the default configuration, users can enable lingering for themselves: $ loginctl enable-linger Example 6. Variable expansion by the manager $ systemd-run -t echo "<${INVOCATION_ID}>" '<${INVOCATION_ID}>' <> <5d0149bfa2c34b79bccb13074001eb20> The first argument is expanded by the shell (double quotes), but the second one is not expanded by the shell (single quotes). echo(1) is called with ["/usr/bin/echo", "<>", "<${INVOCATION_ID}>"] as the argument array, and then systemd(1) generates ${INVOCATION_ID} and substitutes it in the command-line. This substitution could not be done on the client side, because the target ID that will be set for the service isn't known before the call is made. Example 7. Variable expansion and output redirection using a shell Variable expansion by systemd(1) can be disabled with --expand-environment=no. 
Disabling variable expansion can be useful if the command to execute contains dollar characters and escaping them would be inconvenient. For example, when a shell is used: $ systemd-run --expand-environment=no -t bash \ -c 'echo $SHELL $$ >/dev/stdout' /bin/bash 12345 The last argument is passed verbatim to the bash(1) shell which is started by the service unit. The shell expands "$SHELL" to the path of the shell, and "$$" to its process number, and then those strings are passed to the echo built-in and printed to standard output (which in this case is connected to the calling terminal). Example 8. Return value $ systemd-run --user --wait true $ systemd-run --user --wait -p SuccessExitStatus=11 bash -c 'exit 11' $ systemd-run --user --wait -p SuccessExitStatus=SIGUSR1 --expand-environment=no \ bash -c 'kill -SIGUSR1 $$' Those three invocations will succeed, i.e. terminate with an exit code of 0. SEE ALSO top systemd(1), systemctl(1), systemd.unit(5), systemd.service(5), systemd.scope(5), systemd.slice(5), systemd.exec(5), systemd.resource-control(5), systemd.timer(5), systemd-mount(1), machinectl(1), uid0(1) NOTES top 1. ANSI Escape Code (Wikipedia) https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_(Select_Graphic_Rendition)_parameters COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-RUN(1) Pages that refer to this page: machinectl(1), systemd-mount(1), systemd-socket-activate(1), uid0(1), logind.conf(5), systemd.exec(5), systemd.scope(5), systemd.service(5), systemd.directives(7), systemd.index(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# systemd-run\n\n> Run programs in transient scope units, service units, or path-, socket-, or timer-triggered service units.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-run.html>.\n\n- Start a transient service:\n\n`sudo systemd-run {{command}} {{argument1 argument2 ...}}`\n\n- Start a transient service under the service manager of the current user (no privileges):\n\n`systemd-run --user {{command}} {{argument1 argument2 ...}}`\n\n- Start a transient service with a custom unit name and description:\n\n`sudo systemd-run --unit={{name}} --description={{string}} {{command}} {{argument1 argument2 ...}}`\n\n- Start a transient service with a custom environment variable that is not cleaned up after it terminates:\n\n`sudo systemd-run --remain-after-exit --setenv={{name}}={{value}} {{command}} {{argument1 argument2 ...}}`\n\n- Start a transient timer that periodically runs its transient service (see `man systemd.time` for calendar event format):\n\n`sudo systemd-run --on-calendar={{calendar_event}} {{command}} {{argument1 argument2 ...}}`\n\n- Share the terminal with the program (allowing interactive input/output) and make sure the execution details remain after the program exits:\n\n`systemd-run --remain-after-exit --pty {{command}}`\n\n- Set properties (e.g. CPUQuota, MemoryMax) of the process and wait until it exits:\n\n`systemd-run --property MemoryMax={{memory_in_bytes}} --property CPUQuota={{percentage_of_CPU_time}}% --wait {{command}}`\n\n- Use the program in a shell pipeline:\n\n`{{command1}} | systemd-run --pipe {{command2}} | {{command3}}`\n
systemd-socket-activate
systemd-socket-activate(1) - Linux manual page systemd-socket-activate(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | ENVIRONMENT VARIABLES | EXAMPLES | SEE ALSO | COLOPHON SYSTEMD-...-ACTIVATE(1) systemd-socket-activate SYSTEMD-...-ACTIVATE(1) NAME top systemd-socket-activate - Test socket activation of daemons SYNOPSIS top systemd-socket-activate [OPTIONS...] daemon [OPTIONS...] DESCRIPTION top systemd-socket-activate may be used to launch a socket-activated service program from the command line for testing purposes. It may also be used to launch individual instances of the service program per connection. The daemon to launch and its options should be specified after options intended for systemd-socket-activate. If the --inetd option is given, the socket file descriptor will be used as the standard input and output of the launched process. Otherwise, standard input and output will be inherited, and sockets will be passed through file descriptors 3 and higher. Sockets passed through $LISTEN_FDS to systemd-socket-activate will be passed through to the daemon, in the original positions. Other sockets specified with --listen= will use consecutive descriptors. By default, systemd-socket-activate listens on a stream socket; use --datagram and --seqpacket to listen on datagram or sequential packet sockets instead (see below). OPTIONS top -l address, --listen=address Listen on this address. Takes a string like "2000" or "127.0.0.1:2001". Added in version 230. -a, --accept Launch an instance of the service program for each connection and pass the connection socket. Added in version 230. -d, --datagram Listen on a datagram socket (SOCK_DGRAM), instead of a stream socket (SOCK_STREAM). May not be combined with --seqpacket. Added in version 230. --seqpacket Listen on a sequential packet socket (SOCK_SEQPACKET), instead of a stream socket (SOCK_STREAM). May not be combined with --datagram.
Added in version 230. --inetd Use the inetd protocol for passing file descriptors, i.e. as standard input and standard output, instead of the new-style protocol for passing file descriptors using $LISTEN_FDS (see above). Added in version 230. -E VAR[=VALUE], --setenv=VAR[=VALUE] Add this variable to the environment of the launched process. If VAR is followed by "=", assume that it is a variable-value pair. Otherwise, obtain the value from the environment of systemd-socket-activate itself. Added in version 230. --fdname=NAME[:NAME...] Specify names for the file descriptors passed. This is equivalent to setting FileDescriptorName= in socket unit files, and enables use of sd_listen_fds_with_names(3). Multiple entries may be specified using separate options or by separating names with colons (":") in one option. In case more names are given than descriptors, superfluous ones will be ignored. In case fewer names are given than descriptors, the remaining file descriptors will be unnamed. Added in version 230. -h, --help Print a short help text and exit. --version Print a short version string and exit. ENVIRONMENT VARIABLES top $LISTEN_FDS, $LISTEN_PID, $LISTEN_FDNAMES See sd_listen_fds(3). Added in version 230. $SYSTEMD_LOG_TARGET, $SYSTEMD_LOG_LEVEL, $SYSTEMD_LOG_TIME, $SYSTEMD_LOG_COLOR, $SYSTEMD_LOG_LOCATION Same as in systemd(1). Added in version 230. EXAMPLES top Example 1. Run an echo server on port 2000 $ systemd-socket-activate -l 2000 --inetd -a cat Example 2. Run a socket-activated instance of systemd-journal-gatewayd(8) $ systemd-socket-activate -l 19531 /usr/lib/systemd/systemd-journal-gatewayd SEE ALSO top systemd(1), systemd.socket(5), systemd.service(5), systemd-run(1), sd_listen_fds(3), sd_listen_fds_with_names(3), cat(1) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd.
systemd 255 SYSTEMD-...-ACTIVATE(1) Pages that refer to this page: systemd.directives(7), systemd.index(7)
# systemd-socket-activate\n\n> Test socket activation of daemons by listening on sockets and launching a program when they receive a connection.\n> More information: <https://www.freedesktop.org/software/systemd/man/latest/systemd-socket-activate.html>.\n\n- Listen on a port and launch a daemon, passing the listening socket via `$LISTEN_FDS`:\n\n`systemd-socket-activate --listen={{2000}} {{path/to/daemon}}`\n\n- Launch a separate instance of the program for each connection, passing the connection socket to it:\n\n`systemd-socket-activate --listen={{2000}} --accept {{path/to/daemon}}`\n\n- Pass the connection socket as `stdin`/`stdout` using the inetd protocol (e.g. a simple echo server):\n\n`systemd-socket-activate --listen={{2000}} --inetd --accept {{cat}}`\n\n- Listen on a datagram (UDP) socket instead of a stream (TCP) socket:\n\n`systemd-socket-activate --listen={{2000}} --datagram {{path/to/daemon}}`\n\n- Pass an environment variable to the launched process:\n\n`systemd-socket-activate --listen={{2000}} --setenv={{NAME}}={{value}} {{path/to/daemon}}`\n
systemd-stdio-bridge
systemd-stdio-bridge(1) - Linux manual page systemd-stdio-bridge(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-STDIO-BRIDGE(1) systemd-stdio-bridge SYSTEMD-STDIO-BRIDGE(1) NAME top systemd-stdio-bridge - D-Bus proxy SYNOPSIS top systemd-stdio-bridge [OPTIONS...] DESCRIPTION top systemd-stdio-bridge implements a proxy between STDIN/STDOUT and a D-Bus bus. It expects to receive an open connection via STDIN/STDOUT when started, and will create a new connection to the specified bus. It will then forward messages between the two connections. This program is suitable for socket activation: the first connection may be a pipe or a socket and must be passed as either standard input, or as an open file descriptor according to the protocol described in sd_listen_fds(3). The second connection will be made by default to the local system bus, but this can be influenced by the --user, --system, --machine=, and --bus-path= options described below. sd-bus(3) uses systemd-stdio-bridge to forward D-Bus connections over ssh(1), or to connect to the bus of a different user, see sd_bus_set_address(3). OPTIONS top The following options are understood: --user Talk to the service manager of the calling user, rather than the service manager of the system. --system Talk to the service manager of the system. This is the implied default. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user.
If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -p PATH, --bus-path=PATH Path to the bus address. Default: "unix:path=/run/dbus/system_bus_socket" Added in version 251. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top dbus-daemon(1), dbus-broker(1), D-Bus[1], systemd(1) NOTES top 1. D-Bus https://www.freedesktop.org/wiki/Software/dbus systemd 255 SYSTEMD-STDIO-BRIDGE(1) Pages that refer to this page: systemd.directives(7), systemd.index(7)
# systemd-stdio-bridge\n\n> Implement a proxy between `stdin`/`stdout` and a D-Bus bus.\n> Note: It expects to receive an open connection via `stdin`/`stdout` when started, and will create a new connection to the specified bus.\n> More information: <https://www.freedesktop.org/software/systemd/man/latest/systemd-stdio-bridge.html>.\n\n- Forward `stdin`/`stdout` to the local system bus:\n\n`systemd-stdio-bridge`\n\n- Forward `stdin`/`stdout` to the bus of the calling user instead of the system bus:\n\n`systemd-stdio-bridge --user`\n\n- Forward `stdin`/`stdout` to the system bus of a specific local container:\n\n`systemd-stdio-bridge --machine={{mycontainer}}`\n\n- Forward `stdin`/`stdout` to a custom D-Bus address:\n\n`systemd-stdio-bridge --bus-path=unix:path={{/custom/dbus/socket}}`\n
systemd-sysext
systemd-sysext(8) - Linux manual page systemd-sysext(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | USES | COMMANDS | OPTIONS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-SYSEXT(8) systemd-sysext SYSTEMD-SYSEXT(8) NAME top systemd-sysext, systemd-sysext.service, systemd-confext, systemd-confext.service - Activates System Extension Images SYNOPSIS top systemd-sysext [OPTIONS...] COMMAND systemd-sysext.service systemd-confext [OPTIONS...] COMMAND systemd-confext.service DESCRIPTION top systemd-sysext activates/deactivates system extension images. System extension images may dynamically at runtime extend the /usr/ and /opt/ directory hierarchies with additional files. This is particularly useful on immutable system images where a /usr/ and/or /opt/ hierarchy residing on a read-only file system shall be extended temporarily at runtime without making any persistent modifications. System extension images should contain files and directories similar in fashion to a regular operating system tree. When one or more system extension images are activated, their /usr/ and /opt/ hierarchies are combined via "overlayfs" with the same hierarchies of the host OS, and the host /usr/ and /opt/ overmounted with it ("merging"). When they are deactivated, the mount point is disassembled again revealing the unmodified original host version of the hierarchy ("unmerging"). Merging thus makes the extension's resources suddenly appear below the /usr/ and /opt/ hierarchies as if they were included in the base OS image itself. Unmerging makes them disappear again, leaving in place only the files that were shipped with the base OS image itself. Files and directories contained in the extension images outside of the /usr/ and /opt/ hierarchies are not merged, and hence have no effect when included in a system extension image.
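The merge/unmerge behavior just described is layered precedence: extension layers sit above the base OS tree, and unmerging simply drops them again. A toy Python model (plain dicts standing in for directory trees; this illustrates lookup precedence only, not overlayfs internals, and the file names are made up):

```python
def merged_lookup(path, extensions, base):
    """Toy model of the merged /usr/ view: search the extension layers
    first (topmost wins), then fall back to the base OS tree."""
    for layer in extensions:
        if path in layer:
            return layer[path]
    return base[path]  # KeyError if no layer ships the path

# Hypothetical trees: the base image ships a tool, an extension overlays
# a debug build of it and adds gdb.
base = {"/usr/bin/tool": "v1 (base OS)"}
debug_ext = {"/usr/bin/tool": "v1 (debug build)", "/usr/bin/gdb": "gdb"}

merged = merged_lookup("/usr/bin/tool", [debug_ext], base)   # extension wins
unmerged = merged_lookup("/usr/bin/tool", [], base)          # base version again
```

Unmerging restores the original view because nothing in the base tree was ever modified; only the list of upper layers changed.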
In particular, files under /etc/ and /var/ included in a system extension image will not appear in the respective hierarchies after activation. System extension images are strictly read-only, and the host /usr/ and /opt/ hierarchies become read-only too while they are activated. System extensions are supposed to be purely additive, i.e. they are supposed to include only files that do not exist in the underlying basic OS image. However, the underlying mechanism (overlayfs) also allows overlaying or removing files, but it is recommended not to make use of this. System extension images may be provided in the following formats: 1. Plain directories or btrfs subvolumes containing the OS tree 2. Disk images with a GPT disk label, following the Discoverable Partitions Specification[1] 3. Disk images lacking a partition table, with a naked Linux file system (e.g. erofs, squashfs or ext4) These image formats are the same ones that systemd-nspawn(1) supports via its --directory=/--image= switches and those that the service manager supports via RootDirectory=/RootImage=. Similar to them they may optionally carry Verity authentication information. System extensions are searched for in the directories /etc/extensions/, /run/extensions/ and /var/lib/extensions/. The first two listed directories are not suitable for carrying large binary images, however are still useful for carrying symlinks to them. The primary place for installing system extensions is /var/lib/extensions/. Any directories found in these search directories are considered directory based extension images; any files with the .raw suffix are considered disk image based extension images. When invoked in the initrd, the additional directory /.extra/sysext/ is included in the directories that are searched for extension images. Note, however, that by default a tighter image policy applies to images found there; see below.
This directory is populated by systemd-stub(7) with extension images found in the system's EFI System Partition. During boot OS extension images are activated automatically, if the systemd-sysext.service is enabled. Note that this service runs only after the underlying file systems where system extensions may be located have been mounted. This means they are not suitable for shipping resources that are processed by subsystems running in earliest boot. Specifically, OS extension images are not suitable for shipping system services or systemd-sysusers(8) definitions. See the Portable Services[2] page for a simple mechanism for shipping system services in disk images, in a similar fashion to OS extensions. Note the different isolation on these two mechanisms: while system extensions directly extend the underlying OS image with additional files that appear very much as if they were shipped in the OS image itself and thus imply no security isolation, portable services imply service level sandboxing in one way or another. The systemd-sysext.service service is guaranteed to finish start-up before basic.target is reached; i.e. at the time regular services initialize (those which do not use DefaultDependencies=no), the files and directories system extensions provide are available in /usr/ and /opt/ and may be accessed. Note that there is no concept of enabling/disabling installed system extension images: all installed extension images are automatically activated at boot. However, you can place an empty directory named like the extension (no .raw) in /etc/extensions/ to "mask" an extension with the same name in a system folder with lower precedence. A simple mechanism for version compatibility is enforced: a system extension image must carry a /usr/lib/extension-release.d/extension-release.NAME file, which must match its image name, that is compared with the host os-release file: the contained ID= fields have to match unless "_any" is set for the extension.
If the extension ID= is not "_any", the SYSEXT_LEVEL= field (if defined) has to match. If the latter is not defined, the VERSION_ID= field has to match instead. If the extension defines the ARCHITECTURE= field and the value is not "_any" it has to match the kernel's architecture reported by uname(2), but the architecture identifiers used are the same as for ConditionArchitecture= described in systemd.unit(5). EXTENSION_RELOAD_MANAGER= can be set to 1 if the extension requires a service manager reload after application of the extension. Note that, for the reasons mentioned earlier, Portable Services[2] remain the recommended way to ship system services. System extensions should not ship a /usr/lib/os-release file (as that would be merged into the host /usr/ tree, overriding the host OS version data, which is not desirable). The extension-release file follows the same format and semantics, and carries the same content, as the os-release file of the OS, but it describes the resources carried in the extension image. The systemd-confext concept follows the same principle as the systemd-sysext(8) functionality but instead of working on /usr and /opt, confext will extend only /etc. Files and directories contained in the confext images outside of the /etc/ hierarchy are not merged, and hence have no effect when included in the image. Formats for these images are the same as for sysext images. The merged hierarchy will be mounted with "nosuid" and (if not disabled via --noexec=false) "noexec". Confexts are looked for in the directories /run/confexts/, /var/lib/confexts/, /usr/lib/confexts/ and /usr/local/lib/confexts/. The first listed directory is not suitable for carrying large binary images, however is still useful for carrying symlinks to them. The primary place for installing configuration extensions is /var/lib/confexts/.
Any directories found in these search directories are considered directory based confext images; any files with the .raw suffix are considered disk image based confext images. Again, just like sysext images, the confext images will contain a /etc/extension-release.d/extension-release.NAME file, which must match the image name (with the usual escape hatch of the user.extension-release.strict xattr(7)), and again with content being one or more of ID=, VERSION_ID=, and CONFEXT_LEVEL=. Confext images will then be checked and matched against the base OS layer. USES top The primary use case for system extension images is immutable environments where debugging and development tools shall optionally be made available, but not included in the immutable base OS image itself (e.g. strace(1) and gdb(1) shall be an optionally installable addition in order to make debugging/development easier). System extension images should not be misunderstood as a generic software packaging framework, as no dependency scheme is available: system extensions should carry all files they need themselves, except for those already shipped in the underlying host system image. Typically, system extension images are built at the same time as the base OS image within the same build system. Another use case for the system extension concept is temporarily overriding OS supplied resources with newer ones, for example to install a locally compiled development version of some low-level component over the immutable OS image without doing a full OS rebuild or modifying the nominally immutable image. (e.g. "install" a locally built package with DESTDIR=/var/lib/extensions/mytest make install && systemd-sysext refresh, making it available in /usr/ as if it was installed in the OS image itself.) This case works regardless of whether the underlying host /usr/ is managed as an immutable disk image or is a traditional package manager controlled (i.e. writable) tree.
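The version-compatibility rule described above (ID= must match unless "_any", then SYSEXT_LEVEL= if the extension defines it, otherwise VERSION_ID=) can be sketched in a few lines. This is a simplified reading of the rule, not systemd's implementation; ARCHITECTURE= handling is omitted:

```python
def parse_release(text):
    """Parse os-release/extension-release style KEY=VALUE lines into a dict."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

def sysext_compatible(host, ext):
    """Apply the matching rule: ID= must match unless the extension sets
    ID=_any; then SYSEXT_LEVEL= is compared if the extension defines it,
    otherwise VERSION_ID= is compared."""
    if ext.get("ID") == "_any":
        return True
    if ext.get("ID") != host.get("ID"):
        return False
    if "SYSEXT_LEVEL" in ext:
        return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
    return ext.get("VERSION_ID") == host.get("VERSION_ID")
```

For example, an extension shipping `ID=fedora` and `VERSION_ID=40` in its extension-release file would match a host whose os-release carries the same two values, while `ID=_any` matches any host; `--force` is the documented way to bypass this check entirely.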
For the confext case, the OSConfig project aims to perform runtime reconfiguration of OS services. Sometimes, there is a need to swap certain configuration parameter values or restart only a specific service without deployment of new code or a complete OS deployment. In other words, we want to be able to tie the most frequently configured options to runtime updateable flags that can be changed without a system reboot. This will help reduce servicing times when there is a need for changing the OS configuration. COMMANDS top The following commands are understood by both the sysext and confext concepts: status When invoked without any command verb, or when status is specified the current merge status is shown, separately (for both /usr/ and /opt/ of sysext and for /etc/ of confext). Added in version 248. merge Merges all currently installed system extension images into /usr/ and /opt/, by overmounting these hierarchies with an "overlayfs" file system combining the underlying hierarchies with those included in the extension images. This command will fail if the hierarchies are already merged. For confext, the merge happens into the /etc/ directory instead. Added in version 248. unmerge Unmerges all currently installed system extension images from /usr/ and /opt/ for sysext and /etc/, for confext, by unmounting the "overlayfs" file systems created by merge prior. Added in version 248. refresh A combination of unmerge and merge: if already mounted the existing "overlayfs" instance is unmounted temporarily, and then replaced by a new version. This command is useful after installing/removing system extension images, in order to update the "overlayfs" file system accordingly. If no system extensions are installed when this command is executed, the equivalent of unmerge is executed, without establishing any new "overlayfs" instance. Note that currently there's a brief moment where neither the old nor the new "overlayfs" file system is mounted. 
This implies that all resources supplied by a system extension will briefly disappear even if they exist continuously during the refresh operation. Added in version 248. list A brief list of installed extension images is shown. Added in version 248. -h, --help Print a short help text and exit. --version Print a short version string and exit. OPTIONS top --root= Operate relative to the specified root directory, i.e. establish the "overlayfs" mount not on the top-level host /usr/ and /opt/ hierarchies for sysext or /etc/ for confext, but below some specified root directory. Added in version 248. --force When merging system extensions into /usr/ and /opt/ for sysext and /etc/ for confext, ignore version incompatibilities, i.e. force merging regardless of whether the version information included in the images matches the host or not. Added in version 248. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on system extension disk images. If not specified defaults to "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent" for system extensions, i.e. only the root and /usr/ file systems in the image are used. For configuration extensions defaults to "root=verity+signed+encrypted+unprotected+absent". When run in the initrd and operating on a system extension image stored in the /.extra/sysext/ directory a slightly stricter policy is used by default: "root=signed+absent:usr=signed+absent", see above for details. Added in version 254. --noexec=BOOL When merging configuration extensions into /etc/ the "MS_NOEXEC" mount flag is used by default. This option can be used to disable it. Added in version 254. --no-reload When used with merge, unmerge or refresh, do not reload the daemon after executing the changes even if an extension that is applied requires a reload via EXTENSION_RELOAD_MANAGER= set to 1. Added in version 255.
--no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default). EXIT STATUS top On success, 0 is returned. SEE ALSO top systemd(1), systemd-nspawn(1), systemd-stub(7) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification 2. Portable Services https://systemd.io/PORTABLE_SERVICES systemd 255 SYSTEMD-SYSEXT(8) Pages that refer to this page: portablectl(1), systemd-cryptenroll(1), org.freedesktop.portable1(5), os-release(5), systemd.directives(7), systemd.image-policy(7), systemd.index(7), systemd-repart(8)
# systemd-sysext\n\n> Activate or deactivate system extension images.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-sysext.html>.\n\n- List installed extension images:\n\n`systemd-sysext list`\n\n- Merge system extension images into `/usr/` and `/opt/`:\n\n`systemd-sysext merge`\n\n- Check the current merge status:\n\n`systemd-sysext status`\n\n- Unmerge all currently installed system extension images from `/usr/` and `/opt/`:\n\n`systemd-sysext unmerge`\n\n- Refresh system extension images (a combination of `unmerge` and `merge`):\n\n`systemd-sysext refresh`\n
systemd-sysusers
systemd-sysusers(8) - Linux manual page systemd-sysusers(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | CREDENTIALS | EXIT STATUS | SEE ALSO | NOTES | COLOPHON SYSTEMD-SYSUSERS(8) systemd-sysusers SYSTEMD-SYSUSERS(8) NAME top systemd-sysusers, systemd-sysusers.service - Allocate system users and groups SYNOPSIS top systemd-sysusers [OPTIONS...] [CONFIGFILE...] systemd-sysusers.service DESCRIPTION top systemd-sysusers creates system users and groups, based on files in the format described in sysusers.d(5). If invoked with no arguments, it applies all directives from all files found in the directories specified by sysusers.d(5). When invoked with positional arguments, if option --replace=PATH is specified, arguments specified on the command line are used instead of the configuration file PATH. Otherwise, just the configuration specified by the command line arguments is executed. The string "-" may be specified instead of a filename to instruct systemd-sysusers to read the configuration from standard input. If the argument is a relative path, all configuration directories are searched for a matching file and the file found that has the highest priority is executed. If the argument is an absolute path, that file is used directly without searching of the configuration directories. OPTIONS top The following options are understood: --root=root Takes a directory path as an argument. All paths will be prefixed with the given alternate root path, including config search paths. Added in version 215. --image=image Takes a path to a disk image file or block device node. If specified all operations are applied to the file system in the indicated disk image. This is similar to --root= but operates on file systems stored in disk images or block devices.
The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. Added in version 247. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. --replace=PATH When this option is given, one or more positional arguments must be specified. All configuration files found in the directories listed in sysusers.d(5) will be read, and the configuration given on the command line will be handled instead of and with the same priority as the configuration file PATH. This option is intended to be used when package installation scripts are running and files belonging to that package are not yet available on disk, so their contents must be given on the command line, but the admin configuration might already exist and should be given higher priority. Example 1. RPM installation script for radvd echo 'u radvd - "radvd daemon"' | \ systemd-sysusers --replace=/usr/lib/sysusers.d/radvd.conf - This will create the radvd user as if /usr/lib/sysusers.d/radvd.conf was already on disk. An admin might override the configuration specified on the command line by placing /etc/sysusers.d/radvd.conf or even /etc/sysusers.d/00-overrides.conf. Note that this is the expanded form, and when used in a package, this would be written using a macro with "radvd" and a file containing the configuration line as arguments. Added in version 238. --dry-run Process the configuration and figure out what entries would be created, but don't actually write anything. Added in version 250. --inline Treat each positional argument as a separate configuration line instead of a file name. 
Added in version 238. --cat-config Copy the contents of config files to standard output. Before each file, the filename is printed as a comment. --tldr Copy the contents of config files to standard output. Only the "interesting" parts of the configuration files are printed, comments and empty lines are skipped. Before each file, the filename is printed as a comment. --no-pager Do not pipe output into a pager. -h, --help Print a short help text and exit. --version Print a short version string and exit. CREDENTIALS top systemd-sysusers supports the service credentials logic as implemented by ImportCredential=/LoadCredential=/SetCredential= (see systemd.exec(1) for details). The following credentials are used when passed in: passwd.hashed-password.user A UNIX hashed password string to use for the specified user, when creating an entry for it. This is particularly useful for the "root" user as it allows provisioning the default root password to use via a unit file drop-in or from a container manager passing in this credential. Note that setting this credential has no effect if the specified user account already exists. This credential is hence primarily useful in first boot scenarios or systems that are fully stateless and come up with an empty /etc/ on every boot. Added in version 249. passwd.plaintext-password.user Similar to "passwd.hashed-password.user" but expect a literal, plaintext password, which is then automatically hashed before used for the user account. If both the hashed and the plaintext credential are specified for the same user the former takes precedence. It's generally recommended to specify the hashed version; however in test environments with weaker requirements on security it might be easier to pass passwords in plaintext instead. Added in version 249. passwd.shell.user Specifies the shell binary to use for the specified account when creating it. Added in version 249. 
sysusers.extra The contents of this credential may contain additional lines to operate on. The credential contents should follow the same format as any other sysusers.d/ drop-in. If this credential is passed it is processed after all of the drop-in files read from the file system. Added in version 252. Note that by default the systemd-sysusers.service unit file is set up to inherit the "passwd.hashed-password.root", "passwd.plaintext-password.root", "passwd.shell.root" and "sysusers.extra" credentials from the service manager. Thus, when invoking a container with an unpopulated /etc/ for the first time it is possible to configure the root user's password to be "systemd" like this: # systemd-nspawn --image=... --set-credential=passwd.hashed-password.root:'$y$j9T$yAuRJu1o5HioZAGDYPU5d.$F64ni6J2y2nNQve90M/p0ZP0ECP/qqzipNyaY9fjGpC' ... Note again that the data specified in this credential is consulted only when creating an account for the first time, it may not be used for changing the password or shell of an account that already exists. Use mkpasswd(1) for generating UNIX password hashes from the command line. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), sysusers.d(5), Users, Groups, UIDs and GIDs on systemd systems[2], systemd.exec(1), mkpasswd(1) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification 2. Users, Groups, UIDs and GIDs on systemd systems https://systemd.io/UIDS-GIDS COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. 
(At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-SYSUSERS(8) Pages that refer to this page: systemd-firstboot(1), systemd-nspawn(1), sysusers.d(5), systemd.directives(7), systemd.index(7), systemd-sysext(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# systemd-sysusers\n\n> Create system users and groups.\n> If the config file is not specified, files in the `sysusers.d` directories are used.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-sysusers.html>.\n\n- Create users and groups from a specific configuration file:\n\n`systemd-sysusers {{path/to/file}}`\n\n- Process configuration files and print what would be done without actually doing anything:\n\n`systemd-sysusers --dry-run {{path/to/file}}`\n\n- Print the contents of all configuration files (before each file, its name is printed as a comment):\n\n`systemd-sysusers --cat-config`\n
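The sysusers.d line format used by the man page's radvd example can be sketched with a throwaway fragment. The user and group names below are hypothetical, and the actual systemd-sysusers invocations are commented out because they need systemd and root privileges; the runnable part only writes and inspects the fragment:

```shell
#!/bin/sh
# Hypothetical sysusers.d fragment: "u" lines declare a system user
# (type, name, UID or "-" for auto-allocation, GECOS description);
# "g" lines declare a group.
cat > /tmp/demo-sysusers.conf <<'EOF'
u demodaemon - "demo daemon user"
g demogroup -
EOF

# Preview what would be created, without writing anything
# (requires systemd; commented out so the sketch runs anywhere):
# systemd-sysusers --dry-run /tmp/demo-sysusers.conf

# The same directives could be piped in via "-", as in the man
# page's radvd example:
# cat /tmp/demo-sysusers.conf | systemd-sysusers --dry-run -

# Count the user directives in the fragment
grep -c '^u ' /tmp/demo-sysusers.conf
```

The final `grep` prints `1`, confirming the fragment contains one user directive alongside the group line.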
systemd-tmpfiles
SYSTEMD-TMPFILES(8) systemd-tmpfiles SYSTEMD-TMPFILES(8) NAME top systemd-tmpfiles, systemd-tmpfiles-setup.service, systemd-tmpfiles-setup-dev-early.service, systemd-tmpfiles-setup-dev.service, systemd-tmpfiles-clean.service, systemd-tmpfiles-clean.timer - Creates, deletes and cleans up volatile and temporary files and directories SYNOPSIS top systemd-tmpfiles [OPTIONS...] [CONFIGFILE...] System units: systemd-tmpfiles-setup.service systemd-tmpfiles-setup-dev-early.service systemd-tmpfiles-setup-dev.service systemd-tmpfiles-clean.service systemd-tmpfiles-clean.timer User units: systemd-tmpfiles-setup.service systemd-tmpfiles-clean.service systemd-tmpfiles-clean.timer DESCRIPTION top systemd-tmpfiles creates, deletes, and cleans up volatile and temporary files and directories, using the configuration file format and location specified in tmpfiles.d(5). It must be invoked with one or more options --create, --remove, and --clean, to select the respective subset of operations. By default, directives from all configuration files are applied. When invoked with --replace=PATH, arguments specified on the command line are used instead of the configuration file PATH. Otherwise, if one or more absolute filenames are passed on the command line, only the directives in these files are applied. If "-" is specified instead of a filename, directives are read from standard input. If only the basename of a configuration file is specified, all configuration directories as specified in tmpfiles.d(5) are searched for a matching file and the file found that has the highest priority is executed. 
System services (systemd-tmpfiles-setup.service, systemd-tmpfiles-setup-dev-early.service, systemd-tmpfiles-setup-dev.service, systemd-tmpfiles-clean.service) invoke systemd-tmpfiles to create system files and to perform system wide cleanup. Those services read administrator-controlled configuration files in tmpfiles.d/ directories. User services (systemd-tmpfiles-setup.service, systemd-tmpfiles-clean.service) also invoke systemd-tmpfiles, but it reads a separate set of files, which includes user-controlled files under ~/.config/user-tmpfiles.d/ and ~/.local/share/user-tmpfiles.d/, and administrator-controlled files under /usr/share/user-tmpfiles.d/. Users may use this to create and clean up files under their control, but the system instance performs global cleanup and is not influenced by user configuration. Note that this means a time-based cleanup configured in the system instance, such as the one typically configured for /tmp/, will thus also affect files created by the user instance if they are placed in /tmp/, even if the user instance's time-based cleanup is turned off. To re-apply settings after configuration has been modified, simply restart systemd-tmpfiles-clean.service, which will apply any settings which can be safely executed at runtime. To debug systemd-tmpfiles, it may be useful to invoke it directly from the command line with increased log level (see $SYSTEMD_LOG_LEVEL below). OPTIONS top The following options are understood: --create If this option is passed, all files and directories marked with f, F, w, d, D, v, p, L, c, b, m in the configuration files are created or written to. Files and directories marked with z, Z, t, T, a, and A have their ownership, access mode and security labels set. --clean If this option is passed, all files and directories with an age parameter configured will be cleaned up. 
--remove If this option is passed, the contents of directories marked with D or R, and files or directories themselves marked with r or R are removed unless an exclusive or shared BSD lock is taken on them (see flock(2)). --user Execute "user" configuration, i.e. tmpfiles.d files in user configuration directories. Added in version 236. --boot Also execute lines with an exclamation mark. Lines that are not safe to be executed on a running system may be marked in this way. systemd-tmpfiles is executed in early boot with --boot specified and will execute those lines. When invoked again later, it should be called without --boot. Added in version 209. --graceful Ignore configuration lines pertaining to unknown users or groups. This option is intended to be used in early boot before all users or groups have been created. Added in version 254. --prefix=path Only apply rules with paths that start with the specified prefix. This option can be specified multiple times. Added in version 212. --exclude-prefix=path Ignore rules with paths that start with the specified prefix. This option can be specified multiple times. Added in version 207. -E A shortcut for "--exclude-prefix=/dev --exclude-prefix=/proc --exclude-prefix=/run --exclude-prefix=/sys", i.e. exclude the hierarchies typically backed by virtual or memory file systems. This is useful in combination with --root=, if the specified directory tree contains an OS tree without these virtual/memory file systems mounted in, as it is typically not desirable to create any files and directories below these subdirectories if they are supposed to be overmounted during runtime. Added in version 247. --root=root Takes a directory path as an argument. All paths will be prefixed with the given alternate root path, including config search paths. When this option is used, the libc Name Service Switch (NSS) is bypassed for resolving users and groups. 
Instead the files /etc/passwd and /etc/group inside the alternate root are read directly. This means that users/groups not listed in these files will not be resolved, i.e. LDAP NIS and other complex databases are not considered. Consider combining this with -E to ensure the invocation does not create files or directories below mount points in the OS image operated on that are typically overmounted during runtime. Added in version 212. --image=image Takes a path to a disk image file or block device node. If specified all operations are applied to file system in the indicated disk image. This is similar to --root= but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name. Implies -E. Added in version 247. --image-policy=policy Takes an image policy string as argument, as per systemd.image-policy(7). The policy is enforced when operating on the disk image specified via --image=, see above. If not specified defaults to the "*" policy, i.e. all recognized file systems in the image are used. --replace=PATH When this option is given, one or more positional arguments must be specified. All configuration files found in the directories listed in tmpfiles.d(5) will be read, and the configuration given on the command line will be handled instead of and with the same priority as the configuration file PATH. This option is intended to be used when package installation scripts are running and files belonging to that package are not yet available on disk, so their contents must be given on the command line, but the admin configuration might already exist and should be given higher priority. Added in version 238. --cat-config Copy the contents of config files to standard output. 
Before each file, the filename is printed as a comment. --tldr Copy the contents of config files to standard output. Only the "interesting" parts of the configuration files are printed, comments and empty lines are skipped. Before each file, the filename is printed as a comment. --no-pager Do not pipe output into a pager. -h, --help Print a short help text and exit. --version Print a short version string and exit. It is possible to combine --create, --clean, and --remove in one invocation (in which case removal and cleanup are executed before creation of new files). For example, during boot the following command line is executed to ensure that all temporary and volatile directories are removed and created according to the configuration file: systemd-tmpfiles --remove --create CREDENTIALS top systemd-tmpfiles supports the service credentials logic as implemented by ImportCredential=/LoadCredential=/SetCredential= (see systemd.exec(1) for details). The following credentials are used when passed in: tmpfiles.extra The contents of this credential may contain additional lines to operate on. The credential contents should follow the same format as any other tmpfiles.d/ drop-in configuration file. If this credential is passed it is processed after all of the drop-in files read from the file system. The lines in the credential can hence augment existing lines of the OS, but not override them. Added in version 252. Note that by default the systemd-tmpfiles-setup.service unit file (and related unit files) is set up to inherit the "tmpfiles.extra" credential from the service manager. ENVIRONMENT top $SYSTEMD_LOG_LEVEL The maximum log level of emitted messages (messages with a higher log level, i.e. less important ones, will be suppressed). Either one of (in order of decreasing importance) emerg, alert, crit, err, warning, notice, info, debug, or an integer in the range 0...7. See syslog(3) for more information. $SYSTEMD_LOG_COLOR A boolean. 
If true, messages written to the tty will be colored according to priority. This setting is only useful when messages are written directly to the terminal, because journalctl(1) and other tools that display logs will color messages based on the log level on their own. $SYSTEMD_LOG_TIME A boolean. If true, console log messages will be prefixed with a timestamp. This setting is only useful when messages are written directly to the terminal or a file, because journalctl(1) and other tools that display logs will attach timestamps based on the entry metadata on their own. $SYSTEMD_LOG_LOCATION A boolean. If true, messages will be prefixed with a filename and line number in the source code where the message originates. Note that the log location is often attached as metadata to journal entries anyway. Including it directly in the message text can nevertheless be convenient when debugging programs. $SYSTEMD_LOG_TARGET The destination for log messages. One of console (log to the attached tty), console-prefixed (log to the attached tty but with prefixes encoding the log level and "facility", see syslog(3), kmsg (log to the kernel circular log buffer), journal (log to the journal), journal-or-kmsg (log to the journal if available, and to kmsg otherwise), auto (determine the appropriate log target automatically, the default), null (disable log output). $SYSTEMD_PAGER Pager to use when --no-pager is not given; overrides $PAGER. If neither $SYSTEMD_PAGER nor $PAGER are set, a set of well-known pager implementations are tried in turn, including less(1) and more(1), until one is found. If no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string or the value "cat" is equivalent to passing --no-pager. Note: if $SYSTEMD_PAGERSECURE is not set, $SYSTEMD_PAGER (as well as $PAGER) will be silently ignored. $SYSTEMD_LESS Override the options passed to less (by default "FRSXMK"). 
Users might want to change two options in particular: K This option instructs the pager to exit immediately when Ctrl+C is pressed. To allow less to handle Ctrl+C itself to switch back to the pager command prompt, unset this option. If the value of $SYSTEMD_LESS does not include "K", and the pager that is invoked is less, Ctrl+C will be ignored by the executable, and needs to be handled by the pager. X This option instructs the pager to not send termcap initialization and deinitialization strings to the terminal. It is set by default to allow command output to remain visible in the terminal even after the pager exits. Nevertheless, this prevents some pager functionality from working, in particular paged output cannot be scrolled with the mouse. See less(1) for more discussion. $SYSTEMD_LESSCHARSET Override the charset passed to less (by default "utf-8", if the invoking terminal is determined to be UTF-8 compatible). $SYSTEMD_PAGERSECURE Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if false, disabled. If $SYSTEMD_PAGERSECURE is not set at all, secure mode is enabled if the effective UID is not the same as the owner of the login session, see geteuid(2) and sd_pid_get_owner_uid(3). In secure mode, LESSSECURE=1 will be set when invoking the pager, and the pager shall disable commands that open or create new files or start new subprocesses. When $SYSTEMD_PAGERSECURE is not set at all, pagers which are not known to implement secure mode will not be used. (Currently only less(1) implements secure mode.) Note: when commands are invoked with elevated privileges, for example under sudo(8) or pkexec(1), care must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the pager may be enabled automatically as described above. Setting SYSTEMD_PAGERSECURE=0 or not removing it from the inherited environment allows the user to invoke arbitrary commands. 
Note that if the $SYSTEMD_PAGER or $PAGER variables are to be honoured, $SYSTEMD_PAGERSECURE must be set too. It might be reasonable to completely disable the pager using --no-pager instead. $SYSTEMD_COLORS Takes a boolean argument. When true, systemd and related utilities will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can take one of the following special values: "16", "256" to restrict the use of colors to the base 16 or 256 ANSI colors, respectively. This can be specified to override the automatic decision based on $TERM and what the console is connected to. $SYSTEMD_URLIFY The value must be a boolean. Controls whether clickable links should be generated in the output for terminal emulators supporting this. This can be specified to override the decision that systemd makes based on $TERM and other conditions. UNPRIVILEGED --CLEANUP OPERATION top systemd-tmpfiles tries to avoid changing the access and modification times on the directories it accesses, which requires CAP_FOWNER privileges. When running as non-root, directories which are checked for files to clean up will have their access time bumped, which might prevent their cleanup. EXIT STATUS top On success, 0 is returned. If the configuration was syntactically invalid (syntax errors, missing arguments, ...), so some lines had to be ignored, but no other errors occurred, 65 is returned (EX_DATAERR from /usr/include/sysexits.h). If the configuration was syntactically valid, but could not be executed (lack of permissions, creation of files in missing directories, invalid contents when writing to /sys/ values, ...), 73 is returned (EX_CANTCREAT from /usr/include/sysexits.h). Otherwise, 1 is returned (EXIT_FAILURE from /usr/include/stdlib.h). 
Note: when creating items, if the target already exists, but is of the wrong type or otherwise does not match the requested state, and forced operation has not been requested with "+", a message is emitted, but the failure is otherwise ignored. SEE ALSO top systemd(1), tmpfiles.d(5) NOTES top 1. Discoverable Partitions Specification https://uapi-group.org/specifications/specs/discoverable_partitions_specification systemd 255 SYSTEMD-TMPFILES(8)
# systemd-tmpfiles\n\n> Create, delete and clean up volatile and temporary files and directories.\n> This command is automatically invoked on boot by systemd services, and running it manually is usually not needed.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html>.\n\n- Create files and directories as specified in the configuration:\n\n`systemd-tmpfiles --create`\n\n- Clean up files and directories with age parameters configured:\n\n`systemd-tmpfiles --clean`\n\n- Remove files and directories as specified in the configuration:\n\n`systemd-tmpfiles --remove`\n\n- Apply operations for user-specific configurations:\n\n`systemd-tmpfiles --create --user`\n\n- Execute lines marked for early boot:\n\n`systemd-tmpfiles --create --boot`\n
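As a rough sketch of how the `--create` and `--clean` operations above combine with a tmpfiles.d(5) fragment (the path and age below are made up; the systemd-tmpfiles calls are commented out since they need a running systemd and appropriate privileges):

```shell
#!/bin/sh
# Hypothetical tmpfiles.d fragment: a "d" line creates the directory
# with the given mode and ownership if missing; the trailing age
# field ("10d") lets --clean prune contents older than ten days.
cat > /tmp/demo-tmpfiles.conf <<'EOF'
d /tmp/demo-cache 0755 root root 10d
EOF

# Apply just this fragment (illustration only; requires systemd):
# systemd-tmpfiles --create /tmp/demo-tmpfiles.conf
# systemd-tmpfiles --clean  /tmp/demo-tmpfiles.conf

# Print the configured age field of each "d" directive
awk '$1 == "d" {print $NF}' /tmp/demo-tmpfiles.conf
```

The `awk` step prints `10d`, the age that a later `--clean` run would enforce on `/tmp/demo-cache`.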
systemd-tty-ask-password-agent
SYSTEMD...D-AGENT(1) systemd-tty-ask-password-agent SYSTEMD...D-AGENT(1) NAME top systemd-tty-ask-password-agent - List or process pending systemd password requests SYNOPSIS top systemd-tty-ask-password-agent [OPTIONS...] [VARIABLE=VALUE...] DESCRIPTION top systemd-tty-ask-password-agent is a password agent that handles password requests of the system, for example for hard disk encryption passwords or SSL certificate passwords that need to be queried at boot-time or during runtime. systemd-tty-ask-password-agent implements the Password Agents Specification[1], and is one of many possible response agents which answer to queries formulated with systemd-ask-password(1). OPTIONS top The following options are understood: --list Lists all currently pending system password requests. Added in version 186. --query Process all currently pending system password requests by querying the user on the calling TTY. Added in version 186. --watch Continuously process password requests. Added in version 186. --wall Forward password requests to wall(1) instead of querying the user on the calling TTY. Added in version 186. --plymouth Ask question with plymouth(8) instead of querying the user on the calling TTY. Added in version 186. --console[=DEVICE] Ask question on TTY DEVICE instead of querying the user on the calling TTY. If DEVICE is not specified, /dev/console will be used. Added in version 186. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. SEE ALSO top systemd(1), systemctl(1), systemd-ask-password-console.service(8), wall(1), plymouth(8) NOTES top 1. 
Password Agents Specification https://systemd.io/PASSWORD_AGENTS/ systemd 255 SYSTEMD...D-AGENT(1)
# systemd-tty-ask-password-agent\n\n> List or process pending systemd password requests.\n> More information: <https://www.freedesktop.org/software/systemd/man/systemd-tty-ask-password-agent.html>.\n\n- List all currently pending system password requests:\n\n`systemd-tty-ask-password-agent --list`\n\n- Continuously process password requests:\n\n`systemd-tty-ask-password-agent --watch`\n\n- Process all currently pending system password requests by querying the user on the calling TTY:\n\n`systemd-tty-ask-password-agent --query`\n\n- Forward password requests to wall instead of querying the user on the calling TTY:\n\n`systemd-tty-ask-password-agent --wall`\n
systemd-umount
SYSTEMD-MOUNT(1) systemd-mount SYSTEMD-MOUNT(1) NAME top systemd-mount, systemd-umount - Establish and destroy transient mount or auto-mount points SYNOPSIS top systemd-mount [OPTIONS...] WHAT [WHERE] systemd-mount [OPTIONS...] --tmpfs [NAME] WHERE systemd-mount [OPTIONS...] --list systemd-mount [OPTIONS...] --umount WHAT|WHERE... DESCRIPTION top systemd-mount may be used to create and start a transient .mount or .automount unit of the file system WHAT on the mount point WHERE. In many ways, systemd-mount is similar to the lower-level mount(8) command, however instead of executing the mount operation directly and immediately, systemd-mount schedules it through the service manager job queue, so that it may pull in further dependencies (such as parent mounts, or a file system checker to execute a priori), and may make use of the auto-mounting logic. The command takes either one or two arguments. If only one argument is specified it should refer to a block device or regular file containing a file system (e.g. "/dev/sdb1" or "/path/to/disk.img"). The block device or image file is then probed for a file system label and other metadata, and is mounted to a directory below /run/media/system/ whose name is generated from the file system label. In this mode the block device or image file must exist at the time of invocation of the command, so that it may be probed. If the device is found to be a removable block device (e.g. a USB stick), an automount point is created instead of a regular mount point (i.e. the --automount= option is implied, see below). If the option --tmpfs is specified, then the argument is interpreted as the path where the new temporary file system shall be mounted. 
If two arguments are specified, the first indicates the mount source (the WHAT) and the second indicates the path to mount it on (the WHERE). In this mode no probing of the source is attempted, and a backing device node doesn't have to exist. However, if this mode is combined with --discover, device node probing for additional metadata is enabled, and much like in the single-argument case discussed above the specified device has to exist at the time of invocation of the command. Use the --list command to show a terse table of all local, known block devices with file systems that may be mounted with this command. systemd-umount can be used to unmount a mount or automount point. It is the same as systemd-mount --umount. OPTIONS top The following options are understood: --no-block Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemd-mount will wait until the mount or automount unit's start-up is completed. By passing this argument, it is only verified and enqueued. Added in version 232. -l, --full Do not ellipsize the output when --list is specified. Added in version 245. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. --no-ask-password Do not query the user for authentication for privileged operations. --quiet, -q Suppresses additional informational output while running. Added in version 232. --discover Enable probing of the mount source. This switch is implied if a single argument is specified on the command line. If passed, additional metadata is read from the device to enhance the unit to create. For example, a descriptive string for the transient units is generated from the file system label and device model. Moreover if a removable block device (e.g. 
USB stick) is detected an automount unit instead of a regular mount unit is created, with a short idle timeout, in order to ensure the file-system is placed in a clean state quickly after each access. Added in version 232. --type=, -t Specifies the file system type to mount (e.g. "vfat" or "ext4"). If omitted or set to "auto", the file system type is determined automatically. Added in version 232. --options=, -o Additional mount options for the mount point. Added in version 232. --owner=USER Let the specified user USER own the mounted file system. This is done by appending uid= and gid= options to the list of mount options. Only certain file systems support this option. Added in version 237. --fsck= Takes a boolean argument, defaults to on. Controls whether to run a file system check immediately before the mount operation. In the automount case (see --automount= below) the check will be run the moment the first access to the device is made, which might slightly delay the access. Added in version 232. --description= Provide a description for the mount or automount unit. See Description= in systemd.unit(5). Added in version 232. --property=, -p Sets a unit property for the mount unit that is created. This takes an assignment in the same format as systemctl(1)'s set-property command. Added in version 232. --automount= Takes a boolean argument. Controls whether to create an automount point or a regular mount point. If true an automount point is created that is backed by the actual file system at the time of first access. If false a plain mount point is created that is backed by the actual file system immediately. Automount points have the benefit that the file system stays unmounted and hence in clean state until it is first accessed. In automount mode the --timeout-idle-sec= switch (see below) may be used to ensure the mount point is unmounted automatically after the last access and an idle period passed. If this switch is not specified it defaults to false. 
If not specified and --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, it is set to true, in order to increase the chance that the file system is in a fully clean state if the device is unplugged abruptly. Added in version 232. -A Equivalent to --automount=yes. Added in version 232. --timeout-idle-sec= Takes a time value that controls the idle timeout in automount mode. If set to "infinity" (the default) no automatic unmounts are done. Otherwise the file system backing the automount point is detached after the last access and the idle timeout passed. See systemd.time(7) for details on the time syntax supported. This option has no effect if only a regular mount is established, and automounting is not used. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, --timeout-idle-sec=1s is implied. Added in version 232. --automount-property= Similar to --property=, but applies additional properties to the automount unit created, instead of the mount unit. Added in version 232. --bind-device This option only has an effect in automount mode, and controls whether the automount unit shall be bound to the backing device's lifetime. If set, the automount unit will be stopped automatically when the backing device vanishes. By default the automount unit stays around, and subsequent accesses will block until backing device is replugged. This option has no effect in case of non-device mounts, such as network or virtual file system mounts. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, this option is implied. Added in version 232. 
--list Instead of establishing a mount or automount point, print a terse list of block devices containing file systems that may be mounted with "systemd-mount", along with useful metadata such as labels, etc. Added in version 232. -u, --umount Stop the mount and automount units corresponding to the specified mount points WHERE or the devices WHAT. systemd-mount with this option or systemd-umount can take multiple arguments which can be mount points, devices, /etc/fstab style node names, or backing files corresponding to loop devices, like systemd-mount --umount /path/to/umount /dev/sda1 UUID=xxxxxx-xxxx LABEL=xxxxx /path/to/disk.img. Note that when -H or -M is specified, only absolute paths to mount points are supported. Added in version 233. -G, --collect Unload the transient unit after it completed, even if it failed. Normally, without this option, all mount units that mount and failed are kept in memory until the user explicitly resets their failure state with systemctl reset-failed or an equivalent command. On the other hand, units that stopped successfully are unloaded immediately. If this option is turned on the "garbage collection" of units is more aggressive, and unloads units regardless if they exited successfully or failed. This option is a shortcut for --property=CollectMode=inactive-or-failed, see the explanation for CollectMode= in systemd.unit(5) for further information. Added in version 236. -T, --tmpfs Create and mount a new tmpfs file system on WHERE, with an optional NAME that defaults to "tmpfs". The file system is mounted with the top-level directory mode determined by the umask(2) setting of the caller, i.e. rwxrwxrwx masked by the umask of the caller. This matches what mkdir(1) does, but is different from the kernel default of "rwxrwxrwxt", i.e. a world-writable directory with the sticky bit set. Added in version 255. --user Talk to the service manager of the calling user, rather than the service manager of the system. 
--system Talk to the service manager of the system. This is the implied default. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. EXIT STATUS top On success, 0 is returned, a non-zero failure code otherwise. THE UDEV DATABASE top If --discover is used, systemd-mount honors a couple of additional udev properties of block devices: SYSTEMD_MOUNT_OPTIONS= The mount options to use, if --options= is not used. Added in version 232. SYSTEMD_MOUNT_WHERE= The file system path to place the mount point at, instead of the automatically generated one. Added in version 232. 
EXAMPLE top Use a udev rule like the following to automatically mount all USB storage plugged in: ACTION=="add", SUBSYSTEMS=="usb", SUBSYSTEM=="block", ENV{ID_FS_USAGE}=="filesystem", \ RUN{program}+="/usr/bin/systemd-mount --no-block --automount=yes --collect $devnode" SEE ALSO top systemd(1), mount(8), systemctl(1), systemd.unit(5), systemd.mount(5), systemd.automount(5), systemd-run(1) COLOPHON top This page is part of the systemd (systemd system and service manager) project. Information about the project can be found at http://www.freedesktop.org/wiki/Software/systemd. If you have a bug report for this manual page, see http://www.freedesktop.org/wiki/Software/systemd/#bugreports. This page was obtained from the project's upstream Git repository https://github.com/systemd/systemd.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-22.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org systemd 255 SYSTEMD-MOUNT(1) Pages that refer to this page: systemd-run(1), systemd.mount(5), systemd.directives(7), systemd.index(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
# systemd-umount\n\n> This command is an alias of `systemd-mount --umount`.\n\n- View documentation for the original command:\n\n`tldr systemd-mount`\n
tac
tac(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tac(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON TAC(1) User Commands TAC(1) NAME top tac - concatenate and print files in reverse SYNOPSIS top tac [OPTION]... [FILE]... DESCRIPTION top Write each FILE to standard output, last line first. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --before attach the separator before instead of after -r, --regex interpret the separator as a regular expression -s, --separator=STRING use STRING as the separator instead of newline --help display this help and exit --version output version information and exit AUTHOR top Written by Jay Lepreau and David MacKenzie. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top cat(1), rev(1) Full documentation <https://www.gnu.org/software/coreutils/tac> or available locally via: info '(coreutils) tac invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
GNU coreutils 9.4 August 2023 TAC(1) Pages that refer to this page: cat(1), rev(1)
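The separator behavior described above can be sketched with pipes alone; this is a minimal illustration, not part of the original page:

```shell
# Default: the separator is newline, so whole lines are reversed
printf '1\n2\n3\n' | tac              # prints 3, 2, 1

# With ',' as the separator: each record keeps its trailing ','
# (tac attaches the separator after the record by default), so a
# comma-terminated input reverses cleanly
printf 'a,b,c,' | tac --separator ','  # prints c,b,a,
```

Note how `--before` would instead attach the separator in front of each record, which changes where the commas land in the reversed output.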
# tac\n\n> Display and concatenate files with lines in reversed order.\n> See also: `cat`.\n> More information: <https://www.gnu.org/software/coreutils/tac>.\n\n- Concatenate specific files in reversed order:\n\n`tac {{path/to/file1 path/to/file2 ...}}`\n\n- Display `stdin` in reversed order:\n\n`{{cat path/to/file}} | tac`\n\n- Use a specific separator:\n\n`tac --separator {{,}} {{path/to/file1 path/to/file2 ...}}`\n\n- Use a specific regex as a separator:\n\n`tac --regex --separator {{[,;]}} {{path/to/file1 path/to/file2 ...}}`\n\n- Use a separator before each file:\n\n`tac --before {{path/to/file1 path/to/file2 ...}}`\n
tail
tail(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tail(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | AUTHOR | REPORTING BUGS | COPYRIGHT | SEE ALSO | COLOPHON TAIL(1) User Commands TAIL(1) NAME top tail - output the last part of files SYNOPSIS top tail [OPTION]... [FILE]... DESCRIPTION top Print the last 10 lines of each FILE to standard output. With more than one FILE, precede each with a header giving the file name. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -c, --bytes=[+]NUM output the last NUM bytes; or use -c +NUM to output starting with byte NUM of each file -f, --follow[={name|descriptor}] output appended data as the file grows; an absent option argument means 'descriptor' -F same as --follow=name --retry -n, --lines=[+]NUM output the last NUM lines, instead of the last 10; or use -n +NUM to skip NUM-1 lines at the start --max-unchanged-stats=N with --follow=name, reopen a FILE which has not changed size after N (default 5) iterations to see if it has been unlinked or renamed (this is the usual case of rotated log files); with inotify, this option is rarely useful --pid=PID with -f, terminate after process ID, PID dies -q, --quiet, --silent never output headers giving file names --retry keep trying to open a file if it is inaccessible -s, --sleep-interval=N with -f, sleep for approximately N seconds (default 1.0) between iterations; with inotify and --pid=P, check process P at least once every N seconds -v, --verbose always output headers giving file names -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit NUM may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y, R, Q. Binary prefixes can be used, too: KiB=K, MiB=M, and so on. 
With --follow (-f), tail defaults to following the file descriptor, which means that even if a tail'ed file is renamed, tail will continue to track its end. This default behavior is not desirable when you really want to track the actual name of the file, not the file descriptor (e.g., log rotation). Use --follow=name in that case. That causes tail to track the named file in a way that accommodates renaming, removal and creation. AUTHOR top Written by Paul Rubin, David MacKenzie, Ian Lance Taylor, and Jim Meyering. REPORTING BUGS top GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO top head(1) Full documentation <https://www.gnu.org/software/coreutils/tail> or available locally via: info '(coreutils) tail invocation' COLOPHON top This page is part of the coreutils (basic file, shell and text manipulation utilities) project. Information about the project can be found at http://www.gnu.org/software/coreutils/. If you have a bug report for this manual page, see http://www.gnu.org/software/coreutils/. This page was obtained from the tarball coreutils-9.4.tar.xz fetched from http://ftp.gnu.org/gnu/coreutils/ on 2023-12-22. 
GNU coreutils 9.4 August 2023 TAIL(1) Pages that refer to this page: head(1), pmcd(1), pmdalogger(1), pmdasystemd(1), pmdaweblog(1), pon(1)
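The difference between `-n NUM` (last NUM lines) and `-n +NUM` (start at line NUM) is easy to miss; here is a small hedged sketch (the path under /tmp is an arbitrary example, not from the original page):

```shell
# Four-line sample file
printf 'a\nb\nc\nd\n' > /tmp/tail-demo.txt

tail --lines 2 /tmp/tail-demo.txt    # last two lines: c, d
tail --lines +3 /tmp/tail-demo.txt   # from line 3 onward: c, d
tail --bytes 4 /tmp/tail-demo.txt    # last 4 bytes: "c\nd\n"
```

With this tiny file both line forms happen to print the same lines; on a longer file `-n 2` always prints two lines, while `-n +3` prints everything except the first two.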
# tail\n\n> Display the last part of a file.\n> See also: `head`.\n> More information: <https://www.gnu.org/software/coreutils/tail>.\n\n- Show last 'count' lines in file:\n\n`tail --lines {{count}} {{path/to/file}}`\n\n- Print a file from a specific line number:\n\n`tail --lines +{{count}} {{path/to/file}}`\n\n- Print a specific count of bytes from the end of a given file:\n\n`tail --bytes {{count}} {{path/to/file}}`\n\n- Print the last lines of a given file and keep reading it until `Ctrl + C`:\n\n`tail --follow {{path/to/file}}`\n\n- Keep reading file until `Ctrl + C`, even if the file is inaccessible:\n\n`tail --retry --follow {{path/to/file}}`\n\n- Show last 'num' lines in 'file' and refresh every 'n' seconds:\n\n`tail --lines {{count}} --sleep-interval {{seconds}} --follow {{path/to/file}}`\n
talk
talk(1p) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training talk(1p) Linux manual page PROLOG | NAME | SYNOPSIS | DESCRIPTION | OPTIONS | OPERANDS | STDIN | INPUT FILES | ENVIRONMENT VARIABLES | ASYNCHRONOUS EVENTS | STDOUT | STDERR | OUTPUT FILES | EXTENDED DESCRIPTION | EXIT STATUS | CONSEQUENCES OF ERRORS | APPLICATION USAGE | EXAMPLES | RATIONALE | FUTURE DIRECTIONS | SEE ALSO | COPYRIGHT TALK(1P) POSIX Programmer's Manual TALK(1P) PROLOG top This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. NAME top talk - talk to another user SYNOPSIS top talk address [terminal] DESCRIPTION top The talk utility is a two-way, screen-oriented communication program. When first invoked, talk shall send a message similar to: Message from <unspecified string> talk: connection requested by your_address talk: respond with: talk your_address to the specified address. At this point, the recipient of the message can reply by typing: talk your_address Once communication is established, the two parties can type simultaneously, with their output displayed in separate regions of the screen. Characters shall be processed as follows: * Typing the <alert> character shall alert the recipient's terminal. * Typing <control>L shall cause the sender's screen regions to be refreshed. * Typing the erase and kill characters shall affect the sender's terminal in the manner described by the termios interface in the Base Definitions volume of POSIX.1-2017, Chapter 11, General Terminal Interface. * Typing the interrupt or end-of-file characters shall terminate the local talk utility. Once the talk session has been terminated on one side, the other side of the talk session shall be notified that the talk session has been terminated and shall be able to do nothing except exit. 
* Typing characters from LC_CTYPE classifications print or space shall cause those characters to be sent to the recipient's terminal. * When and only when the stty iexten local mode is enabled, the existence and processing of additional special control characters and multi-byte or single-byte functions shall be implementation-defined. * Typing other non-printable characters shall cause implementation-defined sequences of printable characters to be sent to the recipient's terminal. Permission to be a recipient of a talk message can be denied or granted by use of the mesg utility. However, a user's privilege may further constrain the domain of accessibility of other users' terminals. The talk utility shall fail when the user lacks appropriate privileges to perform the requested action. Certain block-mode terminals do not have all the capabilities necessary to support the simultaneous exchange of messages required for talk. When this type of exchange cannot be supported on such terminals, the implementation may support an exchange with reduced levels of simultaneous interaction or it may report an error describing the terminal-related deficiency. OPTIONS top None. OPERANDS top The following operands shall be supported: address The recipient of the talk session. One form of address is the <user name>, as returned by the who utility. Other address formats and how they are handled are unspecified. terminal If the recipient is logged in more than once, the terminal argument can be used to indicate the appropriate terminal name. If terminal is not specified, the talk message shall be displayed on one or more accessible terminals in use by the recipient. The format of terminal shall be the same as that returned by the who utility. STDIN top Characters read from standard input shall be copied to the recipient's terminal in an unspecified manner. If standard input is not a terminal, talk shall write a diagnostic message and exit with a non-zero status. INPUT FILES top None. 
ENVIRONMENT VARIABLES top The following environment variables shall affect the execution of talk: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of POSIX.1-2017, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments and input files). If the recipient's locale does not use an LC_CTYPE equivalent to the sender's, the results are undefined. LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error and informative messages written to standard output. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES. TERM Determine the name of the invoker's terminal type. If this variable is unset or null, an unspecified default terminal type shall be used. ASYNCHRONOUS EVENTS top When the talk utility receives a SIGINT signal, the utility shall terminate and exit with a zero status. It shall take the standard action for all other signals. STDOUT top If standard output is a terminal, characters copied from the recipient's standard input may be written to standard output. Standard output also may be used for diagnostic messages. If standard output is not a terminal, talk shall exit with a non-zero status. STDERR top None. OUTPUT FILES top None. EXTENDED DESCRIPTION top None. EXIT STATUS top The following exit values shall be returned: 0 Successful completion. >0 An error occurred or talk was invoked on a terminal incapable of supporting it. CONSEQUENCES OF ERRORS top Default. The following sections are informative. 
APPLICATION USAGE top Because the handling of non-printable, non-<space> characters is tied to the stty description of iexten, implementation extensions within the terminal driver can be accessed. For example, some implementations provide line editing functions with certain control character sequences. EXAMPLES top None. RATIONALE top The write utility was included in this volume of POSIX.1-2017 since it can be implemented on all terminal types. The talk utility, which cannot be implemented on certain terminals, was considered to be a ``better'' communications interface. Both of these programs are in widespread use on historical implementations. Therefore, both utilities have been specified. All references to networking abilities (talking to a user on another system) were removed as being outside the scope of this volume of POSIX.1-2017. Historical BSD and System V versions of talk terminate both of the conversations when either user breaks out of the session. This can lead to adverse consequences if a user unwittingly continues to enter text that is interpreted by the shell when the other terminates the session. Therefore, the version of talk specified by this volume of POSIX.1-2017 requires both users to terminate their end of the session explicitly. Only messages sent to the terminal of the invoking user can be internationalized in any way: * The original ``Message from <unspecified string> ...'' message sent to the terminal of the recipient cannot be internationalized because the environment of the recipient is as yet inaccessible to the talk utility. The environment of the invoking party is irrelevant. * Subsequent communication between the two parties cannot be internationalized because the two parties may specify different languages in their environment (and non-portable characters cannot be mapped from one language to another). 
* Neither party can be required to communicate in a language other than C and/or the one specified by their environment because unavailable terminal hardware support (for example, fonts) may be required. The text in the STDOUT section reflects the usage of the verb ``display'' in this section; some talk implementations actually use standard output to write to the terminal, but this volume of POSIX.1-2017 does not require that to be the case. The format of the terminal name is unspecified, but the descriptions of ps, talk, who, and write require that they all use or accept the same format. The handling of non-printable characters is partially implementation-defined because the details of mapping them to printable sequences is not needed by the user. Historical implementations, for security reasons, disallow the transmission of non-printable characters that may send commands to the other terminal. FUTURE DIRECTIONS top None. SEE ALSO top mesg(1p), stty(1p), who(1p), write(1p) The Base Definitions volume of POSIX.1-2017, Chapter 8, Environment Variables, Chapter 11, General Terminal Interface COPYRIGHT top Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1-2017, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, 2018 Edition, Copyright (C) 2018 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html . 
IEEE/The Open Group 2017 TALK(1P) Pages that refer to this page: mesg(1p), write(1p)
# talk\n\n> A visual communication program which copies lines from your terminal to that of another user.\n> More information: <https://www.gnu.org/software/inetutils/manual/html_node/talk-invocation.html>.\n\n- Start a talk session with a user on the same machine:\n\n`talk {{username}}`\n\n- Start a talk session with a user on the same machine, who is logged in on tty3:\n\n`talk {{username}} {{tty3}}`\n\n- Start a talk session with a user on a remote machine:\n\n`talk {{username}}@{{hostname}}`\n\n- Clear text on both terminal screens:\n\n`<Ctrl>+D`\n\n- Exit the talk session:\n\n`<Ctrl>+C`\n
tar
tar(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tar(1) Linux manual page NAME | SYNOPSIS | NOTE | DESCRIPTION | OPTIONS | RETURN VALUE | SEE ALSO | BUG REPORTS | COPYRIGHT | COLOPHON TAR(1) GNU TAR Manual TAR(1) NAME top tar - an archiving utility SYNOPSIS top Traditional usage tar {A|c|d|r|t|u|x}[GnSkUWOmpsMBiajJzZhPlRvwo] [ARG...] UNIX-style usage tar -A [OPTIONS] -f ARCHIVE ARCHIVE... tar -c [-f ARCHIVE] [OPTIONS] [FILE...] tar -d [-f ARCHIVE] [OPTIONS] [FILE...] tar -r [-f ARCHIVE] [OPTIONS] [FILE...] tar -t [-f ARCHIVE] [OPTIONS] [MEMBER...] tar -u [-f ARCHIVE] [OPTIONS] [FILE...] tar -x [-f ARCHIVE] [OPTIONS] [MEMBER...] GNU-style usage tar {--catenate|--concatenate} [OPTIONS] --file ARCHIVE ARCHIVE... tar --create [--file ARCHIVE] [OPTIONS] [FILE...] tar {--diff|--compare} [--file ARCHIVE] [OPTIONS] [FILE...] tar --delete [--file ARCHIVE] [OPTIONS] [MEMBER...] tar --append [--file ARCHIVE] [OPTIONS] [FILE...] tar --list [--file ARCHIVE] [OPTIONS] [MEMBER...] tar --test-label [--file ARCHIVE] [OPTIONS] [LABEL...] tar --update [--file ARCHIVE] [OPTIONS] [FILE...] tar {--extract|--get} [--file ARCHIVE] [OPTIONS] [MEMBER...] NOTE top This manpage is a short description of GNU tar. For a detailed discussion, including examples and usage recommendations, refer to the GNU Tar Manual available in texinfo format. If the info reader and the tar documentation are properly installed on your system, the command info tar should give you access to the complete manual. You can also view the manual using the info mode in emacs(1), or find it in various formats online at https://www.gnu.org/software/tar/manual If any discrepancies occur between this manpage and the GNU Tar Manual, the latter shall be considered the authoritative source. DESCRIPTION top GNU tar is an archiving program designed to store multiple files in a single file (an archive), and to manipulate such archives. 
The archive can be either a regular file or a device (e.g. a tape drive, hence the name of the program, which stands for tape archiver), which can be located either on the local or on a remote machine. Option styles Options to GNU tar can be given in three different styles. In traditional style, the first argument is a cluster of option letters and all subsequent arguments supply arguments to those options that require them. The arguments are read in the same order as the option letters. Any command line words that remain after all options have been processed are treated as non-option arguments: file or archive member names. For example, the c option requires creating the archive, the v option requests the verbose operation, and the f option takes an argument that sets the name of the archive to operate upon. The following command, written in the traditional style, instructs tar to store all files from the directory /etc into the archive file etc.tar, verbosely listing the files being archived: tar cfv etc.tar /etc In UNIX or short-option style, each option letter is prefixed with a single dash, as in other command line utilities. If an option takes an argument, the argument follows it, either as a separate command line word, or immediately following the option. However, if the option takes an optional argument, the argument must follow the option letter without any intervening whitespace, as in -g/tmp/snar.db. Any number of options not taking arguments can be clustered together after a single dash, e.g. -vkp. An option that takes an argument (whether mandatory or optional) can appear at the end of such a cluster, e.g. -vkpf a.tar. The example command above written in the short-option style could look like: tar -cvf etc.tar /etc or tar -c -v -f etc.tar /etc In GNU or long-option style, each option begins with two dashes and has a meaningful name, consisting of lower-case letters and dashes. 
When used, the long option can be abbreviated to its initial letters, provided that this does not create ambiguity. Arguments to long options are supplied either as a separate command line word, immediately following the option, or separated from the option by an equals sign with no intervening whitespace. Optional arguments must always use the latter method. Here are several ways of writing the example command in this style: tar --create --file etc.tar --verbose /etc or (abbreviating some options): tar --cre --file=etc.tar --verb /etc The options in all three styles can be intermixed, although doing so with old options is not encouraged. Operation mode The options listed in the table below tell GNU tar what operation it is to perform. Exactly one of them must be given. The meaning of non-option arguments depends on the operation mode requested. -A, --catenate, --concatenate Append archives to the end of another archive. The arguments are treated as the names of archives to append. All archives must be of the same format as the archive they are appended to, otherwise the resulting archive might be unusable with non-GNU implementations of tar. Notice also that when more than one archive is given, the members from archives other than the first one will be accessible in the resulting archive only when using the -i (--ignore-zeros) option. Compressed archives cannot be concatenated. -c, --create Create a new archive. Arguments supply the names of the files to be archived. Directories are archived recursively, unless the --no-recursion option is given. -d, --diff, --compare Find differences between archive and file system. The arguments are optional and specify archive members to compare. If not given, the current working directory is assumed. --delete Delete from the archive. The arguments supply names of the archive members to be removed. At least one argument must be given. This option does not operate on compressed archives. There is no short option equivalent. 
-r, --append Append files to the end of an archive. Arguments have the same meaning as for -c (--create). -t, --list List the contents of an archive. Arguments are optional. When given, they specify the names of the members to list. --test-label Test the archive volume label and exit. When used without arguments, it prints the volume label (if any) and exits with status 0. When one or more command line arguments are given, tar compares the volume label with each argument. It exits with code 0 if a match is found, and with code 1 otherwise. No output is displayed, unless used together with the -v (--verbose) option. There is no short option equivalent for this option. -u, --update Append files which are newer than the corresponding copy in the archive. Arguments have the same meaning as with the -c and -r options. Notice that newer files don't replace their old archive copies, but instead are appended to the end of the archive. The resulting archive can thus contain several members of the same name, corresponding to various versions of the same file. -x, --extract, --get Extract files from an archive. Arguments are optional. When given, they specify names of the archive members to be extracted. --show-defaults Show built-in defaults for various tar options and exit. -?, --help Display a short option summary and exit. --usage Display a list of available options and exit. --version Print program version and copyright information and exit. OPTIONS top Operation modifiers --check-device Check device numbers when creating incremental archives (default). -g, --listed-incremental=FILE Handle new GNU-format incremental backups. FILE is the name of a snapshot file, where tar stores additional information which is used to decide which files changed since the previous incremental dump and, consequently, must be dumped again. If FILE does not exist when creating an archive, it will be created and all files will be added to the resulting archive (the level 0 dump).
To create incremental archives of non-zero level N, you need a copy of the snapshot file created for level N-1; use that copy as FILE. When listing or extracting, the actual content of FILE is not inspected, it is needed only for syntactical reasons. It is therefore common practice to use /dev/null in its place. --hole-detection=METHOD Use METHOD to detect holes in sparse files. This option implies --sparse. Valid values for METHOD are seek and raw. Default is seek with fallback to raw when not applicable. -G, --incremental Handle old GNU-format incremental backups. --ignore-failed-read Do not exit with nonzero on unreadable files. --level=NUMBER Set dump level for a created listed-incremental archive. Currently only --level=0 is meaningful: it instructs tar to truncate the snapshot file before dumping, thereby forcing a level 0 dump. -n, --seek Assume the archive is seekable. Normally tar determines automatically whether the archive is seekable. This option is intended for use in cases when such recognition fails. It takes effect only if the archive is open for reading (e.g. with --list or --extract options). --no-check-device Do not check device numbers when creating incremental archives. --no-seek Assume the archive is not seekable. --occurrence[=N] Process only the Nth occurrence of each file in the archive. This option is valid only when used with one of the following subcommands: --delete, --diff, --extract or --list and when a list of files is given either on the command line or via the -T option. The default N is 1. --restrict Disable the use of some potentially harmful options. --sparse-version=MAJOR[.MINOR] Set which version of the sparse format to use. This option implies --sparse. Valid argument values are 0.0, 0.1, and 1.0. For a detailed discussion of sparse formats, refer to the GNU Tar Manual, appendix D, "Sparse Formats". Using the info reader, it can be accessed by running the following command: info tar 'Sparse Formats'.
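A minimal level-0/level-1 workflow with --listed-incremental might look like this (a sketch, assuming GNU tar; the file and snapshot names are illustrative):

```shell
# Level 0 dump, then a level 1 dump containing only what changed.
d=$(mktemp -d)
cd "$d"
mkdir data
echo one > data/f1

tar -cf full.tar -g snar-0 data   # snapshot absent: level 0, dumps everything

cp snar-0 snar-1                  # level 1 starts from the level-0 snapshot
echo two > data/f2
tar -cf incr.tar -g snar-1 data   # dumps only data/f2 (plus directory metadata)

# On extraction the snapshot content is ignored; /dev/null is customary.
mkdir restore
tar -xf full.tar -g /dev/null -C restore
tar -xf incr.tar -g /dev/null -C restore
```

Extracting with -g also purges files that the incremental dump records as deleted, which is what makes the restored tree match the dumped state.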
-S, --sparse Handle sparse files efficiently. Some files in the file system may have segments which were actually never written (quite often these are database files created by such systems as DBM). When given this option, tar attempts to determine if the file is sparse prior to archiving it, and if so, to reduce the resulting archive size by not dumping empty parts of the file. Overwrite control These options control tar actions when extracting a file over an existing copy on disk. -k, --keep-old-files Don't replace existing files when extracting. --keep-newer-files Don't replace existing files that are newer than their archive copies. --keep-directory-symlink Don't replace existing symlinks to directories when extracting. --no-overwrite-dir Preserve metadata of existing directories. --one-top-level[=DIR] Extract all files into DIR, or, if used without argument, into a subdirectory named by the base name of the archive (minus standard compression suffixes recognizable by --auto-compress). --overwrite Overwrite existing files when extracting. --overwrite-dir Overwrite metadata of existing directories when extracting (default). --recursive-unlink Recursively remove all files in the directory prior to extracting it. --remove-files Remove files from disk after adding them to the archive. --skip-old-files Don't replace existing files when extracting, silently skip over them. -U, --unlink-first Remove each file prior to extracting over it. -W, --verify Verify the archive after writing it. Output stream selection --ignore-command-error Ignore subprocess exit codes. --no-ignore-command-error Treat non-zero exit codes of children as error (default). -O, --to-stdout Extract files to standard output. --to-command=COMMAND Pipe extracted files to COMMAND. The argument is the pathname of an external program, optionally with command line arguments. The program will be invoked and the contents of the file being extracted supplied to it on its standard input. 
Additional data will be supplied via the following environment variables: TAR_FILETYPE Type of the file. It is a single letter with the following meaning: f Regular file d Directory l Symbolic link h Hard link b Block device c Character device Currently only regular files are supported. TAR_MODE File mode, an octal number. TAR_FILENAME The name of the file. TAR_REALNAME Name of the file as stored in the archive. TAR_UNAME Name of the file owner. TAR_GNAME Name of the file owner group. TAR_ATIME Time of last access. It is a decimal number, representing seconds since the Epoch. If the archive provides times with nanosecond precision, the nanoseconds are appended to the timestamp after a decimal point. TAR_MTIME Time of last modification. TAR_CTIME Time of last status change. TAR_SIZE Size of the file. TAR_UID UID of the file owner. TAR_GID GID of the file owner. Additionally, the following variables contain information about tar operation mode and the archive being processed: TAR_VERSION GNU tar version number. TAR_ARCHIVE The name of the archive tar is processing. TAR_BLOCKING_FACTOR Current blocking factor, i.e. number of 512-byte blocks in a record. TAR_VOLUME Ordinal number of the volume tar is processing (set if reading a multi-volume archive). TAR_FORMAT Format of the archive being processed. One of: gnu, oldgnu, posix, ustar, v7. TAR_SUBCOMMAND A short option (with a leading dash) describing the operation tar is executing. Handling of file attributes --atime-preserve[=METHOD] Preserve access times on dumped files, either by restoring the times after reading (METHOD=replace, this is the default) or by not setting the times in the first place (METHOD=system). --delay-directory-restore Delay setting modification times and permissions of extracted directories until the end of extraction. Use this option when extracting from an archive which has unusual member ordering. --group=NAME[:GID] Force NAME as group for added files. 
If GID is not supplied, NAME can be either a user name or numeric GID. In this case the missing part (GID or name) will be inferred from the current host's group database. When used with --group-map=FILE, affects only those files whose owner group is not listed in FILE. --group-map=FILE Read group translation map from FILE. Empty lines are ignored. Comments are introduced with # sign and extend to the end of line. Each non-empty line in FILE defines translation for a single group. It must consist of two fields, delimited by any amount of whitespace: OLDGRP NEWGRP[:NEWGID] OLDGRP is either a valid group name or a GID prefixed with +. Unless NEWGID is supplied, NEWGRP must also be either a valid group name or a +GID. Otherwise, both NEWGRP and NEWGID need not be listed in the system group database. As a result, each input file with owner group OLDGRP will be stored in archive with owner group NEWGRP and GID NEWGID. --mode=CHANGES Force symbolic mode CHANGES for added files. --mtime=DATE-OR-FILE Set mtime for added files. DATE-OR-FILE is either a date/time in almost arbitrary format, or the name of an existing file. In the latter case the mtime of that file will be used. -m, --touch Don't extract file modified time. --no-delay-directory-restore Cancel the effect of the prior --delay-directory-restore option. --no-same-owner Extract files as yourself (default for ordinary users). --no-same-permissions Apply the user's umask when extracting permissions from the archive (default for ordinary users). --numeric-owner Always use numbers for user/group names. --owner=NAME[:UID] Force NAME as owner for added files. If UID is not supplied, NAME can be either a user name or numeric UID. In this case the missing part (UID or name) will be inferred from the current host's user database. When used with --owner-map=FILE, affects only those files whose owner is not listed in FILE. --owner-map=FILE Read owner translation map from FILE. Empty lines are ignored. 
Comments are introduced with # sign and extend to the end of line. Each non-empty line in FILE defines translation for a single UID. It must consist of two fields, delimited by any amount of whitespace: OLDUSR NEWUSR[:NEWUID] OLDUSR is either a valid user name or a UID prefixed with +. Unless NEWUID is supplied, NEWUSR must also be either a valid user name or a +UID. Otherwise, both NEWUSR and NEWUID need not be listed in the system user database. As a result, each input file owned by OLDUSR will be stored in archive with owner name NEWUSR and UID NEWUID. -p, --preserve-permissions, --same-permissions Set permissions of extracted files to those recorded in the archive (default for superuser). --same-owner Try extracting files with the same ownership as exists in the archive (default for superuser). -s, --preserve-order, --same-order Tell tar that the list of file names to process is sorted in the same order as the files in the archive. --sort=ORDER When creating an archive, sort directory entries according to ORDER, which is one of none, name, or inode. The default is --sort=none, which stores archive members in the same order as returned by the operating system. Using --sort=name ensures the member ordering in the created archive is uniform and reproducible. Using --sort=inode reduces the number of disk seeks made when creating the archive and thus can considerably speed up archiving. This sorting order is supported only if the underlying system provides the necessary information. Extended file attributes --acls Enable POSIX ACLs support. --no-acls Disable POSIX ACLs support. --selinux Enable SELinux context support. --no-selinux Disable SELinux context support. --xattrs Enable extended attributes support. --no-xattrs Disable extended attributes support. --xattrs-exclude=PATTERN Specify the exclude pattern for xattr keys. PATTERN is a globbing pattern, e.g. --xattrs-exclude='user.*' to exclude all attributes from the user namespace.
--xattrs-include=PATTERN Specify the include pattern for xattr keys. PATTERN is a globbing pattern. Device selection and switching -f, --file=ARCHIVE Use archive file or device ARCHIVE. If this option is not given, tar will first examine the environment variable `TAPE'. If it is set, its value will be used as the archive name. Otherwise, tar will assume the compiled-in default. The default value can be inspected either using the --show-defaults option, or at the end of the tar --help output. An archive name that has a colon in it specifies a file or device on a remote machine. The part before the colon is taken as the machine name or IP address, and the part after it as the file or device pathname, e.g.: --file=remotehost:/dev/sr0 An optional username can be prefixed to the hostname, placing a @ sign between them. By default, the remote host is accessed via the rsh(1) command. Nowadays it is common to use ssh(1) instead. You can do so by giving the following command line option: --rsh-command=/usr/bin/ssh The remote machine should have the rmt(8) command installed. If its pathname does not match tar's default, you can inform tar about the correct pathname using the --rmt-command option. --force-local Archive file is local even if it has a colon. -F, --info-script=COMMAND, --new-volume-script=COMMAND Run COMMAND at the end of each tape (implies -M). The command can include arguments. When started, it will inherit tar's environment plus the following variables: TAR_VERSION GNU tar version number. TAR_ARCHIVE The name of the archive tar is processing. TAR_BLOCKING_FACTOR Current blocking factor, i.e. number of 512-byte blocks in a record. TAR_VOLUME Ordinal number of the volume tar is processing (set if reading a multi-volume archive). TAR_FORMAT Format of the archive being processed. One of: gnu, oldgnu, posix, ustar, v7. TAR_SUBCOMMAND A short option (with a leading dash) describing the operation tar is executing. 
TAR_FD File descriptor which can be used to communicate the new volume name to tar. If the info script fails, tar exits; otherwise, it begins writing the next volume. -L, --tape-length=N Change tape after writing Nx1024 bytes. If N is followed by a size suffix (see the subsection Size suffixes below), the suffix specifies the multiplicative factor to be used instead of 1024. This option implies -M. -M, --multi-volume Create/list/extract multi-volume archive. --rmt-command=COMMAND Use COMMAND instead of rmt when accessing remote archives. See the description of the -f option, above. --rsh-command=COMMAND Use COMMAND instead of rsh when accessing remote archives. See the description of the -f option, above. --volno-file=FILE When this option is used in conjunction with --multi-volume, tar will keep track, in FILE, of which volume of a multi-volume archive it is working on. Device blocking -b, --blocking-factor=BLOCKS Set record size to BLOCKSx512 bytes. -B, --read-full-records When listing or extracting, accept incomplete input records after end-of-file marker. -i, --ignore-zeros Ignore zeroed blocks in archive. Normally two consecutive 512-byte blocks filled with zeroes mean EOF and tar stops reading after encountering them. This option instructs it to read further and is useful when reading archives created with the -A option. --record-size=NUMBER Set record size. NUMBER is the number of bytes per record. It must be a multiple of 512. It can be suffixed with a size suffix, e.g. --record-size=10K, for 10 Kilobytes. See the subsection Size suffixes, for a list of valid suffixes. Archive format selection -H, --format=FORMAT Create archive of the given format. Valid formats are: gnu GNU tar 1.13.x format oldgnu GNU format as per tar <= 1.12. pax, posix POSIX 1003.1-2001 (pax) format. ustar POSIX 1003.1-1988 (ustar) format. v7 Old V7 tar format. --old-archive, --portability Same as --format=v7. --pax-option=keyword[[:]=value][,keyword[[:]=value]]...
Control pax keywords when creating PAX archives (-H pax). This option is equivalent to the -o option of the pax(1) utility. --posix Same as --format=posix. -V, --label=TEXT Create archive with volume name TEXT. If listing or extracting, use TEXT as a globbing pattern for volume name. Compression options -a, --auto-compress Use archive suffix to determine the compression program. -I, --use-compress-program=COMMAND Filter data through COMMAND. It must accept the -d option, for decompression. The argument can contain command line options. -j, --bzip2 Filter the archive through bzip2(1). -J, --xz Filter the archive through xz(1). --lzip Filter the archive through lzip(1). --lzma Filter the archive through lzma(1). --lzop Filter the archive through lzop(1). --no-auto-compress Do not use archive suffix to determine the compression program. -z, --gzip, --gunzip, --ungzip Filter the archive through gzip(1). -Z, --compress, --uncompress Filter the archive through compress(1). --zstd Filter the archive through zstd(1). Local file selection --add-file=FILE Add FILE to the archive (useful if its name starts with a dash). --backup[=CONTROL] Backup before removal. The CONTROL argument, if supplied, controls the backup policy. Its valid values are: none, off Never make backups. t, numbered Make numbered backups. nil, existing Make numbered backups if numbered backups exist, simple backups otherwise. never, simple Always make simple backups If CONTROL is not given, the value is taken from the VERSION_CONTROL environment variable. If it is not set, existing is assumed. -C, --directory=DIR Change to DIR before performing any operations. This option is order-sensitive, i.e. it affects all options that follow. --exclude=PATTERN Exclude files matching PATTERN, a glob(3)-style wildcard pattern. --exclude-backups Exclude backup and lock files. --exclude-caches Exclude contents of directories containing file CACHEDIR.TAG, except for the tag file itself. 
--exclude-caches-all Exclude directories containing file CACHEDIR.TAG and the file itself. --exclude-caches-under Exclude everything under directories containing CACHEDIR.TAG --exclude-ignore=FILE Before dumping a directory, see if it contains FILE. If so, read exclusion patterns from this file. The patterns affect only the directory itself. --exclude-ignore-recursive=FILE Same as --exclude-ignore, except that patterns from FILE affect both the directory and all its subdirectories. --exclude-tag=FILE Exclude contents of directories containing FILE, except for FILE itself. --exclude-tag-all=FILE Exclude directories containing FILE. --exclude-tag-under=FILE Exclude everything under directories containing FILE. --exclude-vcs Exclude version control system directories. --exclude-vcs-ignores Exclude files that match patterns read from VCS-specific ignore files. Supported files are: .cvsignore, .gitignore, .bzrignore, and .hgignore. -h, --dereference Follow symlinks; archive and dump the files they point to. --hard-dereference Follow hard links; archive and dump the files they refer to. -K, --starting-file=MEMBER Begin at the given member in the archive. --newer-mtime=DATE Work on files whose data changed after the DATE. If DATE starts with / or . it is taken to be a file name; the mtime of that file is used as the date. --no-null Disable the effect of the previous --null option. --no-recursion Avoid descending automatically in directories. --no-unquote Do not unquote input file or member names. --no-verbatim-files-from Treat each line read from a file list as if it were supplied in the command line. I.e., leading and trailing whitespace is removed and, if the resulting string begins with a dash, it is treated as tar command line option. This is the default behavior. The --no-verbatim-files-from option is provided as a way to restore it after --verbatim-files-from option. 
This option is positional: it affects all --files-from options that occur after it in the command line, until a --verbatim-files-from option or the end of the command line, whichever occurs first. It is implied by the --no-null option. --null Instruct subsequent -T options to read null-terminated names verbatim (disables special handling of names that start with a dash). See also --verbatim-files-from. -N, --newer=DATE, --after-date=DATE Only store files newer than DATE. If DATE starts with / or . it is taken to be a file name; the mtime of that file is used as the date. --one-file-system Stay in local file system when creating archive. -P, --absolute-names Don't strip leading slashes from file names when creating archives. --recursion Recurse into directories (default). --suffix=STRING Backup before removal, override usual suffix. Default suffix is ~, unless overridden by environment variable SIMPLE_BACKUP_SUFFIX. -T, --files-from=FILE Get names to extract or create from FILE. Unless specified otherwise, the FILE must contain a list of names separated by ASCII LF (i.e. one name per line). The names read are handled the same way as command line arguments. They undergo quote removal and word splitting, and any string that starts with a - is handled as a tar command line option. If this behavior is undesirable, it can be turned off using the --verbatim-files-from option. The --null option instructs tar that the names in FILE are separated by ASCII NUL character, instead of LF. It is useful if the list is generated by the find(1) -print0 predicate. --unquote Unquote file or member names (default). --verbatim-files-from Treat each line obtained from a file list as a file name, even if it starts with a dash. File lists are supplied with the --files-from (-T) option. The default behavior is to handle names supplied in file lists as if they were typed in the command line, i.e. any names starting with a dash are treated as tar options. The --verbatim-files-from option disables this behavior.
This option affects all --files-from options that occur after it in the command line. Its effect is reverted by the --no-verbatim-files-from option. This option is implied by the --null option. See also --add-file. -X, --exclude-from=FILE Exclude files matching patterns listed in FILE. File name transformations --strip-components=NUMBER Strip NUMBER leading components from file names on extraction. --transform=EXPRESSION, --xform=EXPRESSION Use sed replace EXPRESSION to transform file names. File name matching options These options affect both exclude and include patterns. --anchored Patterns match file name start. --ignore-case Ignore case. --no-anchored Patterns match after any / (default for exclusion). --no-ignore-case Case sensitive matching (default). --no-wildcards Verbatim string matching. --no-wildcards-match-slash Wildcards do not match /. --wildcards Use wildcards (default for exclusion). --wildcards-match-slash Wildcards match / (default for exclusion). Informative output --checkpoint[=N] Display progress messages every Nth record (default 10). --checkpoint-action=ACTION Run ACTION on each checkpoint. --clamp-mtime Only set time when the file is more recent than what was given with --mtime. --full-time Print file time to its full resolution. --index-file=FILE Send verbose output to FILE. -l, --check-links Print a message if not all links are dumped. --no-quote-chars=STRING Disable quoting for characters from STRING. --quote-chars=STRING Additionally quote characters from STRING. --quoting-style=STYLE Set quoting style for file and member names. Valid values for STYLE are literal, shell, shell-always, c, c-maybe, escape, locale, clocale. -R, --block-number Show block number within archive with each message. --show-omitted-dirs When listing or extracting, list each directory that does not match search criteria. --show-transformed-names, --show-stored-names Show file or archive names after transformation by --strip and --transform options. 
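The two file-name transformations can be compared side by side (a sketch, assuming GNU tar; the release-1.0 prefix and file names are made up):

```shell
# Rename members on extraction with --transform, or drop leading
# path components with --strip-components.
d=$(mktemp -d)
cd "$d"
mkdir -p project/src
echo 'int x;' > project/src/main.c
tar -cf a.tar project

mkdir out
tar -xf a.tar -C out --transform='s,^project,release-1.0,' \
    --show-transformed-names -v   # -v prints the names after transformation

mkdir flat
tar -xf a.tar -C flat --strip-components=1   # extracts src/main.c
```

Both options also apply when creating or listing archives; the sed-style expression given to --transform uses the same s/old/new/ syntax as sed(1), with any delimiter character.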
--totals[=SIGNAL] Print total bytes after processing the archive. If SIGNAL is given, print total bytes when this signal is delivered. Allowed signals are: SIGHUP, SIGQUIT, SIGINT, SIGUSR1, and SIGUSR2. The SIG prefix can be omitted. --utc Print file modification times in UTC. -v, --verbose Verbosely list files processed. Each instance of this option on the command line increases the verbosity level by one. The maximum verbosity level is 3. For a detailed discussion of how various verbosity levels affect tar's output, please refer to GNU Tar Manual, subsection 2.5.2 "The '--verbose' Option". --warning=KEYWORD Enable or disable warning messages identified by KEYWORD. The messages are suppressed if KEYWORD is prefixed with no- and enabled otherwise. Multiple --warning options accumulate. Keywords controlling general tar operation: all Enable all warning messages. This is the default. none Disable all warning messages. filename-with-nuls "%s: file name read contains nul character" alone-zero-block "A lone zero block at %s" Keywords applicable for tar --create: cachedir "%s: contains a cache directory tag %s; %s" file-shrank "%s: File shrank by %s bytes; padding with zeros" xdev "%s: file is on a different filesystem; not dumped" file-ignored "%s: Unknown file type; file ignored" "%s: socket ignored" "%s: door ignored" file-unchanged "%s: file is unchanged; not dumped" ignore-archive "%s: archive cannot contain itself; not dumped" file-removed "%s: File removed before we read it" file-changed "%s: file changed as we read it" failed-read Suppresses warnings about unreadable files or directories. This keyword applies only if used together with the --ignore-failed-read option. 
Keywords applicable for tar --extract: existing-file "%s: skipping existing file" timestamp "%s: implausibly old time stamp %s" "%s: time stamp %s is %s s in the future" contiguous-cast "Extracting contiguous files as regular files" symlink-cast "Attempting extraction of symbolic links as hard links" unknown-cast "%s: Unknown file type '%c', extracted as normal file" ignore-newer "Current %s is newer or same age" unknown-keyword "Ignoring unknown extended header keyword '%s'" decompress-program Controls verbose description of failures occurring when trying to run alternative decompressor programs. This warning is disabled by default (unless --verbose is used). A common example of what you can get when using this warning is: $ tar --warning=decompress-program -x -f archive.Z tar (child): cannot run compress: No such file or directory tar (child): trying gzip This means that tar first tried to decompress archive.Z using compress, and, when that failed, switched to gzip. record-size "Record size = %lu blocks" Keywords controlling incremental extraction: rename-directory "%s: Directory has been renamed from %s" "%s: Directory has been renamed" new-directory "%s: Directory is new" xdev "%s: directory is on a different device: not purging" bad-dumpdir "Malformed dumpdir: 'X' never used" -w, --interactive, --confirmation Ask for confirmation for every action. Compatibility options -o When creating, same as --old-archive. When extracting, same as --no-same-owner. Size suffixes

   Suffix   Units       Byte Equivalent
   b        Blocks      SIZE x 512
   B        Kilobytes   SIZE x 1024
   c        Bytes       SIZE
   G        Gigabytes   SIZE x 1024^3
   K        Kilobytes   SIZE x 1024
   k        Kilobytes   SIZE x 1024
   M        Megabytes   SIZE x 1024^2
   P        Petabytes   SIZE x 1024^5
   T        Terabytes   SIZE x 1024^4
   w        Words       SIZE x 2

RETURN VALUE top Tar's exit code indicates whether it was able to successfully perform the requested operation, and if not, what kind of error occurred. 0 Successful termination. 1 Some files differ.
If tar was invoked with the --compare (--diff, -d) command line option, this means that some files in the archive differ from their disk counterparts. If tar was given one of the --create, --append or --update options, this exit code means that some files were changed while being archived and so the resulting archive does not contain the exact copy of the file set. 2 Fatal error. This means that some fatal, unrecoverable error occurred. If a subprocess that had been invoked by tar exited with a nonzero exit code, tar itself exits with that code as well. This can happen, for example, if a compression option (e.g. -z) was used and the external compressor program failed. Another example is rmt failure during backup to a remote device. SEE ALSO top bzip2(1), compress(1), gzip(1), lzma(1), lzop(1), rmt(8), symlink(7), xz(1), zstd(1). Complete tar manual: run info tar or use emacs(1) info mode to read it. Online copies of GNU tar documentation in various formats can be found at: https://www.gnu.org/software/tar/manual BUG REPORTS top Report bugs to <bug-tar@gnu.org>. COPYRIGHT top Copyright 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. COLOPHON top This page is part of the tar (an archiver program) project. Information about the project can be found at http://savannah.gnu.org/projects/tar/. If you have a bug report for this manual page, see http://savannah.gnu.org/bugs/?group=tar. This page was obtained from the project's upstream Git repository git://git.savannah.gnu.org/tar.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-09-12.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org TAR July 11, 2022 TAR(1) Pages that refer to this page: attr(1), dpkg-deb(1), dpkg-source(1), machinectl(1), rsync(1), st(4), suffixes(7), symlink(7), cupsd-helper(8) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface.
# tar

> Archiving utility.
> Often combined with a compression method, such as `gzip` or `bzip2`.
> More information: <https://www.gnu.org/software/tar>.

- [c]reate an archive and write it to a [f]ile:

`tar cf {{path/to/target.tar}} {{path/to/file1 path/to/file2 ...}}`

- [c]reate a g[z]ipped archive and write it to a [f]ile:

`tar czf {{path/to/target.tar.gz}} {{path/to/file1 path/to/file2 ...}}`

- [c]reate a g[z]ipped archive from a directory using relative paths:

`tar czf {{path/to/target.tar.gz}} --directory={{path/to/directory}} .`

- E[x]tract a (compressed) archive [f]ile into the current directory [v]erbosely:

`tar xvf {{path/to/source.tar[.gz|.bz2|.xz]}}`

- E[x]tract a (compressed) archive [f]ile into the target directory:

`tar xf {{path/to/source.tar[.gz|.bz2|.xz]}} --directory={{path/to/directory}}`

- [c]reate a compressed archive and write it to a [f]ile, using the file extension to [a]utomatically determine the compression program:

`tar caf {{path/to/target.tar.xz}} {{path/to/file1 path/to/file2 ...}}`

- Lis[t] the contents of a tar [f]ile [v]erbosely:

`tar tvf {{path/to/source.tar}}`

- E[x]tract files matching a pattern from an archive [f]ile:

`tar xf {{path/to/source.tar}} --wildcards "{{*.html}}"`
taskset
taskset(1) - Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | USAGE | PERMISSIONS | RETURN VALUE | AUTHORS | COPYRIGHT | SEE ALSO | REPORTING BUGS | AVAILABILITY TASKSET(1) User Commands TASKSET(1) NAME top taskset - set or retrieve a process's CPU affinity SYNOPSIS top taskset [options] mask command [argument...] taskset [options] -p [mask] pid DESCRIPTION top The taskset command is used to set or retrieve the CPU affinity of a running process given its pid, or to launch a new command with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. The affinity of some processes like kernel per-CPU threads cannot be set. The CPU affinity is represented as a bitmask, with the lowest order bit corresponding to the first logical CPU and the highest order bit corresponding to the last logical CPU. Not all CPUs may exist on a given system but a mask may specify more CPUs than are present. A retrieved mask will reflect only the bits that correspond to CPUs physically on the system. If an invalid mask is given (i.e., one that corresponds to no valid CPUs on the current system) an error is returned. The masks may be specified in hexadecimal (with or without a leading "0x"), or as a CPU list with the --cpu-list option. For example, 0x00000001 is processor #0, 0x00000003 is processors #0 and #1, FFFFFFFF is processors #0 through #31, 0x32 is processors #1, #4, and #5, --cpu-list 0-2,6 is processors #0, #1, #2, and #6.
--cpu-list 0-10:2 is processors #0, #2, #4, #6, #8 and #10. The suffix ":N" specifies stride in the range, for example 0-10:3 is interpreted as the list 0,3,6,9. When taskset returns, it is guaranteed that the given program has been scheduled to a legal CPU. OPTIONS -a, --all-tasks Set or retrieve the CPU affinity of all the tasks (threads) for a given PID. -c, --cpu-list Interpret mask as numerical list of processors instead of a bitmask. Numbers are separated by commas and may include ranges. For example: 0,5,8-11. -p, --pid Operate on an existing PID and do not launch a new task. -h, --help Display help text and exit. -V, --version Print version and exit. USAGE The default behavior is to run a new command with a given affinity mask: taskset mask command [arguments] You can also retrieve the CPU affinity of an existing task: taskset -p pid Or set it: taskset -p mask pid When a cpu-list is specified for an existing process, the -p and -c options must be grouped together: taskset -pc cpu-list pid The --cpu-list form is applicable only for launching new commands: taskset --cpu-list cpu-list command PERMISSIONS A user can change the CPU affinity of a process belonging to the same user. A user must possess CAP_SYS_NICE to change the CPU affinity of a process belonging to another user. A user can retrieve the affinity mask of any process. RETURN VALUE taskset returns 0 in its affinity-getting mode as long as the provided PID exists. taskset returns 0 in its affinity-setting mode as long as the underlying sched_setaffinity(2) system call does. The success of the command does not guarantee that the specified thread has actually migrated to the indicated CPU(s), but only that the thread will not migrate to a CPU outside the new affinity mask. 
For example, the affinity of the kernel thread kswapd can be set, but the thread may not immediately migrate and is not guaranteed to ever do so:

$ ps ax -o comm,psr,pid | grep kswapd
kswapd0 4 82
$ sudo taskset -p 1 82
pid 82's current affinity mask: 1
pid 82's new affinity mask: 1
$ echo $?
0
$ ps ax -o comm,psr,pid | grep kswapd
kswapd0 4 82
$ taskset -p 82
pid 82's current affinity mask: 1

In contrast, when the user specifies an illegal affinity, taskset will print an error and return 1:

$ ps ax -o comm,psr,pid | grep ksoftirqd/0
ksoftirqd/0 0 14
$ sudo taskset -p 1 14
pid 14's current affinity mask: 1
taskset: failed to set pid 14's affinity: Invalid argument
$ echo $?
1

AUTHORS Written by Robert M. Love. COPYRIGHT Copyright 2004 Robert M. Love. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. SEE ALSO chrt(1), nice(1), renice(1), sched_getaffinity(2), sched_setaffinity(2) See sched(7) for a description of the Linux scheduling scheme. REPORTING BUGS For bug reports, use the issue tracker at https://github.com/util-linux/util-linux/issues. AVAILABILITY The taskset command is part of the util-linux package which can be downloaded from Linux Kernel Archive <https://www.kernel.org/pub/linux/utils/util-linux/>. This page is part of the util-linux (a random collection of Linux utilities) project. Information about the project can be found at https://www.kernel.org/pub/linux/utils/util-linux/. If you have a bug report for this manual page, send it to util-linux@vger.kernel.org. This page was obtained from the project's upstream Git repository git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-14.) 
util-linux 2.39.594-1e0ad 2023-07-19 TASKSET(1) Pages that refer to this page: chrt(1), uclampset(1), sched_setaffinity(2), cpuset(7), sched(7), migratepages(8)
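The hexadecimal mask and --cpu-list forms described in the DESCRIPTION above are two spellings of the same bitmask. A minimal Python sketch of the correspondence (the helper names are ours, for illustration; they are not part of taskset):

```python
def cpus_to_mask(cpus):
    """Affinity bitmask with bit N set for each logical CPU N."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

def mask_to_cpus(mask):
    """Inverse: list the CPU numbers whose bits are set in the mask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# The examples from the DESCRIPTION above:
assert cpus_to_mask([0]) == 0x00000001           # processor #0
assert cpus_to_mask([0, 1]) == 0x00000003        # processors #0 and #1
assert cpus_to_mask(range(32)) == 0xFFFFFFFF     # processors #0 through #31
assert cpus_to_mask([1, 4, 5]) == 0x32           # processors #1, #4, and #5
assert mask_to_cpus(0x32) == [1, 4, 5]
```

So, for instance, `taskset 0x32 command` and `taskset --cpu-list 1,4,5 command` pin the command to the same set of CPUs.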
# taskset

> Get or set a process' CPU affinity or start a new process with a defined CPU affinity.
> More information: <https://manned.org/taskset>.

- Get a running process' CPU affinity by PID:

`taskset --pid --cpu-list {{pid}}`

- Set a running process' CPU affinity by PID:

`taskset --pid --cpu-list {{cpu_id}} {{pid}}`

- Start a new process with affinity for a single CPU:

`taskset --cpu-list {{cpu_id}} {{command}}`

- Start a new process with affinity for multiple non-sequential CPUs:

`taskset --cpu-list {{cpu_id_1}},{{cpu_id_2}},{{cpu_id_3}} {{command}}`

- Start a new process with affinity for CPUs 1 through 4:

`taskset --cpu-list {{cpu_id_1}}-{{cpu_id_4}} {{command}}`
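For scripting, the same sched_getaffinity(2)/sched_setaffinity(2) calls that taskset wraps are exposed by Python's os module; a minimal sketch (pid 0 means the calling process):

```python
import os

# Equivalent of `taskset --pid --cpu-list <pid>`: read our own affinity.
current = os.sched_getaffinity(0)
print("current affinity:", sorted(current))

# Equivalent of `taskset --pid --cpu-list <cpu_id> <pid>`: pin ourselves
# to one CPU chosen from the current set, then restore the original mask.
one_cpu = min(current)
os.sched_setaffinity(0, {one_cpu})
assert os.sched_getaffinity(0) == {one_cpu}
os.sched_setaffinity(0, current)
```

As the PERMISSIONS section of the manual notes, changing another user's process this way requires CAP_SYS_NICE; changing your own, as here, does not.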
tbl
tbl(1) - Linux manual page tbl(1) General Commands Manual tbl(1) Name tbl - prepare tables for groff documents Synopsis tbl [-C] [file ...] tbl --help tbl -v tbl --version Description The GNU implementation of tbl is part of the groff(1) document formatting system. tbl is a troff(1) preprocessor that translates descriptions of tables embedded in roff(7) input files into the language understood by troff. It copies the contents of each file to the standard output stream, except that lines between .TS and .TE are interpreted as table descriptions. While GNU tbl's input syntax is highly compatible with AT&T tbl, the output GNU tbl produces cannot be processed by AT&T troff; GNU troff (or a troff implementing any GNU extensions employed) must be used. Normally, tbl is not executed directly by the user, but invoked by specifying the -t option to groff(1). If no file operands are present, or if file is -, tbl reads the standard input stream. Overview tbl expects to find table descriptions between input lines that begin with .TS (table start) and .TE (table end). Each such table region encloses one or more table descriptions. Within a table region, table descriptions beyond the first must each be preceded by an input line beginning with .T&. This mechanism does not start a new table region; all table descriptions are treated as part of their .TS/.TE enclosure, even if they are boxed or have column headings that repeat on subsequent pages (see below). (Experienced roff users should observe that tbl is not a roff language interpreter: the default control character must be used, and no spaces or tabs are permitted between the control character and the macro name. These tbl input tokens remain as-is in the output, where they become ordinary macro calls. 
Macro packages often define TS, T&, and TE macros to handle issues of table placement on the page. tbl produces groff code to define these macros as empty if their definitions do not exist when the formatter encounters a table region.) Each table region may begin with region options, and must contain one or more table definitions; each table definition contains a format specification followed by one or more input lines (rows) of entries. These entries comprise the table data. Region options The line immediately following the .TS token may specify region options, keywords that influence the interpretation or rendering of the region as a whole or all table entries within it indiscriminately. They must be separated by commas, spaces, or tabs. Those that require a parenthesized argument permit spaces and tabs between the option's name and the opening parenthesis. Options accumulate and cannot be unset within a region once declared; if an option that takes a parameter is repeated, the last occurrence controls. If present, the set of region options must be terminated with a semicolon (;). Any of the allbox, box, doublebox, frame, and doubleframe region options makes a table boxed for the purpose of later discussion. allbox Enclose each table entry in a box; implies box. box Enclose the entire table region in a box. GNU tbl recognizes frame as a synonym. center Center the table region with respect to the current indentation and line length; the default is to left-align it. GNU tbl recognizes centre as a synonym. decimalpoint(c) Recognize character c as the decimal separator in columns using the N (numeric) classifier (see subsection Column classifiers below). This is a GNU extension. delim(xy) Recognize characters x and y as start and end delimiters, respectively, for eqn(1) input, and ignore input between them. x and y need not be distinct. doublebox Enclose the entire table region in a double box; implies box. GNU tbl recognizes doubleframe as a synonym. 
expand Spread the table horizontally to fill the available space (line length minus indentation) by increasing column separation. Ordinarily, a table is made only as wide as necessary to accommodate the widths of its entries and its column separations (whether specified or default). When expand applies to a table that exceeds the available horizontal space, column separation is reduced as far as necessary (even to zero). tbl produces groff input that issues a diagnostic if such compression occurs. The column modifier x (see below) overrides this option. linesize(n) Draw lines or rules (e.g., from box) with a thickness of n points. The default is the current type size when the region begins. This option is ignored on terminal devices. nokeep Don't use roff diversions to manage page breaks. Normally, tbl employs them to avoid breaking a page within a table row. This usage can sometimes interact badly with macro packages' own use of diversions, when footnotes, for example, are employed. This is a GNU extension. nospaces Ignore leading and trailing spaces in table entries. This is a GNU extension. nowarn Suppress diagnostic messages produced at document formatting time when the line or page lengths are inadequate to contain a table row. This is a GNU extension. tab(c) Use the character c instead of a tab to separate entries in a row of table data. Table format specification The table format specification is mandatory: it determines the number of columns in the table and directs how the entries within it are to be typeset. The format specification is a series of column descriptors. Each descriptor encodes a classifier followed by zero or more modifiers. Classifiers are letters (recognized case-insensitively) or punctuation symbols; modifiers consist of or begin with letters or numerals. Spaces, tabs, newlines, and commas separate descriptors. Newlines and commas are special; they apply the descriptors following them to a subsequent row of the table. 
(This enables column headings to be centered or emboldened while the table entries for the data are not, for instance.) We term the resulting group of column descriptors a row definition. Within a row definition, separation between column descriptors (by spaces or tabs) is often optional; only some modifiers, described below, make separation necessary. Each column descriptor begins with a mandatory classifier, a character that selects from one of several arrangements. Some determine the positioning of table entries within a rectangular cell: centered, left-aligned, numeric (aligned to a configurable decimal separator), and so on. Others perform special operations like drawing lines or spanning entries from adjacent cells in the table. Except for |, any classifier can be followed by one or more modifiers; some of these accept an argument, which in GNU tbl can be parenthesized. Modifiers select fonts, set the type size, and perform other tasks described below. The format specification can occupy multiple input lines, but must conclude with a dot . followed by a newline. Each row definition is applied in turn to one row of the table. The last row definition is applied to rows of table data in excess of the row definitions. For clarity in this document's examples, we shall write classifiers in uppercase and modifiers in lowercase. Thus, CbCb,LR. defines two rows of two columns. The first row's entries are centered and boldfaced; the second and any further rows' first and second columns are left- and right-aligned, respectively. If more rows of entries are added to the table data, they reuse the row definition LR. The row definition with the most column descriptors determines the number of columns in the table; any row definition with fewer is implicitly extended on the right-hand side with L classifiers as many times as necessary to make the table rectangular. Column classifiers The L, R, and C classifiers are the easiest to understand and use. 
A, a Center longest entry in this column, left-align remaining entries in the column with respect to the centered entry, then indent all entries by one en. Such alphabetic entries (hence the name of the classifier) can be used in the same column as L-classified entries, as in LL,AR.. The A entries are often termed sub-columns due to their indentation. C, c Center entry within the column. L, l Left-align entry within the column. N, n Numerically align entry in the column. tbl aligns columns of numbers vertically at the units place. If multiple decimal separators are adjacent to a digit, it uses the rightmost one for vertical alignment. If there is no decimal separator, the rightmost digit is used for vertical alignment; otherwise, tbl centers the entry within the column. The roff dummy character \& in an entry marks the glyph preceding it (if any) as the units place; if multiple instances occur in the data, the leftmost is used for alignment. If N-classified entries share a column with L or R entries, tbl centers the widest N entry with respect to the widest L or R entry, preserving the alignment of N entries with respect to each other. Decimal separators in eqn equations within N-classified columns can conflict with tbl's use of them for alignment. Specify the delim region option to make tbl ignore the data within eqn delimiters. R, r Right-align entry within the column. S, s Span previous entry on the left into this column. ^ Span entry in the same column from the previous row into this row. _, - Replace table entry with a horizontal rule. An empty table entry is expected to correspond to this classifier; if data are found there, tbl issues a diagnostic message. If the entire row definition consists of these classifiers, it is treated as a _ occupying a row of table entries, and no corresponding data are expected. = Replace table entry with a double horizontal rule. 
An empty table entry is expected to correspond to this classifier; if data are found there, tbl issues a diagnostic message. If the entire row definition consists of these classifiers, it is treated as a = occupying a row of table entries, and no corresponding data are expected. | Place a vertical rule (line) on the corresponding row of the table (if two of these are adjacent, a double vertical rule). This classifier does not contribute to the column count and no table entries correspond to it. A | to the left of the first column descriptor or to the right of the last one produces a vertical rule at the edge of the table; these are redundant (and ignored) in boxed tables. To change the table format within a tbl region, use the .T& token at the start of a line. Follow it with a format specification and table data, but not region options. The quantity of columns in a format thus introduced cannot increase relative to the previous format; in that case, you must end the table region and start another. If that will not serve because the region uses box options or the columns align in an undesirable manner, you must design the initial table format specification to include the maximum quantity of columns required, and use the S horizontal spanning classifier where necessary to achieve the desired columnar alignment. Spanning horizontally in the first column or vertically on the first row is an error. tbl does not support non-rectangular span areas. Column modifiers Any number of modifiers can follow a column classifier. Modifier arguments, where accepted, are case-sensitive. If a given modifier is applied to a classifier more than once, or if conflicting modifiers are applied, only the last occurrence has effect. The modifier x is mutually exclusive with e and w, but e is not mutually exclusive with w; if these are used in combination, x unsets both e and w, while either e or w overrides x. b, B Typeset entry in boldface, abbreviating f(B). 
d, D Align a vertically spanned table entry to the bottom (down), instead of the center, of its range. This is a GNU extension. e, E Equalize the widths of columns with this modifier. The column with the largest width controls. This modifier sets the default line length used in a text block. f, F Select the typeface for the table entry. This modifier must be followed by a font or style name (one or two characters not starting with a digit), font mounting position (a single digit), or a name or mounting position of any length in parentheses. The last form is a GNU extension. (The parameter corresponds to that accepted by the troff ft request.) A one-character argument not in parentheses must be separated by one or more spaces or tabs from what follows. i, I Typeset entry in an oblique or italic face, abbreviating f(I). m, M Call a groff macro before typesetting a text block (see subsection Text blocks below). This is a GNU extension. This modifier must be followed by a macro name of one or two characters or a name of any length in parentheses. A one-character macro name not in parentheses must be separated by one or more spaces or tabs from what follows. The named macro must be defined before the table region containing this column modifier is encountered. The macro should contain only simple groff requests to change text formatting, like adjustment or hyphenation. The macro is called after the column modifiers b, f, i, p, and v take effect; it can thus override other column modifiers. p, P Set the type size for the table entry. This modifier must be followed by an integer n with an optional leading sign. If unsigned, the type size is set to n scaled points. Otherwise, the type size is incremented or decremented per the sign by n scaled points. The use of a signed multi-digit number is a GNU extension. (The parameter corresponds to that accepted by the troff ps request.) 
If a type size modifier is followed by a column separation modifier (see below), they must be separated by at least one space or tab. t, T Align a vertically spanned table entry to the top, instead of the center, of its range. u, U Move the column up one half-line, staggering the rows. This is a Research Tenth Edition Unix extension. v, V Set the vertical spacing to be used in a text block. This modifier must be followed by an integer n with an optional leading sign. If unsigned, the vertical spacing is set to n points. Otherwise, the vertical spacing is incremented or decremented per the sign by n points. The use of a signed multi-digit number is a GNU extension. (This parameter corresponds to that accepted by the troff vs request.) If a vertical spacing modifier is followed by a column separation modifier (see below), they must be separated by at least one space or tab. w, W Set the column's minimum width. This modifier must be followed by a number, which is either a unitless integer, or a roff horizontal measurement in parentheses. Parentheses are required if the width is to be followed immediately by an explicit column separation (alternatively, follow the width with one or more spaces or tabs). If no unit is specified, ens are assumed. This modifier sets the default line length used in a text block. x, X Expand the column. After computing the column widths, distribute any remaining line length evenly over all columns bearing this modifier. Applying the x modifier to more than one column is a GNU extension. This modifier sets the default line length used in a text block. z, Z Ignore the table entries corresponding to this column for width calculation purposes; that is, compute the column's width using only the information in its descriptor. n A numeric suffix on a column descriptor sets the separation distance (in ens) from the succeeding column; the default separation is 3n. 
This separation is proportionally multiplied if the expand region option is in effect; in the case of tables wider than the output line length, this separation might be zero. A negative separation cannot be specified. A separation amount after the last column in a row is nonsensical and provokes a diagnostic from tbl. Table data The table data come after the format specification. Each input line corresponds to a table row, except that a backslash at the end of a line of table data continues an entry on the next input line. (Text blocks, discussed below, also spread table entries across multiple input lines.) Table entries within a row are separated in the input by a tab character by default; see the tab region option above. Excess entries in a row of table data (those that have no corresponding column descriptor, not even an implicit one arising from rectangularization of the table) are discarded with a diagnostic message. roff control lines are accepted between rows of table data and within text blocks. If you wish to visibly mark an empty table entry in the document source, populate it with the \& roff dummy character. The table data are interrupted by a line consisting of the .T& input token, and conclude with the line .TE. Ordinarily, a table entry is typeset rigidly. It is not filled, broken, hyphenated, adjusted, or populated with additional inter-sentence space. tbl instructs the formatter to measure each table entry as it occurs in the input, updating the width required by its corresponding column. If the z modifier applies to the column, this measurement is ignored; if w applies and its argument is larger than this width, that argument is used instead. In contrast to conventional roff input (within a paragraph, say), changes to text formatting, such as font selection or vertical spacing, do not persist between entries. Several forms of table entry are interpreted specially. 
If a table row contains only an underscore or equals sign (_ or =), a single or double horizontal rule (line), respectively, is drawn across the table at that point. A table entry containing only _ or = on an otherwise populated row is replaced by a single or double horizontal rule, respectively, joining its neighbors. Prefixing a lone underscore or equals sign with a backslash also has meaning. If a table entry consists only of \_ or \= on an otherwise populated row, it is replaced by a single or double horizontal rule, respectively, that does not (quite) join its neighbors. A table entry consisting of \Rx, where x is any roff ordinary or special character, is replaced by enough repetitions of the glyph corresponding to x to fill the column, albeit without joining its neighbors. On any row but the first, a table entry of \^ causes the entry above it to span down into the current one. On occasion, these special tokens may be required as literal table data. To use either _ or = literally and alone in an entry, prefix or suffix it with the roff dummy character \&. To express \_, \=, or \R, use a roff escape sequence to interpolate the backslash (\e or \[rs]). A reliable way to emplace the \^ glyph sequence within a table entry is to use a pair of groff special character escape sequences (\[rs]\[ha]). Rows of table entries can be interleaved with groff control lines; these do not count as table data. On such lines the default control character (.) must be used (and not changed); the no-break control character is not recognized. To start the first table entry in a row with a dot, precede it with the roff dummy character \&. Text blocks An ordinary table entry's contents can make a column, and therefore the table, excessively wide; the table then exceeds the line length of the page, and becomes ugly or is exposed to truncation by the output device. 
When a table entry requires more conventional typesetting, breaking across more than one output line (and thereby increasing the height of its row), it can be placed within a text block. tbl interprets a table entry beginning with T{ at the end of an input line not as table data, but as a token starting a text block. Similarly, T} at the start of an input line ends a text block; it must also end the table entry. Text block tokens can share an input line with other table data (preceding T{ and following T}). Input lines between these tokens are formatted in a diversion by troff. Text blocks cannot be nested. Multiple text blocks can occur in a table row. Text blocks are formatted as was the text prior to the table, modified by applicable column descriptors. Specifically, the classifiers A, C, L, N, R, and S determine a text block's alignment within its cell, but not its adjustment. Add na or ad requests to the beginning of a text block to alter its adjustment distinctly from other text in the document. As with other table entries, when a text block ends, any alterations to formatting parameters are discarded. They do not affect subsequent table entries, not even other text blocks. If w or x modifiers are not specified for all columns of a text block's span, the default length of the text block (more precisely, the line length used to process the text block's diversion) is computed as LC/(N+1), where L is the current line length, C the number of columns spanned by the text block, and N the number of columns in the table. If necessary, you can also control a text block's width by including an ll (line length) request in it prior to any text to be formatted. Because a diversion is used to format the text block, its height and width are subsequently available in the registers dn and dl, respectively. 
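The default text-block width rule above is simple arithmetic; a quick check of the L*C/(N+1) formula (the function name is ours, for illustration):

```python
# Default line length used to format a text block, per the rule above:
# L is the current line length, C the number of columns the block spans,
# and N the total number of columns in the table.

def text_block_line_length(line_length, spanned_columns, table_columns):
    return line_length * spanned_columns / (table_columns + 1)

# A block spanning 2 columns of a 4-column table on a 6.5-inch line
# is formatted at 6.5 * 2 / 5 = 2.6 inches:
assert text_block_line_length(6.5, 2, 4) == 2.6
# Spanning all 3 columns of a 3-column table yields 3/4 of the line:
assert text_block_line_length(6.5, 3, 3) == 4.875
```

This is only the fallback: as the text notes, a w or x modifier on every spanned column, or an explicit ll request inside the block, takes precedence.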
roff interface The register TW stores the width of the table region in basic units; it can't be used within the region itself, but is defined before the .TE token is output so that a groff macro named TE can make use of it. T. is a Boolean-valued register indicating whether the bottom of the table is being processed. The #T register marks the top of the table. Avoid using these names for any other purpose. tbl also defines a macro T# to produce the bottom and side lines of a boxed table. While tbl itself arranges for the output to include a call of this macro at the end of such a table, it can also be used by macro packages to create boxes for multi-page tables by calling it from a page footer macro that is itself called by a trap planted near the bottom of the page. See section Limitations below for more on multi-page tables. GNU tbl internally employs register, string, macro, and diversion names beginning with the numeral 3. A document to be preprocessed with GNU tbl should not use any such identifiers. Interaction with eqn tbl should always be called before eqn(1). (groff(1) automatically arranges preprocessors in the correct order.) Don't call the EQ and EN macros within tables; instead, set up delimiters in your eqn input and use the delim region option so that tbl will recognize them. GNU tbl enhancements In addition to extensions noted above, GNU tbl removes constraints endured by users of AT&T tbl. Region options can be specified in any lettercase. There is no limit on the number of columns in a table, regardless of their classification, nor any limit on the number of text blocks. All table rows are considered when deciding column widths, not just those occurring in the first 200 input lines of a region. Similarly, table continuation (.T&) tokens are recognized outside a region's first 200 input lines. Numeric and alphabetic entries may appear in the same column. Numeric and alphabetic entries may span horizontally. 
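As a sketch of the recommended eqn interaction, assuming $ is used for both eqn delimiters (the table content here is invented for illustration):

```roff
.EQ
delim $$
.EN
.TS
center tab(;) delim($$);
Lb N.
expression;value
$x sup 2$ at $x = 1.5$;2.25
.TE
```

The delim($$) region option tells tbl to leave everything between the dollar signs alone, so the decimal point inside the equation cannot disturb the N column's numeric alignment.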
Using GNU tbl within macros You can embed a table region inside a macro definition. However, since tbl writes its own macro definitions at the beginning of each table region, it is necessary to call end macros instead of ending macro definitions with "..". Additionally, the escape character must be disabled. Not all tbl features can be exercised from such macros because tbl is a roff preprocessor: it sees the input earlier than troff does. For example, vertically aligning decimal separators fails if the numbers containing them occur as macro or string parameters; the alignment is performed by tbl itself, which sees only \$1, \$2, and so on, and therefore can't recognize a decimal separator that appears only later when troff interpolates a macro or string definition. Using tbl macros within conditional input (that is, contingent upon an if, ie, el, or while request) can result in misleading line numbers in subsequent diagnostics. tbl unconditionally injects its output into the source document, but the conditional branch containing it may not be taken, and if it is not, the lf requests that tbl injects to restore the source line number cannot take effect. Consider copying the input line counter register c. and restoring its value at a convenient location after applicable arithmetic. Options --help displays a usage message, while -v and --version show version information; all exit afterward. -C Enable AT&T compatibility mode: recognize .TS and .TE even when followed by a character other than space or newline. Furthermore, interpret the uninterpreted leader escape sequence \a. Limitations Multi-page tables, if boxed and/or if you want their column headings repeated after page breaks, require support at the time the document is formatted. A convention for such support has arisen in macro packages such as ms, mm, and me. To use it, follow the .TS token with a space and then H; this will be interpreted by the formatter as a TS macro call with an H argument. 
Then, within the table data, call the TH macro; this informs the macro package where the headings end. If your table has no such heading rows, or you do not desire their repetition, call TH immediately after the table format specification. If a multi-page table is boxed or has repeating column headings, do not enclose it with keep/release macros, or divert it in any other way. Further, the bp request will not cause a page break in a TS H table. Define a macro to wrap bp: invoke it normally if there is no current diversion. Otherwise, pass the macro call to the enclosing diversion using the transparent line escape sequence \!; this will bubble up the page break to the output device. See section Examples below for a demonstration. grotty(1) does not support double horizontal rules; it uses single rules instead. It also ignores half-line motions, so the u column modifier has no effect. On terminal devices (nroff mode), horizontal rules and box borders occupy a full vee of space; doublebox doubles that for borders. Tables using these features thus require more vertical space in nroff mode than in troff mode: write ne requests accordingly. Vertical rules between columns are drawn in the space between columns in nroff mode; using double vertical rules and/or reducing the column separation below the default can make them ugly or overstrike them with table data. A text block within a table must be able to fit on one page. Using \a to put leaders in table entries does not work in GNU tbl, except in compatibility mode. This is correct behavior: \a is an uninterpreted leader. You can still use the roff leader character (Control+A) or define a string to use \a as it was designed: to be interpreted only in copy mode.

.ds a \a
.TS
box center tab(;);
Lw(2i)0 L.
Population\*a;6,327,119
.TE

Population..........6,327,119

A leading and/or trailing | in a format specification, such as |LCR|., produces an en space between the vertical rules and the content of the adjacent columns. 
If no such space is desired (so that the rule abuts the content), you can introduce dummy columns with zero separation and empty corresponding table entries before and/or after. .TS center tab(#); R0|L C R0|L. _ #levulose#glucose#dextrose# _ .TE These dummy columns have zero width and are therefore invisible; unfortunately they usually don't work as intended on terminal devices. Examples top It can be easier to acquire the language of tbl through examples than formal description, especially at first. .TS box center tab(#); Cb Cb L L. Ability#Application Strength#crushes a tomato Dexterity#dodges a thrown tomato Constitution#eats a month-old tomato without becoming ill Intelligence#knows that a tomato is a fruit Wisdom#chooses \f[I]not\f[] to put tomato in a fruit salad Charisma#sells obligate carnivores tomato-based fruit salads .TE Ability Application Strength crushes a tomato Dexterity dodges a thrown tomato Constitution eats a month-old tomato without becoming ill Intelligence knows that a tomato is a fruit Wisdom chooses not to put tomato in a fruit salad Charisma sells obligate carnivores tomato-based fruit salads The A and N column classifiers can be easier to grasp in visual rendering than in description. .TS center tab(;); CbS,LN,AN. Daily energy intake (in MJ) Macronutrients .\" assume 3 significant figures of precision Carbohydrates;4.5 Fats;2.25 Protein;3 .T& LN,AN. Mineral Pu-239;14.6 _ .T& LN. Total;\[ti]24.4 .TE Daily energy intake (in MJ) Macronutrients Carbohydrates 4.5 Fats 2.25 Protein 3 Mineral Pu-239 14.6 Total ~24.4 Next, we'll lightly adapt a compact presentation of spanning, vertical alignment, and zero-width column modifiers from the mandoc reference for its tbl interpreter. It rewards close study. .TS box center tab(:); Lz S | Rt Ld| Cb| ^ ^ | Rz S. left:r l:center: :right .TE left r center l right Row staggering is not visually achievable on terminal devices, but a table using it can remain comprehensible nonetheless. 
.TS center tab(|); Cf(BI) Cf(BI) Cf(B), C C Cu. n|n\f[B]\[tmu]\f[]n|difference 1|1 2|4|3 3|9|5 4|16|7 5|25|9 6|36|11 .TE n n×n difference 1 1 2 4 3 3 9 5 4 16 7 5 25 9 6 36 11 Some tbl features cannot be illustrated in the limited environment of a portable man page. We can define a macro outside of a tbl region that we can call from within it to cause a page break inside a multi-page boxed table. You can choose a different name; be sure to change both occurrences of BP. .de BP . ie '\\n(.z'' .bp \\$1 . el \!.BP \\$1 .. See also top Tbl - A Program to Format Tables, by M. E. Lesk, 1976 (revised 16 January 1979), AT&T Bell Laboratories Computing Science Technical Report No. 49. The spanning example above was taken from mandoc's man page for its tbl implementation https://man.openbsd.org/tbl.7. groff(1), troff(1) COLOPHON top This page is part of the groff (GNU troff) project. Information about the project can be found at http://www.gnu.org/software/groff/. If you have a bug report for this manual page, see http://www.gnu.org/software/groff/. This page was obtained from the project's upstream Git repository https://git.savannah.gnu.org/git/groff.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-08.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org groff 1.23.0.453-330f9-dirty 1 November 2023 tbl(1) Pages that refer to this page: col(1), colcrt(1), man(7) HTML rendering created 2023-12-22 by Michael Kerrisk, author of The Linux Programming Interface. For details of in-depth Linux/UNIX system programming training courses that I teach, look here. Hosting by jambit GmbH.
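The decimal alignment performed by tbl's N column classifier (and defeated by macro arguments, as noted above) can be imitated outside roff. Here is a minimal Python sketch, purely illustrative and not part of tbl itself:

```python
def align_decimals(entries):
    """Pad numeric strings so their decimal separators line up,
    mimicking what tbl's N column classifier does at format time.
    Entries without a separator are aligned on their last digit."""
    ints = [e.split(".", 1)[0] for e in entries]
    fracs = [e.split(".", 1)[1] if "." in e else "" for e in entries]
    int_w = max(map(len, ints))
    frac_w = max(map(len, fracs))
    rows = []
    for i, f in zip(ints, fracs):
        tail = "." + f.ljust(frac_w) if f else " " * (frac_w + 1 if frac_w else 0)
        rows.append(i.rjust(int_w) + tail)
    return rows

# The energy-intake column from the example above:
for row in align_decimals(["4.5", "2.25", "3", "14.6"]):
    print(row)
```

As the Limitations above explain, tbl can only do this when it sees the literal numbers in the table data, not when they arrive later as `\$1`-style macro arguments.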
# tbl\n\n> Table preprocessor for the groff (GNU Troff) document formatting system.\n> See also `groff` and `troff`.\n> More information: <https://manned.org/tbl>.\n\n- Process input with tables, saving the output for future typesetting with groff to PostScript:\n\n`tbl {{path/to/input_file}} > {{path/to/output.roff}}`\n\n- Typeset input with tables to PDF using the [me] macro package:\n\n`tbl {{path/to/input.tbl}} | groff -{{me}} -T {{pdf}} > {{path/to/output.pdf}}`\n
tc
tc(8) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tc(8) Linux manual page NAME | SYNOPSIS | DESCRIPTION | QDISCS | CLASSES | FILTERS | QEVENTS | CLASSLESS QDISCS | CONFIGURING CLASSLESS QDISCS | CLASSFUL QDISCS | THEORY OF OPERATION | NAMING | PARAMETERS | TC COMMANDS | MONITOR | OPTIONS | FORMAT | EXAMPLES | HISTORY | SEE ALSO | AUTHOR | COLOPHON TC(8) Linux TC(8) NAME top tc - show / manipulate traffic control settings SYNOPSIS top tc [ OPTIONS ] qdisc [ add | change | replace | link | delete ] dev DEV [ parent qdisc-id | root ] [ handle qdisc-id ] [ ingress_block BLOCK_INDEX ] [ egress_block BLOCK_INDEX ] qdisc [ qdisc specific parameters ] tc [ OPTIONS ] class [ add | change | replace | delete | show ] dev DEV parent qdisc-id [ classid class-id ] qdisc [ qdisc specific parameters ] tc [ OPTIONS ] filter [ add | change | replace | delete | get ] dev DEV [ parent qdisc-id | root ] [ handle filter-id ] protocol protocol prio priority filtertype [ filtertype specific parameters ] flowid flow-id tc [ OPTIONS ] filter [ add | change | replace | delete | get ] block BLOCK_INDEX [ handle filter-id ] protocol protocol prio priority filtertype [ filtertype specific parameters ] flowid flow-id tc [ OPTIONS ] chain [ add | delete | get ] dev DEV [ parent qdisc-id | root ] filtertype [ filtertype specific parameters ] tc [ OPTIONS ] chain [ add | delete | get ] block BLOCK_INDEX filtertype [ filtertype specific parameters ] tc [ OPTIONS ] [ FORMAT ] qdisc { show | list } [ dev DEV ] [ root | ingress | handle QHANDLE | parent CLASSID ] [ invisible ] tc [ OPTIONS ] [ FORMAT ] class show dev DEV tc [ OPTIONS ] filter show dev DEV tc [ OPTIONS ] filter show block BLOCK_INDEX tc [ OPTIONS ] chain show dev DEV tc [ OPTIONS ] chain show block BLOCK_INDEX tc [ OPTIONS ] monitor [ file FILENAME ] OPTIONS := { [ -force ] -b[atch] [ filename ] | [ -n[etns] name ] | [ -N[umeric] ] | [ -nm | -nam[es] ] | [ { -cf | -c[onf] } [ filename ] ] [ 
-t[imestamp] ] | [ -ts[hort] ] | [ -o[neline] ] } FORMAT := { -s[tatistics] | -d[etails] | -r[aw] | -i[ec] | -g[raph] | -j[son] | -p[retty] | -col[or] } DESCRIPTION top Tc is used to configure Traffic Control in the Linux kernel. Traffic Control consists of the following: SHAPING When traffic is shaped, its rate of transmission is under control. Shaping may be more than lowering the available bandwidth - it is also used to smooth out bursts in traffic for better network behaviour. Shaping occurs on egress. SCHEDULING By scheduling the transmission of packets it is possible to improve interactivity for traffic that needs it while still guaranteeing bandwidth to bulk transfers. Reordering is also called prioritizing, and happens only on egress. POLICING Whereas shaping deals with transmission of traffic, policing pertains to traffic arriving. Policing thus occurs on ingress. DROPPING Traffic exceeding a set bandwidth may also be dropped forthwith, both on ingress and on egress. Processing of traffic is controlled by three kinds of objects: qdiscs, classes and filters. QDISCS top qdisc is short for 'queueing discipline' and it is elementary to understanding traffic control. Whenever the kernel needs to send a packet to an interface, it is enqueued to the qdisc configured for that interface. Immediately afterwards, the kernel tries to get as many packets as possible from the qdisc, to hand them to the network adaptor driver. A simple qdisc is the 'pfifo' one, which does no processing at all and is a pure First In, First Out queue. It does however store traffic when the network interface can't handle it momentarily.
A qdisc may for example prioritize certain kinds of traffic by trying to dequeue from certain classes before others. FILTERS top A filter is used by a classful qdisc to determine in which class a packet will be enqueued. Whenever traffic arrives at a class with subclasses, it needs to be classified. Various methods may be employed to do so; filters are one of them. All filters attached to the class are called, until one of them returns with a verdict. If no verdict was made, other criteria may be available. This differs per qdisc. It is important to notice that filters reside within qdiscs - they are not masters of what happens. The available filters are: basic Filter packets based on an ematch expression. See tc-ematch(8) for details. bpf Filter packets using (e)BPF, see tc-bpf(8) for details. cgroup Filter packets based on the control group of their process. See tc-cgroup(8) for details. flow, flower Flow-based classifiers, filtering packets based on their flow (identified by selectable keys). See tc-flow(8) and tc-flower(8) for details. fw Filter based on fwmark. Directly maps fwmark value to traffic class. See tc-fw(8). route Filter packets based on routing table. See tc-route(8) for details. u32 Generic filtering on arbitrary packet data, assisted by syntax to abstract common operations. See tc-u32(8) for details. matchall Traffic control filter that matches every packet. See tc-matchall(8) for details. QEVENTS top Qdiscs may invoke user-configured actions when certain interesting events take place in the qdisc. Each qevent can either be unused, or can have a block attached to it. To this block are then attached filters using the "tc block BLOCK_IDX" syntax. The block is executed when the qevent associated with the attachment point takes place. For example, a packet could be dropped, or delayed, etc., depending on the qdisc and the qevent in question.
For example: tc qdisc add dev eth0 root handle 1: red limit 500K avpkt 1K \ qevent early_drop block 10 tc filter add block 10 matchall action mirred egress mirror dev eth1 CLASSLESS QDISCS top The classless qdiscs are: choke CHOKe (CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows) is a classless qdisc designed to both identify and penalize flows that monopolize the queue. CHOKe is a variation of RED, and the configuration is similar to RED. codel CoDel (pronounced "coddle") is an adaptive "no-knobs" active queue management algorithm (AQM) scheme that was developed to address the shortcomings of RED and its variants. [p|b]fifo Simplest usable qdisc, pure First In, First Out behaviour. Limited in packets or in bytes. fq Fair Queue Scheduler realises TCP pacing and scales to millions of concurrent flows per qdisc. fq_codel Fair Queuing Controlled Delay is queuing discipline that combines Fair Queuing with the CoDel AQM scheme. FQ_Codel uses a stochastic model to classify incoming packets into different flows and is used to provide a fair share of the bandwidth to all the flows using the queue. Each such flow is managed by the CoDel queuing discipline. Reordering within a flow is avoided since Codel internally uses a FIFO queue. fq_pie FQ-PIE (Flow Queuing with Proportional Integral controller Enhanced) is a queuing discipline that combines Flow Queuing with the PIE AQM scheme. FQ-PIE uses a Jenkins hash function to classify incoming packets into different flows and is used to provide a fair share of the bandwidth to all the flows using the qdisc. Each such flow is managed by the PIE algorithm. gred Generalized Random Early Detection combines multiple RED queues in order to achieve multiple drop priorities. This is required to realize Assured Forwarding (RFC 2597). hhf Heavy-Hitter Filter differentiates between small flows and the opposite, heavy-hitters. 
The goal is to catch the heavy-hitters and move them to a separate queue with less priority so that bulk traffic does not affect the latency of critical traffic. ingress This is a special qdisc as it applies to incoming traffic on an interface, allowing for it to be filtered and policed. mqprio The Multiqueue Priority Qdisc is a simple queuing discipline that allows mapping traffic flows to hardware queue ranges using priorities and a configurable priority to traffic class mapping. A traffic class in this context is a set of contiguous qdisc classes which map 1:1 to a set of hardware exposed queues. multiq Multiqueue is a qdisc optimized for devices with multiple Tx queues. It has been added for hardware that wishes to avoid head-of-line blocking. It will cycle through the bands and verify that the hardware queue associated with the band is not stopped prior to dequeuing a packet. netem Network Emulator is an enhancement of the Linux traffic control facilities that allows one to add delay, packet loss, duplication and other characteristics to packets outgoing from a selected network interface. pfifo_fast Standard qdisc for 'Advanced Router' enabled kernels. Consists of a three-band queue which honors Type of Service flags, as well as the priority that may be assigned to a packet. pie Proportional Integral controller-Enhanced (PIE) is a control theoretic active queue management scheme. It is based on the proportional integral controller but aims to control delay. red Random Early Detection simulates physical congestion by randomly dropping packets when nearing configured bandwidth allocation. Well suited to very large bandwidth applications. sfb Stochastic Fair Blue is a classless qdisc to manage congestion based on packet loss and link utilization history while trying to prevent non-responsive flows (i.e. flows that do not react to congestion marking or dropped packets) from impacting performance of responsive flows.
Unlike RED, where the marking probability has to be configured, BLUE tries to determine the ideal marking probability automatically. sfq Stochastic Fairness Queueing reorders queued traffic so each 'session' gets to send a packet in turn. tbf The Token Bucket Filter is suited for slowing traffic down to a precisely configured rate. Scales well to large bandwidths. CONFIGURING CLASSLESS QDISCS top In the absence of classful qdiscs, classless qdiscs can only be attached at the root of a device. Full syntax: tc qdisc add dev DEV root QDISC QDISC-PARAMETERS To remove, issue tc qdisc del dev DEV root The pfifo_fast qdisc is the automatic default in the absence of a configured qdisc. CLASSFUL QDISCS top The classful qdiscs are: ATM Map flows to virtual circuits of an underlying asynchronous transfer mode device. DRR The Deficit Round Robin Scheduler is a more flexible replacement for Stochastic Fairness Queuing. Unlike SFQ, there are no built-in queues -- you need to add classes and then set up filters to classify packets accordingly. This can be useful e.g. for using RED qdiscs with different settings for particular traffic. There is no default class -- if a packet cannot be classified, it is dropped. ETS The ETS qdisc is a queuing discipline that merges functionality of PRIO and DRR qdiscs in one scheduler. ETS makes it easy to configure a set of strict and bandwidth- sharing bands to implement the transmission selection described in 802.1Qaz. HFSC Hierarchical Fair Service Curve guarantees precise bandwidth and delay allocation for leaf classes and allocates excess bandwidth fairly. Unlike HTB, it makes use of packet dropping to achieve low delays which interactive sessions benefit from. HTB The Hierarchy Token Bucket implements a rich linksharing hierarchy of classes with an emphasis on conforming to existing practices. HTB facilitates guaranteeing bandwidth to classes, while also allowing specification of upper limits to inter-class sharing. 
It contains shaping elements, based on TBF and can prioritize classes. PRIO The PRIO qdisc is a non-shaping container for a configurable number of classes which are dequeued in order. This allows for easy prioritization of traffic, where lower classes are only able to send if higher ones have no packets available. To facilitate configuration, Type Of Service bits are honored by default. QFQ Quick Fair Queueing is an O(1) scheduler that provides near-optimal guarantees, and is the first to achieve that goal with a constant cost also with respect to the number of groups and the packet length. The QFQ algorithm has no loops, and uses very simple instructions and data structures that lend themselves very well to a hardware implementation. THEORY OF OPERATION top Classes form a tree, where each class has a single parent. A class may have multiple children. Some qdiscs allow for runtime addition of classes (HTB) while others (PRIO) are created with a static number of children. Qdiscs which allow dynamic addition of classes can have zero or more subclasses to which traffic may be enqueued. Furthermore, each class contains a leaf qdisc which by default has pfifo behaviour, although another qdisc can be attached in place. This qdisc may again contain classes, but each class can have only one leaf qdisc. When a packet enters a classful qdisc it can be classified to one of the classes within. Three criteria are available, although not all qdiscs will use all three: tc filters If tc filters are attached to a class, they are consulted first for relevant instructions. Filters can match on all fields of a packet header, as well as on the firewall mark applied by iptables. Type of Service Some qdiscs have built in rules for classifying packets based on the TOS field. skb->priority Userspace programs can encode a class-id in the 'skb->priority' field using the SO_PRIORITY option. 
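The skb->priority classification path described above can be exercised from userspace. A minimal Python sketch (the priority value 6 is arbitrary; on Linux, values above 6 require CAP_NET_ADMIN, and SO_PRIORITY itself is Linux-specific):

```python
import socket

# SO_PRIORITY is Linux-specific; 12 is its value in <asm-generic/socket.h>.
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Packets sent on this socket carry skb->priority = 6; a classful
# qdisc such as PRIO can use that value to choose a class.
sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 6)
print(sock.getsockopt(socket.SOL_SOCKET, SO_PRIORITY))  # 6
sock.close()
```

Whether and how the qdisc honors this value depends on the qdisc in question, as the surrounding text notes.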
Each node within the tree can have its own filters but higher level filters may also point directly to lower classes. If classification did not succeed, packets are enqueued to the leaf qdisc attached to that class. Check qdisc specific manpages for details, however. NAMING top All qdiscs, classes and filters have IDs, which can either be specified or be automatically assigned. IDs consist of a major number and a minor number, separated by a colon - major:minor. Both major and minor are hexadecimal numbers and are limited to 16 bits. There are two special values: root is signified by major and minor of all ones, and unspecified is all zeros. QDISCS A qdisc, which potentially can have children, gets assigned a major number, called a 'handle', leaving the minor number namespace available for classes. The handle is expressed as '10:'. It is customary to explicitly assign a handle to qdiscs expected to have children. CLASSES Classes residing under a qdisc share their qdisc major number, but each have a separate minor number called a 'classid' that has no relation to their parent classes, only to their parent qdisc. The same naming custom as for qdiscs applies. FILTERS Filters have a three part ID, which is only needed when using a hashed filter hierarchy. PARAMETERS top The following parameters are widely used in TC. For other parameters, see the man pages for individual qdiscs. RATES Bandwidths or rates. These parameters accept a floating point number, possibly followed by either a unit (both SI and IEC units supported), or a float followed by a '%' character to specify the rate as a percentage of the device's speed (e.g. 5%, 99.5%). Warning: specifying the rate as a percentage means a fraction of the current speed; if the speed changes, the value will not be recalculated. 
bit or a bare number Bits per second kbit Kilobits per second mbit Megabits per second gbit Gigabits per second tbit Terabits per second bps Bytes per second kbps Kilobytes per second mbps Megabytes per second gbps Gigabytes per second tbps Terabytes per second To specify in IEC units, replace the SI prefix (k-, m-, g-, t-) with the IEC prefix (ki-, mi-, gi- and ti-) respectively. TC stores rates internally as a 32-bit unsigned integer in bps, so the maximum specifiable rate is 4294967295 bps. TIMES Length of time. Can be specified as a floating point number followed by an optional unit: s, sec or secs Whole seconds ms, msec or msecs Milliseconds us, usec, usecs or a bare number Microseconds. TC defines its own time unit (equal to a microsecond) and stores time values as a 32-bit unsigned integer, so the maximum specifiable time value is 4294967295 usecs. SIZES Amounts of data. Can be specified as a floating point number followed by an optional unit: b or a bare number Bytes. kbit Kilobits kb or k Kilobytes mbit Megabits mb or m Megabytes gbit Gigabits gb or g Gigabytes TC stores sizes internally as a 32-bit unsigned integer in bytes, so the maximum specifiable size is 4294967295 bytes. VALUES Other values without a unit. These parameters are interpreted as decimal by default, but you can direct TC to interpret them as octal or hexadecimal by adding a '0' or '0x' prefix respectively. TC COMMANDS top The following commands are available for qdiscs, classes and filters: add Add a qdisc, class or filter to a node. For all entities, a parent must be passed, either by passing its ID or by attaching directly to the root of a device. When creating a qdisc or a filter, it can be named with the handle parameter. A class is named with the classid parameter. delete A qdisc can be deleted by specifying its handle, which may also be 'root'. All subclasses and their leaf qdiscs are automatically deleted, as well as any filters attached to them. change Some entities can be modified 'in place'.
Shares the syntax of 'add', with the exception that the handle cannot be changed and neither can the parent. In other words, change cannot move a node. replace Performs a nearly atomic remove/add on an existing node id. If the node does not exist yet it is created. get Displays a single filter given the interface DEV, qdisc-id, priority, protocol and filter-id. show Displays all filters attached to the given interface. A valid parent ID must be passed. link Only available for qdiscs and performs a replace where the node must exist already. MONITOR top The tc utility can monitor events generated by the kernel such as adding/deleting qdiscs, filters or actions, or modifying existing ones. The following command is available for monitor : file If the file option is given, tc does not listen to kernel events, but opens the given file and dumps its contents. The file has to be in binary format and contain netlink messages. OPTIONS top -b, -b filename, -batch, -batch filename read commands from the provided file or standard input and invoke them. The first failure will cause termination of tc. -force don't terminate tc on errors in batch mode. If there were any errors during execution of the commands, the application return code will be non-zero. -o, -oneline output each record on a single line, replacing line feeds with the '\' character. This is convenient when you want to count records with wc(1) or to grep(1) the output. -n, -net, -netns <NETNS> switches tc to the specified network namespace NETNS. It is effectively shorthand for: ip netns exec NETNS tc [ OPTIONS ] OBJECT { COMMAND | help } written as tc -n[etns] NETNS [ OPTIONS ] OBJECT { COMMAND | help } -N, -Numeric Print the number of protocol, scope, dsfield, etc directly instead of converting it to a human readable name. -cf, -conf <FILENAME> specifies the path to the config file. This option is used in conjunction with other options (e.g. -nm).
-t, -timestamp When tc monitor runs, print a timestamp before the event message in the format: Timestamp: <Day> <Month> <DD> <hh:mm:ss> <YYYY> <usecs> usec -ts, -tshort When tc monitor runs, print a short timestamp before the event message in the format: [<YYYY>-<MM>-<DD>T<hh:mm:ss>.<ms>] FORMAT top The show command has additional formatting options: -s, -stats, -statistics output more statistics about packet usage. -d, -details output more detailed information about rates and cell sizes. -r, -raw output raw hex values for handles. -p, -pretty for the u32 filter, decode offset and mask values to equivalent filter commands based on TCP/IP. In JSON output, add whitespace to improve readability. -iec print rates in IEC units (i.e. 1K = 1024). -g, -graph show classes as an ASCII graph. Prints generic stats info under each class if the -s option was specified. Classes can be filtered only by the dev option. -c[olor][={always|auto|never}] Configure color output. If the parameter is omitted or always, color output is enabled regardless of stdout state. If the parameter is auto, stdout is checked to be a terminal before enabling color output. If the parameter is never, color output is disabled. If specified multiple times, the last one takes precedence. This flag is ignored if -json is also given. -j, -json Display results in JSON format. -nm, -name resolve class names from the /etc/iproute2/tc_cls file or from the file specified by the -cf option. This file is just a mapping of classid to class name: # Here is comment 1:40 voip # Here is another comment 1:50 web 1:60 ftp 1:2 home tc will not fail if -nm is specified without the -cf option but /etc/iproute2/tc_cls does not exist; this makes it possible to pass the -nm option when creating a tc alias. -br, -brief Print only essential data needed to identify the filter and action (handle, cookie, etc.) and stats. This option is currently only supported by the tc filter show and tc actions ls commands.
EXAMPLES top tc -g class show dev eth0 Shows classes as ASCII graph on eth0 interface. tc -g -s class show dev eth0 Shows classes as ASCII graph with stats info under each class. HISTORY top tc was written by Alexey N. Kuznetsov and added in Linux 2.2. SEE ALSO top tc-basic(8), tc-bfifo(8), tc-bpf(8), tc-cake(8), tc-cgroup(8), tc-choke(8), tc-codel(8), tc-drr(8), tc-ematch(8), tc-ets(8), tc-flow(8), tc-flower(8), tc-fq(8), tc-fq_codel(8), tc-fq_pie(8), tc-fw(8), tc-hfsc(7), tc-hfsc(8), tc-htb(8), tc-mqprio(8), tc-pfifo(8), tc-pfifo_fast(8), tc-pie(8), tc-red(8), tc-route(8), tc-sfb(8), tc-sfq(8), tc-stab(8), tc-tbf(8), tc-u32(8) User documentation at http://lartc.org/ , but please direct bugreports and patches to: <netdev@vger.kernel.org> AUTHOR top Manpage maintained by bert hubert (ahu@ds9a.nl) COLOPHON top This page is part of the iproute2 (utilities for controlling TCP/IP networking and traffic) project. Information about the project can be found at http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2. If you have a bug report for this manual page, send it to netdev@vger.kernel.org, shemminger@osdl.org. This page was obtained from the project's upstream Git repository https://git.kernel.org/pub/scm/network/iproute2/iproute2.git on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) 
If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org iproute2 16 December 2001 TC(8) Pages that refer to this page: bpf(2), cgroups(7), tc-hfsc(7), dcb-buffer(8), dcb-maxrate(8), netsniff-ng(8), tc-actions(8), tc-basic(8), tc-bfifo(8), tc-bpf(8), tc-cake(8), tc-cgroup(8), tc-choke(8), tc-codel(8), tc-connmark(8), tc-csum(8), tc-ct(8), tc-ctinfo(8), tc-drr(8), tc-ets(8), tc-flow(8), tc-flower(8), tc-fq(8), tc-fq_codel(8), tc-fq_pie(8), tc-fw(8), tc-hfsc(8), tc-htb(8), tc-ife(8), tc-matchall(8), tc-mirred(8), tc-mpls(8), tc-nat(8), tc-netem(8), tc-pedit(8), tc-pfifo_fast(8), tc-pie(8), tc-police(8), tc-red(8), tc-route(8), tc-sample(8), tc-sfb(8), tc-sfq(8), tc-simple(8), tc-skbedit(8), tc-skbmod(8), tc-stab(8), tc-tbf(8), tc-tunnel_key(8), tc-u32(8), tc-vlan(8), tc-xt(8), trafgen(8)
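The major:minor naming scheme described in the NAMING section maps onto a single 32-bit value in the kernel. A small Python sketch of that packing (illustrative only; the kernel's own macro is TC_H_MAKE in include/uapi/linux/pkt_sched.h):

```python
def parse_tc_id(text):
    """Pack a tc 'major:minor' ID (hexadecimal, 16 bits each) into the
    32-bit handle form used by the kernel.  'root' is the all-ones
    special value; an omitted part (e.g. '10:') means zero."""
    if text == "root":
        return 0xFFFFFFFF
    major, _, minor = text.partition(":")
    return (int(major or "0", 16) << 16) | int(minor or "0", 16)

# A qdisc handle '10:' and a classid '1:40' as in the -nm mapping file:
print(hex(parse_tc_id("10:")))   # 0x100000
print(hex(parse_tc_id("1:40")))  # 0x10040
```

Note that both parts are hexadecimal, so "10:" is major 16, not 10.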
# tc\n\n> Show/manipulate traffic control settings.\n> More information: <https://manned.org/tc>.\n\n- Add constant network delay to outbound packets:\n\n`tc qdisc add dev {{eth0}} root netem delay {{delay_in_milliseconds}}ms`\n\n- Add normally distributed network delay to outbound packets:\n\n`tc qdisc add dev {{eth0}} root netem delay {{mean_delay_ms}}ms {{delay_std_ms}}ms`\n\n- Add packet corruption/loss/duplication to a portion of packets:\n\n`tc qdisc add dev {{eth0}} root netem {{corruption|loss|duplication}} {{effect_percentage}}%`\n\n- Limit bandwidth, burst rate and max latency:\n\n`tc qdisc add dev {{eth0}} root tbf rate {{max_bandwidth_mb}}mbit burst {{max_burst_rate_kb}}kbit latency {{max_latency_before_drop_ms}}ms`\n\n- Show active traffic control policies:\n\n`tc qdisc show dev {{eth0}}`\n\n- Delete all traffic control rules:\n\n`tc qdisc del dev {{eth0}} root`\n\n- Change a traffic control rule:\n\n`tc qdisc change dev {{eth0}} root netem {{policy}} {{policy_parameters}}`\n
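The tbf qdisc used above for bandwidth limiting is built on the classic token-bucket idea. A toy Python model of that mechanism (not the kernel implementation; here rate is in bytes per second and burst in bytes):

```python
class TokenBucket:
    """Toy token-bucket rate limiter: tokens accumulate at `rate`
    bytes/second up to `burst` bytes; a packet conforms only if
    enough tokens are available to cover its size."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0

    def send(self, size, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True    # conforms: transmit immediately
        return False       # exceeds the configured rate: queue (or drop)

tb = TokenBucket(rate=1000, burst=1500)
print(tb.send(1500, now=0.0))  # True  (burst allowance)
print(tb.send(100, now=0.0))   # False (bucket empty)
print(tb.send(1000, now=1.0))  # True  (one second of refill)
```

This is why tbf needs both a rate and a burst parameter: the burst bounds how far a momentarily idle link may exceed the long-term rate.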
tcpdump
tcpdump(1) - Linux manual page man7.org > Linux > man-pages Linux/UNIX system programming training tcpdump(1) Linux manual page NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | OUTPUT FORMAT | BACKWARD COMPATIBILITY | SEE ALSO | AUTHORS | BUGS | COLOPHON TCPDUMP(1) General Commands Manual TCPDUMP(1) NAME top tcpdump - dump traffic on a network SYNOPSIS top tcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... ] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ --print-sampling nth ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ] DESCRIPTION top tcpdump prints out a description of the contents of packets on a network interface that match the Boolean expression (see pcap-filter(7) for the expression syntax); the description is preceded by a time stamp, printed, by default, as hours, minutes, seconds, and fractions of a second since midnight. It can also be run with the -w flag, which causes it to save the packet data to a file for later analysis, and/or with the -r flag, which causes it to read from a saved packet file rather than to read packets from a network interface. It can also be run with the -V flag, which causes it to read a list of saved packet files. In all cases, only packets that match expression will be processed by tcpdump.
tcpdump will, if not run with the -c flag, continue capturing packets until it is interrupted by a SIGINT signal (generated, for example, by typing your interrupt character, typically control-C) or a SIGTERM signal (typically generated with the kill(1) command); if run with the -c flag, it will capture packets until it is interrupted by a SIGINT or SIGTERM signal or the specified number of packets have been processed. When tcpdump finishes capturing packets, it will report counts of: packets ``captured'' (this is the number of packets that tcpdump has received and processed); packets ``received by filter'' (the meaning of this depends on the OS on which you're running tcpdump, and possibly on the way the OS was configured - if a filter was specified on the command line, some OSes count packets regardless of whether they were matched by the filter expression and, even if they were matched, regardless of whether tcpdump has read and processed them yet; other OSes count only packets that were matched by the filter expression, regardless of whether tcpdump has read and processed them yet; and still other OSes count only packets that were matched by the filter expression and were processed by tcpdump); packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0). On platforms that support the SIGINFO signal, such as most BSDs (including macOS) and Digital/Tru64 UNIX, it will report those counts when it receives a SIGINFO signal (generated, for example, by typing your ``status'' character, typically control-T, although on some platforms, such as macOS, the ``status'' character is not set by default, so you must set it with stty(1) in order to use it) and will continue capturing packets.
On platforms that do not support the SIGINFO signal, the same can be achieved by using the SIGUSR1 signal. Using the SIGUSR2 signal along with the -w flag will forcibly flush the packet buffer into the output file. Reading packets from a network interface may require that you have special privileges; see the pcap(3PCAP) man page for details. Reading a saved packet file doesn't require special privileges.

OPTIONS
-A Print each packet (minus its link level header) in ASCII. Handy for capturing web pages. -b Print the AS number in BGP packets in ASDOT notation rather than ASPLAIN notation. -B buffer_size --buffer-size=buffer_size Set the operating system capture buffer size to buffer_size, in units of KiB (1024 bytes). -c count Exit after receiving count packets. --count Print only the packet count on stdout when reading capture file(s), instead of parsing/printing the packets. If a filter is specified on the command line, tcpdump counts only packets that were matched by the filter expression. -C file_size Before writing a raw packet to a savefile, check whether the file is currently larger than file_size and, if so, close the current savefile and open a new one. Savefiles after the first savefile will have the name specified with the -w flag, with a number after it, starting at 1 and continuing upward. The default unit of file_size is millions of bytes (1,000,000 bytes, not 1,048,576 bytes). By adding a suffix of k/K, m/M or g/G to the value, the unit can be changed to 1,024 (KiB), 1,048,576 (MiB), or 1,073,741,824 (GiB) respectively. -d Dump the compiled packet-matching code in a human readable form to standard output and stop.
Please mind that although code compilation is always DLT- specific, typically it is impossible (and unnecessary) to specify which DLT to use for the dump because tcpdump uses either the DLT of the input pcap file specified with -r, or the default DLT of the network interface specified with -i, or the particular DLT of the network interface specified with -y and -i respectively. In these cases the dump shows the same exact code that would filter the input file or the network interface without -d. However, when neither -r nor -i is specified, specifying -d prevents tcpdump from guessing a suitable network interface (see -i). In this case the DLT defaults to EN10MB and can be set to another valid value manually with -y. -dd Dump packet-matching code as a C program fragment. -ddd Dump packet-matching code as decimal numbers (preceded with a count). -D --list-interfaces Print the list of the network interfaces available on the system and on which tcpdump can capture packets. For each network interface, a number and an interface name, possibly followed by a text description of the interface, are printed. The interface name or the number can be supplied to the -i flag to specify an interface on which to capture. This can be useful on systems that don't have a command to list them (e.g., Windows systems, or UNIX systems lacking ifconfig -a); the number can be useful on Windows 2000 and later systems, where the interface name is a somewhat complex string. The -D flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_findalldevs(3PCAP) function. -e Print the link-level header on each dump line. This can be used, for example, to print MAC layer addresses for protocols such as Ethernet and IEEE 802.11. -E Use spi@ipaddr algo:secret for decrypting IPsec ESP packets that are addressed to addr and contain Security Parameter Index value spi. This combination may be repeated with comma or newline separation. 
Note that only secrets for IPv4 ESP packets are supported at this time. Algorithms may be des-cbc, 3des-cbc, blowfish-cbc, rc3-cbc, cast128-cbc, or none. The default is des-cbc. The ability to decrypt packets is only present if tcpdump was compiled with cryptography enabled. secret is the ASCII text for the ESP secret key. If preceded by 0x, then a hex value will be read. The option assumes RFC 2406 ESP, not RFC 1827 ESP. The option is only for debugging purposes, and the use of this option with a true `secret' key is discouraged. By presenting the IPsec secret key on the command line you make it visible to others, via ps(1) and similar means. In addition to the above syntax, the syntax file name may be used to have tcpdump read the provided file in. The file is opened upon receiving the first ESP packet, so any special permissions that tcpdump may have been given should already have been given up. -f Print `foreign' IPv4 addresses numerically rather than symbolically (this option is intended to get around serious brain damage in Sun's NIS server: usually it hangs forever translating non-local internet numbers). The test for `foreign' IPv4 addresses is done using the IPv4 address and netmask of the interface on which the capture is being done. If that address or netmask are not available, either because the interface on which the capture is being done has no address or netmask or because it is the "any" pseudo-interface, which is available in Linux and in recent versions of macOS and Solaris, and which can capture on more than one interface, this option will not work correctly. -F file Use file as input for the filter expression. An additional expression given on the command line is ignored. -G rotate_seconds If specified, rotates the dump file specified with the -w option every rotate_seconds seconds. Savefiles will have the name specified by -w which should include a time format as defined by strftime(3).
If no time format is specified, each new file will overwrite the previous. Whenever a generated filename is not unique, tcpdump will overwrite the preexisting data; providing a time specification that is coarser than the capture period is therefore not advised. If used in conjunction with the -C option, filenames will take the form of `file<count>'. -h --help Print the tcpdump and libpcap version strings, print a usage message, and exit. --version Print the tcpdump and libpcap version strings and exit. -H Attempt to detect 802.11s draft mesh headers. -i interface --interface=interface Listen, report the list of link-layer types, report the list of time stamp types, or report the results of compiling a filter expression on interface. If unspecified and if the -d flag is not given, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback), which may turn out to be, for example, ``eth0''. On Linux systems with 2.2 or later kernels and on recent versions of macOS and Solaris, an interface argument of ``any'' can be used to capture packets from all interfaces. Note that captures on the ``any'' pseudo- interface will not be done in promiscuous mode. If the -D flag is supported, an interface number as printed by that flag can be used as the interface argument, if no interface on the system has that number as a name. -I --monitor-mode Put the interface in "monitor mode"; this is supported only on IEEE 802.11 Wi-Fi interfaces, and supported only on some operating systems. Note that in monitor mode the adapter might disassociate from the network with which it's associated, so that you will not be able to use any wireless networks with that adapter. This could prevent accessing files on a network server, or resolving host names or network addresses, if you are capturing in monitor mode and are not connected to another network with another adapter. This flag will affect the output of the -L flag. 
If -I isn't specified, only those link-layer types available when not in monitor mode will be shown; if -I is specified, only those link-layer types available when in monitor mode will be shown. --immediate-mode Capture in "immediate mode". In this mode, packets are delivered to tcpdump as soon as they arrive, rather than being buffered for efficiency. This is the default when printing packets rather than saving packets to a ``savefile'' if the packets are being printed to a terminal rather than to a file or pipe. -j tstamp_type --time-stamp-type=tstamp_type Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(@MAN_MISC_INFO@); not all the types listed there will necessarily be valid for any given interface. -J --list-time-stamp-types List the supported time stamp types for the interface and exit. If the time stamp type cannot be set for the interface, no time stamp types are listed. --time-stamp-precision=tstamp_precision When capturing, set the time stamp precision for the capture to tstamp_precision. Note that availability of high precision time stamps (nanoseconds) and their actual accuracy is platform and hardware dependent. Also note that when writing captures made with nanosecond accuracy to a savefile, the time stamps are written with nanosecond resolution, and the file is written with a different magic number, to indicate that the time stamps are in seconds and nanoseconds; not all programs that read pcap savefiles will be able to read those captures. When reading a savefile, convert time stamps to the precision specified by tstamp_precision, and display them with that resolution. If the precision specified is less than the precision of time stamps in the file, the conversion will lose precision. The supported values for tstamp_precision are micro for microsecond resolution and nano for nanosecond resolution. The default is microsecond resolution.
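The precision conversion described above can be sketched in a few lines of Python; this is a hypothetical illustration of the truncation and zero-padding behavior, not tcpdump's (or libpcap's) actual code.

```python
# Hypothetical sketch of savefile time stamp precision conversion;
# not tcpdump's actual code.

def convert_tstamp(seconds, frac, from_prec, to_prec):
    """Convert a (seconds, fraction) time stamp between 'micro' and
    'nano' precision, as when a savefile is read at a different
    precision than it was written with."""
    if from_prec == to_prec:
        return seconds, frac
    if from_prec == "nano":           # nano -> micro: truncate (precision is lost)
        return seconds, frac // 1000
    return seconds, frac * 1000       # micro -> nano: pad with trailing zeroes

# A nanosecond stamp read back at microsecond precision is truncated:
print(convert_tstamp(1700000000, 123456789, "nano", "micro"))
# A microsecond stamp displayed at nanosecond precision gains trailing zeroes:
print(convert_tstamp(1700000000, 123456, "micro", "nano"))
```

Converting down and back up does not restore the truncated digits, which is why the man page warns that reading a nanosecond capture with --micro loses precision.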
--micro --nano Shorthands for --time-stamp-precision=micro or --time-stamp-precision=nano, adjusting the time stamp precision accordingly. When reading packets from a savefile, using --micro truncates time stamps if the savefile was created with nanosecond precision. In contrast, a savefile created with microsecond precision will have trailing zeroes added to the time stamp when --nano is used. -K --dont-verify-checksums Don't attempt to verify IP, TCP, or UDP checksums. This is useful for interfaces that perform some or all of those checksum calculation in hardware; otherwise, all outgoing TCP checksums will be flagged as bad. -l Make stdout line buffered. Useful if you want to see the data while capturing it. E.g., tcpdump -l | tee dat or tcpdump -l > dat & tail -f dat Note that on Windows,``line buffered'' means ``unbuffered'', so that WinDump will write each character individually if -l is specified. -U is similar to -l in its behavior, but it will cause output to be ``packet-buffered'', so that the output is written to stdout at the end of each packet rather than at the end of each line; this is buffered on all platforms, including Windows. -L --list-data-link-types List the known data link types for the interface, in the specified mode, and exit. The list of known data link types may be dependent on the specified mode; for example, on some platforms, a Wi-Fi interface might support one set of data link types when not in monitor mode (for example, it might support only fake Ethernet headers, or might support 802.11 headers but not support 802.11 headers with radio information) and another set of data link types when in monitor mode (for example, it might support 802.11 headers, or 802.11 headers with radio information, only in monitor mode). -m module Load SMI MIB module definitions from file module. This option can be used several times to load several MIB modules into tcpdump. 
-M secret Use secret as a shared secret for validating the digests found in TCP segments with the TCP-MD5 option (RFC 2385), if present. -n Don't convert addresses (i.e., host addresses, port numbers, etc.) to names. -N Don't print domain name qualification of host names. E.g., if you give this flag then tcpdump will print ``nic'' instead of ``nic.ddn.mil''. -# --number Print an optional packet number at the beginning of the line. -O --no-optimize Do not run the packet-matching code optimizer. This is useful only if you suspect a bug in the optimizer. -p --no-promiscuous-mode Don't put the interface into promiscuous mode. Note that the interface might be in promiscuous mode for some other reason; hence, `-p' cannot be used as an abbreviation for `ether host {local-hw-addr} or ether broadcast'. --print Print parsed packet output, even if the raw packets are being saved to a file with the -w flag. --print-sampling=nth Print every nth packet. This option enables the --print flag. Unprinted packets are not parsed, which decreases processing time. Setting nth to 100 for example, will (counting from 1) parse and print the 100th packet, 200th packet, 300th packet, and so on. This option also enables the -S flag, as relative TCP sequence numbers are not tracked for unprinted packets. -Q direction --direction=direction Choose send/receive direction direction for which packets should be captured. Possible values are `in', `out' and `inout'. Not available on all platforms. -q Quick (quiet?) output. Print less protocol information so output lines are shorter. -r file Read packets from file (which was created with the -w option or by other tools that write pcap or pcapng files). Standard input is used if file is ``-''. -S --absolute-tcp-sequence-numbers Print absolute, rather than relative, TCP sequence numbers. -s snaplen --snapshot-length=snaplen Snarf snaplen bytes of data from each packet rather than the default of 262144 bytes. 
Packets truncated because of a limited snapshot are indicated in the output with ``[|proto]'', where proto is the name of the protocol level at which the truncation has occurred. Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. Note also that taking smaller snapshots will discard data from protocols above the transport layer, which loses information that may be important. NFS and AFS requests and replies, for example, are very large, and much of the detail won't be available if a too-short snapshot length is selected. If you need to reduce the snapshot size below the default, you should limit snaplen to the smallest number that will capture the protocol information you're interested in. Setting snaplen to 0 sets it to the default of 262144, for backwards compatibility with recent older versions of tcpdump. -T type Force packets selected by "expression" to be interpreted the specified type. Currently known types are aodv (Ad- hoc On-demand Distance Vector protocol), carp (Common Address Redundancy Protocol), cnfp (Cisco NetFlow protocol), domain (Domain Name System), lmp (Link Management Protocol), pgm (Pragmatic General Multicast), pgm_zmtp1 (ZMTP/1.0 inside PGM/EPGM), ptp (Precision Time Protocol), quic (QUIC), radius (RADIUS), resp (REdis Serialization Protocol), rpc (Remote Procedure Call), rtcp (Real-Time Applications control protocol), rtp (Real-Time Applications protocol), snmp (Simple Network Management Protocol), someip (SOME/IP), tftp (Trivial File Transfer Protocol), vat (Visual Audio Tool), vxlan (Virtual eXtensible Local Area Network), wb (distributed White Board) and zmtp1 (ZeroMQ Message Transport Protocol 1.0). Note that the pgm type above affects UDP interpretation only, the native PGM is always recognised as IP protocol 113 regardless. UDP-encapsulated PGM is often called "EPGM" or "PGM/UDP". 
Note that the pgm_zmtp1 type above affects interpretation of both native PGM and UDP at once. During the native PGM decoding the application data of an ODATA/RDATA packet would be decoded as a ZeroMQ datagram with ZMTP/1.0 frames. During the UDP decoding in addition to that any UDP packet would be treated as an encapsulated PGM packet. -t Don't print a timestamp on each dump line. -tt Print the timestamp, as seconds since January 1, 1970, 00:00:00, UTC, and fractions of a second since that time, on each dump line. -ttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and previous line on each dump line. The default is microsecond resolution. -tttt Print a timestamp, as hours, minutes, seconds, and fractions of a second since midnight, preceded by the date, on each dump line. -ttttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and first line on each dump line. The default is microsecond resolution. -u Print undecoded NFS handles. -U --packet-buffered If the -w option is not specified, or if it is specified but the --print flag is also specified, make the printed packet output ``packet-buffered''; i.e., as the description of the contents of each packet is printed, it will be written to the standard output, rather than, when not writing to a terminal, being written only when the output buffer fills. If the -w option is specified, make the saved raw packet output ``packet-buffered''; i.e., as each packet is saved, it will be written to the output file, rather than being written only when the output buffer fills. The -U flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_dump_flush(3PCAP) function. -v When parsing and printing, produce (slightly more) verbose output. For example, the time to live, identification, total length and options in an IP packet are printed. 
Also enables additional packet integrity checks such as verifying the IP and ICMP header checksum. When writing to a file with the -w option and at the same time not reading from a file with the -r option, report to stderr, once per second, the number of packets captured. In Solaris, FreeBSD and possibly other operating systems this periodic update currently can cause loss of captured packets on their way from the kernel to tcpdump. -vv Even more verbose output. For example, additional fields are printed from NFS reply packets, and SMB packets are fully decoded. -vvv Even more verbose output. For example, telnet SB ... SE options are printed in full. With -X Telnet options are printed in hex as well. -V file Read a list of filenames from file. Standard input is used if file is ``-''. -w file Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is ``-''. This output will be buffered if written to a file or pipe, so a program reading from the file or pipe may not see packets for an arbitrary amount of time after they are received. Use the -U flag to cause packets to be written as soon as they are received. The MIME type application/vnd.tcpdump.pcap has been registered with IANA for pcap files. The filename extension .pcap appears to be the most commonly used along with .cap and .dmp. tcpdump itself doesn't check the extension when reading capture files and doesn't add an extension when writing them (it uses magic numbers in the file header instead). However, many operating systems and applications will use the extension if it is present and adding one (e.g. .pcap) is recommended. See pcap-savefile(@MAN_FILE_FORMATS@) for a description of the file format. -W filecount Used in conjunction with the -C option, this will limit the number of files created to the specified number, and begin overwriting files from the beginning, thus creating a 'rotating' buffer. 
In addition, it will name the files with enough leading 0s to support the maximum number of files, allowing them to sort correctly. Used in conjunction with the -G option, this will limit the number of rotated dump files that get created, exiting with status 0 when reaching the limit. If used in conjunction with both -C and -G, the -W option will currently be ignored, and will only affect the file name. -x When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex. The smaller of the entire packet or snaplen bytes will be printed. Note that this is the entire link-layer packet, so for link layers that pad (e.g. Ethernet), the padding bytes will also be printed when the higher layer packet is shorter than the required padding. In the current implementation this flag may have the same effect as -xx if the packet is truncated. -xx When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex. -X When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex and ASCII. This is very handy for analysing new protocols. In the current implementation this flag may have the same effect as -XX if the packet is truncated. -XX When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex and ASCII. -y datalinktype --linktype=datalinktype Set the data link type to use while capturing packets (see -L) or just compiling and dumping packet-matching code (see -d) to datalinktype. -z postrotate-command Used in conjunction with the -C or -G options, this will make tcpdump run " postrotate-command file " where file is the savefile being closed after each rotation. 
For example, specifying -z gzip or -z bzip2 will compress each savefile using gzip or bzip2. Note that tcpdump will run the command in parallel to the capture, using the lowest priority so that this doesn't disturb the capture process. And in case you would like to use a command that itself takes flags or different arguments, you can always write a shell script that will take the savefile name as the only argument, make the flags & arguments arrangements and execute the command that you want. -Z user --relinquish-privileges=user If tcpdump is running as root, after opening the capture device or input savefile, but before opening any savefiles for output, change the user ID to user and the group ID to the primary group of user. This behavior can also be enabled by default at compile time. expression selects which packets will be dumped. If no expression is given, all packets on the net will be dumped. Otherwise, only packets for which expression is `true' will be dumped. For the expression syntax, see pcap-filter(@MAN_MISC_INFO@). The expression argument can be passed to tcpdump as either a single Shell argument, or as multiple Shell arguments, whichever is more convenient. Generally, if the expression contains Shell metacharacters, such as backslashes used to escape protocol names, it is easier to pass it as a single, quoted argument rather than to escape the Shell metacharacters. Multiple arguments are concatenated with spaces before being parsed. 
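Filter expressions can index raw packet bytes, as in the IPv4 HTTP example in the EXAMPLES section below. The arithmetic that example performs (computing the TCP payload length from the IP total length, IP header length, and TCP data offset) can be mirrored in Python; this is a hypothetical sketch, not part of tcpdump.

```python
# Hypothetical Python mirror of the byte-offset arithmetic in the filter
#   ip[2:2] - ((ip[0]&0xf)<<2) - ((tcp[12]&0xf0)>>2)
# where ip[2:2] means "the 2 bytes at offset 2 of the IP header".

def tcp_payload_len(ip, tcp):
    """ip and tcp are the raw IPv4 and TCP headers as bytes objects."""
    total_len = (ip[2] << 8) | ip[3]      # ip[2:2]: 16-bit IP total length
    ip_hdr_len = (ip[0] & 0x0F) << 2      # IHL field, counted in 32-bit words
    tcp_hdr_len = (tcp[12] & 0xF0) >> 2   # TCP data offset, in 32-bit words
    return total_len - ip_hdr_len - tcp_hdr_len

# A 20-byte IP header, a 32-byte TCP header and a total length of 52
# leave no payload, so the "!= 0" filter would not match this packet:
ip_hdr = bytes([0x45, 0x00, 0x00, 0x34]) + bytes(16)
tcp_hdr = bytes(12) + bytes([0x80]) + bytes(7)
print(tcp_payload_len(ip_hdr, tcp_hdr))   # 0
```

A nonzero result means the segment carries data, which is exactly the condition the HTTP example tests for.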
EXAMPLES

To print all packets arriving at or departing from sundown:
tcpdump host sundown

To print traffic between helios and either hot or ace:
tcpdump host helios and \( hot or ace \)

To print all IP packets between ace and any host except helios:
tcpdump ip host ace and not helios

To print all traffic between local hosts and hosts at Berkeley:
tcpdump net ucb-ether

To print all ftp traffic through internet gateway snup (note that the expression is quoted to prevent the shell from (mis-)interpreting the parentheses):
tcpdump 'gateway snup and (port ftp or ftp-data)'

To print traffic neither sourced from nor destined for local hosts (if you gateway to one other net, this stuff should never make it onto your local net):
tcpdump ip and not net localnet

To print the start and end packets (the SYN and FIN packets) of each TCP conversation that involves a non-local host:
tcpdump 'tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 and not src and dst net localnet'

To print the TCP packets with flags RST and ACK both set (i.e. select only the RST and ACK flags in the flags field, and if the result is "RST and ACK both set", match):
tcpdump 'tcp[tcpflags] & (tcp-rst|tcp-ack) == (tcp-rst|tcp-ack)'

To print all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets (IPv6 is left as an exercise for the reader):
tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'

To print IP packets longer than 576 bytes sent through gateway snup:
tcpdump 'gateway snup and ip[2:2] > 576'

To print IP broadcast or multicast packets that were not sent via Ethernet broadcast or multicast:
tcpdump 'ether[0] & 1 = 0 and ip[16] >= 224'

To print all ICMP packets that are not echo requests/replies (i.e., not ping packets):
tcpdump 'icmp[icmptype] != icmp-echo and icmp[icmptype] != icmp-echoreply'

OUTPUT FORMAT

The output of tcpdump is protocol dependent.
The following gives a brief description and examples of most of the formats. Timestamps By default, all output lines are preceded by a timestamp. The timestamp is the current clock time in the form hh:mm:ss.frac and is as accurate as the kernel's clock. The timestamp reflects the time the kernel applied a time stamp to the packet. No attempt is made to account for the time lag between when the network interface finished receiving the packet from the network and when the kernel applied a time stamp to the packet; that time lag could include a delay between the time when the network interface finished receiving a packet from the network and the time when an interrupt was delivered to the kernel to get it to read the packet and a delay between the time when the kernel serviced the `new packet' interrupt and the time when it applied a time stamp to the packet. Interface When the any interface is selected on capture or when a link-type LINUX_SLL2 capture file is read the interface name is printed after the timestamp. This is followed by the packet type with In and Out denoting a packet destined for this host or originating from this host respectively. Other possible values are B for broadcast packets, M for multicast packets, and P for packets destined for other hosts. Link Level Headers If the '-e' option is given, the link level header is printed out. On Ethernets, the source and destination addresses, protocol, and packet length are printed. On FDDI networks, the '-e' option causes tcpdump to print the `frame control' field, the source and destination addresses, and the packet length. (The `frame control' field governs the interpretation of the rest of the packet. Normal packets (such as those containing IP datagrams) are `async' packets, with a priority value between 0 and 7; for example, `async4'. Such packets are assumed to contain an 802.2 Logical Link Control (LLC) packet; the LLC header is printed if it is not an ISO datagram or a so-called SNAP packet. 
On Token Ring networks, the '-e' option causes tcpdump to print the `access control' and `frame control' fields, the source and destination addresses, and the packet length. As on FDDI networks, packets are assumed to contain an LLC packet. Regardless of whether the '-e' option is specified or not, the source routing information is printed for source-routed packets. On 802.11 networks, the '-e' option causes tcpdump to print the `frame control' fields, all of the addresses in the 802.11 header, and the packet length. As on FDDI networks, packets are assumed to contain an LLC packet. (N.B.: The following description assumes familiarity with the SLIP compression algorithm described in RFC 1144.) On SLIP links, a direction indicator (``I'' for inbound, ``O'' for outbound), packet type, and compression information are printed out. The packet type is printed first. The three types are ip, utcp, and ctcp. No further link information is printed for ip packets. For TCP packets, the connection identifier is printed following the type. If the packet is compressed, its encoded header is printed out. The special cases are printed out as *S+n and *SA+n, where n is the amount by which the sequence number (or sequence number and ack) has changed. If it is not a special case, zero or more changes are printed. A change is indicated by U (urgent pointer), W (window), A (ack), S (sequence number), and I (packet ID), followed by a delta (+n or -n), or a new value (=n). Finally, the amount of data in the packet and compressed header length are printed. For example, the following line shows an outbound compressed TCP packet, with an implicit connection identifier; the ack has changed by 6, the sequence number by 49, and the packet ID by 6; there are 3 bytes of data and 6 bytes of compressed header: O ctcp * A+6 S+49 I+6 3 (6) ARP/RARP Packets ARP/RARP output shows the type of request and its arguments. The format is intended to be self explanatory. 
Here is a short sample taken from the start of an `rlogin' from host rtsg to host csam:

arp who-has csam tell rtsg
arp reply csam is-at CSAM

The first line says that rtsg sent an ARP packet asking for the Ethernet address of internet host csam. Csam replies with its Ethernet address (in this example, Ethernet addresses are in caps and internet addresses in lower case). This would look less redundant if we had done tcpdump -n:

arp who-has 128.3.254.6 tell 128.3.254.68
arp reply 128.3.254.6 is-at 02:07:01:00:01:c4

If we had done tcpdump -e, the fact that the first packet is broadcast and the second is point-to-point would be visible:

RTSG Broadcast 0806 64: arp who-has csam tell rtsg
CSAM RTSG 0806 64: arp reply csam is-at CSAM

For the first packet this says the Ethernet source address is RTSG, the destination is the Ethernet broadcast address, the type field contained hex 0806 (type ETHER_ARP) and the total length was 64 bytes.

IPv4 Packets

If the link-layer header is not being printed, for IPv4 packets, IP is printed after the time stamp. If the -v flag is specified, information from the IPv4 header is shown in parentheses after the IP or the link-layer header. The general format of this information is:

tos tos, ttl ttl, id id, offset offset, flags [flags], proto proto, length length, options (options)

tos is the type of service field; if the ECN bits are non-zero, those are reported as ECT(1), ECT(0), or CE. ttl is the time-to-live; it is not reported if it is zero. id is the IP identification field. offset is the fragment offset field; it is printed whether this is part of a fragmented datagram or not. flags are the MF and DF flags; + is reported if MF is set, and DF is reported if DF is set. If neither are set, . is reported. proto is the protocol ID field. length is the total length field; if the packet is a presumed TSO (TCP Segmentation Offload) send, [was 0, presumed TSO] is reported. options are the IP options, if any.
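How the fields printed by -v map onto the raw IPv4 header bytes can be sketched in Python. This is a hypothetical illustration, not tcpdump's decoder, and the flags handling is simplified to print only one of `+', `DF', or `.'.

```python
import struct

# Hypothetical sketch of how the -v fields map onto the first ten
# bytes of an IPv4 header; not tcpdump's actual decoder.

def ipv4_v_fields(hdr):
    """Return the tos/ttl/id/offset/flags/proto/length values that -v
    prints, given at least the first 10 bytes of an IPv4 header."""
    _vihl, tos, length, ident, frag, ttl, proto = struct.unpack("!BBHHHBB", hdr[:10])
    mf = bool(frag & 0x2000)            # MF bit -> printed as '+'
    df = bool(frag & 0x4000)            # DF bit -> printed as 'DF'
    flags = "+" if mf else ("DF" if df else ".")
    offset = (frag & 0x1FFF) << 3       # fragment offset, stored in 8-byte units
    return dict(tos=tos, ttl=ttl, id=ident, offset=offset,
                flags=flags, proto=proto, length=length)

# Example header: DF set, TTL 64, protocol 6 (TCP), total length 60.
hdr = bytes([0x45, 0x00, 0x00, 0x3C, 0x1C, 0x46, 0x40, 0x00, 0x40, 0x06]) + bytes(10)
print(ipv4_v_fields(hdr))
```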
Next, for TCP and UDP packets, the source and destination IP addresses and TCP or UDP ports, with a dot between each IP address and its corresponding port, will be printed, with a > separating the source and destination. For other protocols, the addresses will be printed, with a > separating the source and destination. Higher level protocol information, if any, will be printed after that. For fragmented IP datagrams, the first fragment contains the higher level protocol header; fragments after the first contain no higher level protocol header. Fragmentation information will be printed only with the -v flag, in the IP header information, as described above. TCP Packets (N.B.:The following description assumes familiarity with the TCP protocol described in RFC 793. If you are not familiar with the protocol, this description will not be of much use to you.) The general format of a TCP protocol line is: src > dst: Flags [tcpflags], seq data-seqno, ack ackno, win window, urg urgent, options [opts], length len Src and dst are the source and destination IP addresses and ports. Tcpflags are some combination of S (SYN), F (FIN), P (PSH), R (RST), U (URG), W (CWR), E (ECE) or `.' (ACK), or `none' if no flags are set. Data-seqno describes the portion of sequence space covered by the data in this packet (see example below). Ackno is sequence number of the next data expected the other direction on this connection. Window is the number of bytes of receive buffer space available the other direction on this connection. Urg indicates there is `urgent' data in the packet. Opts are TCP options (e.g., mss 1024). Len is the length of payload data. Iptype, Src, dst, and flags are always present. The other fields depend on the contents of the packet's TCP protocol header and are output only if appropriate. Here is the opening portion of an rlogin from host rtsg to host csam. 
    IP rtsg.1023 > csam.login: Flags [S], seq 768512:768512, win 4096, opts [mss 1024]
    IP csam.login > rtsg.1023: Flags [S.], seq 947648:947648, ack 768513, win 4096, opts [mss 1024]
    IP rtsg.1023 > csam.login: Flags [.], ack 1, win 4096
    IP rtsg.1023 > csam.login: Flags [P.], seq 1:2, ack 1, win 4096, length 1
    IP csam.login > rtsg.1023: Flags [.], ack 2, win 4096
    IP rtsg.1023 > csam.login: Flags [P.], seq 2:21, ack 1, win 4096, length 19
    IP csam.login > rtsg.1023: Flags [P.], seq 1:2, ack 21, win 4077, length 1
    IP csam.login > rtsg.1023: Flags [P.], seq 2:3, ack 21, win 4077, urg 1, length 1
    IP csam.login > rtsg.1023: Flags [P.], seq 3:4, ack 21, win 4077, urg 1, length 1

The first line says that TCP port 1023 on rtsg sent a packet to port login on csam. The S indicates that the SYN flag was set. The packet sequence number was 768512 and it contained no data. (The notation is `first:last' which means `sequence numbers first up to but not including last'.) There was no piggy-backed ACK, the available receive window was 4096 bytes and there was a max-segment-size option requesting an MSS of 1024 bytes.

Csam replies with a similar packet except it includes a piggy-backed ACK for rtsg's SYN. Rtsg then ACKs csam's SYN. The `.' means the ACK flag was set. The packet contained no data so there is no data sequence number or length. Note that the ACK sequence number is a small integer (1). The first time tcpdump sees a TCP `conversation', it prints the sequence number from the packet. On subsequent packets of the conversation, the difference between the current packet's sequence number and this initial sequence number is printed. This means that sequence numbers after the first can be interpreted as relative byte positions in the conversation's data stream (with the first data byte each direction being `1'). `-S' will override this feature, causing the original sequence numbers to be output.
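The relative sequence-number convention just described can be sketched in a few lines. This is an illustration of the behavior, not tcpdump's implementation; keying the table by (source, destination) is an assumption:

```python
# Sketch of tcpdump's relative sequence numbering as described above
# (illustrative only). The first packet seen in a direction prints its
# absolute sequence number; later packets print the difference from it.
initial_seq = {}  # (src, dst) -> first absolute sequence number seen

def display_seq(src, dst, seq):
    key = (src, dst)
    if key not in initial_seq:
        initial_seq[key] = seq
        return seq  # first packet in this direction: absolute number
    return seq - initial_seq[key]
```

With the trace above, rtsg's SYN prints 768512, and its first data byte (absolute 768513) prints as relative 1.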
On the 6th line, rtsg sends csam 19 bytes of data (bytes 2 through 20 in the rtsg → csam side of the conversation). The PSH flag is set in the packet. On the 7th line, csam says it's received data sent by rtsg up to but not including byte 21. Most of this data is apparently sitting in the socket buffer since csam's receive window has gotten 19 bytes smaller. Csam also sends one byte of data to rtsg in this packet. On the 8th and 9th lines, csam sends two bytes of urgent, pushed data to rtsg.

If the snapshot was small enough that tcpdump didn't capture the full TCP header, it interprets as much of the header as it can and then reports ``[|tcp]'' to indicate the remainder could not be interpreted. If the header contains a bogus option (one with a length that's either too small or beyond the end of the header), tcpdump reports it as ``[bad opt]'' and does not interpret any further options (since it's impossible to tell where they start). If the header length indicates options are present but the IP datagram length is not long enough for the options to actually be there, tcpdump reports it as ``[bad hdr length]''.

Particular TCP Flag Combinations (SYN-ACK, URG-ACK, etc.)

There are 8 bits in the control bits section of the TCP header:

    CWR | ECE | URG | ACK | PSH | RST | SYN | FIN

Let's assume that we want to watch packets used in establishing a TCP connection. Recall that TCP uses a 3-way handshake protocol when it initializes a new connection; the connection sequence with regard to the TCP control bits is

    1) Caller sends SYN
    2) Recipient responds with SYN, ACK
    3) Caller sends ACK

Now we're interested in capturing packets that have only the SYN bit set (Step 1). Note that we don't want packets from step 2 (SYN-ACK), just a plain initial SYN. What we need is a correct filter expression for tcpdump.
Recall the structure of a TCP header without options:

     0                            15                              31
    -----------------------------------------------------------------
    |          source port          |       destination port        |
    -----------------------------------------------------------------
    |                        sequence number                        |
    -----------------------------------------------------------------
    |                     acknowledgment number                     |
    -----------------------------------------------------------------
    |  HL   | rsvd  |C|E|U|A|P|R|S|F|          window size          |
    -----------------------------------------------------------------
    |         TCP checksum          |        urgent pointer         |
    -----------------------------------------------------------------

A TCP header usually holds 20 octets of data, unless options are present. The first line of the graph contains octets 0 - 3, the second line shows octets 4 - 7 etc.

Starting to count with 0, the relevant TCP control bits are contained in octet 13:

     0             7|             15|             23|             31
    ----------------|---------------|---------------|----------------
    |  HL   | rsvd  |C|E|U|A|P|R|S|F|          window size          |
    ----------------|---------------|---------------|----------------
    |               |  13th octet   |               |               |

Let's have a closer look at octet no. 13:

                    |               |
                    |---------------|
                    |C|E|U|A|P|R|S|F|
                    |---------------|
                    |7   5   3     0|

These are the TCP control bits we are interested in. We have numbered the bits in this octet from 0 to 7, right to left, so the PSH bit is bit number 3, while the URG bit is number 5.

Recall that we want to capture packets with only SYN set. Let's see what happens to octet 13 if a TCP datagram arrives with the SYN bit set in its header:

                    |C|E|U|A|P|R|S|F|
                    |---------------|
                    |0 0 0 0 0 0 1 0|
                    |---------------|
                    |7 6 5 4 3 2 1 0|

Looking at the control bits section we see that only bit number 1 (SYN) is set.
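The bit numbering above can be double-checked with a few lines of arithmetic. This is a quick sanity check added for illustration, not part of the man page:

```python
# Sanity-check the octet-13 bit layout described above: bits are
# numbered 0-7 from right to left, FIN in bit 0 up through CWR in bit 7.
FIN, SYN, RST, PSH, ACK, URG, ECE, CWR = (1 << n for n in range(8))

assert SYN == 2       # SYN alone makes octet 13 equal 2
assert PSH == 1 << 3  # PSH is bit number 3
assert URG == 1 << 5  # URG is bit number 5
assert CWR | ECE | URG | ACK | PSH | RST | SYN | FIN == 0xFF  # all 8 bits
```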
Assuming that octet number 13 is an 8-bit unsigned integer in network byte order, the binary value of this octet is

    00000010

and its decimal representation is

    0*2^7 + 0*2^6 + 0*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 2

We're almost done, because now we know that if only SYN is set, the value of the 13th octet in the TCP header, when interpreted as an 8-bit unsigned integer in network byte order, must be exactly 2. This relationship can be expressed as

    tcp[13] == 2

We can use this expression as the filter for tcpdump in order to watch packets which have only SYN set:

    tcpdump -i xl0 'tcp[13] == 2'

The expression says "let the 13th octet of a TCP datagram have the decimal value 2", which is exactly what we want.

Now, let's assume that we need to capture SYN packets, but we don't care if ACK or any other TCP control bit is set at the same time. Let's see what happens to octet 13 when a TCP datagram with SYN-ACK set arrives:

    |C|E|U|A|P|R|S|F|
    |---------------|
    |0 0 0 1 0 0 1 0|
    |---------------|
    |7 6 5 4 3 2 1 0|

Now bits 1 and 4 are set in the 13th octet. The binary value of octet 13 is

    00010010

which translates to decimal

    0*2^7 + 0*2^6 + 0*2^5 + 1*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 18

Now we can't just use 'tcp[13] == 18' in the tcpdump filter expression, because that would select only those packets that have SYN-ACK set, but not those with only SYN set. Remember that we don't care if ACK or any other control bit is set as long as SYN is set.

In order to achieve our goal, we need to logically AND the binary value of octet 13 with some other value to preserve the SYN bit. We know that we want SYN to be set in any case, so we'll logically AND the value in the 13th octet with the binary value of a SYN:

         00010010 SYN-ACK              00000010 SYN
    AND  00000010 (we want SYN)   AND  00000010 (we want SYN)
         --------                      --------
    =    00000010                 =    00000010

We see that this AND operation delivers the same result regardless of whether ACK or another TCP control bit is set.
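The AND argument can be verified directly. This is a quick check added for illustration, not part of the man page:

```python
# Verify the masking argument above: octet13 & 2 == 2 exactly when the
# SYN bit is set, no matter which other control bits accompany it.
SYN_ONLY = 0b00000010  # decimal 2
SYN_ACK  = 0b00010010  # decimal 18
ACK_ONLY = 0b00010000  # decimal 16

assert SYN_ONLY & 2 == 2   # plain SYN matches
assert SYN_ACK & 2 == 2    # SYN-ACK matches too
assert ACK_ONLY & 2 != 2   # a bare ACK is rejected
```

This mirrors what the filter `tcp[13] & 2 == 2` evaluates for each packet.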
The decimal representation of the AND value as well as the result of this operation is 2 (binary 00000010), so we know that for packets with SYN set the following relation must hold true:

    ( ( value of octet 13 ) AND ( 2 ) ) == ( 2 )

This points us to the tcpdump filter expression

    tcpdump -i xl0 'tcp[13] & 2 == 2'

Some offsets and field values may be expressed as names rather than as numeric values. For example tcp[13] may be replaced with tcp[tcpflags]. The following TCP flag field values are also available: tcp-fin, tcp-syn, tcp-rst, tcp-push, tcp-ack, tcp-urg, tcp-ece and tcp-cwr. This can be demonstrated as:

    tcpdump -i xl0 'tcp[tcpflags] & tcp-push != 0'

Note that you should use single quotes or a backslash in the expression to hide the AND ('&') special character from the shell.

UDP Packets

UDP format is illustrated by this rwho packet:

    actinide.who > broadcast.who: udp 84

This says that port who on host actinide sent a UDP datagram to port who on host broadcast, the Internet broadcast address. The packet contained 84 bytes of user data.

Some UDP services are recognized (from the source or destination port number) and the higher level protocol information printed. In particular, Domain Name service requests (RFC 1034/1035) and Sun RPC calls (RFC 1050) to NFS.

TCP or UDP Name Server Requests

(N.B.: The following description assumes familiarity with the Domain Service protocol described in RFC 1035. If you are not familiar with the protocol, the following description will appear to be written in Greek.)

Name server requests are formatted as

    src > dst: id op? flags qtype qclass name (len)

    h2opolo.1538 > helios.domain: 3+ A? ucbvax.berkeley.edu. (37)

Host h2opolo asked the domain server on helios for an address record (qtype=A) associated with the name ucbvax.berkeley.edu. The query id was `3'. The `+' indicates the recursion desired flag was set. The query length was 37 bytes, excluding the TCP or UDP and IP protocol headers.
The query operation was the normal one, Query, so the op field was omitted. If the op had been anything else, it would have been printed between the `3' and the `+'. Similarly, the qclass was the normal one, C_IN, and omitted. Any other qclass would have been printed immediately after the `A'.

A few anomalies are checked and may result in extra fields enclosed in square brackets: If a query contains an answer, authority records or additional records section, ancount, nscount, or arcount are printed as `[na]', `[nn]' or `[nau]' where n is the appropriate count. If any of the response bits are set (AA, RA or rcode) or any of the `must be zero' bits are set in bytes two and three, `[b2&3=x]' is printed, where x is the hex value of header bytes two and three.

TCP or UDP Name Server Responses

Name server responses are formatted as

    src > dst: id op rcode flags a/n/au type class data (len)

    helios.domain > h2opolo.1538: 3 3/3/7 A 128.32.137.3 (273)
    helios.domain > h2opolo.1537: 2 NXDomain* 0/1/0 (97)

In the first example, helios responds to query id 3 from h2opolo with 3 answer records, 3 name server records and 7 additional records. The first answer record is type A (address) and its data is internet address 128.32.137.3. The total size of the response was 273 bytes, excluding TCP or UDP and IP headers. The op (Query) and response code (NoError) were omitted, as was the class (C_IN) of the A record.

In the second example, helios responds to query 2 with a response code of nonexistent domain (NXDomain) with no answers, one name server and no additional records. The `*' indicates that the authoritative answer bit was set. Since there were no answers, no type, class or data were printed.

Other flag characters that might appear are `-' (recursion available, RA, not set) and `|' (truncated message, TC, set). If the `question' section doesn't contain exactly one entry, `[nq]' is printed.
SMB/CIFS Decoding

tcpdump now includes fairly extensive SMB/CIFS/NBT decoding for data on UDP/137, UDP/138 and TCP/139. Some primitive decoding of IPX and NetBEUI SMB data is also done.

By default a fairly minimal decode is done, with a much more detailed decode done if -v is used. Be warned that with -v a single SMB packet may take up a page or more, so only use -v if you really want all the gory details.

For information on SMB packet formats and what all the fields mean see https://download.samba.org/pub/samba/specs/ and other online resources. The SMB patches were written by Andrew Tridgell (tridge@samba.org).

NFS Requests and Replies

Sun NFS (Network File System) requests and replies are printed as:

    src.sport > dst.nfs: NFS request xid xid len op args
    src.nfs > dst.dport: NFS reply xid xid reply stat len op results

    sushi.1023 > wrl.nfs: NFS request xid 26377 112 readlink fh 21,24/10.73165
    wrl.nfs > sushi.1023: NFS reply xid 26377 reply ok 40 readlink "../var"
    sushi.1022 > wrl.nfs: NFS request xid 8219 144 lookup fh 9,74/4096.6878 "xcolors"
    wrl.nfs > sushi.1022: NFS reply xid 8219 reply ok 128 lookup fh 9,74/4134.3150

In the first line, host sushi sends a transaction with id 26377 to wrl. The request was 112 bytes, excluding the UDP and IP headers. The operation was a readlink (read symbolic link) on file handle (fh) 21,24/10.731657119. (If one is lucky, as in this case, the file handle can be interpreted as a major,minor device number pair, followed by the inode number and generation number.) In the second line, wrl replies `ok' with the same transaction id and the contents of the link.

In the third line, sushi asks (using a new transaction id) wrl to lookup the name `xcolors' in directory file 9,74/4096.6878. In the fourth line, wrl sends a reply with the respective transaction id.

Note that the data printed depends on the operation type. The format is intended to be self explanatory if read in conjunction with an NFS protocol spec.
Also note that older versions of tcpdump printed NFS packets in a slightly different format: the transaction id (xid) would be printed instead of the non-NFS port number of the packet.

If the -v (verbose) flag is given, additional information is printed. For example:

    sushi.1023 > wrl.nfs: NFS request xid 79658 148 read fh 21,11/12.195 8192 bytes @ 24576
    wrl.nfs > sushi.1023: NFS reply xid 79658 reply ok 1472 read REG 100664 ids 417/0 sz 29388

(-v also prints the IP header TTL, ID, length, and fragmentation fields, which have been omitted from this example.) In the first line, sushi asks wrl to read 8192 bytes from file 21,11/12.195, at byte offset 24576. Wrl replies `ok'; the packet shown on the second line is the first fragment of the reply, and hence is only 1472 bytes long (the other bytes will follow in subsequent fragments, but these fragments do not have NFS or even UDP headers and so might not be printed, depending on the filter expression used). Because the -v flag is given, some of the file attributes (which are returned in addition to the file data) are printed: the file type (``REG'', for regular file), the file mode (in octal), the UID and GID, and the file size.

If the -v flag is given more than once, even more details are printed.

NFS reply packets do not explicitly identify the RPC operation. Instead, tcpdump keeps track of ``recent'' requests, and matches them to the replies using the transaction ID. If a reply does not closely follow the corresponding request, it might not be parsable.
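The request/reply matching just described can be sketched with a table keyed by transaction ID. This is an illustrative reconstruction of the behavior, not tcpdump's source:

```python
# Illustrative sketch (not tcpdump's code) of matching NFS replies to
# requests by transaction ID, as described above.
pending = {}  # xid -> operation name taken from the request

def note_request(xid, op):
    pending[xid] = op

def match_reply(xid):
    # Returns the operation of the matching request, or None if no
    # recent request was seen (the reply is then not parsable).
    return pending.pop(xid, None)
```

For the trace above, note_request(26377, "readlink") followed by match_reply(26377) recovers "readlink"; a second lookup returns None, mirroring the "not parsable" case for replies without a remembered request.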
AFS Requests and Replies

Transarc AFS (Andrew File System) requests and replies are printed as:

    src.sport > dst.dport: rx packet-type
    src.sport > dst.dport: rx packet-type service call call-name args
    src.sport > dst.dport: rx packet-type service reply call-name args

    elvis.7001 > pike.afsfs: rx data fs call rename old fid 536876964/1/1 ".newsrc.new" new fid 536876964/1/1 ".newsrc"
    pike.afsfs > elvis.7001: rx data fs reply rename

In the first line, host elvis sends an RX packet to pike. This was an RX data packet to the fs (fileserver) service, and is the start of an RPC call. The RPC call was a rename, with the old directory file id of 536876964/1/1 and an old filename of `.newsrc.new', and a new directory file id of 536876964/1/1 and a new filename of `.newsrc'. The host pike responds with an RPC reply to the rename call (which was successful, because it was a data packet and not an abort packet).

In general, all AFS RPCs are decoded at least by RPC call name. Most AFS RPCs have at least some of the arguments decoded (generally only the `interesting' arguments, for some definition of interesting). The format is intended to be self-describing, but it will probably not be useful to people who are not familiar with the workings of AFS and RX.

If the -v (verbose) flag is given twice, acknowledgement packets and additional header information are printed, such as the RX call ID, call number, sequence number, serial number, and the RX packet flags; the MTU negotiation information from RX ack packets is also printed. If the -v flag is given three times, the security index and service id are printed.

Error codes are printed for abort packets, with the exception of Ubik beacon packets (because abort packets are used to signify a yes vote for the Ubik protocol).

AFS reply packets do not explicitly identify the RPC operation.
Instead, tcpdump keeps track of ``recent'' requests, and matches them to the replies using the call number and service ID. If a reply does not closely follow the corresponding request, it might not be parsable.

KIP AppleTalk (DDP in UDP)

AppleTalk DDP packets encapsulated in UDP datagrams are de-encapsulated and dumped as DDP packets (i.e., all the UDP header information is discarded). The file /etc/atalk.names is used to translate AppleTalk net and node numbers to names. Lines in this file have the form

    number       name

    1.254        ether
    16.1         icsd-net
    1.254.110    ace

The first two lines give the names of AppleTalk networks. The third line gives the name of a particular host (a host is distinguished from a net by the 3rd octet in the number - a net number must have two octets and a host number must have three octets.) The number and name should be separated by whitespace (blanks or tabs). The /etc/atalk.names file may contain blank lines or comment lines (lines starting with a `#').

AppleTalk addresses are printed in the form

    net.host.port

    144.1.209.2 > icsd-net.112.220
    office.2 > icsd-net.112.220
    jssmag.149.235 > icsd-net.2

(If the /etc/atalk.names doesn't exist or doesn't contain an entry for some AppleTalk host/net number, addresses are printed in numeric form.) In the first example, NBP (DDP port 2) on net 144.1 node 209 is sending to whatever is listening on port 220 of net icsd node 112. The second line is the same except the full name of the source node is known (`office'). The third line is a send from port 235 on net jssmag node 149 to broadcast on the icsd-net NBP port (note that the broadcast address (255) is indicated by a net name with no host number - for this reason it's a good idea to keep node names and net names distinct in /etc/atalk.names).

NBP (name binding protocol) and ATP (AppleTalk transaction protocol) packets have their contents interpreted.
Other protocols just dump the protocol name (or number if no name is registered for the protocol) and packet size.

NBP Packets

NBP packets are formatted like the following examples:

    icsd-net.112.220 > jssmag.2: nbp-lkup 190: "=:LaserWriter@*"
    jssmag.209.2 > icsd-net.112.220: nbp-reply 190: "RM1140:LaserWriter@*" 250
    techpit.2 > icsd-net.112.220: nbp-reply 190: "techpit:LaserWriter@*" 186

The first line is a name lookup request for laserwriters sent by net icsd host 112 and broadcast on net jssmag. The nbp id for the lookup is 190. The second line shows a reply for this request (note that it has the same id) from host jssmag.209 saying that it has a laserwriter resource named "RM1140" registered on port 250. The third line is another reply to the same request saying host techpit has laserwriter "techpit" registered on port 186.

ATP Packets

ATP packet formatting is demonstrated by the following example:

    jssmag.209.165 > helios.132: atp-req 12266<0-7> 0xae030001
    helios.132 > jssmag.209.165: atp-resp 12266:0 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:1 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:2 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:3 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:4 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:5 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:6 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp*12266:7 (512) 0xae040000
    jssmag.209.165 > helios.132: atp-req 12266<3,5> 0xae030001
    helios.132 > jssmag.209.165: atp-resp 12266:3 (512) 0xae040000
    helios.132 > jssmag.209.165: atp-resp 12266:5 (512) 0xae040000
    jssmag.209.165 > helios.132: atp-rel 12266<0-7> 0xae030001
    jssmag.209.133 > helios.132: atp-req* 12267<0-7> 0xae030002

Jssmag.209 initiates transaction id 12266 with host helios by requesting up to 8 packets (the `<0-7>'). The hex number at the end of the line is the value of the `userdata' field in the request.
Helios responds with 8 512-byte packets. The `:digit' following the transaction id gives the packet sequence number in the transaction and the number in parens is the amount of data in the packet, excluding the ATP header. The `*' on packet 7 indicates that the EOM bit was set.

Jssmag.209 then requests that packets 3 & 5 be retransmitted. Helios resends them, then jssmag.209 releases the transaction. Finally, jssmag.209 initiates the next request. The `*' on the request indicates that XO (`exactly once') was not set.

BACKWARD COMPATIBILITY

The TCP flag names tcp-ece and tcp-cwr became available when linking with libpcap 1.9.0 or later.

SEE ALSO

stty(1), pcap(3PCAP), bpf(4), nit(4P), pcap-savefile(5), pcap-filter(7), pcap-tstamp(7)

https://www.iana.org/assignments/media-types/application/vnd.tcpdump.pcap

AUTHORS

The original authors are: Van Jacobson, Craig Leres and Steven McCanne, all of the Lawrence Berkeley National Laboratory, University of California, Berkeley, CA. It is currently maintained by The Tcpdump Group.

The current version is available via HTTPS: https://www.tcpdump.org/

The original distribution is available via anonymous ftp: ftp://ftp.ee.lbl.gov/old/tcpdump.tar.Z

IPv6/IPsec support is added by WIDE/KAME project. This program uses OpenSSL/LibreSSL, under specific configurations.

BUGS

To report a security issue please send an e-mail to security@tcpdump.org.

To report bugs and other problems, contribute patches, request a feature, provide generic feedback etc. please see the file CONTRIBUTING.md in the tcpdump source tree root.

NIT doesn't let you watch your own outbound traffic; BPF will. We recommend that you use the latter.

Some attempt should be made to reassemble IP fragments or, at least, to compute the right length for the higher level protocol.

Name server inverse queries are not dumped correctly: the (empty) question section is printed rather than the real query in the answer section.
Some believe that inverse queries are themselves a bug and prefer to fix the program generating them rather than tcpdump.

A packet trace that crosses a daylight savings time change will give skewed time stamps (the time change is ignored).

Filter expressions on fields other than those in Token Ring headers will not correctly handle source-routed Token Ring packets.

Filter expressions on fields other than those in 802.11 headers will not correctly handle 802.11 data packets with both To DS and From DS set.

ip6 proto should chase header chain, but at this moment it does not. ip6 protochain is supplied for this behavior.

Arithmetic expression against transport layer headers, like tcp[0], does not work against IPv6 packets. It only looks at IPv4 packets.

COLOPHON

This page is part of the tcpdump (a command-line network packet analyzer) project. Information about the project can be found at http://www.tcpdump.org/. If you have a bug report for this manual page, see http://www.tcpdump.org/#patches. This page was obtained from the project's upstream Git repository https://github.com/the-tcpdump-group/tcpdump on 2023-12-22. (At that time, the date of the most recent commit that was found in the repository was 2023-12-20.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to man-pages@man7.org

21 October 2023                 TCPDUMP(1)
# tcpdump

> Dump traffic on a network.
> More information: <https://www.tcpdump.org>.

- List available network interfaces:

`tcpdump -D`

- Capture the traffic of a specific interface:

`tcpdump -i {{eth0}}`

- Capture all TCP traffic showing contents (ASCII) in console:

`tcpdump -A tcp`

- Capture the traffic from or to a host:

`tcpdump host {{www.example.com}}`

- Capture the traffic from a specific interface, source, destination and destination port:

`tcpdump -i {{eth0}} src {{192.168.1.1}} and dst {{192.168.1.2}} and dst port {{80}}`

- Capture the traffic of a network:

`tcpdump net {{192.168.1.0/24}}`

- Capture all traffic except traffic over port 22 and save to a dump file:

`tcpdump -w {{dumpfile.pcap}} port not {{22}}`

- Read from a given dump file:

`tcpdump -r {{dumpfile.pcap}}`