I was battling with setting up a Mint install on an encrypted hard drive, and I think I partially succeeded. But I cannot boot the system because some configuration is not correct, and I have no idea how to fix it now. The rough guidelines I followed were along the lines of http://blog.andreas-haerter.com/2011/06/18/ubuntu-full-disk-encryption-lvm-luks.sh The differences are that I partitioned with GParted, I'm dual-booting with Windows, and I'm not using a separate partition for /home.

- The VG is on extended partition /dev/sda4, within logical partition /dev/sda5
- /boot is on primary partition /dev/sda3
- the bootloader is on /dev/sda

The install went well, and I can mount the file system from the live DVD as in the script, but the chroot part of the script failed, and the system doesn't boot... Can anyone tell me what I have to do to allow boot to mount the encrypted partition? Is it enough to edit fstab and crypttab only? They seem to reside on the encrypted partition, so they are not readable at boot... If it's enough, what should they look like? Everything seems very confusing, and I cannot find a good source I could read about the problem...

UPDATES:

    # fdisk -l /dev/sda
    Device     Boot      Start        End    Blocks  Id  System
    /dev/sda1     *       2048     206847    102400   7  HPFS/NTFS/exFAT
    /dev/sda2           206848  209715199 104754176   7  HPFS/NTFS/exFAT
    /dev/sda3        209715200  210763775    524288  83  Linux
    /dev/sda4        210763776  625141759 207188992   5  Extended
    /dev/sda5        210765824  567281663 178257920  83  Linux
    /dev/sda6        567283712  625141759  28929024   7  HPFS/NTFS/exFAT

    # pvs
    PV         VG    Fmt   Attr  PSize    PFree
    /dev/dm-0  mint  lvm2  a-    170.00g  0

    # pvscan
    PV /dev/dm-0   VG mint   lvm2 [170.00 GiB / 0    free]
    Total: 1 [170.00 GiB] / in use: 1 [170.00 GiB] / in no VG: 0 [0   ]

    # vgscan
    Reading all physical volumes.  This may take a while...
    Found volume group "mint" using metadata type lvm2

    # vgs
    VG    #PV  #LV  #SN  Attr    VSize    VFree
    mint    1    2    0  wz--n-  170.00g  0

    # mount /dev/mapper/mint-root /mnt
    # cat /mnt/etc/fstab
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc                   /proc  proc  nodev,noexec,nosuid  0  0
    /dev/mapper/mint-root  /      ext4  errors=remount-ro    0  1
    # /boot was on /dev/sda3 during installation
    UUID={uuidhre}         /boot  ext4  defaults             0  2
    /dev/mapper/mint-swap  none   swap  sw                   0  0

    # cat /mnt/etc/crypttab   (manually edited)
    # <target name> <source device> <key file> <options>
    lvm_crypt /dev/sda5 none luks

The tutorial for reference in readable format: http://blog.andreas-haerter.com/2011/06/18/ubuntu-full-disk-encryption-lvm-luks

SOLUTION: The post-install update in the tutorial doesn't work. You have to create the crypttab file manually, or fix it up before calling initramfs. I ran everything except initramfs, opened /mnt/etc/crypttab with nano, patched the file, and then called chroot with initramfs only. Everything worked smoothly this way.
There's an evident misconfiguration:

    lvm_crypt /dev/sda5 none luks

You decrypted the volume and named it lvm_crypt, while mounting /dev/mapper/mint-root. Were you asked to input the password during boot? Also, did you update the initramfs afterwards? This crypttab entry needs to be embedded in the initramfs, since it is for the root partition.

EDIT:

    mint_root /dev/sda5 none luks

Then chroot inside and run update-initramfs -u; that will fix it.
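For readers wanting the concrete sequence, a sketch of the repair from the live DVD, using the device and volume names from the question (the right crypttab target name depends on your setup, as discussed above):

    cryptsetup luksOpen /dev/sda5 lvm_crypt
    vgchange -ay mint
    mount /dev/mapper/mint-root /mnt
    mount /dev/sda3 /mnt/boot
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    nano /mnt/etc/crypttab               # fix the entry as described above
    chroot /mnt update-initramfs -u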
How to salvage an LVM/LUKS install from a custom install
I'd like to set up Arch Linux with encryption. I found the tutorial on the Arch wiki, and think that the second option (LVM on LUKS) is the best option for me. Here's the partitioning I'd like to use (Thinkpad X1 Carbon, ~500 GB SSD, 16 GB RAM):

    [alignment gap]     1 MB
    /boot             256 MB   (FAT 32)
    swap               16 GB   (size of memory)
    / (root)           64 GB   (ext4)
    /var                8 GB   (ext4)
    /tmp                4 GB   (ext4)
    /home            ~400 GB   (ext4, remainder)

The disk has the name nvme0n1. After booting the installer (September 2020 release) and connecting to the WiFi, I overwrite the disk with random data:

    # shred --random-source=/dev/urandom --iterations=3 /dev/nvme0n1

Then I set up a new GPT partition schema:

    # parted -s /dev/nvme0n1 mklabel gpt

Next, I create and format a boot partition with a 1 MB alignment gap in front of it:

    # parted -s /dev/nvme0n1 mkpart boot fat32 1MiB 257MiB
    # parted -s /dev/nvme0n1 set 1 esp on
    # mkfs.fat -F 32 /dev/nvme0n1p1

Now comes the part where the wiki is unclear. (It only mentions that it's possible to have /boot on a different device, which I don't.) I decided to make another partition, on top of which my encrypted volume will be located:

    # parted -s /dev/nvme0n1 mkpart cryptlvm 257MiB '100%'

This creates a second partition /dev/nvme0n1p2 with the remaining disk size. (Maybe this step causes the problem.) I continue to set up the encryption:

    # cryptsetup luksFormat /dev/nvme0n1p2     # YES, entering passphrase twice
    # cryptsetup open /dev/nvme0n1p2 cryptlvm  # entering passphrase
    # pvcreate /dev/mapper/cryptlvm
    # vgcreate VolumeGroup /dev/mapper/cryptlvm

Then I create the partitions as described above:

    # lvcreate -L 16G VolumeGroup -n swap
    # lvcreate -L 64G VolumeGroup -n root
    # lvcreate -L 8G VolumeGroup -n var
    # lvcreate -L 4G VolumeGroup -n tmp
    # lvcreate -l '100%FREE' VolumeGroup -n home

The partitions are now being formatted:

    # mkswap /dev/VolumeGroup/swap
    # mkfs.ext4 -F /dev/VolumeGroup/root
    # mkfs.ext4 -F /dev/VolumeGroup/var
    # mkfs.ext4 -F /dev/VolumeGroup/tmp
    # mkfs.ext4 -F /dev/VolumeGroup/home

And mounted:

    # mount /dev/VolumeGroup/root /mnt
    # mkdir /mnt/boot
    # mount /dev/nvme0n1p1 /mnt/boot
    # mkdir /mnt/var
    # mount /dev/VolumeGroup/var /mnt/var
    # mkdir /mnt/tmp
    # mount /dev/VolumeGroup/tmp /mnt/tmp
    # mkdir /mnt/home
    # mount /dev/VolumeGroup/home /mnt/home

The system can now be bootstrapped together with lvm2:

    # pacstrap /mnt base linux linux-firmware lvm2

I also create and store the fstab:

    # genfstab -U /mnt >> /mnt/etc/fstab

I chroot into the bootstrapped system:

    # arch-chroot /mnt

As mentioned in the wiki, I add the hooks encrypt and lvm2 in /etc/mkinitcpio.conf:

    HOOKS=(base udev autodetect keyboard keymap consolefont modconf block filesystems fsck encrypt lvm2)

I continue with the usual setup tasks (set root password, install base packages, set timezone, locale, language, hostname):

    # passwd
    # pacman -S iw wpa_supplicant dialog intel-ucode netctl dhcpcd
    # ln -sf /usr/share/zoneinfo/Europe/Zurich /etc/localtime
    # timedatectl set-ntp true
    # hwclock --systohc
    # echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen
    # locale-gen
    # echo 'LANG=en_US.UTF-8' > /etc/locale.conf
    # echo -n 'x1' > /etc/hostname

Now comes the bootloader. Here I traditionally use the systemd bootloader instead of GRUB.
Here's how I set it up:

    # systemd-machine-id-setup
    # bootctl --path=/boot install

I figure out the UUID (not PARTUUID) of the root partition as follows:

    # blkid | grep /dev/VolumeGroup/root | egrep -o 'UUID="[^"]+"'
    UUID="6d5b4777-2621-4bec-8bbc-ebd4b5ba9faf"

Then I create the boot entry in /boot/loader/entries/arch.conf:

    title Arch Linux
    linux /vmlinuz-linux
    initrd /initramfs-linux.img
    options cryptdevice=UUID=6d5b4777-2621-4bec-8bbc-ebd4b5ba9faf:cryptlvm root=/dev/VolumeGroup/root

And an according /boot/loader/loader.conf:

    default arch
    timeout 0
    editor 0

Last but not least, I run mkinitcpio before leaving for a fresh boot:

    # mkinitcpio -P
    # exit
    # umount -R /mnt
    # shutdown -h now

So that was my setup procedure. I remove the USB dongle and boot the system. The bootloader shows up, but then I get the following screen:

    :: running early hook [udev]
    Starting version 246.6-1-arch
    :: running early hook [lvm2]
    :: running hook [udev]
    :: Triggering uevents...
    :: running hook [encrypt]
    Waiting 10 seconds for device /dev/disk/by-uuid/6d5b4777-2621-4bec-8bbc-ebd4b5ba9faf ...
    Waiting 10 seconds for device /dev/VolumeGroup/root ...
    ERROR: device '/dev/VolumeGroup/root' not found. Skipping fsck.
    :: mounting '/dev/VolumeGroup/root' on real root
    mount: /new_root: no filesystem type specified.
    You are now being dropped into an emergency shell.

Now I'm pretty clueless about what I've done wrong. One suspicion is the second partition (/dev/nvme0n1p2) that I needed to create. Another suspicion is that I did something wrong with the bootloader. On regular setups, I always use the PARTUUID instead of the UUID. (However, there's no PARTUUID in the output of blkid, so this probably isn't the issue.)
Since @frostschutz hasn't written his correct solution to the problem as an answer yet, I'll summarize the issue here quickly: I picked the UUID of the wrong partition. It is not the root volume under /dev/VolumeGroup/root that must be chosen, but the actual partition /dev/nvme0n1p2. Here's how to extract that UUID:

    # uuid=$(blkid --match-tag UUID -o value /dev/nvme0n1p2)

Which can then be used in the boot loader entry config:

    # cat <<EOF >/boot/loader/entries/arch.conf
    title Arch Linux
    linux /vmlinuz-linux
    initrd /initramfs-linux.img
    options cryptdevice=UUID=${uuid}:cryptlvm root=/dev/VolumeGroup/root
    EOF

I summarized the whole procedure on my private website. Thanks also to @Cbhihe for the advice on partition sizes.
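A quick sanity check, as a sketch with the question's device names: the UUID after cryptdevice= must be the one blkid reports for the raw LUKS partition, not the filesystem UUID of the logical volume inside it.

    # goes into cryptdevice=UUID=...
    blkid -o value -s UUID /dev/nvme0n1p2
    # filesystem UUID of the root LV, NOT the one for cryptdevice=
    blkid -o value -s UUID /dev/VolumeGroup/root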
Arch Linux Setup with Encryption (LVM on LUKS)
I currently have an unencrypted external hard drive that I use as a backup for my encrypted (with LUKS) main machine. To update my backup, I simply log in to the main machine and rsync to my external hard drive. Clearly, having an unencrypted backup of material that was worth encrypting in the first place is a bad idea. However, due to time constraints, I am unable to regularly update my backup without the help of something like rsync. It follows that any encryption method that I use on the external drive must be compatible with rsync. However, I have run into the following issues:

1. Userspace stackable encryption methods like EncFS or eCryptfs appear to both take up a lot of space and not play nice with rsync. The hidden files responsible for the encryption seem to change frequently enough that rsync ends up having to copy so many files that it's barely worth even using rsync.
2. luksipc would be an option, but its latest documentation tells me to instead use the cryptsetup-reencrypt tool from dm-crypt. Sadly, whenever I look up the relevant documentation on the Arch wiki for cryptsetup-reencrypt, I can neither tell what to do, nor whether it'll work with rsync. The cryptsetup-reencrypt tool also seems to be new enough that it's hard to find documentation on it that someone at my level can read.
3. Plain LUKS, or anything similar, isn't an option, because the earlier-mentioned time constraints prevent me from being able to wipe the drive and make the backup again from scratch.
4. Duplicity could be an option, but it doesn't seem able to encrypt any unencrypted files that are on the external hard drive (i.e., where it's copying to).

Overall, it looks like #2 might be my best option for the goal of encrypting my external drive and keeping that drive up to date with rsync, but I don't really know where to begin, and I'm not very open to the possibility that I might have to wipe the drive before encrypting it. Am I missing anything useful?
Nowadays cryptsetup itself supports non-destructively transforming an unencrypted partition into an encrypted LUKS device with the reencrypt subcommand. Assuming that your external drive is accessible via /dev/sdX and the current filesystem is located on /dev/sdXY, you first need to shrink the filesystem to make room for the LUKS header and some scratch space for the encryption operation (32 MiB works). The exact command depends on your filesystem, e.g. for ext4:

    e2fsck -f /dev/sdXY
    resize2fs /dev/sdXY NEWSIZE

(Note that XFS doesn't support shrinking, thus you would need to fstransform it first...)

Trigger the encryption:

    cryptsetup reencrypt --encrypt /dev/sdXY --reduce-device-size 32M

Enlarge the filesystem again:

    cryptsetup open /dev/sdXY backup
    resize2fs /dev/mapper/backup
    cryptsetup close backup

(Without a size argument, resize2fs uses all available space.)

Since you don't change the content of your existing filesystem, you can continue using rsync. Instead of something like

    mount /dev/sdXY /mnt/backup
    rsync -a /home /mnt/backup
    umount /mnt/backup

you now have to do something like:

    cryptsetup open /dev/sdXY backup
    mount /dev/mapper/backup /mnt/backup
    rsync -a /home /mnt/backup
    umount /mnt/backup

Since you mention your time constraints: cryptsetup reencrypt isn't necessarily as fast as a cryptsetup luksFormat followed by a fresh rsync. An alternative to the above is to switch to Restic for your backup needs. Restic encrypts all backups, supports incremental backups and is very fast. If your external drive is large enough, you can start with Restic by initializing a Restic repository in a new subdirectory. After the first Restic backup is finished, you can remove the old unencrypted backup files. Finally, you have to wipe the free space to destroy any traces of the old unencrypted backup files.
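NEWSIZE above is left abstract; here is one way to derive it for ext4, as a sketch with an assumed 64 MiB safety margin (comfortably more than the 32 MiB the reencryption needs):

    e2fsck -f /dev/sdXY
    blocks=$(dumpe2fs -h /dev/sdXY 2>/dev/null | awk '/^Block count:/{print $3}')
    bsize=$(dumpe2fs -h /dev/sdXY 2>/dev/null | awk '/^Block size:/{print $3}')
    # shrink by 64 MiB worth of filesystem blocks (assumed margin)
    resize2fs /dev/sdXY $(( blocks - (64 * 1024 * 1024) / bsize ))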
Encrypting a currently used external hard drive such that it can be updated with rsync?
I need to encrypt an SSD drive and I have opted to use dm-crypt. This is not something I do on a regular basis. So far I have successfully cleared the memory cells of my SSD with the ATA secure erase command. I have also filled the entire disk with random data using:

    dd if=/dev/urandom of=/dev/sdx bs=4096 status=progress

My question is in regards to the final step, which is encrypting the devices (my partitions) with the cryptsetup utility. Since I've already filled my entire disk with random data, will I need to refill my partitions with random data after creating and encrypting them? In other words, will the random data that I generated with dd still reside inside of the encrypted partitions that I create?
    dd if=/dev/urandom of=/dev/sdx bs=4096 status=progress

This command will overwrite the entire drive with random data. That random data will stay there until you write other data, or secure-erase, or TRIM.

"In other words, will the random data that I generated with dd still reside inside of the encrypted partitions that I create?"

Normally this is the case. However, it's not always obvious when TRIM happens. For example, mkfs or mkswap/swapon silently imply TRIM, and you have to use additional parameters to disable it. I do not know if partitioners picked up the same idea and TRIM newly created partitions. If using LVM instead of partitions, note that lvremove / lvresize / etc. do imply TRIM if you have issue_discards = 1 in your lvm.conf. Other storage layers such as mdadm support TRIM as a simple pass-through operation.

cryptsetup open by default does not allow TRIM unless you specify --allow-discards; however, some distributions might choose to change those defaults. After all, it's very unusual to random-wipe an SSD for encryption. The only use case I can think of is getting rid of old data while not trusting the hardware to do this for free when you TRIM or secure-erase. Even with encryption, it's normal for free space to be visible on an SSD. Most people will want to use TRIM weekly/monthly to avoid possible performance degradation in the long term, so distributions might follow that trend and use allow-discards on encrypted devices by default.

Once trimmed, your overwriting with random data was for naught. But as long as you are in control, and disable TRIM in everything you do, the random data will stay.
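If you want to audit where TRIM could sneak in on a given system, a sketch (the mapping name "backup" is a hypothetical placeholder):

    lsblk --discard                      # nonzero DISC-GRAN/DISC-MAX: device can TRIM
    dmsetup table | grep allow_discards  # crypt mappings passing discards through
    cryptsetup status backup | grep -i flags       # look for "discards" in the flags line
    findmnt -o TARGET,OPTIONS | grep discard       # filesystems mounted with -o discard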
Filling SSD with Random Data for Encryption with Dm-Crypt
Are there, from a cryptanalysis point of view, security drawbacks when reusing the same key for different volumes in dm-crypt plain mode with cipher aes-xts-plain64?

    # Example: Encrypt two volumes with the same key
    cryptsetup --type plain --cipher=aes-xts-plain64 --key-size=256 --key-file mykey open /dev/sda myvol1
    cryptsetup --type plain --cipher=aes-xts-plain64 --key-size=256 --key-file mykey open /dev/sdb myvol2

I'm only considering practical cases where fewer than, say, 100 volumes are encrypted with the same key.
Well, this isn't Security Stack Exchange and I'm not a cryptography expert, but on the face of things:

Alice unencrypted:

    00000000  48 65 6c 6c 6f 20 6d 79  20 6e 61 6d 65 20 69 73  |Hello my name is|
    00000010  20 41 6c 69 63 65 0a 00  00 00 00 00 00 00 00 00  | Alice..........|
    00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Bobby unencrypted:

    00000000  48 65 6c 6c 6f 20 6d 79  20 6e 61 6d 65 20 69 73  |Hello my name is|
    00000010  20 42 6f 62 62 79 0a 00  00 00 00 00 00 00 00 00  | Bobby..........|
    00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Both encrypted with the same (master) key, aes-xts-plain64:

    Alice000  8f 04 35 fc 9f cb 5d c8  af da ae 78 cd e5 64 3d  |..5...]....x..d=|
    Bobby000  8f 04 35 fc 9f cb 5d c8  af da ae 78 cd e5 64 3d  |..5...]....x..d=|
    Alice010  4f d3 99 77 7b c1 2c 8d  ff 9b 4d 55 da a3 9b e2  |O..w{.,...MU....|
    Bobby010  12 d6 ad 17 74 50 4d 08  8c 38 22 40 98 a7 14 99  |....tPM..8"@....|
    Alice020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    Bobby020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

So, just from the looks of it, one problem is that identical offset and plaintext (for each 16-byte block) results in identical ciphertext. If the plaintext differs, so does the ciphertext. In some situations this may be more revealing than revealing free space.

Another problem is that you can copy ciphertext from one drive to another, and have it decrypt to meaningful data, but on the wrong drive. Normally, if you mess with ciphertext, all you ever get when you decrypt is random garbage, but re-using the master key will simply give you more valid ciphertext to work with. So, a completely artificial example: suppose you have a user who doesn't know the key, but somehow has access to a file stored on this system, and is able to copy ciphertext from one drive to another (normally not possible, but let's just assume it is so). They could write a big file full of nonsense, figure out where this file is allocated on disk, then copy the data from the other drive over, and then see the other drive's data in plaintext when reading their file back in.

Altogether, it's just an unnecessary headache when it's so easy to use a unique key for each disk, even if you derive that key from a shared master key, using a hash function or whatever. Although there's no reason for that either: you can just use multiple keyfiles, or read multiple keys from a single file using the --keyfile-offset and --keyfile-size options (see the sketch below).

LUKS is supposed to help you avoid various pitfalls. Unless you deliberately clone the header, it always uses a different, random master key for each container, even if you use the same passphrase for them.

Also a bit of a note on your choice of cipher, aes-xts-plain64. This used to be called aes-xts-plain. And everything was fine until devices larger than 2 TiB came about... With aes-xts-plain, ciphertext repeats every 2 TiB, which is basically the same problem as reusing the same master key. This was fixed with aes-xts-plain64, but some blogs/wikis still recommend the old one, or old containers are kept and grown along with new hard drives, so some people end up using the wrong one to this day...
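A minimal sketch of that several-keys-in-one-file approach, matching the question's parameters (with --key-size=256, plain mode reads 32 key bytes per volume, so two volumes need a 64-byte keyfile):

    # one keyfile holding two independent 32-byte keys
    dd if=/dev/urandom of=mykey bs=32 count=2
    cryptsetup --type plain --cipher=aes-xts-plain64 --key-size=256 \
        --key-file mykey --keyfile-offset=0  --keyfile-size=32 open /dev/sda myvol1
    cryptsetup --type plain --cipher=aes-xts-plain64 --key-size=256 \
        --key-file mykey --keyfile-offset=32 --keyfile-size=32 open /dev/sdb myvol2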
Security of key reuse with dm-crypt in plain mode?
I just read this discussion between Linus Torvalds and (among others) Milan Broz, one of dm-crypt's maintainers. I am intrigued by the following part of the discussion:

Linus Torvalds: "I thought the people who used hidden ('deniable') things didn't actually ever use the outer filesystem at all, exactly so that they can just put the real encrypted thing in there and not worry about it."

Milan Broz: "Well, they actually should 'use' outer from time to time so the data looks 'recent' and for the whole 'hidden OS' they should be even able to boot to outer decoy OS on request, just to show that something working is there."

In theory, I agree with Milan's statement: using the decoy data is a good thing to do to increase credibility. But how do you achieve that in practice? E.g., how can you write to the outer volume without risking overwriting the inner volume?

I have been using hidden LUKS volumes for years now, combining detachable headers and data offset. Usually I start by creating a small LUKS-encrypted outer volume (let's say 20 GB), I format it with EXT4, I fill it with decoy data, then I increase this outer volume's size (to, for example, 500 GB), and I create the inner volume with an offset of 25 GB, for example. And after that I do what Linus said: I religiously avoid touching the outer volume's decoy data, out of fear of damaging the inner volume's data.

Is there a way to refresh the outer volume's data without risking damage to the inner volume's data? E.g., is there a tool to write specifically to the first 20 gigs of the outer volume, making sure not to mess with the 480 following gigs? I am using both HDDs and SSDs, so the question applies to both.
There are probably a few ways to do this with reasonable safety, with potentially different approaches if starting with a new outer volume or an existing one.

Probably the best way to do this would be with the debugfs setb command on the unmounted outer filesystem device, to mark the range(s) of blocks that belong to the inner volume before mounting the outer filesystem and updating files there:

    debugfs -c -R "setb <inner_start_blk> <inner_count>" /dev/<outer>

    setb block [count]
        Mark the block number block as allocated. If the optional argument
        "count" is present, then "count" blocks starting at block number
        "block" will be marked as allocated.

If there are disjoint ranges in the file, then multiple setb commands could be scripted by writing a file with block ranges like:

    setb <range1> <count1>
    setb <range2> <count2>
    :

and having debugfs read the file: debugfs -c -f <file> /dev/<outer>.

If you wanted to be a bit more clever than just packing the inner volume at the end of the outer filesystem, the inner volume could initially be created with fallocate -l 32M mydir/inner in the outer filesystem, then the block range could be generated from debugfs:

    # debugfs -c -R "stat mydir/inner" /dev/vg_root/lvhome
    Inode: 263236   Type: regular    Mode: 0664   Flags: 0x80000
    Generation: 2399864846    Version: 0x00000000:00000001
    User:  1000   Group:  1000   Project:     0   Size: 32499577
    File ACL: 0
    Links: 1   Blockcount: 63480
    Fragment:  Address: 0    Number: 0    Size: 0
    ctime: 0x63c98fc0:62bb0a38 -- Thu Jan 19 11:45:20 2023
    atime: 0x63cee835:5e019630 -- Mon Jan 23 13:04:05 2023
    mtime: 0x63c98fc0:559e2928 -- Thu Jan 19 11:45:20 2023
    crtime: 0x63c98fc0:41974a6c -- Thu Jan 19 11:45:20 2023
    Size of extra inode fields: 32
    Extended attributes:
      security.selinux (37) = "unconfined_u:object_r:user_home_t:s0\000"
    EXTENTS:
    (0-7934):966656-974590

In this case, the ~32 MB (7935 x 4 KiB block) file is in blocks 966656-974590, so this would use setb 966656 7935 to mark those blocks used. The inode should be erased with clri <inum> to prevent the allocated block range from being visible afterward.

The blocks allocated in the outer filesystem by debugfs setb would remain allocated until the next time e2fsck was run on the outer filesystem. That could potentially "expose" that those blocks are in use if someone was really paying attention, so they could optionally be cleared again after the outer filesystem was unmounted, using debugfs -c -R "clrb <inner_start> <inner_count>" /dev/<outer>, or kept allocated to keep the inner filesystem from potentially being corrupted.
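Tying it together, a sketch of the scripted variant (the block ranges are hypothetical placeholders; replace them with the extents from your own stat output, and the invocation mirrors the answer's):

    # one setb per extent belonging to the inner volume
    cat > inner-ranges.cmd <<'EOF'
    setb 966656 7935
    setb 2097152 524288
    EOF
    # mark them allocated before mounting the outer fs read-write
    debugfs -c -f inner-ranges.cmd /dev/<outer>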
How to refresh decoy data on a plausible deniability dm-crypt scheme?
I'm considering replacing a single-core Raspberry Pi that has an encrypted disk with a multi-core Banana Pi M3. Encryption/decryption performance is currently the bottleneck, so I'd like to know if encryption and decryption with dm-crypt can utilize multiple cores.
Yes, in recent kernels, dm-crypt requests can be parallelized. The parallelization patches have been integrated into 4.0, at a glance (4.0's dm-crypt.c includes kthread.h; previous versions didn't). Older versions had a single work queue for dm-crypt requests, so different blocks couldn't be encrypted or decrypted in parallel (even on different devices, as far as I know).

However, parallelization is not always a win. It takes some time to dispatch requests to a different CPU and collect results, so it's a win only if there are enough requests in parallel and you aren't waiting on a single block at a time. Typically you'd win if you have multiple applications accessing different files, but not so much (or possibly even lose a little) when working with a single large file.

If you want better encryption performance, get something based on an ARMv8 processor with AES acceleration, i.e. in practice a 64-bit CPU. Hardware crypto acceleration makes a real difference, far more than parallelization does at the best of times, and it helps for all workloads as long as CPU time is the bottleneck. Note that not all ARMv8-based CPUs have hardware crypto acceleration (it's sometimes left out to avoid running into crypto export/import regulations). But even without hardware crypto, running in 64-bit mode can be a measurable speedup.

It turns out that the Pi 3 doesn't have crypto extensions. The Banana Pi M64 might be right for you, since it has crypto extensions (if I didn't get confused between the very similar SoC names). The Pi M64's SATA subsystem is on top of USB 2 though (like the M3), and this isn't as fast as the versions with a native SATA controller, so a Raspberry Pi 3 may be just as good if I/O turns out to be the bottleneck, because the crypto doesn't saturate the CPU anyway.
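Two quick checks on any candidate board, as a sketch (cryptsetup gained the benchmark action in version 1.6):

    # ARMv8 crypto extensions (and x86 AES-NI) show up as an "aes" CPU flag
    grep -m1 -w aes /proc/cpuinfo && echo "hardware AES present"
    # per-cipher throughput as dm-crypt would see it
    cryptsetup benchmark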
Does dm-crypt utilize multiple cores? (Interested in multi-core Pi clones)
Tonight I decided I wanted to tweak the configuration of my Debian install on my netbook (Ideapad S10-2) to work better with the SSD I put in it. I have done this before and never had any issues, but just in case, I double-checked what I was doing against the Debian SSD Optimization guide and everything seemed right. At this point I rebooted and things went wrong.

The system refused to mount the volume as anything but read-only, complaining about the "discard" flag not being recognized. I've tried booting from several different live CDs (well, over PXE anyway) but they all refuse to mount the volume for one reason or another (after running through modprobe dm-mod; cryptsetup luksOpen et al.), and I suspect it's the wrong way to go.

Well, the problem I'd rather solve is to figure out a way to make the crippled system (which boots with the root partition mounted read-only) mount the root partition rw by somehow ignoring the discard flag in /etc/fstab, /etc/lvm/lvm.conf and /etc/crypttab, so that I can change those back, reboot, and have things back the way they were.

Edit: It just dawned on me why it didn't work: the filesystem for the root partition is ext3 for some reason. I had naively assumed it would be ext4. So the solution is clearly to somehow mount while ignoring the discard flag.
Get to a shell (boot into rescue/single-user mode if needed) and just mount -o remount,rw /. Or, if you are booting from a rescue CD, then it knows nothing about /etc/fstab, so just don't specify -o discard when mounting.
Followed SSD-optimization advice, now root partition won't mount rw
I've just written over the wrong hard drive using the command:

    sudo sh -c 'pv /dev/sdb >/dev/sdc'

How do I go about undoing this? I was creating the first ever backup of the drive, and I backed up over the wrong drive... The drive which got written over also has no backups; I was going to back up that drive next. Both drives were dm-crypt'ed.
If you do not have backups, your data wasn't important. It's gone. There is no undo. Especially not with encryption involved.

something that produces output > /dev/somedisk overwrites data on the device. Whatever is overwritten cannot be restored, so your only chance would be if you noticed and cancelled it right away. Then probably only the first few hundred megs would be missing and you might have a chance at recovery, especially if the partitions you want to recover started somewhere further out. In this case it's a matter of restoring the partition table, from memory or using testdisk, gpart or whatever. If you did not cancel, it depends on how much output was produced, i.e. in your case whether /dev/sdb was smaller than /dev/sdc, so it was only overwritten so far.

However, you say it was dm-crypt'ed. That usually means LUKS. And LUKS has a header at the start. If you lose that header and the LUKS container is not still open, there is no way to get anything back. If it's still open, you want to save the output of dmsetup table --showkeys.

Some people use LUKS without partitioning the drive, and then have some silly mistake in a partitioner or installer that does nothing but create a small partition table. That overwrites less than 512 bytes at the start of the disk, but it's still enough to damage the LUKS header, and the data is irrecoverably lost.
Backed up over wrong hard drive
I have an external eSATA HDD on an OpenSUSE 12.2 system. The external HDD has an LVM on a dm-crypt partition. I mount it by powering it up and then doing:

    rescan-scsi-bus.sh
    cryptsetup -v luksOpen
    vgchange -ay
    mount

Now when I want to power the HDD down, I do:

    umount
    vgchange -an extern-1
    cryptsetup -v remove /dev/mapper/extern-1-crypt
    echo 1 >/sys/block/sdf/device/delete

Here the device (sdf) is currently hardcoded in the script. Can I somehow deduce it in the script from the VG or the crypto device?
Yes, you can find the information in /sys/block/$DEVICE/slaves. If you only have the canonical name, you can use readlink to get the details, e.g.:

    devdm="$(readlink -f /dev/mapper/extern-1-crypt)"
    dm="${devdm#/dev/}"
    ls /sys/block/$dm/slaves/

If you want to remove it all, you can use the sys filesystem directly:

    echo 1 > /sys/block/$dm/slaves/*/../device/delete
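Putting it together, a sketch of the full power-down script with the device derived instead of hardcoded (names from the question; the mountpoint is a hypothetical placeholder, and this assumes the crypt mapping sits on a partition, so slaves/*/.. resolves to the whole disk):

    #!/bin/sh
    set -e
    umount /mnt/extern                  # hypothetical mountpoint
    vgchange -an extern-1
    devdm="$(readlink -f /dev/mapper/extern-1-crypt)"
    dm="${devdm#/dev/}"
    # resolve the backing disk before the mapping disappears
    disk="$(readlink -f /sys/block/$dm/slaves/*/..)"
    cryptsetup -v remove /dev/mapper/extern-1-crypt
    echo 1 > "$disk/device/delete"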
Detecting the device of a crypto mount
I installed Debian stretch using encrypted LVM from the installer on a USB drive. During installation, with all disks connected, sdo5 was assigned to my boot disk. When running the full system, my boot disk is now assigned sdn5. This is problematic, because I have an encrypted data disk that shows up as sdo1, as per blkid. I need to change the crypt configuration and initramfs to look for sdn, so that sdo is free. How can I do that?

Simply changing the crypttab and running update-initramfs -u -k all gives an error about an invalid line in crypttab, and then the system will not boot:

    cryptsetup: WARNING: invalid line in /etc/crypttab for sdo5_crypt

There must be another step. Where is sdo5_crypt referenced other than crypttab? My crypttab is as follows:

    sdo5_crypt UUID=long_string_here none luks

and my fstab is:

    /dev/mapper/coldstorage--vg-root   /      ext4  errors=remount-ro  0  1
    # /boot was on /dev/sdo1 during installation
    UUID=long_string_here              /boot  ext2  defaults           0  2
    /dev/mapper/coldstorage--vg-swap_1 none   swap  sw                 0  0

EDIT: I can see there is a lingering /dev/mapper/sdo5_crypt even when I reboot after changing crypttab but not updating the initramfs (which causes the system to request the password for sdn5). If I can rename that, it might be enough? lvrename does not seem to work.

    # ls /dev/mapper/
    control  sdo5_crypt  coldstorage--vg-root  coldstorage--vg-swap_1

Result of pvdisplay:

    --- Physical volume ---
    PV Name    /dev/mapper/sdo5_crypt
    VG Name    coldstorage-vg

and an attempt to fix...

    # pvmove /dev/mapper/sdo5_crypt /dev/mapper/sdn5_crypt
    Physical Volume "/dev/mapper/sdn5_crypt" not found in Volume Group "coldstorage-vg".
Got it:

    dmsetup rename sdo5_crypt sdn5_crypt
    sed -i -e 's/sdo5_crypt/sdn5_crypt/g' /etc/crypttab
    update-initramfs -u -k all
Change designated name of encrypted LVM root from sdo to sdn in crypttab?
My / was originally on an encrypted volume and was transferred to an unencrypted volume by recursively copying every directory, and then grub was reinstalled:

    sudo -s
    cp -ax /mnt/encrypted /mnt/decrypted
    for f in sys dev proc ; do mount --bind /$f /mnt/decrypted/$f ; done
    chroot /mnt/decrypted
    grub-install /mnt/decrypted
    update-grub

/etc/fstab was updated accordingly and the original encrypted volume was removed from /etc/crypttab, but after rebooting I'm still asked for a password to decrypt my new /. Why is that, and how can it be removed?
Removing cryptsetup and regenerating the initramfs fixed the problem:

    apt-get remove --purge cryptsetup
    update-initramfs -u -k all
How to remove LUKS encryption after transferring files to an unencrypted disk
I have Gentoo Linux installed on a 25.93 GB / 62.43 GB partition /dev/sda4. The other partitions on the disk are a 150 MB /boot on /dev/sda1 and 56.66 GB of unused space on two other partitions. I am planning to encrypt the unused space with dm-crypt, format it to ext4 and, after migrating my installation onto it, to nuke the old partition. My questions here are:

1. Is this possible at all? Or would it require many tweaks to get the installation running on the encrypted volume /dev/sda2?
2. Is this an efficient way? Taking into consideration my 25.9 GB Gentoo, would it be less hassle for me if I just encrypted the whole disk and installed Gentoo (and all the packages) again?
3. Should I use encfs or ecryptfs instead of dm-crypt here? Would they provide equal security?
4. What algorithm should I use to encrypt the partition? My processor does not have AES-NI.
5. What should I use to sync the encrypted partition with the other one? Would something like dcfldd work for that?

Edit, being written from the migrated partition: After deleting the unused partitions and making a new unformatted /dev/sda2, I ran:

    cryptsetup luksFormat /dev/sda2
    cryptsetup luksOpen /dev/sda2 encrypt
    pv /dev/zero > /dev/mapper/encrypt

pv here is used to monitor the progress of writing zeroes, and after this I formatted the encrypted partition to ext4 with mkfs.ext4 /dev/mapper/encrypt. To sync the partitions, I used YoMismo's recommendation, rsync, after booting the PC from a live USB. It didn't let me in with chroot though; I had to reboot into my old partition and chroot from there instead. In this process I ran:

    mkdir /tmp/old_partition /tmp/new_encrypt
    mount /dev/sda4 /tmp/old_partition
    mount /dev/mapper/encrypt /tmp/new_encrypt
    cd /tmp/new_encrypt
    rsync -av /tmp/old_partition/* .

and after rebooting into the old partition /dev/sda4, opening and mounting /dev/sda2 and mounting the virtual kernel filesystems:

- I made an /etc/crypttab with root UUID=<uuid of /dev/sda2> none luks
- I altered /etc/fstab to say that my root partition is UUID=<uuid of mapper>.
- I altered /boot/grub/grub.conf: I deleted root=<root> at the end of the kernel line, and set a crypted device with crypt_root=UUID=<uuid> root=/dev/mapper/root.
- I ran genkernel --install --luks initramfs to make a new initramfs with LUKS support.

Now I can boot and run it; the only thing left is setting the old partition on fire.
1. Yes, it is possible, but you will have to do some tweaking.

2. You can't encrypt the whole disk; at least the boot partition must be unencrypted if you want your system to start (someone has to ask for the decryption password (initrd) and you need it unencrypted).

3. encfs has some flaws; you can read about them here. I would use dm-crypt for the job.

4. Can't help, maybe twofish?

5. I would use a live CD/USB to do the job. I don't know if the space you have left is enough for the data on the other partitions; if it is (the partition is not full), I would proceed as follows.

First you need to decide what kind of partition scheme you want. I will assume you only want /, /boot and swap, so /boot doesn't need to be messed with. I will also assume the space left in the unused partition is enough for the data you want to place in the encrypted partition (/ in this case).

Start your system with the live CD. Assuming your destination partition is /dev/sdc1, do:

    cryptsetup luksFormat /dev/sdc1

You will be asked for the encryption password. After that, open the encrypted partition:

    cryptsetup luksOpen /dev/sdc1 Enc

write all zeros to it:

    dd if=/dev/zero of=/dev/mapper/Enc

and create the filesystem:

    mkfs.ext4 /dev/mapper/Enc

Now mount your partition, copy files and change root to the new partition:

    mkdir /tmp/O /tmp/D
    mount /dev/sda4 /tmp/O
    mount /dev/mapper/Enc /tmp/D
    cd /tmp/D; rsync -av /tmp/O/* .
    mount --bind /dev dev
    mount --bind /proc proc
    mount --bind /sys sys
    chroot /tmp/D
    mount /dev/sda1 /boot

Use blkid to identify your partitions' UUIDs and use that information to modify the grub configuration files and your fstab (the root partition device should be /dev/mapper/root_crypt). Modify your /etc/crypttab so that the new encrypted partition is referenced there; create a line like root_crypt UUID=your/encrypted/dev/uuid none luks. Run update-grub, grub-install to where your grub must be, and update-initramfs so that the new changes are updated in your initrd.

If I haven't missed anything, you should now be ready to go, unless you are worried about your swap partition. If you are, and want to be able to resume from hibernation, then you will have to follow the previous steps for encrypting the swap partition, with mkswap instead of mkfs.ext4. You will also need to add the swap partition to /etc/crypttab, modify fstab so that /dev/mapper/name_swap_you_created_in_etc_crypttab is the device for the swap partition, and run update-initramfs.
Cloning a root partition onto a dm-crypt encrypted one
I already did some research on my question (see below), and it's as good as a 'done deal', but I would still like to put forward my problem to this knowledgeable community.

Short version of the issue: in partman (the disk partitioner of the Debian installer) the passphrase of a previously dm-crypt/LUKS-encrypted volume was changed (or added) by mistake. The data on this volume was not flagged for removal. I cancelled the installation after that point. Later, after manually 'decrypting' this volume, it was found that only the 'new' password could decrypt the volume, but data could not be read (i.e. filesystem and files were not found)... I was wondering if after changing back to the old passphrase I would be able to properly decrypt the volume's contents.

Previous research: the above question was submitted to the debian-boot mailing list, and there I received the following (very clear) answer:

    I don't think the data will be recoverable unless you have a backup of
    the LUKS header. The way LUKS works is that data is not encrypted with
    a passphrase directly but with a key that is encrypted to a set of
    passphrases. If you worked purely through the installer's UI you will
    have overwritten your LUKS header and hence will be unable to decrypt
    the data ever again because the key material is lost.

    The position of the LUKS header on disk is always in the same place.
    Data erase is really just about overwriting the existing data with
    zeros, which I understand is pretty confusing. Technically the data is
    already erased by the fact that the header is overwritten but some
    people want to be sure and write random data (or in the case of
    non-encrypted disks zeros) to the disk before deploying the system
    into production.

Alas, I do not have a backup of the LUKS header of this volume. As I said above, the intention was only to mount the previously encrypted volume, not to change anything (so, to my regret, I didn't take the proper precautions).

The question: Is there any way to (re)generate the original LUKS header using the (known) original password against which this volume was encrypted, or is this data permanently lost?

Thank you for your consideration and your time.
There is no way to recover whatsoever. (*)

With LUKS, the passphrase you use to open the encryption and the master key actually used for the encryption are completely unrelated to one another. Basically, your passphrase decrypts a completely random key, and this random key is stored in the LUKS header. Losing the LUKS header entirely (or even changing a single bit of key material) renders you unable to obtain the master key used for the volume.

This is also why, with LUKS, you can have 8 different passwords and change each of these passwords any time you like, without re-encrypting all data. No matter how often you change the LUKS passphrase, the master key stays the same.

Master key recovery is explicitly NOT part of the LUKS concept; quite the opposite, actually: LUKS takes many steps to prevent you (or anyone) from recovering the master key from a (partly) overwritten LUKS header. The LUKS documentation even advises you NOT to back up the header, as a backup header out of your control means losing the ability to declare an old passphrase invalid, since the old passphrase would still be stored and usable in the old header.

(*) The only exception to this rule is if the container is still open. For an active crypt mapping, one might be able to obtain the master key with dmsetup table --showkeys. So if you killed your LUKS header in a running system and realized immediately, you could create a new LUKS header with the known master key.

Without the master key you cannot proceed, and it's impossible to brute-force the master key; that's the whole point of the encryption in the first place. Well, you could do it given infinite CPU power and/or time, so if you want to leave your descendants with a puzzle, keep a copy of the encrypted data around and pass it on... ;)
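For completeness, a sketch of that last-resort path while the mapping is still open; device and file names are hypothetical, and the option is spelled --master-key-file on older cryptsetup releases (newer 2.x releases also accept --volume-key-file):

    # print the mapping's table including the hex master key
    dmsetup table --showkeys /dev/mapper/mydata
    # convert the hex key to binary and build a fresh LUKS header around it
    echo -n '<hex key from the crypt line>' | xxd -r -p > volkey.bin
    cryptsetup luksFormat --master-key-file volkey.bin /dev/sdXn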
The password of previously encrypted volume got changed by the Debian installer
I'd like to use dm-crypt with btrfs, because of the bitrot protection of this filesystem. What I am worried about is that the RAID1 is on the filesystem level, above dm-crypt, so if I write a file, it will be encrypted twice:

    HDD.x ⇄ dm-crypt.x ↰
                        btrfs-raid1 ⇒ btrfs
    HDD.y ⇄ dm-crypt.y ↲

Is there a way to encrypt the data only once, for example via dm-crypt.x, and store the exact same copy on both HDDs? (According to the btrfs FAQ I need ecryptfs to do something like this:

    HDD.x ↰
           btrfs-raid1 ⇒ btrfs ⇄ ecryptfs
    HDD.y ↲

but I'd rather use dm-crypt, if it is possible, to not get the extra performance penalty on top of using btrfs RAID1.)
With BTRFS there is currently no such option directly integrated. There has been talk in the past on the BTRFS mailing list about adding support for the VFS Encryption API (the same thing used by ext4 and F2FS for their transparent file encryption), but that appears to never have gone anywhere.

At the moment the only way to achieve what you want is to put the replication outside of BTRFS, which eliminates most of the benefits of checksumming in BTRFS. eCryptFS is an option, but it will almost always be slower than using dm-crypt under BTRFS. EncFS might be an option, but I don't know anything about its performance (it's also FUSE-based though, and as a general rule FUSE layers on top of BTRFS are painfully slow).

As an alternative to all of this, you might consider using a more conventional filesystem on top of regular RAID (through MD or LVM), put that on top of the dm-integrity target (which does cryptographic verification of the stored data, essentially working like a writable version of the dm-verity target that Android and ChromeOS use for integrity-checking their system partitions), and then put that on top of dm-crypt. Doing this requires a kernel with dm-integrity support (I don't remember when exactly it was added, but it was within the past year) and a version of cryptsetup that supports it. This will give you the same level of integrity checking that AEAD-style encryption does. Unfortunately though, to provide the same error-correction ability that BTRFS does, you will have to put dm-crypt and dm-integrity under the RAID layer (otherwise the I/O error from dm-integrity won't be seen by the RAID layer, and will therefore never be properly corrected by it).
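A sketch of that stack, built bottom-up per disk (device names are hypothetical; integritysetup ships alongside cryptsetup 2.x):

    # per-disk: dm-crypt at the bottom, dm-integrity above it
    cryptsetup luksFormat /dev/sda1 && cryptsetup open /dev/sda1 crypt0
    cryptsetup luksFormat /dev/sdb1 && cryptsetup open /dev/sdb1 crypt1
    integritysetup format /dev/mapper/crypt0 && integritysetup open /dev/mapper/crypt0 int0
    integritysetup format /dev/mapper/crypt1 && integritysetup open /dev/mapper/crypt1 int1
    # RAID1 over the integrity devices, conventional filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mapper/int0 /dev/mapper/int1
    mkfs.ext4 /dev/md0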
How to dm-crypt the data only once by filesystem level RAID?
I have a T61 with a C2D T7300 CPU and 4 GB RAM. It has SL 6.3 on it, and I ticked "encrypt VG" during install. If I start a "normal" Windows XP on it, it's ~slow... so I need a little performance boost :)

Loud thinking/QUESTION: kcryptd can take ~20% (!) of CPU, but encryption is needed... so I was thinking: how can I encrypt only the home directory of that one user? (AES-256 isn't needed, just a very, very light encryption, so that a burglar can't access the data on the notebook; I'm not defending against the "CIA" :D. Or at least a lighter encryption.)

UPDATE: I'm voting for: aes-ecb-null -s 128. So before the install I have to manually create the partitions. AFAIK using this instead of the default could really increase performance.

UPDATE2: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-LUKS_Disk_Encryption.html - so it looks like they are using 512-bit aes-xts-plain64.
There are three main storage encryption possibilities under Linux, ordered from lowest level to highest level, from fastest to slowest, from least flexible to most flexible:

1. dm-crypt, to encrypt a whole filesystem (or more generally any storage device). You get the best performance, but you have to decide to use it when you organize your storage partitions, and there's a key per partition.
2. ecryptfs, to encrypt a user's home directory. Each user has their own key. The encryption is performed in the kernel, but the encryption is at the file level rather than at the block level, which slows things down.
3. encfs, to encrypt a few files. This can be set up by an ordinary user, as it only requires the administrator to provide FUSE. You can easily have multiple filesystems with different keys. It's slower than the other two.

Given your constraints, dm-crypt is clearly the right choice. You can get better performance by encrypting only the files that need to be encrypted, and not the operating system. If you haven't really started to use your new system, it'll be simpler to reinstall, but you can also work on your existing system by booting from a live CD such as SystemRescueCD. Make a system partition and a separate /home partition, or even a separate /encrypted partition if you don't want the whole home directory to be encrypted but only some selected files. Make a dm-crypt volume for the one filesystem that you want to encrypt.

There might be a little to gain by choosing the fastest cipher. AES-128 instead of AES-256 should give you a very slight performance increase at no cost to security. Pick CBC rather than XTS for the cipher mode, since you don't need integrity: cryptsetup luksFormat -c aes-cbc-essiv:sha256 -s 128. You can even choose aes-cbc-plain as the cipher: it's insecure, but only if the attacker can plant chosen files on your system, which doesn't matter for your use case; I don't know if there's any gain in performance though. The choice of hash (-h) only influences the time it takes to verify your passphrase when you mount the disk, so don't skimp on it.
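If your cryptsetup is new enough to have the benchmark action (1.6+, so newer than what stock SL 6 shipped), you can measure the candidates instead of guessing; a sketch:

    # in-memory throughput of the cipher specs discussed above
    cryptsetup benchmark -c aes-cbc-essiv:sha256 -s 128
    cryptsetup benchmark -c aes-xts-plain64 -s 256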
Fast encryption for home directory with Scientific Linux (ala' RedHat)
We are using dm-verity for a squashfs root file system. Using kernel 4.8.4 everything was OK; after upgrading to kernel 4.14.14, mount fails, even though the veritysetup verify command validates the image.

    # veritysetup verify /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug
    # cryptsetup 1.7.4 processing "veritysetup verify /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug"
    # Running command verify.
    # Allocating crypt device /dev/mmcblk0p6 context.
    # Trying to open and read device /dev/mmcblk0p6 with direct-io.
    # Initialising device-mapper backend library.
    # Trying to load VERITY crypt type from device /dev/mmcblk0p6.
    # Crypto backend (OpenSSL 1.0.2m  2 Nov 2017) initialized in cryptsetup library version 1.7.4.
    # Detected kernel Linux 4.14.14-yocto-standard armv7l.
    # Reading VERITY header of size 512 on device /dev/mmcblk0p6, offset 4096.
    # Setting ciphertext data device to /dev/mmcblk0p5.
    # Trying to open and read device /dev/mmcblk0p5 with direct-io.
    # Activating volume [none] by volume key.
    # Trying to activate VERITY device [none] using hash sha256.
    # Verification of data in userspace required.
    # Hash verification sha256, data device /dev/mmcblk0p5, data blocks 10462, hash_device /dev/mmcblk0p6, offset 2.
    # Using 2 hash levels.
    # Data device size required: 42852352 bytes.
    # Hash device size required: 348160 bytes.
    # Verification of data area succeeded.
    # Verification of root hash succeeded.
    # Releasing crypt device /dev/mmcblk0p6 context.
    # Releasing device-mapper backend.
    Command successful.

    # veritysetup create vroot /dev/mmcblk0p5 /dev/mmcblk0p6 --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b --debug
    # mount -o ro /dev/mapper/vroot /mnt/
    device-mapper: verity: 179:5: metadata block 2 is corrupted
    EXT4-fs (dm-0): unable to read superblock
    device-mapper: verity: 179:5: metadata block 2 is corrupted
    EXT4-fs (dm-0): unable to read superblock
    device-mapper: verity: 179:5: metadata block 2 is corrupted
    EXT4-fs (dm-0): unable to read superblock
    device-mapper: verity: 179:5: metadata block 2 is corrupted
    SQUASHFS error: squashfs_read_data failed to read block 0x0
    squashfs: SQUASHFS error: unable to read squashfs_super_block
    device-mapper: verity: 179:5: metadata block 2 is corrupted
    FAT-fs (dm-0): unable to read boot sector
    mount: mounting /dev/mapper/vroot on /mnt/ failed: Input/output error

The same error message appears in dmesg. The above commands were run on the target device. On my host machine, Debian 8 (kernel 3.16.0-5), using the files which eventually ended up in /dev/mmcblk0p5 and /dev/mmcblk0p6, I was able to set everything up working:

    # veritysetup create vroot rootfs-image.squashfs rootfs-image.hashtbl --hash-offset 4096 d35f95a4b47c92332fbcf5aced9c4ed58eb2d5115bad4aa52bd9d64cc0ee676b
    # mount /dev/mapper/vroot /tmp/mnt
By having a look at /proc/crypto, I found there are two modules providing sha256: one from Atmel and the generic one:

    name     : sha256
    driver   : atmel-sha256
    module   : kernel
    priority : 100
    [...]
    name     : sha256
    driver   : sha256-generic
    module   : kernel
    priority : 0

By disabling the Atmel SHA hardware accelerator in the kernel, CONFIG_CRYPTO_DEV_ATMEL_SHA=n, it uses the generic implementation, and then everything works. It seems like something changed from kernel 4.8.4 to kernel 4.14.14 that breaks things. That is another issue...
veritysetup verify successful but mount fails after upgrade to new kernel
I'm not sure how to debug this, but I've noticed that if I am performing a task that requires a large amount of disk reads/writes (such as updating a large postgres table), periodically actual reads and writes will fall to 0 while dm_crypt shows 99.9% IO usage in iotop. On top of this, the whole DE will freeze every so often as multiple kworker threads are spawned. The mouse continues to work and can be moved, but no other window responds for around 30-60 s. The CPU is at low utilization the entire time, and the freezes coincide with multiple kworker threads showing up in iotop.

Here's the syslog output for the period of the freeze:

    Oct 22 11:09:47 pop-os /usr/lib/gdm3/gdm-x-session[3348]: (EE) client bug: timer event5 debounce: scheduled expiry is in the past (-6ms), your system is too slow
    Oct 22 11:09:47 pop-os /usr/lib/gdm3/gdm-x-session[3348]: (EE) client bug: timer event5 debounce short: scheduled expiry is in the past (-19ms), your system is too slow
    Oct 22 11:10:12 pop-os gjs[184224]: JS ERROR: Gio.IOErrorEnum: Timeout was reached
    _proxyInvoker@resource:///org/gnome/gjs/modules/core/overrides/Gio.js:139:46
    _makeProxyMethod/<@resource:///org/gnome/gjs/modules/core/overrides/Gio.js:164:30
    makeAreaScreenshot@/home/anthony/.local/share/gnome-shell/extensions/[email protected]/auxhelper.js:78:33
    main/<@/home/anthony/.local/share/gnome-shell/extensions/[email protected]/auxhelper.js:190:21
    main@/home/anthony/.local/share/gnome-shell/extensions/[email protected]/auxhelper.js:204:30
    @/home/anthony/.local/share/gnome-shell/extensions/[email protected]/auxhelper.js:216:3
    Oct 22 11:10:36 pop-os gnome-shell[3610]: JS ERROR: Error: cmd: gjs /home/anthony/.local/share/gnome-shell/extensions/[email protected]/auxhelper.js --filename /tmp/gnome-shell-screenshot-ZPGAT0.png --area 3640,809,948,419 exitCode=256
    callHelper/<@/home/anthony/.local/share/gnome-shell/extensions/[email protected]/selection.js:87:16
    Oct 22 11:10:50 pop-os gnome-shell[3610]: ../clutter/clutter/clutter-actor.c:10558: The clutter_actor_set_allocation() function can only be called from within the implementation of the ClutterActor::allocate() virtual function.

The postgres database is stored on a separate disk from the OS, so there shouldn't be any reason for my DE to freeze when writing to it. Does anybody have any suggestions on how I can further debug this and figure out what's causing the problem?

pop-os 20.04, 5.4.0-7634-generic
To fix this I had to edit vm.dirty_ratio and vm.dirty_background_ratio. The issue was that I was writing to the disk faster than the disk could handle, and the system froze whenever the cache was filled.
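For reference, a sketch of that tuning; the original answer gives no numbers, so the values below are common suggestions rather than the author's:

    # try smaller write-back thresholds at runtime
    sudo sysctl -w vm.dirty_background_ratio=5 vm.dirty_ratio=10
    # persist them if the freezes stop
    printf 'vm.dirty_background_ratio = 5\nvm.dirty_ratio = 10\n' | \
        sudo tee /etc/sysctl.d/90-writeback.conf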
dm_crypt / kworker hogging IO and causing system freeze
I'm working on dm-crypt, utilizing cryptsetup. I'm interested to understand if it's using a fixed block dimension to encrypt files. Let me explain it better: I created a LUKS envelope, formatted it with luksFormat, then opened and mounted it in the file system. Then I normally write files in that encrypted folder. I want to understand: if I write an 8 KB file, is there the possibility that dm-crypt encrypts it in blocks of fixed dimensions, and in that case, is there a way to modify this block dimension?

    |-----------------------------------------------|
    |                     8 Kb                      |
    |-----------------------------------------------|
    | b1 | b2 | b3 |    |    |    | bn |    |    |  |
    |-----------------------------------------------|
Are you talking about the blocksize used by the cipher? Cryptsetup uses block ciphers, often with a 16-byte blocksize. Changing the cipher might change the blocksize; see /proc/crypto for available ciphers & details, and man cryptsetup.

Cryptsetup has a fixed blocksize, 512 bytes; here's a little from its FAQ:

    2.18 Is there a concern with 4k Sectors?

    Not from dm-crypt itself. Encryption will be done in 512B blocks, but
    if the partition and filesystem are aligned correctly and the
    filesystem uses multiples of 4kiB as block size, the dm-crypt layer
    will just process 8 x 512B = 4096B at a time with negligible overhead.
    LUKS does place data at an offset, which is 2MiB per default and will
    not break alignment. See also Item 6.12 of this FAQ for more details.
    Note that if your partition or filesystem is misaligned, dm-crypt can
    make the effect worse though.

Also mentioned in 5.16:

    There is a potential security issue with XTS mode and large blocks.
    LUKS and dm-crypt always use 512B blocks and the issue does not apply.

You might also be interested in this closed cryptsetup issue (#150), "Add dm-crypt support for larger encryption sector (block) size":

    Comment by chriv... on 2013-11-07 11:32:05:
    I would be very interested in this. It turns out, there are many
    embedded-type systems with on-board crypto accelerators, that fail to
    perform adequately when given small blocks to work with. Examples
    include mv_cesa, which is found in so many home NASes these days (all
    the orion/kirkwood boards, at least. This includes most of Synology
    and QNaps offerings)

    Milan Broz @mbroz commented 5 months ago - Owner:
    The sector size option is in kernel 4.12 but will be supported only
    (optionally) in LUKS2.
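Following up on that last comment: with LUKS2 (cryptsetup 2.x on kernel 4.12+), the encryption sector size can indeed be raised. A sketch, with a hypothetical device name:

    # format with 4096-byte encryption sectors (LUKS2 only)
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/sdXn
    # confirm what was written
    cryptsetup luksDump /dev/sdXn | grep -i sector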
dm-crypt / cryptsetup: which block encryption dimension is used?
/path/to/directory/ is a path that points within an encrypted volume, to an arbitrary depth. In a bash script I need to determine if the block device related to this path is a removable device. I'm using Arch Linux. I have looked at a lot of similar questions (such as those listed below and others) but did not find a suitable answer:

- linux - How to determine which sd* is usb? - Unix & Linux Stack Exchange
- bash - How to know if /dev/sdX is a connected USB or HDD? - Unix & Linux Stack Exchange
- mount - How do I know the device path to an USB-stick? - Ask Ubuntu

This is an example of what I'm working with:

    findmnt -n -o SOURCE --target /path/to/directory/
    /dev/mapper/luksdev[/@subvolume]

    findmnt -D --target /path/to/directory
    SOURCE                           FSTYPE SIZE  USED   AVAIL USE% TARGET
    /dev/mapper/luksdev[/@subvolume] btrfs  4.5T  203.5G 4.3T  4%   /path/to/directory

    df -P /path/to/directory/ | awk 'END{print $1}'
    /dev/mapper/luksdev

(The findmnt parameter --target seems to be required if the path is not the exact mountpoint.)

If the script can determine the block device (e.g., /dev/sda1) associated with /dev/mapper/luksdev, I get a step closer:

    udevadm info --query=all --name=/dev/sda1 | grep ID_BUS | grep "=usb"
    E: ID_BUS=usb

But I assume not all removable devices are usb, right? By the way, I am OK with methods specific to BTRFS, if that makes this any easier. I did check: btrfs - Find physical block device of root filesystem on an encrypted filesystem? - Unix & Linux Stack Exchange

EDIT: based on the answer by Vojtech Trefny, here is what I have:

    mapper_path=$(findmnt -n -o SOURCE --target /path/to/directory/ | cut -d [ -f 1)
    mydev=$(lsblk -sl -o NAME /${mapper_path} | tail -n 1)
    drive_name=$(udisksctl info -b /dev/${mydev} | grep "Drive:" | cut -d"'" -f2)
    drive_name=$(echo $drive_name | sed -e 's|/org/freedesktop/UDisks2/drives/||')
    udisksctl info -d ${drive_name} | grep "\sRemovable:" | cut -d":" -f2 | tr -d "[:blank:]"
From the /dev/mapper path, the easiest way to get the disk name should be lsblk with -s to list devices in inverse order:

$ lsblk -sl -o NAME /dev/mapper/<name> | tail -1
sda

The easiest way from here is probably to check the removable property in sysfs:

$ cat /sys/block/sda/removable
0

but I'd recommend using UDisks here; it does some extra checks on top of the sysfs information, which can be wrong about some removable devices. You can either use busctl to communicate with UDisks over DBus, or use udisksctl and grep the output.

$ busctl get-property org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Drive
o "/org/freedesktop/UDisks2/drives/<drive_name>"
$ busctl get-property org.freedesktop.UDisks2 /org/freedesktop/UDisks2/drives/<drive_name> org.freedesktop.UDisks2.Drive Removable
b false

or

$ udisksctl info -b /dev/sda | grep "Drive:" | cut -d"'" -f2
/org/freedesktop/UDisks2/drives/<drive_name>
$ udisksctl info -d <drive_name> | grep "\sRemovable:" | cut -d":" -f2 | tr -d "[:blank:]"
false
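Putting the pieces together, a hedged sketch of the whole lookup (the path is illustrative, and this uses the simple sysfs check; swap in the udisksctl calls above if you want the extra checks):

#!/bin/bash
# resolve the mount source and strip any btrfs [subvolume] suffix
src=$(findmnt -n -o SOURCE --target /path/to/directory | cut -d'[' -f1)
# walk down to the underlying disk: last line of the inverse listing
disk=$(lsblk -sln -o NAME "$src" | tail -n1)
# 1 = removable according to the kernel, 0 = fixed
cat "/sys/block/$disk/removable"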
Determine if a given path is on a removable device, even if encrypted, via bash script
1,679,406,126,000
So I'm installing a new Arch with the document here for whole-system encryption. The first part I get confused by is where the document warns: Warning: GRUB does not support LUKS2. Do not use LUKS2 on partitions that GRUB needs to access. But in a later part of the section, it tells me to run the command cryptsetup luksFormat /dev/sda3. When I run it, it asks for a password, but didn't it just say GRUB doesn't support LUKS2? I enter the password and go on to installing GRUB, where I run grub-mkconfig -o /boot/grub/grub.cfg and it says "failed to connect to lvmetad", but since it's only a warning I ignored it. I then go through the process to the end without getting any error. But when I exit arch-chroot and reboot, it can't boot; it skips to the next OS (which is Windows 10 in my case). Why? Which part did I get wrong? How do I solve it?

P.S. Here is a table of my disk from the command lsblk:

NAME             SIZE    TYPE  MOUNTPOINT
sda              114.6G  disk
sda1             4G      part
sda2             4G      part  /mnt/boot/efi
sda3             16G     part
  cryptboot      16G     crypt /mnt/boot
sda4             90.6G   part
  lvm            90.6G   crypt
    AALEvol-swap 8G      lvm   [SWAP]
    AALEvol-root 82.6G   lvm   /mnt
Create a partition at the beginning of your hard disk drive; its size should be between 600 MB and 1 GB, and in the Linux setup mark that partition as the /boot partition. You shouldn't encrypt the boot partition, as none of your potentially sensitive data will be written to it. If you want to wipe the entire hard disk drive before re-partitioning, I suggest you use fdisk -l | more to list all your hard disk drives and all partitions on them; when you find the drive, do dd if=/dev/urandom of=/dev/sdX, where X is your HDD's letter. Then create the other partitions, which will be encrypted: 1. swap, 2. / (root) and 3. /home (optional).
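To connect this to the LUKS2 warning in the question: in the lsblk output, /boot sits on the encrypted sda3, which is exactly the partition GRUB must read. A hedged alternative to moving /boot (device name as in the question; this assumes a cryptsetup with LUKS1 support and GRUB_ENABLE_CRYPTODISK=y set in /etc/default/grub) is to format that one container as LUKS1, which GRUB's cryptodisk code can open:

cryptsetup luksFormat --type luks1 /dev/sda3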
Can't boot Arch Linux after installation with dm-crypt whole system encryption (BIOS)
1,679,406,126,000
I have LVM on top of cryptsetup on my Debian unstable amd64. A week ago, after an upgrade, my initramfs changed, and now I have to wait a few minutes at the beginning of the boot before cryptsetup asks for the password to unlock the partition. There might be some problem with the generated images, as at first only some kernels were affected; after a proper run of update-initramfs -u -k all, all installed kernels are affected. The wiki instructions don't help much, as the debug kernel option takes too long and nothing happens (booting just stops). I tried to debug it with the single and break=mount kernel options, but found nothing. I see that in the initramfs there is a running script, /scripts/init-premount, which starts dropbear (nothing unusual). I have no idea what changed. Any idea how I can debug the problem? Any module missing? My config:

-- /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.3.0-rc5-amd64 root=/dev/mapper/t61-root ro

-- /etc/crypttab
sda2_crypt UUID=c524108a-b40f-49b4-8223-23e3441a7409 none luks

-- /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/mapper/t61-root / ext4 relatime,errors=remount-ro 0 1
UUID=411fe373-ec79-45f7-90f2-e50be400c71d /boot ext4 defaults 0 2
tmpfs /tmp tmpfs nodev,nosuid,size=512M,mode=1777 0 0
/dev/mapper/t61-home /home ext4 defaults,relatime 0 2
/dev/mapper/t61-swap none swap sw 0 0

-- /etc/initramfs-tools/modules
dm-crypt
aes-x86_64
xts
sha256_generic
sha512_generic
My problem was an unconfigured network, which was required by dropbear. The actual problem with the network was my ISP's DHCP server, which is alive but does not provide network settings for me, so I have to configure the network manually. In the initramfs I had to wait for all the DHCP attempts. This happens in the function configure_networking() in /scripts/functions (the source file for mkinitramfs on a running system is /usr/share/initramfs-tools/scripts/functions). Although dropbear is a great solution for servers, my system is a laptop and I don't need it. I didn't even bother configuring or disabling it (in /etc/initramfs-tools/conf-hooks.d/dropbear); I simply removed dropbear from initramfs-tools:

apt-get remove dropbear-initramfs
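If you do need dropbear in the initramfs, a hedged alternative (addresses and interface name are illustrative) is to skip DHCP entirely by passing a static configuration on the kernel command line, which initramfs-tools understands; the ip= fields are client-IP:server-IP:gateway:netmask:hostname:device:autoconf:

# in /etc/default/grub, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet ip=192.168.1.10::192.168.1.1:255.255.255.0:t61:eth0:off"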
Waiting long (few minutes) during boot for cryptsetup for password prompt
1,679,406,126,000
I store documents on a btrfs partition built on a sparse dm-crypt device located in a file on the main ext4 partition of a physical hard drive. When the kernel panics (and this happens on a daily basis on my ASUS P53E with a 3.6 kernel :-( ) I lose recently modified files (the files' contents always get replaced with zeros). One way of preventing damage to files is to disable the write cache. For this I would need to disable write caching for the btrfs partition, for the dm-crypt device that backs the partition, and for the sparse file where the dm-crypt device lives. How can I check the write-cache status for the drives? How can I disable it? I use Mint 13 Maya with a 3.6.8 mainline kernel.
I'm not sure the disk drive's write cache is going to fix the issue for you, as it sounds like you are using a loop device. So there is still the page cache/file in between your btrfs filesystem and the actual disk. The same type of issue exists for journaling filesystems, detailed here for loop-AES. So when data is synced to your loop device, it may not be on the real disk yet, just in a cache waiting to be reordered and written out. ext4 doesn't support the sync mount option that ext2/3 did to disable caching. Due to the layers in between, I'm not sure even that would get you an effective recovery (I don't know enough about the internals, unfortunately), but at least more data would make it to disk. In the same way, you might be able to limit the issue by tuning the page cache so the system writes out to disk more often. The Linux page cache reports its values in /proc/meminfo under "Dirty" (pages that are currently dirty) and "Writeback" (dirty pages that are being written out to disk). There are files in /proc/sys/vm/ that report status and control the flush threads that write data back to disk. You could put a small value (greater than 8096, or 2 pages) in /proc/sys/vm/dirty_background_bytes to make the background pdflush process run more actively, or in /proc/sys/vm/dirty_bytes to make a process trigger a flush more actively (for a performance penalty, or at least more disk writes overall). I think it's unlikely the hardware write cache is where your main issue lies. If you were only going directly to the device via dm-crypt, then I'd look there first. In any case, IDE and SATA write caches can be disabled with hdparm -W0 /dev/sdX. Also, as you're using a (technically) experimental filesystem in a more edge-case way than most, you might have better luck with a more mature fs where people have stumbled into the issues already. If you need btrfs, the best bet would be to do the encryption on a physical partition.
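A hedged sketch of those knobs (the byte values are purely illustrative; tune them for your workload, and run as root):

# flush dirty pages sooner: background writeback at 8 MiB, forced flush at 16 MiB
echo 8388608  > /proc/sys/vm/dirty_background_bytes
echo 16777216 > /proc/sys/vm/dirty_bytes
# query, then disable, the drive's own write cache
hdparm -W /dev/sda
hdparm -W0 /dev/sda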
How to debug/audit which devices cache writes?
1,679,406,126,000
While installing my Gentoo system I encountered a strange error. One of my disks is encrypted. I did the whole installation via ssh, so I also entered the encryption passphrase via ssh. Now I entered the password directly on the computer, and it did not work. (I tried about six times, and even plugged in the keyboard from the other computer and tried again.) After ssh'ing into the box, unlocking the crypt drive worked without complaining. I think I have to reformat that crypt drive again, because it's intended to be a root encryption, so ssh'ing into the computer won't be possible at that stage of boot. (Reformatting the drive is not a problem; I probably did that about 20 times in the last five days.) But why does this problem occur at all? And yes, my password is long, about 30 characters (numbers, upper/lowercase, special chars), and I am using a German keyboard on both computers. But entering the passphrase into the shell, to actually see it, yields (at least from what I can see) the same result.
At boot time, you have a US keyboard layout until another layout is loaded. If you want to have a different layout at boot time, you need to include the keymap in your initrd/initramfs. For Gentoo, the Gentoo wiki has instructions on building an initramfs with a custom keymap. See also the discussion in bug #218920. A second issue is that passphrases are really made of bytes, not characters. If you use a different encoding on the console and in your SSH session, you may have difficulties typing the proper bytes. For example, if your password is swördfish in UTF-8, then you need to enter swördfish on a latin-1 terminal; and if your password is swördfish in latin-1, you won't be able to type it on an UTF-8 terminal. I recommend using only printable ASCII characters in your passphrase.
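To test the second point (byte encoding) concretely, a small hedged check: type the passphrase on each machine and compare the hashes; if they differ, the two environments are sending different bytes even though the glyphs on screen look identical:

read -rs pass; printf %s "$pass" | sha256sum; unset pass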
cryptsetup luksOpen only accepts password via ssh
1,679,406,126,000
I have a device that is encrypted using dm-crypt. This is a mini SD card that I use on my laptop. I've had some issues with my laptop freezing recently, and in the journal these messages come up: Mar 20 17:18:30 gorgonzola kernel: EXT4-fs (dm-0): warning: mounting fs with errors, running e2fsck is recommended Mar 20 17:18:30 gorgonzola kernel: EXT4-fs (dm-0): recovery complete Mar 20 17:18:30 gorgonzola kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) ... Mar 20 17:23:30 gorgonzola kernel: EXT4-fs (dm-0): error count since last fsck: 84 Mar 20 17:23:30 gorgonzola kernel: EXT4-fs (dm-0): initial error at time 1505289981: ext4_journal_check_start:60 Mar 20 17:23:30 gorgonzola kernel: EXT4-fs (dm-0): last error at time 1551543757: ext4_reserve_inode_write:5903: inode 1054920: block 4194732 I have tried running fsck, but I get this error: Bad magic number in super-block Before attempting to resolve this, I just want to make sure that I should indeed be able to run fsck on a dm-encrypted drive. Or is this error expected? The reason why I am mystified is because I can mount this device just fine. For all intents and purposes, the drive works well. It mounts, it can read and write all data... the only problem is that I get this error at boot. So is there really a problem with the super block?
Are you trying to run fsck on the /dev/sd* (or whatever) device that refers to the actual SD or its partition, just like on an unencrypted device? If so, that device is fully encrypted, and that's why fsck cannot make any sense of it at all. If it found anything recognizable as a filesystem, that would be a sign of dm-crypt not working: the encrypted data is supposed to look like nondescript pseudorandom noise. You need to point the fsck to the dm-crypt target, which will probably be named /dev/mapper/<something>. And that requires using cryptsetup to open the encrypted device first, just like when preparing to mount the encrypted device, before trying to run fsck on it. Since dm-crypt has several possible modes, I cannot suggest a correct cryptsetup command without knowing more about your setup. Perhaps your /etc/crypttab file might contain the necessary details?
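For completeness, a hedged sketch of the right order of operations (device and mapping names are illustrative, and the filesystem must be unmounted while fsck runs):

cryptsetup open /dev/mmcblk0p1 sdcrypt    # prompts for the passphrase
fsck.ext4 -f /dev/mapper/sdcrypt          # fsck now sees ext4, not ciphertext
cryptsetup close sdcrypt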
Bad magic number in super-block: dm-crypt device
1,679,406,126,000
I'm on an embedded Linux device and trying to open an encrypted squashfs for my rootfs. The image is created on the host (build agent), and from there I'm able to open and work with the content, so I know the image is correct. From the embedded Linux's initramfs, when I try to open the image I get this error:

root# cryptsetup open ./rootfs.sqfs.img rootfs
# cryptsetup 2.5.0 processing "/usr/sbin/cryptsetup --debug open ./rootfs.sqfs.img rootfs"
# Verifying parameters for command open.
# Running command open.
# Locking memory.
# Installing SIGINT/SIGTERM handler.
# Unblocking interruption on signal.
# Allocating context for crypt device ./rootfs.sqfs.img.
# Trying to open and read device ./rootfs.sqfs.img with direct-io.
# Initialising device-mapper backend library.
# Trying to load any crypt type from device ./rootfs.sqfs.img.
Cannot initialize crypto backend.
Device ./rootfs.sqfs.img is not a valid LUKS device.
# Releasing crypt device ./rootfs.sqfs.img context.
# Releasing device-mapper backend.
# Unlocking memory.

Some searching online makes it sound like this error is caused by a missing kernel module, but I have all the modules that have been listed. I have the following CRYPTO modules enabled:

CONFIG_CRYPTO_SHA1_ARM=y
CONFIG_CRYPTO_SHA256_ARM=y
CONFIG_CRYPTO_SHA512_ARM=y
CONFIG_CRYPTO_AES_ARM=y
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=y
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_ECDH=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=y
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_KEYWRAP=y
CONFIG_CRYPTO_CMAC=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_RMD128=y
CONFIG_CRYPTO_RMD160=y
CONFIG_CRYPTO_RMD256=y
CONFIG_CRYPTO_RMD320=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_ZSTD=y
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_HASH_INFO=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_ATMEL_AES=y
CONFIG_CRYPTO_DEV_ATMEL_TDES=y

I also have device mapper support (dm_crypt) in my kernel. All options are built into the kernel, so the issue is not an unloaded module. On the embedded Linux system cryptsetup version 2.5.0 is installed; the host has version 2.2.2. The embedded Linux is running kernel 4.19.231. What else am I missing for cryptsetup to be able to map this to /dev/mapper/rootfs?

EDIT: I thought I was using the kernel backend; I'm not sure how to check on the embedded Linux system.
Running on the host, it appears to use OpenSSL (see below); my initramfs does not include OpenSSL, so if it's trying to use OpenSSL rather than the kernel, that may be my problem.

# cryptsetup 2.2.2 processing "cryptsetup --debug open rootfs.sqfs.img rootfs"
# Running command open.
# Locking memory.
# Installing SIGINT/SIGTERM handler.
# Unblocking interruption on signal.
# Allocating context for crypt device rootfs.sqfs.img.
# Trying to open and read device rootfs.sqfs.img with direct-io.
# Initialising device-mapper backend library.
# Trying to load any crypt type from device rootfs.sqfs.img.
# Crypto backend (OpenSSL 1.1.1f 31 Mar 2020) initialized in cryptsetup library version 2.2.2.
# Detected kernel Linux 5.15.0-58-generic x86_64.
# Loading LUKS2 header (repair disabled).
# Acquiring read lock for device rootfs.sqfs.img.
# Verifying lock handle for rootfs.sqfs.img.
# Device rootfs.sqfs.img READ lock taken.
# Trying to read primary LUKS2 header at offset 0x0.
# Opening locked device rootfs.sqfs.img
# Veryfing locked device handle (regular file)
# LUKS2 header version 2 of size 16384 bytes, checksum sha256.
# Checksum:a69c54af714a6d46ac5a514399ebe367012a233d742d2f2913a7b5979ae70441 (on-disk)
# Checksum:a69c54af714a6d46ac5a514399ebe367012a233d742d2f2913a7b5979ae70441 (in-memory)
# Trying to read secondary LUKS2 header at offset 0x4000.
# Reusing open ro fd on device rootfs.sqfs.img
# LUKS2 header version 2 of size 16384 bytes, checksum sha256.
# Checksum:d1a6fae45d92dd47f5a99e11e6d157bc6ba0140fc2bd62ebc1fb9dad0414f0ff (on-disk)
# Checksum:d1a6fae45d92dd47f5a99e11e6d157bc6ba0140fc2bd62ebc1fb9dad0414f0ff (in-memory)
# Device size 68157440, offset 16777216.
# Device rootfs.sqfs.img READ lock released.
# PBKDF argon2i, time_ms 2000 (iterations 0), max_memory_kb 1048576, parallel_threads 4.
# Activating volume rootfs using token -1.
# Interactive passphrase entry requested.
Enter passphrase for rootfs.sqfs.img:
# Activating volume rootfs [keyslot -1] using passphrase.
# dm version [ opencount flush ] [16384] (*1)
# dm versions [ opencount flush ] [16384] (*1)
# Detected dm-ioctl version 4.45.0.
# Detected dm-crypt version 1.23.0.
# Device-mapper backend running with UDEV support enabled.
# dm status rootfs [ opencount noflush ] [16384] (*1)
# Keyslot 0 priority 1 != 2 (required), skipped.
# Trying to open LUKS2 keyslot 0.
# Reading keyslot area [0x8000].
# Acquiring read lock for device rootfs.sqfs.img.
# Verifying lock handle for rootfs.sqfs.img.
# Device rootfs.sqfs.img READ lock taken.
# Reusing open ro fd on device rootfs.sqfs.img
# Device rootfs.sqfs.img READ lock released.
# Verifying key from keyslot 0, digest 0.
# Loading key (64 bytes, type logon) in thread keyring.
# dm versions [ opencount flush ] [16384] (*1)
# dm status rootfs [ opencount noflush ] [16384] (*1)
# Allocating a free loop device.
# Trying to open and read device /dev/loop27 with direct-io.
# Calculated device size is 100352 sectors (RW), offset 32768.
# DM-UUID is CRYPT-LUKS2-606147e882c040c3ae6c7a346a4f5b43-rootfs
# Udev cookie 0xd4da08f (semid 32788) created
# Udev cookie 0xd4da08f (semid 32788) incremented to 1
# Udev cookie 0xd4da08f (semid 32788) incremented to 2
# Udev cookie 0xd4da08f (semid 32788) assigned to CREATE task(0) with flags DISABLE_LIBRARY_FALLBACK (0x20)
# dm create rootfs CRYPT-LUKS2-606147e882c040c3ae6c7a346a4f5b43-rootfs [ opencount flush ] [16384] (*1)
# dm reload rootfs [ opencount flush securedata ] [16384] (*1)
# dm resume rootfs [ opencount flush securedata ] [16384] (*1)
# rootfs: Stacking NODE_ADD (253,2) 0:6 0660 [trust_udev]
# rootfs: Stacking NODE_READ_AHEAD 256 (flags=1)
# Udev cookie 0xd4da08f (semid 32788) decremented to 1
# Udev cookie 0xd4da08f (semid 32788) waiting for zero
# Udev cookie 0xd4da08f (semid 32788) destroyed
# rootfs: Skipping NODE_ADD (253,2) 0:6 0660 [trust_udev]
# rootfs: Processing NODE_READ_AHEAD 256 (flags=1)
# rootfs (253:2): read ahead is 256
# rootfs: retaining kernel read ahead of 256 (requested 256)
Key slot 0 unlocked.
# Releasing crypt device rootfs.sqfs.img context.
# Releasing device-mapper backend.
# Closing read only fd for rootfs.sqfs.img.
# Closed loop /dev/loop27 (rootfs.sqfs.img).
# Unlocking memory.
Command successful.

[SOLVED] My issue was caused by the fact that I was using musl libc and lvm2 required glibc. After switching to glibc, cryptsetup was able to load the proper backend.
My issue was caused by the fact that I was using musl libc while lvm2 required glibc. After switching to glibc, cryptsetup was able to load the proper crypto backend.
cryptsetup cannot initialize crypto backend from initramfs
1,666,571,108,000
I suspect there's a bug in Ubuntu's default whole-disk encryption setup. Here's what happens, repeatably:

1. I make a fresh install, Ubuntu 15.10 with whole-disk encryption, overwriting the whole disk
2. It boots and seems to work just fine
3. A few reboots later, programs start crashing. "Ubuntu has experienced an internal error", Firefox will crash immediately on startup, etc.
4. Finally, after an additional reboot or two, it will boot to busybox. Running fsck finds and fixes tons of errors.
5. Go to step 2

Not cool. Conclusions so far: I'm quite sure it's not disk failure. I reproduced this from scratch with two different drives. In both cases, the SMART data looks healthy, and running self tests thru gnome-disks comes up clean. Beyond that... I have no idea.

Details:
- System76 Galago Ultrapro 64-bit desktop
- Ubuntu 15.10, kernel 4.2.0-18-generic
- Default Ubuntu whole-disk encryption setup: ext2 boot partition, dm-crypt+LUKS+ext4 main partition
- I ran into this first with a 256GB Samsung 840 EVO, then reproduced it on a 512GB Samsung 830

I got the same problems in both cases: works fine for a while, but becomes unusable after a few reboots. Installing Ubuntu without disk encryption works. Has this happened to anyone else? I've checked the syslog and couldn't find anything incriminating. Does anyone know how I could figure out what's going on here?
There have been constant reports of corruption bugs with ext4 filesystems, with varying setups; lots of people complaining in forums. The bugs seem to affect more people with RAID configurations. However, they are supposedly fixed in 4.0.3: "4.0.3 includes a fix for a critical ext4 bug that can result in major data loss." https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785672 There are other ext4 bugs, including bugs fixed as of the 30th of November [of 2015]. https://lists.ubuntu.com/archives/foundations-bugs/2015-November/259035.html There is also a very interesting article here discussing configuration options in ext4 and possible corruption with it under power failures. https://www.pointsoftware.ch/2014/02/05/linux-filesystems-part-4-ext4-vs-ext3-and-why-delayed-allocation-is-bad/ I would test the setup with a filesystem other than ext4.
Massive disk corruption on Ubuntu 15.10 with dm-crypt + LUKS full disk encryption?
1,666,571,108,000
I have a dual-boot (xubuntu/#!) setup with LVM on dm-crypt+LUKS as follows:

/dev/sda1 = /boot (xubuntu)
/dev/sda2 = /boot (#!)
/dev/sda3 = encrypted LVM
/dev/mapper/volgroup-xroot = / (xubuntu)
/dev/mapper/volgroup-yroot = / (#!)
/dev/mapper/volgroup-home = /home (/home/xubuntu & /home/crunchbang)
/dev/mapper/volgroup-swap = swap

I have GRUB installed only from xubuntu on the MBR. I was able to set this up and get it working initially. Recently, upon installing LibreOffice on the xubuntu OS, I unwittingly let the network manager get uninstalled. I attempted to reinstall it by booting into crunchbang and then chroot-ing into the xubuntu file system. It worked, but it somehow messed up the crunchbang boot process. First GRUB dropped the crunchbang OS listing; I updated it and it found it again. Now, when I attempt to boot crunchbang, it seems to process everything fine up to requesting a passphrase. After entering my passphrase, it quickly fails, reports the message "cryptsetup: lvm fs found but no lvm configured", and reprompts for the passphrase again. Looking into it, I found this error message comes from the /usr/share/initramfs-tools/scripts/local-top/cryptroot script and occurs here:

if [ "$FSTYPE" = "LVM_member" ] || [ "$FSTYPE" = "LVM2_member" ]; then
    if [ -z "$cryptlvm" ]; then
        message "cryptsetup: lvm fs found but no lvm configured"
        return 1

$FSTYPE is just the type of the dmname, the decrypted LVM container, which is set as $cryptroot and then $crypttarget, apparently successfully, in order to reach this error. It seems the script checks whether $cryptlvm is an empty string and, if so, fails with my error. I have found only one reference to $cryptlvm, setting cryptlvm="" earlier in the cryptroot script, and no reference to it otherwise. I have been checking things against my xubuntu install, and all relevant files so far are equivalent, including setting cryptlvm="" at the beginning of the script. And this is where I'm stuck. Can someone point me in the right direction here?
You make this message disappear by setting your GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub with crypt_opts=<whatever#1>,lvm=<whatever#2>. The script in /usr/share you mention sets the cryptlvm variable from the lvm= part of that parameter. For further reference, my own GRUB_CMDLINE_LINUX_DEFAULT contains: crypt_opts=target=system,source=/dev/sda5,lvm=/dev/mapper/system, where system is my encrypted LVM partition.
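Putting that together, a hedged sketch of the edit (the devices are the answer's example names; substitute your own, and note the config has to be regenerated before it takes effect):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet crypt_opts=target=system,source=/dev/sda5,lvm=/dev/mapper/system"
# then regenerate grub.cfg
update-grub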
boot fails with "cryptsetup: lvm fs found but no lvm configured" [dual boot(2xlinux LVM, dm-crypt+luks)]
1,666,571,108,000
I wanted to resize the filesystems on a machine, but I ran into problems. Purpose: the LV of /home is too big and the LV of / is too small (they're in one VG); I need to move 10 GB from /home to /! Problem: GParted only shows the outer encrypted partition (it can't see the LVs). How can I move 10 GB from /home to / if they're encrypted with LUKS?
GParted doesn't support LVM at all (unless this has changed recently?). You'll need to use the command-line tools. First, if you're booting from some rescue media, ensure that the volume group involved is active. The sequence will be something like:

cryptsetup luksOpen /dev/sda2 encrypted
pvscan
vgchange -ay my_volume_group
lvchange -ay /dev/mapper/my_volume_group-root /dev/mapper/my_volume_group-home

Then shrink the filesystem of the home volume. Use the right tool depending on the filesystem, e.g. resize2fs for ext2/ext3/ext4, resize_reiserfs for ReiserFS, ... Then resize the logical volumes, first shrinking home to make room, then expanding root to use the available space. Check the documentation for the units you can use with lvreduce.

lvreduce -L NEWSIZE /dev/mapper/my_volume_group-home
lvextend -L NEWSIZE /dev/mapper/my_volume_group-root

Finally, extend the filesystem of the root volume.
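A hedged worked example for the exact 10 GB move in the question (the VG/LV names are illustrative; the -r flag tells LVM to resize the ext4 filesystem together with the LV, and shrinking ext4 requires the volume to be unmounted):

umount /home
lvreduce -r -L -10G /dev/mapper/my_volume_group-home   # shrink fs + LV by 10 GiB
lvextend -r -L +10G /dev/mapper/my_volume_group-root   # grow root's LV + fs by 10 GiB
mount /home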
Resize LV's in a LUKS encrypted VG - Ubuntu 11.04
1,666,571,108,000
I'm trying to run cryptsetup benchmark --cipher on the entire list of ciphers included in /proc/crypto. I obtained the list from /proc/crypto by doing the following: cd ./Documents/; cat /proc/crypto | grep "name" | cut -c 16- | tee ciphers.txt Now, I'm trying to find a way to pass each cipher, one by one, through to cryptsetup. My first attempt was simply cat ciphers.txt | cryptsetup benchmark --cipher, but now I am thinking I might need to convert the list I created to a .CSV file and pass it in via a for loop. Is there a way to use the ciphers.txt list I've created, without too much effort, to pass through to cryptsetup?
Your problem has nothing to do with dm_crypt. It's a simple shell programming question. First, you are committing cat abuse. Instead of cat /proc/crypto | grep "name", simply write grep name /proc/crypto (no quotation marks required). You could also combine the grep and cut into a single sed command. Not necessarily easier to read:

sed -n '/^name/s/.*: //p' /proc/crypto

but it requires a single command instead of two. -n prevents sed from printing lines by default. The program finds lines that start with "name" and strips the first part of the line up to the blank after the colon; p ensures that the result is then printed. Let's now address your question. I understand that the --cipher option takes a single cipher. This means that you have to run cryptsetup benchmark several times, once per cipher. This requires a loop, for example:

for cipher in $(<ciphers.txt)
do
    cryptsetup benchmark --cipher "$cipher"
done

The quotation marks are necessary since some cipher names contain special characters such as parentheses. If you don't need the file ciphers.txt, you could do all this in one go:

for cipher in $(grep name /proc/crypto | cut -c 16-)
do
    cryptsetup benchmark --cipher "$cipher"
done
Have Cryptsetup Benchmark --cipher run through list of all ciphers
1,666,571,108,000
My understanding is that dm-crypt serves to abstract the actual block device so that the read/decrypt and write/encrypt happen automatically. However, assuming a device mapping with target: crypt is created, it has a file system, and it is already mounted, is it possible to tell dm-crypt to ask for the key on every write (encryption) and read (decryption) instead of automatically using the key present in the table mapping? What I'm really asking here is can we extend FDE to also have security at run-time and not just when the computer is off or the hard drive is stolen? Or is FBE the only appropriate method for such things?
In principle, it'd be possible to have dm-crypt "forget" the key and require it to be retyped every time, but it'd be impractical and very inconvenient. Reads and writes to the filesystem don't necessarily correspond directly to user operations like "open a file" or "save a file". When a program opens a file, it doesn't necessarily load the whole thing into RAM all at once. It might keep the file open and just read parts of it as needed, interrupting you with password prompts at arbitrary times. Likewise, when a program writes to a file, it doesn't necessarily write the whole thing at once. And even if it does, write caching can delay the actual write to disk. Depending on what software is on your system, there may be things that need to access the filesystem in the background, independent of whatever you're doing (and possibly when you're away from the computer). This would lead to more random interruptions with password prompts. In practice, you can't do what you want because dm-crypt doesn't support it, since it's impractical for the above reasons.
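If the underlying goal is security at run time when you step away, a hedged middle ground is to close the mapping so the key is dropped from kernel memory while the volume is not in use (the mapping name and device are illustrative):

umount /mnt/secret
cryptsetup close secret     # the key is wiped from the kernel until you reopen
# later: cryptsetup open /dev/sdX1 secret && mount /dev/mapper/secret /mnt/secret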
Can dm-crypt be configured to ask for key on every read/write?
1,666,571,108,000
In a shell script:

cryptsetup isLuks /dev/sda1

The above command returns 0 for a LUKS partition (encrypted partition) and 1 for non-LUKS partitions (non-encrypted partitions). I have implemented disk encryption using library APIs in a C++ program. How can I check whether a partition is a LUKS partition or not using the cryptsetup APIs?
You can use the crypt_load function to do that. A quick "reimplementation" of cryptsetup isLuks could look like this:

#include <libcryptsetup.h>
#include <stdio.h>
#include <stdlib.h>

#define DEVICE "/dev/sda1"

int main (void)
{
    struct crypt_device *cd = NULL;
    int ret;

    ret = crypt_init (&cd, DEVICE);
    if (ret != 0) {
        printf ("Failed to initialize device\n");
        return 1;
    }

    ret = crypt_load (cd, CRYPT_LUKS, NULL);
    crypt_free (cd);

    if (ret != 0)
        return 1;
    else
        return ret;
}

Note that crypt_load reads the entire LUKS metadata, so if you only want to check the superblock to see if the device has a LUKS header, using libblkid might be better (and faster). For that you can check this implementation of "is LUKS" in the libblockdev library.
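A hedged note on building and running the snippet above (the file name is an assumption; libcryptsetup ships a pkg-config file on most distributions):

gcc -o isluks isluks.c $(pkg-config --cflags --libs libcryptsetup)
sudo ./isluks; echo $?    # prints 0 if the device carries a LUKS header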
How to implement the "cryptsetup isLuks" function using cryptsetup library APIs
1,666,571,108,000
I have a Debian 11 installation with the following partition layout:

path                           format                               mount point
/dev/nvme0n1p7                 ext4 (no encryption)                 /boot (Debian 11)
/dev/nvme0n1p8                 dm-crypt LUKS2 LVM2 (named vg_main)
/dev/mapper/vg_main-lv_swap    swap                                 -
/dev/mapper/vg_main-lv_debian  ext4                                 / (Debian 11)
/dev/mapper/vg_main-lv_ubuntu  ext4                                 / (Ubuntu 22.04)

The /boot for Ubuntu lives inside its root filesystem (/dev/mapper/vg_main-lv_ubuntu). I'd like to kexec the Ubuntu kernel after booting the Debian kernel, which lives in the unencrypted /boot partition and unlocks the LUKS2 partition. I'd like to use the systemd kexec strategy described here. Is there a way to pass a specific kernel parameter to Debian 11 (which I will do in a specially created GRUB2 entry for this) to tell systemd to simply kexec the Ubuntu 22.04 kernel?

Solution: Worked as per @telcoM's suggestion, with just a few adjustments:

/etc/systemd/system/ubuntu-kexec.target

[Unit]
Description=Ubuntu kexec target
Requires=sysinit.target ubuntu-kexec.service
After=sysinit.target ubuntu-kexec.service
AllowIsolate=yes

/etc/systemd/system/ubuntu-kexec.service

[Unit]
Description=Ubuntu kexec service
DefaultDependencies=no
Requires=sysinit.target
After=sysinit.target
Before=shutdown.target umount.target final.target

[Service]
Type=oneshot
ExecStart=/usr/bin/mount -o defaults,ro /dev/mapper/vg_main-lv_ubuntu /mnt
ExecStart=/usr/sbin/kexec -l /mnt/boot/vmlinuz --initrd=/mnt/boot/initrd.img --command-line="root=/dev/mapper/vg_main-lv_ubuntu resume=UUID=[MY-UUID-HERE] ro quiet splash"
ExecStart=/usr/bin/systemctl kexec

[Install]
WantedBy=ubuntu-kexec.target
You might want to set up a ubuntu-kexec.target, which would be essentially a stripped-down version of multi-user.target, with basically:

[Unit]
Description=Kexec an Ubuntu kernel from within an encrypted partition
Requires=basic.target
#You might get by with just sysinit.target here
Conflicts=rescue.service rescue.target
Wants=ubuntu-kexec.service
After=basic.target rescue.service rescue.target ubuntu-kexec.service
AllowIsolate=yes

This would invoke a ubuntu-kexec.service, which you would create to run your kexec command. The kernel parameter would then be systemd.unit=ubuntu-kexec.target, similar to how rescue.target or emergency.target can be invoked when necessary. The idea is that ubuntu-kexec.target will pull in basic.target (or even just sysinit.target) to get the filesystems mounted, and then pull in the ubuntu-kexec.service, which runs the actual kexec command line. As far as I know, you can specify just one systemd.unit= option, and since you need to say "boot as usual up to sysinit.target/basic.target, then pull in ubuntu-kexec.service", you'll need a unit of type *.target to specify all the necessary details.
How to chainload another kernel with kexec inside a LUKS2 + LVM2 partition?
1,666,571,108,000
I have read "`cryptsetup luksOpen <device> <name>` fails to set up the specified name mapping" and https://www.saout.de/pipermail/dm-crypt/2014-August/004272.html, and tried:

cryptsetup open --type luks <device> <dmname> --key-file /root/luks.key

but I am still getting error 22. Meanwhile,

cryptsetup luksFormat <device> --key-file /root/luks.key -q

outputs "command successful". I followed the steps here: https://gist.github.com/huyanhvn/1109822a989914ecb730383fa0f9cfad and created the key with:

openssl genrsa -out /root/luks.key 4096
chmod 400 /root/luks.key

$ sudo dmsetup targets
striped          v1.6.1
linear           v1.3.1
error            v1.5.1

Edit 1: Realised dm_crypt is not loaded, so did:

$ modprobe dm_crypt

To check:

$ lsmod | grep -i dm_mod
$ which cryptsetup

Also checked:

$ blkid /dev/data
/dev/data: UUID="xxxxxxxxxxxx" TYPE="crypto_LUKS"

Edit 2: More missing modules:

modprobe aes_generic
modprobe xts

Kernel:

$ uname -r
4.9.0-12-amd64

The OS is Debian Stretch, and it's an Azure-provided image; I'm not sure if they have patched anything related to this.
It's a naming conflict: I already had /dev/mapper/data from previous testing, so I had to try it with another name.

cryptsetup open --type luks /dev/data new_name   # 1st time: success
cryptsetup open --type luks /dev/data new_name   # 2nd time: fails
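A hedged way to spot and clear such a conflict before retrying (the name "data" is the one from the question):

ls /dev/mapper/           # or: dmsetup ls, to see which mapping names are taken
cryptsetup close data     # release the stale mapping so the name can be reused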
cryptsetup failed with code 22 invalid argument
1,666,571,108,000
On the Fedora wiki it is mentioned that LUKS offers this protection. LUKS does provide passphrase strengthening but it is still a good idea to choose a good (meaning "difficult to guess") passphrase. What is it exactly and how is it accomplished?
A similar phrase appears in other places (e.g., this Red Hat 5 page), where a bit more detail is given:

LUKS provides passphrase strengthening. This protects against dictionary attacks.

Just from that I would expect the password to be salted and the process to have other hardening applied (e.g., hashing it N times to increase the cost). Googling around, this phrase seems to have first appeared in conjunction with LUKS around 2006 in the Wikipedia article "Comparison of disk encryption software". There, the description of "passphrase strengthening" links to the article on "Key stretching", which is about various techniques to make passwords more resilient to brute-force attacks, including using PBKDF2. And indeed, LUKS1 did use PBKDF2 (LUKS2 switched to Argon2), according to the LUKS FAQ. So that's what passphrase strengthening means in this context: using PBKDF2 and similar schemes to make passwords more difficult to crack. The FAQ also has a short description:

If the password has lower entropy, you want to make this process cost some effort, so that each try takes time and resources and slows the attacker down. LUKS1 uses PBKDF2 for that, adding an iteration count and a salt. The iteration count is per default set so that it takes 1 second per try on the CPU of the device where the respective passphrase was set. The salt is there to prevent precomputation.

For specifics, LUKS used SHA1 as the hashing mechanism in PBKDF2 (since 1.7.0 it's SHA256), with the iteration count set so that it takes about 1 second. See also section 5.1 of the FAQ, "How long is a secure passphrase?", for a comparison of how using PBKDF2 in LUKS1 made for a considerable improvement over plain dm-crypt:

For plain dm-crypt (no hash iteration) this is it. This gives (with SHA1; plain dm-crypt's default is ripemd160, which seems to be slightly slower than SHA1):

Passphrase entropy    Cost to break
60 bit                EUR/USD 6k
65 bit                EUR/USD 200K
70 bit                EUR/USD 6M
75 bit                EUR/USD 200M
80 bit                EUR/USD 6B
85 bit                EUR/USD 200B
...                   ...

For LUKS1, you have to take into account hash iteration in PBKDF2. For a current CPU, there are about 100k iterations (as can be queried with cryptsetup luksDump). The table above then becomes:

Passphrase entropy    Cost to break
50 bit                EUR/USD 600k
55 bit                EUR/USD 20M
60 bit                EUR/USD 600M
65 bit                EUR/USD 20B
70 bit                EUR/USD 600B
75 bit                EUR/USD 20T
...                   ...
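To see these parameters on one of your own volumes, a hedged example (the device name is illustrative):

cryptsetup luksDump /dev/sda2 | grep -iE 'pbkdf|iterations'   # per-keyslot KDF settings
cryptsetup benchmark                                          # KDF and cipher speed on this machine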
In regards to dm-crypt with LUKS, what is meant by "passphrase strengthening"?
1,666,571,108,000
How to download and install cryptsetup-luks-devel package for Debian? I can't find it. When I google I get this package only for CentOS.
On Debian, the package is called libcryptsetup-dev: "This package provides the libcryptsetup development files."

sudo apt install libcryptsetup-dev
Where to download cryptsetup-luks-devel package?
1,666,571,108,000
I plan to encrypt a user's /home directory, preferring dm-crypt over eCryptfs, since it seems to read data several times faster. But encrypting the whole /home would be a problem for the other users, who would be bothered by entering an encryption key at every login. Is it possible to separate /home/$USER as a partition?
Log out as that user and proceed as root:

1. Create the additional partition with fdisk or parted.
2. Make a temporary mountpoint for that partition, say /mnt/tempuser, and mount it there.
3. rsync /home/$USER to /mnt/tempuser/ and then mv /home/$USER /home/originaluser.
4. mkdir /home/$USER and chown it to said $USER; then umount /mnt/tempuser and mount the new partition on /home/$USER.

Now try logging in as $USER. su - $USER from that same root console, for example, should be enough to test whether it worked or what went wrong. The fallback would be:

umount /home/$USER
mv /home/$USER /home/faileduser
mv /home/originaluser /home/$USER

Then, if all seems fine with logging in like that, add it to /etc/fstab so it gets mounted on boot, defining the mountpoint as /home/$USER.
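Since the plan is dm-crypt for that user's partition, hedged example entries for the persistent setup (device, mapping name, and user are illustrative):

# /etc/crypttab  - asks for the passphrase at boot; mapping appears in /dev/mapper
home_alice  /dev/sdb1  none  luks
# /etc/fstab
/dev/mapper/home_alice  /home/alice  ext4  defaults  0  2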
A separate partition for a user's /home directory?
1,666,571,108,000
I'm running Arch Linux (systemd) on several systems. Some have SSD storage, others have nvme storage devices and some use rotational HDD's. Some systems use dm-crypt to encrypt the filesystem. All systems run btrfs on /. I wish to have a bash script determine the physical device which hosts the root filesystem (/). The purpose is to check if that block device supports trim, and if so, to then take some action if fstrim.timer is not enabled on the system. If we know that / is on /dev/sda for example, we can check hdparm -I /dev/sda | grep TRIM to find out if trim is supported. If so, I can do systemctl enable fstrim.timer. But on an encrypted system, / is reported as being on /dev/mapper/cryptoroot or something similar, and I am not finding a script-friendly way to map that back to the physical block device (e.g., /dev/sda) to determine if it supports trim. My understanding is that SSD's generally benefit from having periodic trim run, while NVMe devices may not. For non-encrypted situations, these questions are relevant: How do I find on which physical device a folder is located? Find out what device /dev/root represents in Linux? https://unix.stackexchange.com/a/431968/15010
BTRFS supports multiple devices, so what you can do is use btrfs fi show to get the list of block devices. Then use cryptsetup status to check whether a given device is a LUKS container; if it is, the command will output the underlying device. I wouldn't call this script-friendly, since you'll have to parse the output, but it should work.
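A hedged sketch of that parsing (output formats differ a little between versions, so treat the awk patterns as assumptions to verify on your system):

#!/bin/bash
# every device backing the root btrfs filesystem
for dev in $(btrfs filesystem show / | awk '/ path /{print $NF}'); do
    # for an open LUKS mapping, "cryptsetup status" prints a "device:" line
    backing=$(cryptsetup status "$dev" 2>/dev/null | awk '/device:/{print $2}')
    disk=${backing:-$dev}
    # strip the partition to get the whole disk, e.g. sda2 -> sda
    lsblk -no pkname "$disk"
done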
Find physical block device of root filesystem on an encrypted filesystem?
1,666,571,108,000
My disks (ZFS on Linux on encrypted LUKS) are not staying in standby, and I'm not able to identify which process is waking them up. iotop shows the command txg_sync, which is related to ZFS, so I tried fatrace. But even with fatrace -c I don't get any output; this is related to ZFS and a known issue. My next try was the iosnoop script (https://github.com/brendangregg/perf-tools). With this I was only able to identify that dm_crypt is writing when the disks become active again. So it seems I'm not really able to identify the process or the file being accessed, due to the combination of ZFS and LUKS. What else can I do to identify which process is waking up my drives?
With the following you are able to identify I/O per process:

cut -d" " -f 1,2,42 /proc/*/stat | sort -n -k +3

Field 42 of /proc/<pid>/stat is delayacct_blkio_ticks, the aggregated time the process has spent waiting on block I/O, so the processes at the end of the sorted list are the ones hitting the disk.
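A hedged complement that shows cumulative bytes actually written per process (reading other users' counters in /proc/<pid>/io requires root):

grep -H '^write_bytes' /proc/[0-9]*/io 2>/dev/null | sort -t: -k3 -n | tail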
How to identify which process is writing on encrypted disk with ZFS
1,666,571,108,000
Hello, and thanks for clicking on this for a look. I noticed that in the Arch wiki, under cryptdevice in dm-crypt, you have this:

cryptdevice: This parameter will make the system prompt for the passphrase to unlock the device containing the encrypted root on a cold boot. It is parsed by the encrypt hook to identify which device contains the encrypted system:

cryptdevice=device:dmname

device is the path to the device backing the encrypted device. Usage of persistent block device naming is strongly recommended. dmname is the device-mapper name given to the device after decryption, which will be available as /dev/mapper/dmname. If an LVM contains the encrypted root, the LVM gets activated first and the volume group containing the logical volume of the encrypted root serves as device. It is then followed by the respective volume group to be mapped to root. The parameter follows the form of:

cryptdevice=/dev/vgname/lvname:dmname

Given this, I want to know why some people say :root, while some say cryptoroot, and still others say vgname. I am very confused as to which one is the official one. I used :root:allow-discards and it worked very well, so I'm asking for your take on it. (This line is only edited if you want to create an encrypted Arch, by the way.) Thanks for taking a look, and have a safe day.
You can use whatever you want for the dmname parameter; just make sure to use the same name when referring to the device in other places (e.g. in fstab), or use the UUID. When opening the device manually using cryptsetup (cryptsetup luksOpen <device> <name>), you'll also need to specify a name, which again can be whatever you want; this is the same case. It is even possible to use a different name every time the device is opened (but that would be impractical for system devices which need to be mounted, etc.). When opening the encrypted device, cryptsetup creates a new device-mapper device on top of the encrypted device which (from the system's point of view) is not encrypted (the system sees a "normal" device with an ext4 filesystem; the only difference is that all writes to it are encrypted before the data is written to the underlying block device), and you need a name for it; as I already said, you can use any name you want. Some tools like UDisks and systemd use luks-<UUID> just to make sure the name is unique system-wide, but it's not necessary. This is how an encrypted (unlocked) partition looks in Fedora with the luks-<UUID> name:

└─sda2                                          8:2    0 930,5G  0 part
  └─luks-094c2ba3-eb59-48fe-83ab-eca3fe533c03 253:0    0 930,5G  0 crypt

and this is the /dev/mapper symlink:

$ ls -la /dev/mapper/luks*
lrwxrwxrwx. 1 root root 7 19. pro 08.25 /dev/mapper/luks-094c2ba3-eb59-48fe-83ab-eca3fe533c03 -> ../dm-0
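To make the consistency requirement concrete, a hedged example (the UUID is the one from the listing above, reused purely for illustration): the dmname picked on the kernel line is exactly what root= and fstab must refer to.

# kernel command line parsed by the encrypt hook
cryptdevice=UUID=094c2ba3-eb59-48fe-83ab-eca3fe533c03:cryptroot root=/dev/mapper/cryptroot
# /etc/fstab
/dev/mapper/cryptroot  /  ext4  defaults  0  1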
What is "dmname" in Arch linux grub config
1,666,571,108,000
When I tried to set up encryption using cryptmount-setup, it just returned without any feedback; it did not even create the crypto.fs file. I then ran the commands cryptmount nextcloud_data and cryptmount -l, which show that the creation of the target was clearly unsuccessful, as follows:

------------------------------
Your filing system is now ready to be built - this will involve:
 - Creating the directory "/media/nextcloud_data"
 - Creating a 2700000MB file, "/media/hdd_3tb/crypto.fs"
 - Adding an extra entry ("nextcloud_data") in /etc/cryptmount/cmtab
 - Creating a key-file ("/etc/cryptmount/nextcloud_data.key")
 - Creating an ext3 filingsystem on "/media/hdd_3tb/crypto.fs"
If you do not wish to proceed, no changes will be made to your system.
Please confirm that you want to proceed (enter "yes") [no]: yes
Making mount-point (/media/nextcloud_data)... done
Creating filesystem container (/media/hdd_3tb/crypto.fs)...~ $
~ $ cryptmount nextcloud_data
Target name "nextcloud_data" is not recognized
~ $ cryptmount -l
~ $

I already reduced the size of the crypto file to make sure that is not the problem. Any ideas about this issue?
cryptmount-setup tries to write to a partition that is mounted in read-only mode; that is why the command exits unexpectedly.
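A hedged way to confirm and fix that condition (the mount point is the one from the question):

findmnt -no OPTIONS /media/hdd_3tb       # look for "ro" among the options
mount -o remount,rw /media/hdd_3tb       # remount writable, then rerun cryptmount-setup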
Cryptmount-setup is not working
1,295,896,722,000
How do I move all files in a directory (including the hidden ones) to another directory? For example, if I have a folder "Foo" with the files ".hidden" and "notHidden" inside, how do I move both files to a directory named "Bar"? The following does not work, as the ".hidden" file stays in "Foo". mv Foo/* Bar/ Try it yourself. mkdir Foo mkdir Bar touch Foo/.hidden touch Foo/notHidden mv Foo/* Bar/
Quick answers first; see below for more in-depth discussion and documentation links for bash, ksh93 and zsh.

Zsh

mv Foo/*(DN) Bar/

or

setopt glob_dots null_glob
mv Foo/* Bar/

Case and underscores are ignored in the option name. set -o can also be used like in Korn/POSIX-like shells, and dotglob (DotGlob, DOT_GLOB...) is also supported for compatibility with the GNU shell (bash).

Bash

shopt -s dotglob nullglob
mv Foo/* Bar/

Ksh93

If you know the directory is not empty:

FIGNORE='.?(.)'
mv Foo/* Bar/

Fish

If you know the directory is not empty:

mv Foo/{.,}* Bar/

Standard (POSIX) sh

for x in Foo/* Foo/.[!.]* Foo/..?*; do
  if [ -e "$x" ]; then mv -- "$x" Bar/; fi
done

If you're willing to let the mv command return an error status even though it succeeded, it's a lot simpler:

mv Foo/* Foo/.[!.]* Foo/..?* Bar/

GNU find and GNU mv

find Foo/ -mindepth 1 -maxdepth 1 -exec mv -t Bar/ -- {} +

Standard find

find Foo/. ! -name . -prune -exec sh -c 'mv -- "$@" "$0"' ../Bar/ {} +

Here's more detail about controlling whether dot files are matched in bash, ksh93 and zsh.

Bash

Set the dotglob option.

$ echo *
none zero
$ shopt -s dotglob
$ echo *
..two .one none zero

There's also the more flexible GLOBIGNORE variable, which you can set to a colon-separated list of wildcard patterns to ignore. If unset (the default setting), the shell behaves as if the value was empty if dotglob is set, and as if the value was .* if the option is unset. See Filename Expansion in the manual. The pervasive directories . and .. are always omitted, unless the . is matched explicitly by the pattern.

$ GLOBIGNORE='n*'
$ echo *
..two .one zero
$ echo .*
..two .one
$ unset GLOBIGNORE
$ echo .*
. .. ..two .one
$ GLOBIGNORE=.:..
$ echo .*
..two .one

Ksh93

Set the FIGNORE variable. If unset (the default setting), the shell behaves as if the value was .*. To ignore . and .., they must be matched explicitly (the manual in ksh 93s+ 2008-01-31 states that . and .. are always ignored, but this no longer correctly describes the actual behavior; edit: that was fixed since).

$ echo *
none zero
$ FIGNORE='@(.|..)'
$ echo *
..two .one none zero
$ FIGNORE='n*'
$ echo *
. .. ..two .one zero

You can include dot files in a pattern by matching them explicitly.

$ unset FIGNORE
$ echo @(*|.[^.]*|..?*)
..two .one none zero

To have the expansion come out empty if the directory is empty, use the N pattern matching option: ~(N)@(*|.[^.]*|..?*) or ~(N:*|.[^.]*|..?*).

Zsh

Set the dot_glob option.

% echo *
none zero
% setopt dot_glob
% echo *
..two .one none zero

. and .. are never matched, even if the pattern matches the leading . explicitly.

% echo .*
..two .one

You can include dot files in a specific pattern with the D glob qualifier.

% echo *(D)
..two .one none zero

Add the N glob qualifier to make the expansion come out empty in an empty directory: *(DN).

Note: you may get filename expansion results in different orders (e.g., none followed by .one followed by ..two) based on your settings of the LC_COLLATE, LC_ALL, and LANG variables.
How do you move all files (including hidden) from one directory to another?
1,295,896,722,000
The Midnight Commander is a very helpful tool when we're using only the text mode. But sometimes it bothers me that I have to see all the hidden files inside a folder (files that begin with "."). I've tried to find how to do it changing some configurations by myself and then looking on the man page. But I didn't succeed. Does anyone know how can I do it?
Choose Options from the menu bar, then Panel options. You have it right there, 5th option on the left column: "Show hidden files".
How can I stop seeing hidden files in the Midnight Commander?
1,295,896,722,000
I have for many years had my entire $HOME directory checked into subversion. This has included all my dotfiles and application profiles, many scripts, tools and hacks, my preferred basic home directory structure, not a few oddball projects and a warehouse worth of random data. This was a good thing. While it lasted. But it's gotten out of hand. The basic checkout is the same across dozens of systems, but not all that stuff is appropriate for all my machines. It doesn't even all play nicely with different distros. I'm in the process of cleaning house -- separating the data out where it belongs, splitting out some scripts as separate projects, fixing some broken links in stuff that should be automated, etc. My intent is to replace subversion with git for the toplevel checkout of $HOME, but I'd like to pare this down to just the things I'd like to have on ALL my systems, meaning dotfiles, a few directories and some basic custom scripts. In reading up online a lot of people seem to be doing this using the symlink approach: clone into a subdirectory then create symlinks from $HOME into the repository. Having had my $HOME under full version control for over a decade, I don't like the idea of this approach and I can't figure out why people seem so averse to the straight checkout method. Are there pitfalls I need to know about specific to git as a top level checkout for $HOME? P.S. Partly as an exercise in good coding, I'm also planning on making my root checkout public on GitHub. It's scary how much security sensitive information I've allowed to collect in files that ought to be sharable without a second thought! WiFi password, un-passphrased RSA keys, etc. Eeek!
Yes, there is at least one major pitfall when considering git to manage a home directory that is not a concern with subversion. Git is both greedy and recursive by default. Subversion will naively ignore anything it doesn't know about, and it stops processing folders either up or down from your checkout when it reaches one that it doesn't know about (or that belongs to a different repository). Git, on the other hand, keeps recursing into all child directories, making nested checkouts very complicated due to namespace issues. Since your home directory is likely also the place where you check out and work on various other git repositories, having your home directory in git is almost certainly going to make your life an impossible mess.

As it turns out, this is the main reason people check their dotfiles into an isolated folder and then symlink into it: it keeps git out of the way when doing anything else in any child directory of your $HOME. While this is purely a matter of preference when checking your home into subversion, it becomes a matter of necessity if using git.

However, there is an alternate solution. Git allows for something called a "fake root", where all the repository machinery is hidden in an alternate folder that can be physically separated from the checkout working directory. The result is that the git toolkit won't get confused: it won't even SEE your repository, only the working copy. By setting a couple of environment variables you can tip off git where to find the goods for those moments when you are managing your home directory. Without the environment variables set, nobody is the wiser and your home looks like its classic file-y self.

To make this trick flow a little smoother, there are some great tools out there. The vcs-home mailing list seems like the de facto place to start, and the about page has a convenient wrap-up of howtos and people's experiences. Along the way are some nifty little tools like vcsh and mr. If you want to keep your home directory directly in git, vcsh is almost a must-have tool. If you end up splitting your home directory into several repositories behind the scenes, combine vcsh with mr for a quick and not very dirty way to manage it all at once.
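A minimal hedged sketch of the "fake root" idea using plain git (the repository path and the alias name are arbitrary choices, not a standard):

git init --bare "$HOME/.dotfiles"
alias dots='git --git-dir="$HOME/.dotfiles" --work-tree="$HOME"'
dots config status.showUntrackedFiles no   # otherwise status lists your whole $HOME
dots add ~/.zshrc
dots commit -m "track zshrc"

Because the repository lives in ~/.dotfiles rather than ~/.git, a bare git command or an IDE run anywhere under $HOME never sees it.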
Are there pitfalls to putting $HOME in git instead of symlinking dotfiles?
1,295,896,722,000
How do I match the hidden files inside the given directories? For example, the command below does not include the hidden files in its result:

du -b maybehere*/*

How can I achieve this simply, with a single command, instead of using

du -b maybehere*/.* maybehere*/*

where I need to type maybehere twice?
Take advantage of brace expansion:

du -b maybehere*/{*,.[^.],.??*}

or alternatively

du -b maybehere*/{,.[^.],..?}*

The logic behind this is probably not obvious, so here is an explanation:

* matches all non-hidden files
.[^.] matches files whose names start with a single dot followed by something other than a dot; those are only the 2-character filenames in the first form
.??* matches hidden files whose names are at least 3 characters long
..?* like the above, but the second character must be a dot

The whole point is to exclude the hard links to the current and parent directory (. and ..), but include all normal files in such a way that each of them will be counted only once! For example, the simplest approach would be to just write

du -b maybehere*/{.,}*

It means that the list contains a dot . and "nothing" (nothing is between the , and the closing }), thus all hidden files (which start with a dot) and all non-hidden files (which start with "nothing") would match. The problem is that this would also match . and .., and this is most probably not what you want, so we have to exclude them somehow.

A final word about brace expansion. Brace expansion is a mechanism by which you can include more files/strings/whatever on the command line by writing fewer characters. The syntax is {word1,word2,...}, i.e. it is a list of comma-separated strings which starts with { and ends with }. bash's manual gives a very basic and at the same time very common example of usage:

$ echo a{b,c,d}e
abe ace ade
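If the patterns feel opaque, a hedged bash alternative reaches the same files by toggling the dotglob option instead (shown in a subshell so the option change doesn't leak into your session):

( shopt -s dotglob nullglob; du -b maybehere*/* )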
How to match * with hidden files inside a directory
1,295,896,722,000
I just edited the .zshrc file to configure Z shell on FreeBSD, for example to update the PATH system variable. path+=/usr/local/openjdk12/bin How do I make the changes take effect? Must I log out and log in again? Is there a way to immediately run that file?
Restart zsh

Zsh reads .zshrc when it starts. You don't need to log out and log back in. Just closing the terminal and opening a new one gives you your new .zshrc in this new terminal. But you can make this more direct. Just tell zsh to relaunch itself:

exec zsh

If you run this at a zsh prompt, this replaces the current instance of zsh by a new one, running in the same terminal. The new instance has the same environment variables as the previous one, but has fresh shell (non-exported) variables, and it starts a new history (so it'll mix in commands from other terminals in typical configurations). Any background jobs are disowned.

Reread .zshrc

You can also tell zsh to re-read .zshrc. This has the advantage of preserving the shell history, shell variables, and knowledge of background jobs. But depending on what you put in your .zshrc, this may or may not work. Re-reading .zshrc runs commands which may not work, or not work well, if you run them twice.

. ~/.zshrc

There are just too many things you can do to enumerate everything that's ok and not ok to put in .zshrc if you want to be able to run it twice. Here are just some common issues:

If you append to a variable (e.g. fpath+=(~/.config/zsh) or chpwd_functions+=(my_chpwd)), this appends the same elements again, which may or may not be a problem.

If you define aliases, and also use the same name as a command, the command will now run the alias. For example, this works:

function foo { … }
alias foo='foo --common-option'

But this doesn't, because the second time the file is sourced, foo () will expand the alias:

foo () { … }
alias foo='foo --common-option'

If you patch an existing zsh function, you'll now be patching your own version, which will probably make a mess.

If you do something like "swap the bindings of two keys", that won't do what you want the second time.
How do I apply the changes to the .zshrc file after editing it?
1,295,896,722,000
I was wondering what the difference between these two is: ~/somedirectory/file.txt and ~/.somedirectory/file.txt It's really difficult to search for this on Google since I didn't know how to explain the . when I didn't even know what to call it. But can someone describe the difference between including the dot and excluding it?
Under unix-like systems, all directories contain two entries, . and .., which stand for the directory itself and its parent respectively. These entries are not interesting most of the time, so ls hides them, and shell wildcards like * don't include them. More generally, ls and wildcards hide all files whose name begins with a .; this is a simple way to exclude . and .. and allow users to hide other files from listings. Other than being excluded from listings, there's nothing special about these files. Unix stores per-user configuration files in the user's home directory. If all configuration files appeared in file listings, the home directory would be cluttered with files that users don't care about every day. So configuration files always begin with a .: typically, the configuration file for the application Foo is called something like .foo or .foorc. For this reason, user configuration files are often known as dot files.
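A quick illustration of the convention (the file names are just examples):

$ touch visible .hidden
$ ls
visible
$ ls -a
.  ..  .hidden  visible
$ echo *
visible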
What's so special about directories whose names begin with a dot?
1,295,896,722,000
This answer reveals that one can copy all files - including hidden ones - from directory src into directory dest like so: mkdir dest cp -r src/. dest There is no explanation in the answer or its comments as to why this actually works, and nobody seems to find documentation on this either. I tried out a few things. First, the normal case: $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src dest $ ls -A dest dest_file src Then, with /. at the end: $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src/. dest $ ls -A dest dest_file .dotfile src_dir src_file So, this behaves similarly to *, but also copies hidden files. $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file $ cp -r src/* dest $ ls -A dest dest_file src_dir src_file . and .. are proper hard-links as explained here, just like the directory entry itself. Where does this behaviour come from, and where is it documented?
The behaviour is a logical result of the documented algorithm for cp -R. See POSIX, step 2f: The files in the directory source_file shall be copied to the directory dest_file, taking the four steps (1 to 4) listed here with the files as source_files. . and .. are directories, respectively the current directory, and the parent directory. Neither are special as far as the shell is concerned, so neither are concerned by expansion, and the directory will be copied including hidden files. *, on the other hand, will be expanded to a list of files, and this is where hidden files are filtered out. src/. is the current directory inside src, which is src itself; src/src_dir/.. is src_dir’s parent directory, which is again src. So from outside src, if src is a directory, specifying src/. or src/src_dir/.. as the source file for cp are equivalent, and copy the contents of src, including hidden files. The point of specifying src/. is that it will fail if src is not a directory (or symbolic link to a directory), whereas src wouldn’t. It will also copy the contents of src only, without copying src itself; this matches the documentation too: If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target, a single slash character if target did not end in a slash, and the pathname of the file relative to the directory containing source_file. So cp -R src/. dest copies the contents of src to dest/. (the source file is . in src), whereas cp -R src dest copies the contents of src to dest/src (the source file is src). Another way to think of this is to compare copying src/src_dir and src/., rather than comparing src/. and src. . behaves just like src_dir in the former case.
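The practical upshot is easy to see side by side (throwaway names again):

$ mkdir -p src && touch src/file src/.dotfile
$ mkdir dest1 dest2
$ cp -R src dest1 && ls -A dest1     # copies src itself
src
$ cp -R src/. dest2 && ls -A dest2   # copies the contents of src
.dotfile  file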
cp behaves weirdly when . (dot) or .. (dot dot) are the source directory
1,295,896,722,000
I tried to display only hidden files but don't know how to do it. This works (but it also matches dots in other places): ls -la | grep '\.' I tried adding ^ but didn't find the solution.
ls -ld .* will do what you want. The .* glob matches every name that begins with a dot (including . and .., which you can ignore here), and -d stops ls from listing the contents of any directories that match.
display only files starting with . (hidden)
1,295,896,722,000
Apparently you can rename file to .... If I were insane, how would I rename file to .. or .? Is such a filename even allowed? Backslash doesn't seem to disable dot's special meaning: $ mv test \. mv: `test' and `./test' are the same file
.. is not special, it is just that it already exists. On Unix, DOS and MS-Windows, every directory has an entry . that links back to the directory itself, and an entry .. that links to its parent directory (or to itself in the root directory). If .. and . are special it is only because you cannot remove or rename them (actually you can remove them: you just remove the directory that contains them). Therefore you cannot name any (other) file . or ... However you can create files named ..., \, …, ..  (note there is a space after the .., but you can hardly see it here, or easily in your directory listing) or any other name you like; the only reserved character is / (warning, advanced detail: and null, a special character not used for anything except to mark the end of things and sometimes as a separator). . has no special meaning: not to file names, not to the kernel, and not to the shell; it does not need escaping. Actually, if a file name starts with a . then it is special in one respect: the file is normally hidden by directory listing tools (e.g. ls), but it still does not need escaping. Aside This hidden-file behaviour came about in an early implementation of ls where the author wanted to hide . and .., so they wrote code to hide any file starting with a .. Other users noticed this bug/feature and started creating files starting with a . when they wanted the file to be hidden. Explanation of the linked question In the question you link to, the questioner is trying to move the file to the parent directory .. but ends up renaming it to ...; files starting with a dot are hidden by default, which is why they cannot find it. When using mv in the form mv a b: if you move to . it is effectively a no-op, but mv treats it as an error; if you move to .. it will move the file to the parent directory.
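Both halves of this are easy to verify in a scratch directory; the exact error wording varies by system:

$ mkdir .
mkdir: cannot create directory '.': File exists
$ mkdir ..
mkdir: cannot create directory '..': File exists
$ touch ...        # three dots is an ordinary (merely hidden) file name
$ ls -a
.  ..  ...
$ rm ...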
How to rename file to .. (dot dot)?
1,295,896,722,000
Situation : $ mkdir foo && touch foo/.test $ cp foo/* . zsh: no matches found: foo/* (or bash : cp: cannot stat ‘foo/*’: No such file or directory) I have a directory full of hidden folders and files. What is happening and what is the solution?
Disclaimer: This answer deals with Bash specifically but much of it applies to the question regarding glob patterns! The star character (*) is a wildcard. There is a certain set of characters that it will stand in for, and a leading dot (.) isn't one of them. This is a special case just because of how the Unix filesystems work: files that start with a dot are considered "hidden". That means that tools such as cp, ls, etc. will not "see" them unless explicitly told to do so. Examples First let's create some sample data. $ mkdir .dotdir{1,2} regdir{1,2} $ touch .dotfile{1,2} regfile{1..3} So now we have the following: $ tree -a . |-- .dotdir1 |-- .dotdir2 |-- .dotfile1 |-- .dotfile2 |-- regdir1 |-- regdir2 |-- regfile1 |-- regfile2 `-- regfile3 Now let's play some games. You can use the command echo to list out what a particular wildcard (*) would be for a given command like so: $ echo * regdir1 regdir2 regfile1 regfile2 regfile3 $ echo reg* regdir1 regdir2 regfile1 regfile2 regfile3 $ echo .* . .. .dotdir1 .dotdir2 .dotfile1 .dotfile2 $ echo .* * . .. .dotdir1 .dotdir2 .dotfile1 .dotfile2 regdir1 regdir2 regfile1 regfile2 regfile3 $ echo .dotdir* .dotdir1 .dotdir2 Changing the behavior? You can use the command shopt -s dotglob to change the behavior of the * so that in addition to files like regfile1 it will also match .dotfile1. excerpt from the bash man page dotglob If set, bash includes filenames beginning with a `.' in the results of pathname expansion. Example: $ shopt -s dotglob $ echo * .dotdir1 .dotdir2 .dotfile1 .dotfile2 regdir1 regdir2 regfile1 regfile2 regfile3 You can revert this behavior with this command: $ shopt -u dotglob $ echo * regdir1 regdir2 regfile1 regfile2 regfile3 Your situation? You're telling cp that you want to copy all the files that match the pattern *, and there aren't any such files. To pick up the hidden files, match them explicitly: $ cp foo/.* . Or you can do this if you want the foo folder itself copied along with everything in it: $ cp -R foo . Or you can be explicit: $ cp foo/.* foo/* . A more compact form using brace expansion in bash: $ cp foo/{.,}* . At any time you can use the echo trick to see what your proposed file patterns (that's the fancy term for what the star is a part of) will expand to. $ echo {.,}* . .. .dotdir1 .dotdir2 .dotfile1 .dotfile2 abc regdir1 regdir2 regfile1 regfile2 regfile3 Incidentally if you're going to copy a directory of files + other directories, you typically want to do this recursively, that's the -R switch to cp: $ cp -R foo/. .
cp hidden files with glob patterns
1,295,896,722,000
Someone on our team wanted to recursively change the user permissions on all hidden directories in a user's home directory. To do so he executed the following command: cd /home/username chown -R username:groupname .* We were pretty surprised when we realized that he had actually recursively changed the permissions of all user directories in /home, because .* matches .. as well. Would you have expected this behavior in Linux though?
I always get burned when I try using .* for anything, and long ago switched to using character classes: chown -R username.groupname .[A-Za-z]* is how I would have done this. Edit: someone pointed out that this doesn't get, for example, dot files such as ._Library. The catch-all character class to use would be chown -R username.groupname .[A-Za-z0-9_-]*
`command .*` acts on the parent directory [duplicate]
1,295,896,722,000
Due to work I have recently started using OS X and have set it up using homebrew in order to get a similar experience as with Linux. However, there are quite a few differences in their settings. Some only need to be in place on one system. As my dotfiles live in a git repository, I was wondering what kind of switch I could set in place, so that some configs are only read on a Linux system and others on OS X. As to dotfiles, I am referring, among others, to .bash_profile or .bash_alias.
Keep the dotfiles as portable as possible and avoid OS dependent settings or switches that require a particular version of a tool, e.g. avoid GNU syntax if you don't use GNU software on all systems. You'll probably run into situations where it's desirable to use system specific settings. In that case use a switch statement with the individual settings: case $(uname) in 'Linux') LS_OPTIONS='--color=auto --group-directories-first' ;; 'FreeBSD') LS_OPTIONS='-Gh -D "%F %H:%M"' ;; 'Darwin') LS_OPTIONS='-h' ;; esac In case the configuration files of arbitrary applications require different options, you can check if the application provides compatibility switches or other mechanisms. For vim, for instance, you can check the version and patchlevel to support features older versions, or versions compiled with a different feature set, don't have. Example snippet from .vimrc: if v:version >= 703 if has("patch769") set matchpairs+=“:” endif endif
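Another common arrangement, sketched here with hypothetical file names, is to keep one portable rc file and source an OS-specific overlay from the end of it:

# at the end of a shared ~/.bashrc; ~/.bashrc.Linux and ~/.bashrc.Darwin hold per-OS extras
os_rc="$HOME/.bashrc.$(uname)"
[ -r "$os_rc" ] && . "$os_rc"

This keeps the divergent settings in separate files while the shared file stays identical on every machine.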
How to keep dotfiles system-agnostic?
1,295,896,722,000
Initially I thought it was a coincidence, but now I see there's even a tag for it: all hidden file names start with a dot. Is this a convention? Why was it chosen? Can it be changed? Or in other words (a related question, suggested by @evilsoup, that implies the answer to a bunch of others): can I hide files without renaming them (using . as the first character of their name)?
According to Wikipedia, The notion that filenames preceded by a . should be hidden is the result of a software bug in the early days of Unix. When the special . and .. directory entries were added to the filesystem, it was decided that the ls command should not display them. However, the program was mistakenly written to exclude any file whose name started with a . character, rather than the exact names . or ... ...so it started off as a bug, and then it was embraced as a feature (for the record, . is a link to the current directory and .. is a link to the directory above it, but I'm sure you know that already). Since this method of hiding files actually is good enough most of the time, I suppose nobody ever bothered to implement Windows-style file hiding. There's also the fact that implementing different behaviour would produce an even greater amount of fragmentation in the *nix world, which is the last thing anyone wants. There is another method for hiding files that doesn't involve renaming them, but it only works for GUI file managers (and it's not universal amongst those -- the major Linux ones use it, but I don't think OSX's Finder does, and the more niche Linux file managers are less likely to support this behaviour): you can create a file called .hidden, and put the filenames you want to hide inside it, one per line. ls and shell globs won't respect this, but it might be useful to you, still.
Why are filenames that start with a dot hidden? Can I hide files without using a dot as their first character?
1,295,896,722,000
I'm experiencing a strange behavior on some of our machines atm. At least, it seems strange to me and my colleagues and we didn't find any explanation for it :) [edit 1] Next paragraph seems to be wrong. See edit 2 at end. We're using bash and zsh here. So, when SSHing into some of the zsh-default-machines (plain ssh login@host) which are configured to use zsh as default shell (with chsh -s /usr/bin/zsh), the then-opened shell is an interactive but non-login shell, regardless of whether we're already logged in on the respective machine or not. In my understanding, SSHing into a machine should be starting a new user session on that machine, thus requiring the shell to be a login shell, right? Shouldn't that be the case for zsh, too? When changing the default shell to bash on the machines, logging into the machine uses a login shell. Is this the normal behavior for zsh? Could it be changed? Or is it some misconfiguration? [/edit 1] [edit 2] Ok, according to the ZSH documentation you could easily test if it is a login shell or not: $ if [[ -o login ]]; then; print yes; else; print no; fi See: http://zsh.sourceforge.net/Guide/zshguide02.html However, according to the zsh man entry / documentation, zsh should source /etc/profile which in turn sources the scripts under /etc/profile.d/*.sh. My question above originated in the fact that the scripts are not sourced and thus most of our environment variables and system configuration stuff isn't properly initialized. However, as described above - when we're using bash as default shell, /etc/profile and the scripts in the profile.d-folder are sourced. [/edit 2] [edit 3 - ANSWER] Thx @StéphaneChazelas for the answer in the comments below! It seems zsh is only sourcing /etc/profile when running in sh/ksh compatibility mode (see the respective man entry https://linux.die.net/man/1/zsh). As logging in via SSH doesn't trigger that compatibility mode, zsh doesn't necessarily source /etc/profile on its own but has to be triggered via .zprofile [/edit 3] System: OS: Ubuntu 18.04 zsh-5.4.2 with omz and some plugins activated. Thank you!
ZSH just works this way. /etc/profile is NOT an init file for ZSH. ZSH uses /etc/zprofile and ~/.zprofile. Init files for ZSH: /etc/zshenv ~/.zshenv login mode: /etc/zprofile ~/.zprofile interactive: /etc/zshrc ~/.zshrc login mode: /etc/zlogin ~/.zlogin Tips: The default shell opened in your terminal on Linux is a non-login, interactive shell. But on macOS, it's a login shell. References Unix shell initialization Shell Startup Scripts
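If you do want zsh logins to pick up /etc/profile and the /etc/profile.d scripts anyway (the situation in edit 3 of the question), a widely used workaround is to source the file from a zprofile in sh emulation, so that its sh-style syntax is parsed correctly. A sketch; the exact path depends on your distribution:

# /etc/zsh/zprofile (or ~/.zprofile)
emulate sh -c '. /etc/profile'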
SSHing into system with ZSH as default shell doesn't run /etc/profile
1,295,896,722,000
I need to iterate through every file inside a directory. One common way I saw was using the for loop that begins with for file in *; do. However, I realized that it does not include hidden files (files that begin with a "."). The other obvious way is then to do something like for file in `ls -a`; do However, iterating over ls output is a bad idea because spaces in file names mess everything up. What would be the proper way to iterate through a directory and also get all the hidden files?
You just need to create a list of glob-matching files, separated by spaces: for file in .* *; do echo "$file"; done Edit The above can be rewritten in a different form using brace expansion for file in {.*,*}; do echo "$file"; done or even shorter: for file in {.,}*; do echo "$file"; done Adding a path for the selected files: for file in /path/{.,}*; do echo "$file"; done If you want to be sophisticated and remove from the list the usually unneeded . and .., just change {.,}* to {..?,.[!.],}*: for file in /path/{..?,.[!.],}*; do echo "$file"; done For completeness it is worth mentioning that one can also set dotglob to match dot-files with a pure *. shopt -s dotglob In zsh one additionally needs to set nullglob to prevent the error in case of no matches: setopt nullglob or, alternatively, add the glob qualifier N to the pattern: for file in /path/{.,}*(N); do echo "$file"; done
proper way to iterate through contents in a directory [duplicate]
1,295,896,722,000
Recently I had a little mishap caused by a shell pattern that expanded in an unexpected way. I wanted to change the owner of a bunch of dot files in the /root directory, so I did chown -R root .* Naturally, the .* expanded to .. which was a bit of a disaster. I know in bash this behaviour can be changed by tweaking some shell option, but with the default settings is there any pattern that would expand to every dot file in the directory but not to . and ..?
Bash, ksh and zsh have better solutions, but in this answer I assume a POSIX shell. The pattern .[!.]* matches all files that begin with a dot followed by a non-dot character. (Note that [^.] is supported by some shells but not all, the portable syntax for character set complement in wildcard patterns is [!.].) It therefore excludes . and .., but also files that begin with two dots. The pattern ..?* handles files that begin with two dots and aren't just ... chown -R root .[!.]* ..?* This is the classical pattern set to match all files: * .[!.]* ..?* A limitation of this approach is that if one of the patterns matches nothing, it's passed to the command. In a script, when you want to match all files in a directory except . and .., there are several solutions, all of them cumbersome: Use * .* to enumerate all entries, and exclude . and .. in a loop. One or both of the patterns may match nothing, so the loop needs to check for the existence of each file. You have the opportunity to filter on other criteria; for example, remove the -h test if you want to skip dangling symbolic links. for x in * .*; do case $x in .|..) continue;; esac [ -e "$x" ] || [ -h "$x" ] || continue somecommand "$x" done A more complex variant where the command is run only once. Note that the positional parameters are clobbered (POSIX shells don't have arrays); put this in a separate function if this is an issue. set -- for x in * .[!.]* ..?*; do case $x in .|..) continue;; esac [ -e "$x" ] || [ -h "$x" ] || continue set -- "$@" "$x" done somecommand "$@" Use the * .[!.]* ..?* triptych. Again, one or more of the patterns may match nothing, so we need to check for existing files (including dangling symbolic links). for x in * .[!.]* ..?*; do [ -e "$x" ] || [ -h "$x" ] || continue somecommand "$x" done Use the * .[!.]* ..?* triptych, accumulating the matches of each pattern and dropping any pattern that matched nothing. This runs the command only once. Note that the positional parameters are clobbered (POSIX shells don't have arrays), put this in a separate function if this is an issue. set -- * [ -e "$1" ] || [ -h "$1" ] || shift set -- .[!.]* "$@" [ -e "$1" ] || [ -h "$1" ] || shift set -- ..?* "$@" [ -e "$1" ] || [ -h "$1" ] || shift somecommand "$@" Use find. With GNU or BSD find, avoiding recursion is easy with the options -mindepth and -maxdepth. With POSIX find, it's a little trickier, but can be done. This form has the advantage of easily allowing you to run the command a single time instead of once per file (but this is not guaranteed: if the resulting command is too long, the command will be run in several batches). find . -name . -o -exec somecommand {} + -o -type d -prune
Shell filename pattern that expands to dot files but not to `..`?
1,295,896,722,000
I can understand the rationale of hiding files and folders in the /home/user directory to prevent users from messing around with things. However, I do not see how the same rationale can be applied to files in the /etc, /boot and /var directories which is the domain of administrators. My question is why are some files and folders hidden from administrators? Example: /boot/.vmlinuz-3.11.1-200.fc20.x86_64.hmac /etc/.pwd.lock /etc/selinux/targeted/.policy.sha512 /etc/.java /etc/.java/.systemPrefs /etc/skel/.bash_profile /root/.ssh /root/.config /var/cache/yum/x86_64/20/.gpgkeyschecked.yum /var/spool/at/.SEQ /var/lib/pear/.filemap
You've misinterpreted the primary rationale for "hidden files". It is not to prevent users from messing around with things. Although it may have this consequence for very new users until they learn what a "dot file" is (dot file and dot directory are perhaps more appropriate and specific terms than "hidden"). All by itself it doesn't prevent you from messing around with things -- that's what permissions are for. It does perhaps help to indicate to new users that this is something they should not mess around with until they understand what it is for. You could thus think of the dot prefix as a sort of file suffix -- notice they usually don't have one of those, although they can. It indicates this file is not of interest for general browsing, which is why ls and file browsers usually will not display it. However, since it's a prefix instead of a suffix, there is the added bonus, when you do display them (ls -a) in lexicographical order, to see them all listed together. The normal purpose of a file like this is for use by an application (e.g. configuration). You don't have to use them directly or even be aware of them. So, this "hiding" isn't so much intended to literally hide the file from the user as it is to reduce clutter and provide some organization conceptually.
Why are some files and folders hidden?
1,295,896,722,000
I'm trying to list all the hidden files in a directory, but no directories, and I am trying to do this using only ls and grep. ls -a | egrep "^\." This is what I have so far, but the problem is that it also lists hidden directories, which I don't want. Then, completely separately, I want to list the hidden directories.
To list only hidden files: ls -ap | grep -v / | grep "^\." Note that "files" here means everything that is not a directory; it's not "file" as in "everything in Linux is a file" ;) To list only hidden directories: ls -ap | grep "^\..*/$" Comments: ls -ap lists everything in the current directory, including hidden entries, and puts a / at the end of directories. grep -v / inverts the results of grep /, so that no directory is included. "^\..*/$" matches everything that starts with . and ends in /. If you want to exclude the . and .. directories from the results of the second part, you can use the -A option instead of -a for ls, or if you like to work with regexes, you can use "^\.[^.]+/$" instead of "^\..*/$". Have fun!
How to show only hidden directories, and then find hidden files separately
1,295,896,722,000
I want to back up all files from my laptop partitions to an external HDD. I ran, for example cp -a /med*/ravb*/*00 /med*/ravb*/M*L*/7.3GB_CP && echo "7.3GB BACKED UP PROPERLY" || echo "7.3GB FAILED TO BACK UP" The issue is that dot files are also getting included, which I don't want. What should I do to ignore all dot files when backing up?
Why not use rsync instead? It's made for the job! rsync -uan --progress --exclude=".*" <source> <destination> The above will list all the files to be archived without actually copying anything. Check that the list is correct, then run it again with the n option removed in order to copy the files (you could also remove the --progress for a quieter experience). To expand, the options above are:- u - 'update' - only copy newer files. a - 'archive' n - 'dry-run` - don't copy, just list what it would do. --progress - show progress of copy --exclude=".*" - exclude files that begin with a dot
how to copy or backup files ignoring dot files
1,295,896,722,000
For example, in ZFS under FreeBSD and ZoL, there is a magic .zfs dir inside of each zpool mountpoint and you can use zfs set snapdir=visible to make that .zfs dir visible. What makes me curious is: if that setting is set to "hidden", how is the .zfs dir actually hidden from the output of an ls -a or shell path-auto-completion, while still being accessible otherwise (you can still cd to it or call stat on it)? I can't really wrap my mind around this fact, because I somehow think if something is there and accessible it's supposed to be listed in ls -a -- even if it's just magic/virtual in nature. Can anybody explain how this works? Is there a POSIX conforming way to have a directory that is hidden from ls -a while still being accessible? How do you do it?
Well, how to do it is easy enough: ls gets its list from a syscall (or, on Linux, libc function) called readdir. Changing into a directory is done with a separate syscall, chdir. stat is also a different syscall, as are most of the other file operations. In short, "what's in this directory?" and "access this directory" are completely separate requests of the kernel—so they can be programmed to work differently. So, to make a directory that doesn't appear in ls -a, you just have the kernel omit it from the results of readdir. But it still works with chdir (etc.), because that's a different syscall. This isn't so different from having a directory where you have +x permission, but not -r: you can access files and directories inside it, cd into it, etc., but ls will fail. I believe other things have used this arrangement too—for example (and this is from fuzzy memory without looking it up) AFS had a sort-of global /afs namespace where you could connect to any AFS server essentially by cd'ing into its name; an ls on /afs wouldn't show all the world's servers, though. I've seen FUSE filesystems do similar (e.g., cd to connect to an anonymous FTP server). (I'm not sure whether the zfs arrangement strictly complies with POSIX).
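The permissions analogy in the last paragraph is easy to try for yourself as a regular (non-root) user; the names are illustrative:

$ mkdir secret && echo hello > secret/file
$ chmod a-r,a+x secret       # forbid listing, allow traversal
$ ls secret
ls: cannot open directory 'secret': Permission denied
$ cat secret/file            # access by explicit name still works
hello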
How are files/dirs hidden from ls -a while still being accessible otherwise in a POSIX compliant system?
1,295,896,722,000
I use Ubuntu 14.04 and in a terminal I became root with sudo su and I wanted to delete root's trash manually. It deleted everything except for a few files that start with a dot. Like .htaccess etc. So I went to that directory (which is "files") and I ran this command: rm -rf .* It did delete those files, BUT I also got an error message that the system couldn't delete "." and ".." What does it mean? Like if I tried to delete the whole directory tree? Like I said, when I was running that command I was in the lowest directory. This one to be exact: /root/.local/share/Trash/files/ I shut down my PC and then turned it on. Everything seems to be normal at first glance. So now I want to ask what went wrong and whether what I did could really cause any serious damage to the system in general? In other words, should I be worried now, or is everything OK?
.* matches all files whose name starts with .. Every directory contains a file called . which refers to the directory itself, and a file called .. which refers to the parent directory. .* includes those files. Fortunately for you, attempting to remove . or .. fails, so you get a harmless error. In zsh, .* does not match . or ... In bash, you can set GLOBIGNORE='.:..:*/.:*/..' and then * will match all files, including dot files, but excluding . and ... Alternatively, you can use a wildcard pattern that explicitly excludes . and ..: rm -rf .[!.]* ..?* or rm -rf .[!.] .??* Alternatively, use find. find . -mindepth 1 -delete
How to delete all files in a current directory starting with a dot?
1,295,896,722,000
I want to delete all .swp files recursively. However: rm -r *.swp Gives: rm: cannot remove ‘*.swp’: No such file or directory Just to be sure, ls -all gives: total 628 drwxr--r--. 8 przecze przecze 4096 Aug 3 18:16 . drwxr--r--. 31 przecze przecze 4096 Aug 3 18:14 .. -rwxrwxr-x. 1 przecze przecze 108 Jul 28 21:41 build.sh -rwxrwxr-x. 1 przecze przecze 298617 Aug 3 00:52 exec drwxr--r--. 8 przecze przecze 4096 Aug 3 18:08 .git drwxrwxr-x. 2 przecze przecze 4096 Aug 3 18:14 inc -rw-rw-r--. 1 przecze przecze 619 Aug 3 00:52 main.cc -rw-r--r--. 1 przecze przecze 12288 Aug 3 17:29 .main.cc.swp -rw-rw-r--. 1 przecze przecze 850 Aug 1 00:30 makefile -rw-------. 1 przecze przecze 221028 Aug 3 01:47 nohup.out drwxrwxr-x. 2 przecze przecze 4096 Aug 3 00:52 obj drwxrwxr-x. 2 przecze przecze 4096 Aug 3 00:52 out drwxrwxr-x. 12 przecze przecze 4096 Aug 3 18:14 runs -rwxr--r--. 1 przecze przecze 23150 Aug 2 18:56 Session.vim drwxrwxr-x. 2 przecze przecze 4096 Aug 3 18:14 src -rw-rw-r--. 1 przecze przecze 13868 Jul 31 19:28 tags -rw-rw-r--. 1 przecze przecze 2134 Aug 3 00:31 view.py -rw-r--r--. 1 przecze przecze 12288 Aug 3 17:29 .view.py.swp So there are *.swp files to delete! And rm .build.sh.swp successfully deleted one of them. What am I doing wrong?
Try to match the dot: $ rm -r .*.swp The names all begin with a dot, and a glob like *.swp never matches a leading dot, which is why your original pattern found nothing. I hope this solves your problem.
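Note that this only removes swap files in the current directory; the -r does not make the glob recursive. If the goal is to clean them out of subdirectories too, a find-based variant does that. Review the matches with -print before switching to -delete:

$ find . -type f -name '*.swp' -print    # list what would be removed
$ find . -type f -name '*.swp' -delete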
rm wildcard not working
1,295,896,722,000
If I look at my home directory there are a large number of dot files. If I am creating a new program that needs a user configuration file, is there any guidance where to put it? I could imagine creating a new dot directory ~/.myProgramName or maybe I should add it to /.config or ~/.local.
The .config directory is a newish development courtesy of XDG that seems, deservedly, to have won favour. Personally, I don't mind a dot directory of your own. A bunch of separate dot files (ala bash and various old school tools) in the toplevel of $HOME is a bit silly. Choosing a single dot file is a bad idea, because if in the future you realize maybe there are a couple more files that would be good to have, you have a possible backward compatibility issue, etc. So don't bother starting out that way. Use a directory, even if you are only going to have one file in it. A better place for that directory is still in ~/.config, unless you are very lazy, because of course you must first check to make sure it actually exists and create it if necessary (which is fine). Note you don't need a dot prefix if your directory is in the .config directory. So to summarize: use a directory, not a standalone file put that directory in $HOME/.config
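In shell terms, honoring that convention usually comes down to a couple of lines at startup. A minimal sketch, with myProgramName standing in for your program's name:

config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myProgramName"
mkdir -p "$config_dir"              # create it on first run if necessary
config_file="$config_dir/config"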
Where should user configuration files go? [duplicate]
1,295,896,722,000
Is the behavior of .* to include . and .. defined in LSB or POSIX or some other specification?
Quoting from the Single Unix specification version 2, volume ”Commands & Utilities", §2.13.3: If a filename begins with a period (.) the period must be explicitly matched by using a period as the first character of the pattern or immediately following a slash character. (…) It is unspecified whether an explicit period in a bracket expression matching list, such as [.abc] can match a leading period in a filename. There is no exception that would make the second period in .., or the empty string following the only period in ., not matched by the wildcard in .*. Therefore the standard says that .* matches . and .., annoying though it may be. The passage above describes the behavior of the shell (sh command). The section on the glob C library function refererences this passage. The language is exactly the same in version 3, also known as POSIX:2001 and IEEE 1003.1-2001, which is what most current systems implement. Dash, bash and ksh93 comply with POSIX. Pdksh and zsh (even under emulate sh) don't. In ksh, you can make .* skip . and .. by setting FIGNORE='.?(.)', but this has the side effect of making * include dot files. Or you can set FIGNORE='.*', but then .* doesn't match anything. In bash, you can make .* skip . and .. by setting GLOBIGNORE='.:..', but this has the side effect of making * include dot files. Or you can set GLOBIGNORE='.*', but then .* doesn't match anything.
Is the behaviour of .* to include . and .. defined in LSB or POSIX or some other specification?
1,295,896,722,000
I want to delete many configuration folders in my home user folder but I can't figure out how to delete them. How can I delete a hidden folder?
You can remove hidden directories (with . at the beginning of the name) like normal directories: rm -rf .directory_name (r for recursive, f for force). To display hidden directories use -a option for ls: ls -a You can also use mc or some other file manager to remove them. Most of them will have option to display hidden directories in View menu or in settings. In mc hidden directories are displayed by default.
How can I delete a hidden folder?
1,295,896,722,000
I am aware of using .[!.]* to refer to all dotfiles in a directory with the exception of .., but how might one refer to all dotfiles except for .. and .git? I have tried several variations on .[!.||.git]* and .[!.][!.git]* and the like, but none refer to the intended files.
You can use the extended globbing in bash: shopt -s extglob ls .!(.|git) This also matches ., though, so you probably need ls .!(|.|git)
Copy all dotfiles except for `.git` and `..`
1,295,896,722,000
Whenever you type ls -a into the command prompt, you usually get all of your folders, files, and then you see that the first two entries are . and .. Just curious, but what is the significance of these two entries?
. is the relative reference for the current directory. .. is the relative reference for the parent directory. This is why cd .. makes the parent directory the new working directory.
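For example (the paths are illustrative):

$ pwd
/home/user/docs
$ cd ..
$ pwd
/home/user
$ cd .
$ pwd
/home/user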
When you type "ls -a", what is the significance of "." and ".."?
1,295,896,722,000
From time to time I need to find a culprit in an unknown dotfile and instead of trying to figure out which package is to be blamed (e.g. xfce4 or thunar?) and what is their naming convention (.app vs .application vs .some_old_name vs .config/app...), I just want to go for it the quick & dirty way: me@here:~$ grep -IR .* -e culprit But this quick & dirty way is also the silly way. After a few minutes I figure out that .* means .. as well, and there we are. Sometimes I resort to a probably even less quick & more dirty variant: me@here:~$ grep -IR /home/me -e culprit which turns out to be of a superior silliness, especially if I have some huge or distant mounts (e.g. network filesystems) in my $HOME. Too bad that I can't figure out The Quick And Clean And Right Way. (And my disk heads are slowly wearing out.) Is it possible to achieve this within wildcard expansion at all? I.e. a variant of .* that does not match .. (and ../.....)?
Thanks to this wiki, I found there is this GLOBIGNORE variable: The Bash variable (not shopt) GLOBIGNORE allows you to specify patterns a glob should not match. This lets you work around the infamous "I want to match all of my dot files, but not . or .." problem: $ echo .* . .. .bash_history .bash_logout .bashrc .inputrc .vimrc $ GLOBIGNORE=.:.. $ echo .* .bash_history .bash_logout .bashrc .inputrc .vimrc The nice thing is that this has almost no side effects (I mean, how often do you really want to match .. and .?), so it would be even acceptable to export GLOBIGNORE=.:.. from .bashrc, and for manual tasks just use the old .* glob, as in the first example in the Q. me@here:~$ set | grep GLOBIGNORE GLOBIGNORE=.:.. me@here:~$ grep -IR .* -e culprit .some-app/config: set culprit=1 me@here:~$
grepping dotfiles with -R correctly?
1,295,896,722,000
I wanted to move all files, including starting with dot (hidden) and folders (recursively). So I used the following commands shopt -s dotglob nullglob mv ~/public/* ~/public_html/ and it worked. But do I need to reset anything after doing shopt -s dotglob nullglob? Doesn't it change how commands like mv operate? Because I would like it changed back.
Yes, you would have to unset those options (with shopt -u nullglob dotglob) afterwards if you wanted the default globbing behaviour back in the current shell. You could just do mv ~/public/* ~/public/.* ~/public_html/ That would still generate an error without nullglob set if one of the patterns didn't match anything, obviously, but would work without having to set either option. It would probably also say something about failing to rename . since it's a directory, but that too isn't stopping it from moving the files. A better option may be to use rsync locally: rsync -av ~/public/ ~/public_html/ and then delete ~/public.
bash moving hidden files, reset dotglob?
1,295,896,722,000
Normally dot files are not included for wildcard expansion: % echo * Applications Desktop Documents Downloads Library Movies Music Pictures Public bin If I explicitly ask for dot files, I get them: % echo * .* Applications Desktop Documents Downloads Library Movies Music Pictures Public bin . .. .CFUserTextEncoding .DS_Store .Trash .adobe .bash_history .cups .gitconfig .gnupg .history .lesshst .netbeans .scanview.cfg .sqlite_history .ssh .swt .systemmodeler .tcshrc .viminfo However I also get . and ... I don't want those, for example if I'm passing to du -s where I want the size of every item in the directory. Is there some pattern that gives me just what's in the current directory, and everything in the current directory without . and ..? I use tcsh. (With regard to the suggested duplicate: no, this question doesn't have an answer there, since that answer only works for bash.)
With tcsh 6.17.01 and above: set globdot du -s -- * With older ones: du -s -- * .[^.]* ..?* (interestingly, that works better than its POSIX counterpart (* .[!.]* ..?*) because in tcsh (and in zsh in csh emulation (cshnullglob option)), contrary to POSIX shells, those patterns that don't match any file get expanded to nothing instead of themselves) With standard find: find . ! -name . -prune -exec du -s {} + Note that GNU du has a -d option to limit the depth at which directory disk usage is reported: du -ad1
How do I specify arguments to return all dot files, but not . and ..?
1,295,896,722,000
When I try to match all dot files in a directory with .* it seems to have a nasty side-effect: besides matching all (real) files and directories, it matches . and ... bash-3.2$ mv test/.* dest/ mv: rename test/. to dest/.: Invalid argument mv: test/.. and dest/.. are identical This seems really weird, since they are basically 'fake' directories, just there to make relative paths easy. They are not part of the contents of a directory, and I don't ever want them matched when I try to move the contents of one directory to another directory. I can't think of any scenario where I would want them matched by .*. So how can I turn this off? (Besides using Z shell, which is not always available, and which may not be the shell in use by someone running a function I've written.)
You can use the GLOBIGNORE bash variable. GLOBIGNORE A colon-separated list of patterns defining the set of filenames to be ignored by pathname expansion. If a filename matched by a pathname expansion pattern also matches one of the patterns in GLOBIGNORE, it is removed from the list of matches. and: The file names . and .. are always ignored when GLOBIGNORE is set and not null. So if you set GLOBIGNORE='*/.:*/..' then path/.* will not match . and .., as you ask.
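A quick self-contained check of the effect (directory and file names mirror the question):

$ mkdir -p test && touch test/.test
$ GLOBIGNORE='*/.:*/..'
$ echo test/.*
test/.test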
How can I make bash not match `.` and `..` with `.*`
1,295,896,722,000
I am currently using the following ls alias: alias ls='ls -alhGkpsq --color=auto'. This results in the following directory listing. [03:35] bryson@brys ~ :$ ls total 48K 4.0K drwx------ 4 bryson 4.0K Nov 2 03:34 ./ 8.0K drwxr-xr-x 3 root 4.0K Apr 19 2012 ../ 4.0K -rw------- 1 bryson 676 Nov 2 03:35 .bash_history 8.0K -rw-r--r-- 1 bryson 21 Nov 23 2011 .bash_logout 8.0K -rw-r--r-- 1 bryson 57 Nov 23 2011 .bash_profile 4.0K -rw------- 1 bryson 50 Nov 2 03:34 .lesshst 4.0K drwxr-xr-x 3 bryson 4.0K Nov 2 03:21 source/ 4.0K drwx------ 2 bryson 4.0K Nov 2 03:23 .ssh/ 4.0K -rw------- 1 bryson 1.6K Nov 2 03:23 .viminfo The issue I have with this, which is not an issue with OS X's version of ls, is that .ssh/ is alphabetized ignoring the . in the filename. What I would like is for ls to alphabetize the dot files all together at the top, which is where it puts . and .. as well. (Arch Linux, Bash)
This is probably caused by your locale: most language locales collate with rules that skip punctuation such as the leading dot, so dot files get sorted in among the other names. The C locale compares raw byte values, which groups the dot files together at the top. So if you do: LC_COLLATE=C ls -F --color=auto -l the dot files are sorted correctly
Alphabetizing names in `ls` alias with .files not intermingled
1,295,896,722,000
rm -rf .* will only not end horribly because rm refuses to delete . and ... How do I exclude these special directories from a glob pattern? This is not solved by dotglob since I want to match only files beginning with a dot not all files.
With bash, setting the GLOBIGNORE special variable to some non-empty value is enough to make it ignore . and .. when expanding globs. From the Bash docs: The GLOBIGNORE shell variable may be used to restrict the set of filenames matching a pattern. If GLOBIGNORE is set, each matching filename that also matches one of the patterns in GLOBIGNORE is removed from the list of matches. If the nocaseglob option is set, the matching against the patterns in GLOBIGNORE is performed without regard to case. The filenames . and .. are always ignored when GLOBIGNORE is set and not null. However, setting GLOBIGNORE to a non-null value has the effect of enabling the dotglob shell option, so all other filenames beginning with a ‘.’ will match. If we set it to .:.., both . and .. will be ignored. Since setting it to anything non-null will also get this behaviour, we might as well set it to just . So: GLOBIGNORE=. rm -rf .* (From my earlier answer on Ask Ubuntu.)
How do I match only dotfiles in bash? [duplicate]
1,295,896,722,000
Normally when I cat a file like this it's hard to read without colorizing. I've managed to get cat to use source-highlight like this: cdc() { for fn in "$@"; do source-highlight --out-format=esc -o STDOUT -i $fn 2>/dev/null || /bin/cat $fn done; }; alias cat='cdc' which produces nicely colorized output for a recognized file extension - .sh in this case. However without the .sh, e.g. if the file is just called .bash_functions the colorizing doesn't happen - because the file extension is not known. Is there any way I can get source-highlight to color dot files (files that begin with a dot) as sh colors? btw this builds on top of How can i colorize cat output including unknown filetypes in b&w? man source-highlight shows the following but I'm not clear what to do: ... --outlang-def=filename output language definition file --outlang-map=filename output language map file (default=`outlang.map') --data-dir=path directory where language definition files and language maps are searched for. If not specified these files are searched for in the current directory and in the data dir installation directory --output-dir=path output directory --lang-def=filename language definition file --lang-map=filename language map file (default=`lang.map') --show-lang-elements=filename prints the language elements that are defined in the language definition file --infer-lang force to infer source script language (overriding given language specification)
Define your cdc function as cdc() { for fn do if [[ "${fn##*/}" == .* ]] then source-highlight --src-lang=sh --out-format=esc -i "$fn" else source-highlight --out-format=esc -i "$fn" fi 2> /dev/null || /bin/cat "$fn" done } for fn do is short for for fn in "$@"; do. ${fn##*/} looks at the value of $fn and removes everything from the beginning up through (and including) the last /.  I.e., if $fn is a full pathname, this will be just the filename part. [[ (the_above) == .* ]] checks whether the filename matches the .* glob/wildcard pattern; i.e., whether the filename begins with a ..  Note that this usage of == works only inside [[ … ]]; it does not work inside [ … ]. So, if $fn is a “dot file”, run source-highlight with the --src-lang=sh option. You should always put shell variable references in double quotes unless you have a good reason not to, and you’re sure you know what you’re doing.  Unix/Linux filenames can contain spaces.  If you had a file named foo bar, and you said /bin/cat "foo bar", cat would display the contents of the file foo bar.  But, if you said cdc "foo bar" (with the current version of your cdc function), you would run source-highlight with -i foo bar, which would look for a file called foo and generally make a mess of things.  And so it would fail, and your function would try /bin/cat foo bar, which would likewise fail.  Using "$fn" makes this work for filenames that contain spaces. The cp program requires you to specify, on the argument list, the name of the file or directory you want it to write to.  This is one of the few exceptions to the rule that most programs write to standard output by default (unless you specify otherwise).  You don’t need to say -o STDOUT, and I wonder why the author(s) of the program even made it possible for you to specify that. And, yes, I realize that you just copied all of that from the answer to your other question. Obviously, if $fn is not a dot file, just run source-highlight the normal way, and let it check for an extension. Note that the 2> /dev/null and the || /bin/cat "$fn" can be done for the if … then … else … fi block in its entirety; they don’t have to be repeated for each branch. Hmm.  My version of source-highlight (3.1.7) has a --src-lang=LANGUAGE option (-s LANGUAGE, as used by yaegashi, for short).  I just noticed that it isn’t in the source-highlight man page excerpt you included in your question.  So, obviously, if your version of source-highlight doesn’t support that option, my answer won’t work for you.  (And, of course, neither will yaegashi’s.)  If that’s the case, you should see if you can install version 3.1.7 (or compatible) of source-highlight.
How can I make source-highlight colorize .dotfiles by default?
1,295,896,722,000
I'm trying to manage my dotfiles under version control. My dotfiles repo contains an xfce-base folder; this folder contains the .config/xfce4/.../xy-setting.xml stuff. I can stow, or better, symlink to the correct place, and everything works as expected. But, when I open one of the xfce settings editors (Window Manager, Keyboard Shortcuts), the changes made there overwrite my symlink with a normal file. So, adieu version control. I guess this would not happen if I had hard links, right? Is hard linking possible with gnu stow (doesn't seem so?), or are there any alternatives? EDIT: I came across this, which does hard links, but doesn't work recursively (complains about the existing .config directory...) EDIT II: I'm still not sure if a hard link is a good solution.
You are correct that GNU Stow doesn't support hard-linking currently. However I think you're also correct in that hard-linking probably isn't any better a solution than symlinking, because if an external application will happily replace a symlink with a normal file then it can certainly also break a hard link (i.e. replace the inode). However, I do have some good news for you :-) I also use GNU Stow to manage my dotfiles, which is why in 2.1.3 I specifically added the --adopt option to help deal with precisely this scenario. After an external program has broken your symlink, you can simply restow with this option, and then the potentially changed version of the file will be adopted into your Stow package directory and the symlink restored, with the net effect of no change to the contents of the file. Since you track your dotfiles via version control, you can then see what has changed (e.g. via git diff) and then if you want, check in some or all of the changes. N.B. For package directories which are not tracked in a version control system, this is a risky option because it can modify the contents of the Stow package with no way of undoing the changes.
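The recovery workflow then looks something like this, assuming the xfce-base package from the question and a dotfiles directory stowed into $HOME:

$ cd ~/dotfiles
$ stow --adopt -t ~ xfce-base    # pull the replaced file into the package, restore the symlink
$ git diff                       # see what the settings editor changed
$ git checkout -- .              # or: discard the adopted changes entirely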
dotfiles: can/should(?) gnu stow make hard links, so I can still use xfce settings gui programs
1,295,896,722,000
I need an elegant solution to store my dotfiles on GitHub for easy access. I tried to create a dotfiles directory and symlink all the dotfiles into there. Then I tried adding the symlinks to git and committing in that directory, but git saves the links, not the contents of the files they point to. Is there a way to do this or something similar?
I have no idea what the best approach is and elegance is certainly in the eye of the beholder, but I use the following for my dotfiles: A ~/.dotfiles directory that contains all of the dotfiles themselves. These are all managed in a git repo. A script, also in ~/.dotfiles that creates the required links into my home directory. I don't have any dotfiles in my home directory, only links into ~/.dotfiles. For example: $ ls -l ~/.muttrc lrwxr-xr-x 1 mj mj 25 May 4 2014 /home/mj/.muttrc -> /home/mj/.dotfiles/muttrc After I've cloned the repo onto a new machine (into ~/.dotfiles), I just run the script to update the symlinks. I've found the above approach works very well for me.
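For reference, a minimal version of such a link script might look like the sketch below. The repo path matches the answer; the skip list is illustrative, and a real script might also handle backups and subdirectories:

#!/bin/sh
# Symlink every file in ~/.dotfiles into $HOME as a dot file.
dotdir="$HOME/.dotfiles"
for f in "$dotdir"/*; do
    name=$(basename "$f")
    case $name in
        *.sh|README*) continue ;;   # skip the script itself and any docs
    esac
    ln -sf "$f" "$HOME/.$name"      # e.g. ~/.muttrc -> ~/.dotfiles/muttrc
done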
Elegant Way To Store Dotfiles on GitHub
1,295,896,722,000
I have a directory where, regardless of user or options selected, I would like ls to always show hidden files. I know the -a option will show hidden files. I just want to automate the decision on using the option. Say I'm in /home/user: I don't care to see the hidden files, but if I'm in /filestoprocess I want to see the hidden files. Does this type of functionality exist?
The easiest way I can think of to do this would be to create a shell alias that maps to a function. Say we're using bash and add the following alias to your .bashrc: alias ls=ls_mod Now, add the ls_mod function below:

ls_mod () {
    DIRS_TO_SHOW_HIDDEN=(dir1 dir2 dir3)    # absolute paths, e.g. /filestoprocess
    for DIR in "${@:-.}"; do                # default to the current directory, like ls
        SHOW_HIDDEN=no
        for CHECK in "${DIRS_TO_SHOW_HIDDEN[@]}"; do
            if [ "$(realpath "$DIR")" = "$CHECK" ]; then
                SHOW_HIDDEN=yes
                break
            fi
        done
        if [ "$SHOW_HIDDEN" = yes ]; then
            command ls -a "$DIR"            # "command" bypasses the alias, avoiding recursion
        else
            command ls "$DIR"
        fi
    done
}

I haven't tested this, so I doubt it's perfect, but at least it gives you the idea. You may need to work to pass extra arguments to ls.
Is there a way to have ls show hidden files for only certain directories?
1,295,896,722,000
I'm new to XMonad and I'd like to understand what's going on in this config file. It is a working config file, nothing is broken. I understand what each setting does but I don't understand what's happening under main = .... Any explanation is appreciated. Also, in this setup, how would one go about changing/adding a keybinding? -- Imports import XMonad import XMonad.Hooks.DynamicLog -- The main function main = xmonad =<< statusBar myBar myPP toggleStrutsKey myConfig myBar = "xmobar" myPP = xmobarPP { ppCurrent = xmobarColor "#429942" "" . wrap "<" ">" } toggleStrutsKey XConfig { XMonad.modMask = modMask } = (modMask, xK_b) myConfig = defaultConfig { modMask = mod4Mask , terminal = "urxvt" , borderWidth = 2 } If this is the wrong StackExchange website please feel free to move it to a more appropriate one. :)
The =<< is monadic bind with its arguments flipped, used here to compose IO actions, and understanding it requires knowledge of how monads work in Haskell and the related syntax. Concretely, statusBar myBar myPP toggleStrutsKey myConfig is an IO action that launches xmobar and returns a modified config (with the struts-toggling key wired in), and =<< passes that resulting config on to xmonad. To understand exactly what's happening there in depth, look at the links (below) describing =<< and >>= [1][2]. To add your own keybindings, you can add , keys = myKeys to your myConfig and then define your own myKeys as described on the Xmonad wiki. For a sample keys map that I personally use, have a look at my bitbucket xmonad dotfiles. [1]: http://hackage.haskell.org/package/base-4.6.0.1/docs/Prelude.html#v%3a-61--60--60- [2]: http://hackage.haskell.org/package/base-4.6.0.1/docs/Prelude.html#v:-62--62--61-
Please explain what's going on in my XMonad config file
1,295,896,722,000
Why doesn't the following include hidden files? ls -a *vim* It returns ls: cannot access '*vim*': No such file or directory whereas ls -a | grep vim returns .vim .vimrc
If you are explicitly looking for hidden files, use a pattern that starts with a dot: ls .*vim* Then there's no need for the -a flag. The key point is that the pattern is expanded by the shell before ls ever runs, and an unquoted * never matches a leading dot, so with *vim* there were no matching names for ls to receive; -a cannot help with that.
How to ls with globbing for hidden files?
1,295,896,722,000
for f in ~/common/.*; do echo $f done The entries listed are: /home/sk/common/. #undesired /home/sk/common/.. #undesired /home/sk/common/.aliasrc And I am putting in an ugly hack to skip processing . and .. to avoid this: if [[ $f == '/home/sk/common/.' || $f == '/home/sk/common/..' ]]; then true else --do the ops fi Is there a shell option that will hide the dotted folders? I am facing this problem with ls -a as well.
Here is a method using bash's extglob: shopt -s extglob for f in .!(|.); do echo "$f" done With extglob, the pattern !(pattern-list) matches anything except the given patterns. The pattern in the example says: match everything that starts with . where the rest of the name is neither empty (which would give .) nor a single dot (which would give ..).
for loop in bash lists dot and double dot folders [duplicate]
1,295,896,722,000
When constructing a pattern that matches a file name such as /home/user/project/.git, how does one match the . character "explicitly" -- that is, without the use of shopt -s dotglob? The manual at https://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html states: When a pattern is used for filename expansion, the character ‘.’ at the start of a filename or immediately following a slash must be matched explicitly, unless the shell option dotglob is set. What, exactly, does it mean, to "be matched explicitly"? And again, at http://www.tldp.org/LDP/abs/html/globbingref.html (in the Notes section at the end), the same notion is addressed: Filename expansion can match dotfiles, but only if the pattern explicitly includes the dot as a literal character. The note provides the following examples: ~/[.]bashrc # Will not expand to ~/.bashrc ~/?bashrc # Neither will this. # Wild cards and metacharacters will NOT #+ expand to a dot in globbing. ~/.[b]ashrc # Will expand to ~/.bashrc ~/.ba?hrc # Likewise. ~/.bashr* # Likewise. I fail to understand the inner workings of the last three examples, which will expand to include the "dotfile". How, specifically, does placing the b in brackets after the . make this an "explicit" match in the example ~/.[b]ashrc? The subsequent examples are even more ambiguous to me. I simply fail to understand how manipulating the pattern in ways that seem completely unrelated to the . character cause the pattern to produce a match. With regard to why I would like to avoid using shopt -s dotglob, the impetus for this question is rooted in the fact that I am writing these patterns for use in another program's configuration file. I want to exclude paths that contain, for example, "hidden .git directories", and I'm not sure that I have the ability to specify dotglob in any capacity. In essence: What is the simplest means by which to match the . character by "being explicit"? Placing the next character in brackets "makes it work", but I'd like to know why; I feel like I'm "shooting in the dark" with that approach. Any explanation as to the underlying behavior in this regard is much appreciated. EDIT TO ADD: Initially, it didn't seem relevant, but because people seem to be interested in the specifics of my use-case, I'll explain further. I'm using a Host-Based Intrusion Detection software called Samhain. Samhain will "alert" whenever the filesystem is modified according to certain user-specified configuration parameters. I don't want Samhain to alert when files within .git directories (that are located within certain parent directories) are created/modified/deleted. In Samhain, this type of exclusion is performed by defining "ignore rules". The exact specification of these rules is explained at http://www.la-samhna.de/samhain/manual/filedef.html , in 4.2. File/directory specification. In short: Wildcard patterns ('*', '?', '[...]') as in shell globbing are supported for paths. The leading '/' is mandatory. So, I am trying to write an "ignore rule" that will match the .git directories in question, which will, in effect, cause Samhain to exclude them from its monitoring activities. Initially, I tried this: [IgnoreAll] dir = -1/home/user/project/*/*/.git This didn't work; Samhain still alerted whenever files inside those .git directories changed. Upon finding the examples cited above, I tried this: dir = -1/home/user/project/*/*/.[g]it With this change, Samhain ignores the files, as desired. 
In posting this question, I was simply trying to understand why that change has the intended effect. I will say, I feel less stupid given that the very pattern I was trying to use at first does indeed match the .git directories in question when I use "echo" test: echo /home/user/project/*/*/.git So, it wasn't so much that I was misunderstanding something fundamental with regard to pattern-matching, globbing, or file-name expansion in Bash; to the contrary, there seem to be nuances with regard to how Samhain, in particular, implements pattern-matching in this context. I have no idea why this doesn't work when applied in the context of Samhain's configuration file (obviously). Maybe somebody will be able to explain, given this edit.
First of all, I assume that you know what things like [b], ? and * mean in a pathname pattern.  (If you don’t, do more research.) At the risk of repeating what the others have said, you’re overthinking it.  Patterns that contain the string /. (i.e., a / immediately followed by a .) explicitly include the dot as a literal character.  The point is just that [b], ? and/or * occurring after the . don’t affect whether the pattern can match dotfile(s).  The last three examples are offered as examples of patterns (i.e., not just a plain file/pathname, but something that could potentially match several file/pathnames – or none) that will match ~/.bashrc — as opposed to the first two, which would match ~/.bashrc if . weren’t handled specially. So, what is your real question? … I am writing these patterns for use in another program’s configuration file.  I want to exclude paths that contain, for example, “hidden .git directories”, and I’m not sure that I have the ability to specify dotglob in any capacity. I guess you want to do something (like chown or cp) to all files/directories except those beginning with dot.  But your code is going to be used in somebody else’s script (via the . or source command), and you’re afraid to do your_command * because the script may have set dotglob, so * would expand to all files, including “hidden” ones.  And you don’t want to turn off dotglob because you don’t want to break the functionality of the existing script. Use a smarter wildcard (pathname expansion pattern). I hope that you understand wildcards (a.k.a. globs) like [abc] — they match any of the characters a, b or c.  For example, the string c[aou]t matches cat, cot and cut; d[iou]g matches dig, dog and dug.  (They can be, and commonly are, used with ranges; e.g., [a-z] and [0-9].)  Well, a special case of this is [!abc] — it matches any character except a, b or c.  So you can use [!.]* (or directory_name/[!.]*) to match names that begin with a character other than dot.  Paradoxically, [.] (at the beginning of a filename) won’t match a dot if dotglob is not set, but [!.] will exclude a dot regardless of the setting of dotglob.  This will give the same result whether dotglob is set or not. Unset dotglob (in a subshell). Shell options (shopts) are local to a process, and process attributes never flow backwards (uphill) from child to parent.  So

(shopt -u dotglob; your_command *)

will run your_command on non-hidden files only, without affecting the settings and behavior of the rest of the script. Unset dotglob (without using a subshell). Some people prefer to avoid subshells because they use extra resources.  But the cost is minuscule (unless you do it in a loop that executes many times), so this is not a very good reason.  A better reason to avoid a subshell is if you need to do something that affects the environment of the shell, like cd or umask. If this is your situation, you can temporarily turn off dotglob, and later restore the previous setting. If you type shopt dotglob (without -s or -u), it reports (displays) the current setting of the dotglob option.  (shopt with no parameters lists the current settings of all the options.)  It also sets the exit status accordingly.  The -q flag suppresses the display, so you can do

shopt -q dotglob
dotglob_setting=$?
shopt -u dotglob
your_command *
if [ "$dotglob_setting" = 0 ]
then
    shopt -s dotglob
fi

But wait … you said “another program’s configuration file”.  What are you talking about?
If you’re talking about writing or modifying a file that says something like ignore=*.o, then this whole question doesn’t make sense, because that file will be processed (and interpreted) by whatever program processes it, and that program will decide how to interpret * — the shell has nothing to do with it. OK, now that we have a better idea of what the question is: The short answer is that the behavior that you’re seeing doesn’t make sense.  If a .git directory exists, then specifying it exactly (literally) as .git and specifying it with a wildcard / glob pattern of .[g]it should behave identically. The longer answer: I stand by the last paragraph of the first version of my answer.  Samhain is reading and parsing its policy configuration file.  It might use the shell to interpret wildcards in the config file, but I guess that it’s doing it internally. And, if it is “using the shell”, which shell is it using?  On many systems, /bin/sh is not bash.  Their baseline behavior with regards to pathname expansion patterns (i.e., wildcards) should be the same, but once you step off the porch, you’re in a swamp.  The POSIX specification for the shell doesn’t even have the shopt command, and (AFAIK) doesn’t have any way to make * expand to all files (and not just non-hidden ones). If you feel like spending some more time on this, you might experiment with putting /home/user/project/* into the Samhain config file and seeing whether it interprets it as all files or just non-hidden ones.  If it interprets it as all files, we can conclude that:

Samhain isn’t using /bin/sh to expand wildcards.
It isn’t using standard, default rules for wildcards (the ones you discussed at such length in your question).
The documentation is wrong (or, at best, incomplete and misleading) inasmuch as it says, “Wildcard patterns (‘*’, ‘?’, ‘[...]’) as in shell globbing are supported for paths.” without saying that (unlike in the shell’s default behavior) * means all files.

It might be using bash in dotglob mode to expand wildcards.  But this doesn’t make sense; as I said, the handling of .git and .[g]it doesn’t correspond to the normal behavior of any shell that I know of.  It’s almost certainly got its own code for wildcards. But in any case I believe that we can say with some confidence that your conclusion is correct: Samhain has a bug with regard to the handling of wildcards in IgnoreAll specifications.  You might want to file a bug report with the vendor.  Or, since you’ve found a workaround, you could just forget about it.
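For what it’s worth, the [!.] behavior described above is easy to verify in a throwaway directory (the directory and file names here are invented for the demo):

mkdir /tmp/globtest && cd /tmp/globtest
touch .hidden visible
shopt -s dotglob
echo [!.]*      # prints: visible
shopt -u dotglob
echo [!.]*      # still prints: visible — [!.] excludes the dot either way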
Bash pattern to match directories whose names begin with a dot (period), by being "explicit", instead of using "shopt -s dotglob"?
1,295,896,722,000
I am creating multiple files (.school_aliases and .git_aliases) to put my aliases in for organization. However, Vim doesn't highlight syntax for these files automatically like for .bashrc or .bash_aliases. Is there a way I could get Vim to do this automatically rather than just doing set syntax=sh?
You can use the following in your .vimrc: autocmd BufNewFile,BufRead *.school_aliases,*.git_aliases set syntax=sh Or, you can set these file extensions to syntax types in ~/.vim/filetype.vim.
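If you prefer the filetype.vim route, a minimal version of that file could look like the following — a sketch based on the conventions in vim’s :help new-filetype. Note that the patterns also match the dot-prefixed names .school_aliases and .git_aliases, since * may match the empty string:

mkdir -p ~/.vim
cat > ~/.vim/filetype.vim <<'EOF'
if exists("did_load_filetypes")
  finish
endif
augroup filetypedetect
  au! BufNewFile,BufRead *.school_aliases,*.git_aliases setfiletype sh
augroup END
EOF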
Setting Vim to use shell syntax for dotfiles?
1,295,896,722,000
As the title asks. I always do things on macOS, but now I have to learn to work in a virtual machine and have installed Ubuntu 20.04. On macOS I put all dotfiles inside the folder ~/.config/. I did the same on Ubuntu, but it didn't work. Now I have to run tmux source ~/.config/tmux/tmux.conf every time I enter a session, or I will not be able to use those keybindings. So where should I put this config file? My intuition told me that I would need to create a symlink to the default path, which I don't know, to make this work.
Despite the two answers with the traditional tmux config locations, tmux 3.1 and later does support ~/.config/tmux/tmux.conf, although it's not mentioned in the man page. See the release notes here. That's why it works for you on MacOS. However, the Ubuntu 20.04 repo looks like it's only at 3.0. If you can run 21.04 or later in your VM, it should have an appropriate tmux version, and ~/.config/tmux/tmux.conf should be automatically handled for you. If you are stuck on Ubuntu 20.04 for LTS reasons, then you can fall back to the symlink option, or use the -f option as in this question.
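For reference, the two fallbacks on 20.04 would look something like this (the alias variant assumes you start tmux from an interactive shell that reads your aliases):

# option 1: a compatibility symlink in the traditional location
ln -s ~/.config/tmux/tmux.conf ~/.tmux.conf

# option 2: always pass the config path explicitly, e.g. in ~/.bashrc
alias tmux='tmux -f ~/.config/tmux/tmux.conf'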
Tmux doesn't read `~/.config/tmux/tmux.conf` by default, so where?
1,551,884,463,000
I only want echo $(date) to return the date, not the backticked version.

echo $(date)   # should return Wed Mar 6 09:50:41 EST 2019
echo `date`    # should return `date`
Wrap the backticks in strong quotes to divest them of their subshelly powers: $ echo '`echo`' `echo` Beware, though, the contraction wrapped in strong quotes: $ echo 'I can't process this.' > Oh whoops that ">" means we're still in a strong quote. I cant process this. Oh whoops that ">" means were still in a strong quote.
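If you do need an apostrophe inside a strongly quoted string, the usual idioms are to close the quote, insert an escaped quote, and reopen it — or to switch to weak (double) quotes and escape the backticks there instead:

echo 'I can'\''t process this.'
echo "I can't process this: \`date\` stays literal"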
How do I configure zsh command substitution to not use backticks (`)?
1,551,884,463,000
I have two existing directories : foo: directory with dotfiles in it foo2: empty directory I would like to have a solution to copy all dotfiles in foo to foo2. I would like a solution that is not shell-dependent (bash, zsh, etc.). I would prefer not having to install rsync to do it (tar is ok). Weeks ago, I asked this similar question, but I feel both questions should be separated as they answer different needs. All answers were shell-dependent or using rsync.
I assume by "shell independent", you are restricting yourself to Bourne-type shells (not csh, etc) cp -r foo/.??* foo/.[^.] foo2
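One portability nuance worth noting: [^.] is understood by bash and ksh, but POSIX only specifies ! for negation inside brackets, so the maximally portable spelling is:

cp -r foo/.[!.] foo/.??* foo2

Between them, the two patterns cover every dot-prefixed name except . and .. themselves: a two-character name like .X is caught by .[!.], and anything three characters or longer (including names such as ..data) by .??*.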
Shell-independent way to cp dotfiles from a folder to another [duplicate]
1,551,884,463,000
This answer on opening all files in vim except [condition]: https://unix.stackexchange.com/a/149356/98426 gives an answer similar to this: find . \( -name '.?*' -prune \) -o -type f -print (I adapted the answer because my question here is not about vim) Where the negated condition is in the escaped parentheses. However, on my test files, the following find . -type f -not -name '^.*' produces the same results, but is easier to read and write. The -not method, like the -prune method, prunes any directories starting with a . (dot). I am wondering what are the edge cases where the -not and the -prune -o -print method would have different results. Findutils' infopage says the following: -not expr: True if expr is false -prune: If the file is a directory, do not descend into it. (and further explains that -o -print is required to actually exclude the top matching directory) They seem to be hard to compare this way, because -not is a test and -prune is an action, but to me, they are interchangeable (as long as -o -print comes after -prune)
First, note that -not is a GNU extension and is the equivalent of the standard ! operator. It has virtually no advantage over !. The -prune predicate always evaluates to true and affects the way find walks the directory tree. If the file for which -prune is run is of type directory (possibly determined after symlink resolution with -L/-H/-follow), then find will not descend into it. So -name 'pattern' -prune (short for -name 'pattern' -a -prune) is the same as -name 'pattern' except that the directories whose name matches pattern will be pruned, that is find won't descend into them. -name '.?*' matches on files whose name starts with . followed by one character (the definition of which depends on the current locale) followed by 0 or more characters. So in effect, that matches . followed by one or more characters (so as not to prune . the starting directory). So that matches hidden files with the caveat that it matches only those whose name is also entirely made of characters, that is are valid text in the current locale (at least with the GNU implementation). So here, find . \( -name '.?*' -a -prune \) -o -type f -a -print Which is the same as find . -name '.?*' -prune -o -type f -print since AND (-a, implied) has precedence over OR (-o). finds files that are regular (no symlink, directory, fifo, device...) and are not hidden and are not in hidden directories (assuming all file paths are valid text in the locale). find . -type f -not -name '^.*' Or its standard equivalent: find . -type f ! -name '^.*' Would find regular files whose name doesn't start with ^.. find . -type f ! -name '.*' Would find regular files whose name doesn't start with ., but would still report files in hidden directories. find . -type f ! -path '*/.*' Would omit hidden files and files in hidden directories, but find would still descend into hidden directories (any level deep) only to skip all the files in them, so is less efficient than the approach using -prune.
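A quick scratch-directory experiment (paths invented for the demo) makes the practical difference visible:

mkdir -p /tmp/prunedemo/.hidden /tmp/prunedemo/visible
touch /tmp/prunedemo/.hidden/file /tmp/prunedemo/visible/file /tmp/prunedemo/.dotfile
cd /tmp/prunedemo
find . -name '.?*' -prune -o -type f -print   # prints only ./visible/file
find . -type f ! -name '.*'                   # prints ./visible/file AND ./.hidden/file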
Difference between GNU find -not and GNU find -prune -o -print
1,551,884,463,000
I was in my user home directory, and wanted to rename the ssh folder to .ssh. I tried this:

rachit@DESKTOP-ENS2652:~/ssh$ ls
some-machine  some-machine.pub
rachit@DESKTOP-ENS2652:~/ssh$ cd ..
rachit@DESKTOP-ENS2652:~$ ls
ssh
rachit@DESKTOP-ENS2652:~$ mv -R ssh .ssh
mv: invalid option -- 'R'
Try 'mv --help' for more information.
rachit@DESKTOP-ENS2652:~$ mv ssh .ssh
rachit@DESKTOP-ENS2652:~$ ls
rachit@DESKTOP-ENS2652:~$ ls

After doing this, my ssh folder completely disappeared. It's no big deal, I can create another one, but I am not able to get my head around what I did wrong, and why it is wrong. I am trying out things on WSL (Windows Subsystem for Linux), basically Ubuntu on Windows 10.
This mv ssh .ssh could move ssh into an (already existing) .ssh directory. Do this mv .ssh/ssh ./ to put it back. You would have seen .ssh if you had done ls -la If .ssh did not already exist, then mv .ssh ssh will make it "appear" when you do just ls -l
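If this is GNU mv (as on WSL's Ubuntu), the -T option rules out the move-into-directory behavior up front: it treats the destination strictly as the target name, so the rename either happens or fails loudly instead of nesting:

mv -T ssh .ssh    # errors out rather than nesting if a non-empty .ssh already exists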
ssh directory disappears after `mv ssh .ssh`
1,551,884,463,000
Today I discovered that sudo ls shows hidden files (that is, those that have names starting with .) on OS X. This surprised me so much that I asked a question about this behaviour, which I still find somewhat strange and unexpected. Turns out, this behaviour goes back to 2BSD in 1979. Given that, now I’d like to ask the following question. Why doesn’t ls on Linux behave this way? Was there a period of time when ls on some other kinds of *nixes had this behaviour? Are there any documents, commit messages, emails explaining who and why decided that this feature should not be copied at all or that it should be dropped if it was copied initially?
The POSIX standard says: "Filenames beginning with a <period> ( '.' ) and any associated information shall not be written out unless explicitly referenced, the -A or -a option is supplied, or an implementation-defined condition causes them to be written." Being root is evidently not considered a condition which causes hidden files to be written by the GNU Coreutils implementation of ls that is commonly packaged in Linux distros. There are good reasons not to have the behavior of programs influenced by global variables, like which user ID is in effect. A script developed as non-root will change behavior when run as root. The hiding of files that begin with dot is not a security mechanism; it shouldn't be connected to security contexts. It conceals things that we normally don't want to see, like the .git directory among your .c source files or whatever. If you have read access to another user's directory, you can list their hidden files. The dot hides items whose presence is expected and uninteresting, not whose presence is intended to be secret. Dotted directory entries other than .. and . have no special operating system status; just ls treats them specially. I just tried Solaris 10; its ls also has no such behavior. It is not a universal "Unixism", which explains why the POSIX requirement is worded that way.
sudo ls not showing hidden files on Linux
1,551,884,463,000
ls -d .* lists only hidden "items" (files & directories). (I think) technically it lists every item beginning with ., which includes the current . and above .. directories. I also know that ls -A lists "almost all" of the items, listing both hidden and un-hidden items, but excluding . and ... However, combining these as ls -dA .* doesn't list "almost all" of my hidden items. How can I exclude . and .. when listing only hidden items?
This has been answered over at Ask Ubuntu, which I will reproduce here:

ls -d .!(|.)

with Bash's extended globs (shopt -s extglob to enable), or

ls -d .[!.]* ..?*

if not.
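To unpack the non-extglob version: the two patterns divide the work, and neither one is able to match the . or .. entries:

ls -d .[!.]* ..?*
# .[!.]*  a dot, then one non-dot character, then anything   (.bashrc, .x, ...)
# ..?*    two dots, then at least one more character         (..data, ...)

If one of the patterns matches nothing, bash passes it through literally and ls prints a harmless "cannot access" error; shopt -s nullglob makes unmatched patterns disappear instead.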
How can I exclude . and .. when listing only hidden items?
1,551,884,463,000
I am trying to remove large amount of mails (mostly mail delivery failed) from my server using rm -rf /home/*/mail/new/* And I am getting -bash: /usr/bin/rm: Argument list too long I tried using find find /home/*/mail/new/ -mindepth 1 -delete But after 20 minutes it looks like it's not doing anything. How do I use for loop to delete everything (directories, files, dotfiles) within /home/*/mail/new/ Something like this for f in /home/*/mail/new/*.*~; do # if it is a file, delete it if [ -f $f ] then rm "$f" fi done Please help me rewrite this command to delete files AND folders and everything within /home/*/mail/new/ EDIT: My question is unique because it's about doing that in FOR loop.
The problem is that /home/*/mail/new/* expands to too many file names. The simplest solution is to delete the directory instead:

rm -rf /home/*/mail/new/

Alternatively, use your find command. It should work, it will just be slower. Or, if you need the new directories, use a loop to find them, delete and recreate them:

for d in /home/*/mail/new/; do
    rm -rf "$d" && mkdir "$d"
done

The loop you were trying to write (but don't use this, it is very slow and inefficient) is something like:

for f in /home/*/mail/new/* /home/*/mail/new/.*; do
    rm -rf "$f"
done

No need to test for files if you want to delete everything, just use rm -rf and both directories and files can be deleted by the same command. It will complain about not being able to delete . and .. but you can ignore that. Or, if you want to be super clean and avoid the errors, you can do this in bash:

shopt -s dotglob
for f in /home/*/mail/new/*; do
    rm -rf "$f"
done
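If you do need to remove the entries by name after all (rather than recreating the directory), another way around "Argument list too long" is to build the list with the shell's built-in printf — builtins are not subject to the exec limit — and hand it to xargs:

printf '%s\0' /home/*/mail/new/* | xargs -0 rm -rf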
Remove everything within directory using for loop
1,551,884,463,000
Say I have a folder ~/dot containing some files and directories, such as zshrc, Xresources, and emacs.d. How do I create symlinks to all of those in ~, but such that the symlinks begin with a dot (.zshrc &c.)? And how would I remove all already existing symlinks that would have been created by the answer to the first question? (I.e. how would I uninstall my dotfiles.)
Creating the symlinks:

cd ~/dot
for file in *; do
    ln -sf dot/"$file" ~/."$file"    # link target is relative to ~, i.e. ~/dot/$file
done

Deleting the symlinks:

cd
for dotfile in .*; do
    test -L "$dotfile" || continue               # skip anything that is not a symlink
    target="$(readlink "$dotfile")"
    [[ $target =~ ^dot/ ]] && echo rm "$dotfile" # dry run; drop the echo to really delete
done
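With GNU find, the uninstall pass can also be a single command: select only top-level symlinks in $HOME whose target begins with dot/ (keep -print for a dry run; swap it for -delete once the list looks right):

find ~ -maxdepth 1 -type l -lname 'dot/*' -print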
Symlinking all dot-files and -directories
1,551,884,463,000
I have a Tar file: testwebsite.tar I've placed it in the directory I would like to extract its contents to on my Web server which is mytestdirectory I run this command via PuTTY: tar -xvf testwebsite.tar Which results in the content being extracted but in this format: mytestdirectory/srv/test/www.testwebsite.com/ All of the files I would like have appear in mytestdirectory are contained in the www.testwebsite.com sub directory. Is there a way I can move all of the content within this sub directory to the parent directory using the mv command? I was looking around and tried a few answers, one being to run this command from within the sub directory: mv * .[^.]* .. But that resulted in this message being shown: mv: cannot stat `*': No such file or directory mv: cannot stat `.[^.]*': No such file or directory Could someone please help me on this issue, I'm basically trying to move all of the content (including hidden files) from the sub directory www.testwebsite.com to the parent directory mytestdirectory.
You need to use the --strip-components option of tar; that's because the paths you don't need are contained in the tar archive. So for instance if the tar contains this:

srv/test/www.testwebsite.com/index.html

and you want to obtain mytestdirectory/index.html, you need:

$ cd /path/to/mytestdirectory
$ tar xf testwebsite.tar --strip-components=3

If you use the verbose option, you will still see the full original filenames, so you can also add the --show-transformed argument to list the modified paths:

$ tar tfz workspace.tar.gz --strip-components=2 workspace/project
-rw-rw-r-- guido/guido 11134 2009-01-22 23:21 workspace/project/aaa
-rw-rw-r-- guido/guido 11134 2009-01-22 23:21 workspace/project/bbb
[... list continues ...]

$ tar tfz workspace.tar.gz --strip-components=2 --show-transformed workspace/project
-rw-rw-r-- guido/guido 11134 2009-01-22 23:21 aaa
-rw-rw-r-- guido/guido 11134 2009-01-22 23:21 bbb
[ ... and so on ...]

To fix your situation with mv, that would be:

# cd /path/to/mytestdirectory
# mv srv/test/www.testwebsite.com/* .

For taking care of hidden files, one solution will be this:

# shopt -s dotglob

executed before the above commands, so the glob also matches dotfiles; or even better, delete your current target directory, then move and rename the one you want to copy:

[ delete or rename current /path/to/mytestdirectory ]
# cd /path/to
# mv srv/test/www.testwebsite.com/ mytestdirectory/
# rmdir srv/test/ srv/
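If shopt isn't available (say, the commands run under plain /bin/sh rather than bash), the hidden files can be picked up with the classic glob pair instead of dotglob; the stderr redirect just hides the "cannot stat" noise when one of the patterns happens to match nothing:

cd /path/to/mytestdirectory
mv srv/test/www.testwebsite.com/* \
   srv/test/www.testwebsite.com/.[!.]* \
   srv/test/www.testwebsite.com/..?* . 2>/dev/null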
Extracting the Contents of tar archive to parent directory issue resulting in creation of sub directories
1,551,884,463,000
I use git to track my dotfiles across multiple machines. I wrote a pretty simple script in the repo, which backs up any outdated existing dotfiles and then creates symbolic links to each of the up-to-date dotfiles. Here is the script: #!/bin/bash ############################ # makesymlinks.sh # This script creates symlinks from the home directory to any desired dotfiles in ~/dotfiles ############################ ########## Variables dir=~/dotfiles # dotfiles directory olddir=~/dotfiles_old # old dotfiles backup directory files="bash_aliases bashrc vimrc" # list of files/folders to symlink in homedir ########## # create dotfiles_old in homedir echo "Creating $olddir for backup of any existing dotfiles in ~" mkdir -p $olddir # move any existing dotfiles in homedir to dotfiles_old directory, then create symlinks echo "Moving any existing dotfiles from ~ to $olddir" for file in $files; do if [ -f ~/."$file" ]; then mv -n ~/."$file" ~/dotfiles_old/ #-n option means don't overwrite existing files in dotfiles_old fi #if e.g. ~/.vimrc exists after mv command, then this script must've been run before w/ .vimrc included if [ -f ~/."$file" ]; then echo "Symlink to $dir/$file already exists" else echo "Creating symlink to $dir/$file in ~" ln -s $dir/"$file" ~/."$file" fi done # source .bashrc printf "\nTo complete the setup, please run the following command:\n\n" printf "\tsource ~/.bashrc\n\n" This script normally works just fine. Today though I started working on a new machine (remotely through TeamViewer if that matters), and when I ran this script for the first time, it deleted the directory it was in. I have no idea how it could've done that given the script I wrote, and it worked correctly the second time I ran it (after re-cloning the repository again). What went wrong, and how can I fix it? Was this somehow git's fault? Here's what my bash terminal looked like surrounding the bug (I've added some commentary with bash comments here for clarity): drakeprovost@shatterdome:~/RoverCoreOS$ git clone https://github.com/DrakeProvost/dotfiles.git Cloning into 'dotfiles'... remote: Enumerating objects: 42, done. remote: Counting objects: 100% (42/42), done. remote: Compressing objects: 100% (30/30), done. remote: Total 42 (delta 21), reused 29 (delta 11), pack-reused 0 Unpacking objects: 100% (42/42), done. drakeprovost@shatterdome:~/RoverCoreOS$ cd dotfiles/ drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ls bash_aliases bashrc makesymlinks.sh README.md vimrc drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ./makesymlinks.sh Creating /home/drakeprovost/dotfiles_old for backup of any existing dotfiles in ~ Moving any existing dotfiles from ~ to /home/drakeprovost/dotfiles_old Creating symlink to /home/drakeprovost/dotfiles/bash_aliases in ~ Creating symlink to /home/drakeprovost/dotfiles/bashrc in ~ Creating symlink to /home/drakeprovost/dotfiles/vimrc in ~ To complete the setup, please run the following command: source ~/.bashrc drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ls bash_aliases bashrc makesymlinks.sh README.md vimrc drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ cd drakeprovost@shatterdome:~$ ls -al #.bashrc, .vimrc, and .bash_aliases were all red symlinks here, meaning they pointed to non-existent files. Also note that the dotfiles directory has disappeared total 144 drwxr-xr-x 26 drakeprovost drakeprovost 4096 Jul 19 22:40 . drwxr-xr-x 12 root root 4096 Sep 24 2019 .. 
lrwxrwxrwx 1 drakeprovost drakeprovost 40 Jul 19 22:40 .bash_aliases -> /home/drakeprovost/dotfiles/bash_aliases -rw------- 1 drakeprovost drakeprovost 11400 Feb 27 20:01 .bash_history -rw-r--r-- 1 drakeprovost drakeprovost 220 Sep 17 2019 .bash_logout lrwxrwxrwx 1 drakeprovost drakeprovost 34 Jul 19 22:40 .bashrc -> /home/drakeprovost/dotfiles/bashrc drwx------ 15 drakeprovost drakeprovost 4096 Oct 15 2019 .cache drwxr-xr-x 5 drakeprovost drakeprovost 4096 Feb 20 18:08 catkin_ws drwxr-xr-x 5 drakeprovost drakeprovost 4096 Feb 27 19:23 catkin_ws_PMCurdf drwx------ 13 drakeprovost drakeprovost 4096 Feb 27 18:57 .config drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Desktop drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Documents drwxr-xr-x 2 drakeprovost drakeprovost 4096 Jul 19 22:40 dotfiles_old drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Downloads -rw-r--r-- 1 drakeprovost drakeprovost 8980 Sep 17 2019 examples.desktop drwx------ 2 drakeprovost drakeprovost 4096 Oct 15 2019 .gconf drwx------ 3 drakeprovost drakeprovost 4096 Oct 15 2019 .gnupg -rw------- 1 drakeprovost drakeprovost 2052 Jul 19 22:31 .ICEauthority drwx------ 3 drakeprovost drakeprovost 4096 Oct 15 2019 .local drwx------ 5 drakeprovost drakeprovost 4096 Oct 15 2019 .mozilla drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Music drwx------ 6 drakeprovost drakeprovost 4096 Jul 19 22:31 .nx drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Pictures -rw-r--r-- 1 drakeprovost drakeprovost 807 Sep 17 2019 .profile drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Public drwx------ 2 drakeprovost drakeprovost 4096 Jul 19 22:31 .qt drwxr-xr-x 4 drakeprovost drakeprovost 4096 Feb 27 19:58 .ros drwxr-xr-x 11 drakeprovost drakeprovost 4096 Jul 19 22:40 RoverCoreOS drwxr-xr-x 2 drakeprovost drakeprovost 4096 Feb 13 13:45 .rviz drwxr-xr-x 3 drakeprovost drakeprovost 4096 Oct 15 2019 snap drwx------ 2 drakeprovost drakeprovost 4096 Oct 15 2019 .ssh -rw-r--r-- 1 drakeprovost drakeprovost 0 Oct 15 2019 .sudo_as_admin_successful drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Templates drwxr-xr-x 2 drakeprovost drakeprovost 4096 Oct 15 2019 Videos -rw------- 1 drakeprovost drakeprovost 761 Oct 15 2019 .viminfo lrwxrwxrwx 1 drakeprovost drakeprovost 33 Jul 19 22:40 .vimrc -> /home/drakeprovost/dotfiles/vimrc drakeprovost@shatterdome:~$ source ~/.bashrc bash: /home/drakeprovost/.bashrc: No such file or directory drakeprovost@shatterdome:~$ git clone https://github.com/DrakeProvost/dotfiles.git Cloning into 'dotfiles'... remote: Enumerating objects: 42, done. remote: Counting objects: 100% (42/42), done. remote: Compressing objects: 100% (30/30), done. remote: Total 42 (delta 21), reused 29 (delta 11), pack-reused 0 Unpacking objects: 100% (42/42), done. 
drakeprovost@shatterdome:~$ cd dotfiles drakeprovost@shatterdome:~/dotfiles$ ./makesymlinks.sh Creating /home/drakeprovost/dotfiles_old for backup of any existing dotfiles in ~ Moving any existing dotfiles from ~ to /home/drakeprovost/dotfiles_old Creating symlink to /home/drakeprovost/dotfiles/bash_aliases in ~ Symlink to /home/drakeprovost/dotfiles/bashrc already exists Creating symlink to /home/drakeprovost/dotfiles/vimrc in ~ To complete the setup, please run the following command: source ~/.bashrc drakeprovost@shatterdome:~/dotfiles$ cd drakeprovost@shatterdome:~$ ls #notice that dotfiles still exists this time catkin_ws Documents Downloads Pictures snap catkin_ws_PMCurdf dotfiles examples.desktop Public Templates Desktop dotfiles_old Music RoverCoreOS Videos drakeprovost@shatterdome:~$ source ~/.bashrc #this now works like you would expect drakeprovost@shatterdome:~$
Here's the output in your question annotated: drakeprovost@shatterdome:~/RoverCoreOS$ git clone https://github.com/DrakeProvost/dotfiles.git Cloning into 'dotfiles'... remote: Enumerating objects: 42, done. remote: Counting objects: 100% (42/42), done. remote: Compressing objects: 100% (30/30), done. remote: Total 42 (delta 21), reused 29 (delta 11), pack-reused 0 Unpacking objects: 100% (42/42), done. NOTE: you were in the directory ~/RoverCoreOS when you ran the above git clone so the above created the directory ~/RoverCoreOS/dotfiles, not ~/dotfiles. drakeprovost@shatterdome:~/RoverCoreOS$ cd dotfiles/ drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ls bash_aliases bashrc makesymlinks.sh README.md vimrc drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ./makesymlinks.sh Creating /home/drakeprovost/dotfiles_old for backup of any existing dotfiles in ~ Moving any existing dotfiles from ~ to /home/drakeprovost/dotfiles_old Creating symlink to /home/drakeprovost/dotfiles/bash_aliases in ~ Creating symlink to /home/drakeprovost/dotfiles/bashrc in ~ Creating symlink to /home/drakeprovost/dotfiles/vimrc in ~ To complete the setup, please run the following command: source ~/.bashrc drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ ls bash_aliases bashrc makesymlinks.sh README.md vimrc All of the above happened in ~/RoverCoreOS/dotfiles. drakeprovost@shatterdome:~/RoverCoreOS/dotfiles$ cd You are now in the directory ~ drakeprovost@shatterdome:~$ ls -al #.bashrc, .vimrc, and .bash_aliases were all red symlinks here, meaning they pointed to non-existent files. Also note that the dotfiles directory has disappeared ~/dotfiles didn't disappear, it never existed. ~/RoverCoreOS/dotfiles existed and presumably still exists. ... drakeprovost@shatterdome:~$ git clone https://github.com/DrakeProvost/dotfiles.git Cloning into 'dotfiles'... remote: Enumerating objects: 42, done. remote: Counting objects: 100% (42/42), done. remote: Compressing objects: 100% (30/30), done. remote: Total 42 (delta 21), reused 29 (delta 11), pack-reused 0 Unpacking objects: 100% (42/42), done. Now you've created the directory ~/dotfiles and from here on things work as you expect. I'd recommend you modify your script to add some defensive checks. They can't stop you from doing the above but they can at least alert you of some issues and they would have caught the above problem (assuming you didn't have an old dotfiles directory with the expected files in your HOME dir), e.g.: [[ -d "$dir" ]] || { ret="$?"; echo "dir \"$dir\" does not exist"; exit "$ret"; } for file in $files; do [[ -s "$dir/$file" ]] || { ret="$?"; echo "file \"$dir/$file\" does not exist or is empty"; exit "$ret"; } done # create dotfiles_old in homedir echo "Creating $olddir for backup of any existing dotfiles in ~" mkdir -p "$olddir" || { ret="$?"; echo "Failed to create olddir \"$olddir\""; exit "$ret"; } You can add other defensive checks like that as you see fit.
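A related hardening step, not in the original script: derive dir from the script's own location instead of hard-coding ~/dotfiles, so the symlinks point at wherever the repo was actually cloned:

# near the top of makesymlinks.sh (works when the script is executed, not sourced)
dir="$(cd "$(dirname "$0")" && pwd)"

With that change, cloning into ~/RoverCoreOS/dotfiles would have produced working symlinks instead of dangling ones.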
My dotfiles bash script somehow deleted its own directory?
1,551,884,463,000
I know we can configure the file explorer to hide hidden files, but in the open dialog box of some programs that is not an option. I am just tired of scrolling through hidden folders and files in my home directory. Is there a way to move them into a subfolder and keep the applications working?
You would be going against a lot of UNIX momentum and history by renaming the hidden folders in your home directory; I wouldn't do it. Not only do the existing programs expect those folders to exist, but any applications you install in the future will just place more hidden folders in your home directory. I agree it's annoying - I have almost 100 files and folders in my home dir - so instead I recommend you learn to use tools to manage listing and searching files. Here are a couple of ways to list while ignoring hidden files:

ls
find . -not -path '*/\.*'

Explanation: the -path option checks the pattern against the entire path string. * is a wildcard, / is a directory separator, \. is a dot (it has to be escaped to avoid special meaning), and * is another wildcard. -not means don't select files that match this test. My personal preference is to use tmux's copy mode (with vi key bindings).
How to move hidden config files to sub folder in the home directory
1,551,884,463,000
I am trying to use rsync for creating backups of my computer. For this I would like to exclude all hidden directories and files but include specific sub-directories of those hidden directories. As an example, I have the following structure: .hidden_1/ sub_dir_1/ sub_file_1 sub_dir_2/ sub_file_2 .hidden_2/ sub_file_3 .hidden_file normal_folder/ normal_file With this, I would like to copy all normal files and only the sub_dir_1 with all its content. The result should look like this: .hidden_1/ sub_dir_1/ sub_file_1 normal_folder/ normal_file I have already tried all kinds of filter settings, so far with no luck. Can anyone help me out here? Kind regards valkyrie
rsync -avh --include='/.hidden_1/' --include='/.hidden_1/sub_dir_1/***' --exclude='/.**' src/ dest --exclude='/.**' Exclude all hidden files and directories relative to the source directory and everything in those directories, i.e. .hidden_1/ and .hidden_1/sub_dir_1/, but not e.g. normal_folder/.hiddenfoo. The ** matches anything, including slashes. --include='/.hidden_1/' Include the .hidden_1 directory relative to the source directory overriding the exclude rule. It only includes the directory itself, not its content. --include='/.hidden_1/sub_dir_1/***' Include directory .hidden_1/sub_dir_1/ and its content. This is equivalent to the two rules --include='/.hidden_1/sub_dir_1/' and --include='/.hidden_1/sub_dir_1/**'.
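Since filter rules are easy to get subtly wrong, it's worth previewing with rsync's dry-run flag before the real transfer - the same command with -n added prints what would be copied without copying anything:

rsync -avhn --include='/.hidden_1/' --include='/.hidden_1/sub_dir_1/***' --exclude='/.**' src/ dest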
Rsync include specific sub-directories in hidden directories
1,551,884,463,000
I am a teacher and I use Linux, which is great! But students are curious about this "new" operating system they do not know, and in the GUI they tweak program settings, which affects hidden files inside /home/user:

[profesor@240-kateder ~]$ ls -a
.              .dbeaver4         .gtkrc-2.0        .sane
..             .dbeaver-drivers  .icons            .swt
.bash_history  .dropbox          .kde4             .themes
.bash_logout   .eclipse          .local            .thumbnails
.bash_profile  .esd_auth         .lyx              .ViberPC
.bashrc        .FlatCAM          .masterpdfeditor  .w3m
.cache         .FreeCAD          .mozilla          .Xauthority
.config        .gimp-2.8         .pki              .xinitrc
.convertall    .gnupg            .qucs             .xournal

This is unwanted because over time program interfaces will change so dramatically that programs will be missing toolbars, buttons, main menus, status menus... and students end up with a completely different GUI, so they call me about the issue and we spend too much time. Now to optimize this I have to make sure that the program settings (hidden files inside /home/user) aren't changed, so I tried something like

sudo chmod -R 555 ~/.*

but this didn't work out well for all of the programs, because some of the programs want to manipulate their settings at boot and they therefore fail to start without sudo. And students don't have sudo privileges. But sudo chmod -R 555 ~/.* worked for .bash_profile, .bash_logout, .bashrc, .bash_history, .xinitrc, so I was thinking I would:

prevent the user from deleting .bash_profile, .bash_logout, .bashrc, .bash_history, .xinitrc
copy all hidden setting files into a folder /opt/restore_settings
program .bash_profile to clean up all settings in the user's home directory on login using rm -r ~/.* (I assume this wouldn't delete the files from point 1, if I protect them) and then restore settings from /opt/restore_settings.

I want to know your opinion about this idea, or whether there is any better way to do it. And I need a way to prevent users from deleting the files from point 1. Otherwise this can't work.
Totally different approach: Create a group students, give each student his own account with group membership in students. Have a script that restores a given home directory from a template to a known good state, possibly deleting all extra dot files. Tell students about this script. If you have a number of computers, centralize this approach (user management on a single central server), and use a central file server for student home directories, so each student gets the same home directory on any machine. Together with proper (basic chmod) permissions everywhere, this will ensure that each student can only wreak havoc in his or her own home directory, and can restore it when it breaks, possibly losing their own customizations in this process, so they'll be more cautious next time. BTW, that's a very standard setup for many users on a cluster of machines.
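A minimal sketch of such a restore script, assuming a pristine copy of the dot files is kept in a template directory - /opt/home-template here is a placeholder name and path:

#!/bin/sh
# restore-settings: reset the calling student's dot files from the template
template=/opt/home-template
cd "$HOME" || exit 1
for f in "$template"/.[!.]* "$template"/..?*; do
    [ -e "$f" ] || continue      # skip patterns that matched nothing
    rm -rf "./${f##*/}"          # drop the (possibly broken) current copy
    cp -R "$f" .                 # restore the known-good version
done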
Good way to prevent student from messing program settings in /home/user
1,551,884,463,000
How can ls be flagged to sort .-prefixed hidden directories and files in groups so that its output is sorted as visible directories, hidden directories, visible files, then hidden files? I currently have ls aliased to ls -lG --color --group-directories-first which groups directories first, but visible and hidden directories are mixed together. Instead, the output of ls should be: visibleDirectoryA visibleDirectoryB .hiddenDirectoryA .hiddenDirectoryB visibleFileA visibleFileB .hiddenFileA .hiddenFileB
Use -v for natural sort. e.g. ls -lG --color --group-directories-first -A -v Note while they are sorted into their own "group", the .hidden directories will appear before the visible directories, not after them, because a . sorts lower than most other characters.
Grouping hidden files and directories with ls
1,551,884,463,000
The echo command outputs differing results based on the expression passed to it. The working directory is /home/etc $ echo .*/ / ../ .cache/ .config/ .gnupg/ .local/ .mozilla/ .ssh/ $ echo ./* ./Desktop ./Documents ./Downloads ./Music ./Pictures ./Public ./snap ./Templates ./Videos $ echo .*/* .cache/event-sound-cache.tdb.d410907cf15246578458d0ad7919eb5e.x86_64-pc-linux-gnu .cache/evolution .cache/fontconfig .cache/gnome-screenshot .cache/gnome-software .cache/gstreamer-1.0 .cache/ibus .cache/ibus-table .cache/libgweather .cache/mesa_shader_cache .cache/mozilla .cache/thumbnails .cache/ubuntu-report .cache/update-manager-core .cache/wallpaper .config/dconf .config/enchant .config/eog .config/evolution .config/gedit .config/gnome-initial-setup-done .config/gnome-session .config/goa-1.0 .config/gtk-3.0 .config/ibus .config/nautilus .config/pulse .config/rclone .config/update-notifier .config/user-dirs.dirs .config/user-dirs.locale ./Desktop ./Documents ./Downloads .gnupg/private-keys-v1.d .gnupg/pubring.kbx .gnupg/trustdb.gpg .local/share .mozilla/extensions .mozilla/firefox .mozilla/systemextensionsdev ./Music ../ec ./Pictures ./Public ./snap ./Templates ./Videos $ echo */*/ Downloads/sync/ Downloads/testdir/ snap/gnome-calculator/ The aim is to reduce the number of commands to get the output. Can there be a single echo statement that combines the output of echo .*/ and echo ./* other than echo .*/ */?
It is the shell that does it. Not echo. This may be more like what you are trying to do. ( shopt -s dotglob; echo * ) It lists all files, but not . and ... It works in bash.
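If changing shell options is off the table, the same kind of two-pattern trick used elsewhere for dot names collapses this into one plain echo (add shopt -s nullglob first if a pattern might match nothing):

echo .[!.]* ..?* *    # hidden entries (minus . and ..) first, then visible ones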
Can echo be combined to produce a single result of .*/ and */?
1,551,884,463,000
Examples: .bashrc .config/fish/config.fish I would like to know which is more common and what pros and cons they each have. I imagine a dotfile would be easier to change, since it is right in the home directory, but it seems .config would be easier to carry around, since it is one directory with everything in it. Do applications usually support just one, or both? Would it be a good idea to pick one, then symlink for each application? For example, if I wanted a dotfile, I could use ln .config/fish/config.fish .fish and just edit .fish, right?
Dotfiles are the older form, and I believe avoiding them completely will be difficult unless you use a distribution that insists on patching every included piece of software to use the .config directory tree instead of plain dotfiles. Many old applications will have a long history of using a particular dotfile; some may have their own dot directory. Others may actually have both: for example, vim supports .vimrc, but also a .vim/ directory with multiple files and a specific sub-directory structure. The .config directory structure is based on the XDG Base Directory Specification. It was initially taken up by Desktop Environments like GNOME and KDE, as they both originally had a lot of per-user configuration files and had both already independently chosen somewhat similar sub-directory solutions. For GUI file managers, the concept of hidden files can be problematic: if you choose not to display file and directory names beginning with a dot by default, following the classic unix-style behavior, the existence and function of dot files will not be easily discoverable by a GUI user. And if you choose not to hide the dot files and directories, you get a lot of clutter in your home directory, which is in some sense the absolute top level of your personal workspace. Both ways will make someone unhappy. Pushing the per-user configuration files to a dedicated sub-directory can be an attractive solution, as having just one sub-directory instead of a number of dot files and/or dot directories will reduce clutter when "hidden" files are displayed in a GUI, and the difference in ease of access is not too big. But it flies in the face of long-standing user expectations: (some) dotfiles "have always been here and named like this". This is going to be a very opinion-based issue. If the dotfiles are not related to login access or some other privileged access control, you can use symlinks to bridge from one convention to another, whichever way you prefer. But if you really edit a specific configuration file so often that ease of access is important, perhaps you might want to create a shell alias or desktop icon/menu item that opens the actual configuration file in your favorite editor immediately (using an absolute pathname) instead? It could be even more convenient. Some dotfiles and directories are accessed by privileged processes (e.g. as part of authentication and access control), like ~/.ssh, ~/.login_conf etc., and they cannot normally be replaced by symbolic links, as these processes want the actual file instead of a symbolic link in the designated location in order to disallow various kinds of trickery and exploits. If you want to relocate these files, it must be done by modifying the configuration of the appropriate process, usually system-wide.
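As a concrete example of bridging by symlink: bash only ever looks for ~/.bashrc, so you can keep the real file under ~/.config and leave a link where the program insists on looking (the ~/.config/bash location is chosen here purely for illustration):

mkdir -p ~/.config/bash
mv ~/.bashrc ~/.config/bash/bashrc
ln -s ~/.config/bash/bashrc ~/.bashrc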
What is the difference between dotfile and dot config? [duplicate]