date | question_description | accepted_answer | question_title
---|---|---|---|
1,382,410,950,000 |
When I send myself a mail with:
echo "test" | mail -n -s "test" [email protected]
I get the following error in /var/log/exim4/mainlog:
Error in system filter: malformed numerical string ""
How can I find the error in the system filter?
if $h_X-Spam_score_int is above 49
and foranyaddress $recipients ($thisaddress contains "@example.at")
then
headers add "Old-Subject: $h_subject"
headers remove "Subject"
headers add "Subject: *** SPAM ($header_X-Spam_score points) *** $h_old-subject"
headers remove "Old-Subject"
#save /var/mail/suspect_spam
finish
endif
|
This command gives you the name of the system filter file:
$ /usr/sbin/exim4 -bP system_filter
It's unset by default, so if it contains something, it must be set somewhere in your Exim configuration.
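If it does contain something, a hedged way to locate where it is set under the Debian layout (paths as in a standard Debian exim4 install) is to grep both the configuration directory and the generated file:
grep -rn system_filter /etc/exim4/ /var/lib/exim4/config.autogenerated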
| Exim: Error in system filter: malformed numerical string "" |
1,382,410,950,000 |
I have exim installed and configured as "internet site; mail is sent and received directly using SMTP". Mail is stored in /home/*user*/Maildir and it is really there. I can send and receive mail globally and internally, but my Debian Wheezy's mail program isn't showing any of it.
Does mail support the Maildir format at all? The messages are there; I can access them via mutt -f Maildir. It's convenient to see the number of new e-mails at SSH logon.
|
Okay, @jordanm pointed me in the right direction, but the information is scattered across the Net, so I think it is worth posting a short guide myself.
Install mailutils and heirloom-mailx packages:
sudo apt-get install mailutils heirloom-mailx
Update alternatives for mailx — choose /usr/bin/heirloom-mailx:
sudo update-alternatives --config mailx
The last part: update three files in /etc/pam.d/. Each of the three following files contains a line starting with session optional pam_mail.so; update it with the following values:
/etc/pam.d/login:
session optional pam_mail.so dir=~/Maildir standard
/etc/pam.d/su:
session optional pam_mail.so dir=~/Maildir nopen
/etc/pam.d/sshd:
session optional pam_mail.so dir=~/Maildir standard
Sources: Ask Ubuntu and this blog post.
| Debian: exim, Maildir and mail |
1,382,410,950,000 |
My exim4 is causing a segfault on sending an email message whenever I use AUTH LOGIN authentication. However, sending the email using AUTH PLAIN works like a charm. Both auth methods connect to the Dovecot authenticator.
Exim4 info:
Exim version 4.92 #3 built 09-Sep-2021 16:25:33
Copyright (c) University of Cambridge, 1995 - 2018
(c) The Exim Maintainers and contributors in ACKNOWLEDGMENTS file, 2007 - 2018
Berkeley DB: Berkeley DB 5.3.28: (September 9, 2013)
Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DANE DKIM DNSSEC Event OCSP PRDR PROXY SOCKS SPF TCP_Fast_Open Experimental_ARC Experimental_DCC Experimental_DMARC Experimental_DSN_info
Lookups (built-in): lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmjz dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite
Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa tls
Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect
Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
Malware: f-protd f-prot6d drweb fsecure sophie clamd avast sock cmdline
Fixed never_users: 0
Configure owner: 0:0
Size of off_t: 8
Configuration file search path is /etc/exim4/exim4.conf:/var/lib/exim4/config.autogenerated
Configuration file is /var/lib/exim4/config.autogenerated
Here is the segfault message:
Sep 13 12:57:36 tornavacas kernel: exim4[12679]: segfault at 0 ip 00007fdd2d854206 sp 00007ffe23909ac8 error 4 in libc-2.28.so[7fdd2d7de000+148000]
Sep 13 12:57:36 tornavacas kernel: Code: 0f 1f 40 00 66 0f ef c0 66 0f ef c9 66 0f ef d2 66 0f ef db 48 89 f8 48 89 f9 48 81 e1 ff 0f 00 00 48 81 f9 cf 0f 00 00 77 6a <f3> 0f 6f 20 66 0f 74 e0 66 0f d7 d4 85 d2 74 04 0f bc c2 c3 48 83
And here are the last lines of the strace output:
[pid 16595] munmap(0x7f5e7f800000, 2097152) = 0
[pid 16595] munmap(0x7f5e7df65000, 331776) = 0
[pid 16595] munmap(0x7f5e7fb1a000, 135168) = 0
[pid 16595] exit_group(1) = ?
[pid 16595] +++ exited with 1 +++
[pid 16592] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 16595
[pid 16592] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=16595, si_uid=106, si_status=1, si_utime=2, si_stime=1} ---
[pid 16592] alarm(0) = 30
[pid 16592] rt_sigaction(SIGCHLD, {sa_handler=SIG_IGN, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f5c6ba6b840}, {sa_handler=SIG_DFL, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f5c6ba6b840}, 8) = 0
[pid 16592] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
[pid 16592] +++ killed by SIGSEGV +++
<... select resumed> ) = ? ERESTARTNOHAND (To be restarted if no handler)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=16592, si_uid=106, si_status=SIGSEGV, si_utime=0, si_stime=1} ---
rt_sigaction(SIGCHLD, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f5c6bc07730}, NULL, 8) = 0
rt_sigreturn({mask=[]}) = -1 EINTR (Interrupted system call)
wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGSEGV}], WNOHANG, NULL) = 16592
wait4(-1, 0x7fff60755674, WNOHANG, NULL) = -1 ECHILD (No child processes)
rt_sigaction(SIGCHLD, {sa_handler=0x55cf02123500, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f5c6bc07730}, NULL, 8) = 0
select(11, [3 4 5 6 7 8 9 10], NULL, NULL, NULL
Here is how I am reproducing the problem:
#!/usr/bin/expect
set timeout 30
proc abort {} { exit 2 }
spawn nc tornavacas.domain.com 587
expect default abort "220 "
send "EHLO mypc\r"
expect default abort "\n250 "
send "AUTH LOGIN\r"
expect default abort "\n334 "
send "ZGlzZ3Vpc2VkQGRvbWFpbi5jb20=\r"
expect default abort "\n334 "
send "cGFzc3dvcmQ=\r"
send "MAIL FROM:[email protected]\r"
expect default abort "\n250 "
send "RCPT TO:[email protected]\r"
expect default abort "\n250 "
send "DATA\r"
expect default abort "\n354 "
send "Subject: Mensaje de prueba de Microsoft Outlook\r"
send "\r"
send "This is a multipart message in MIME format.\r"
send ".\r"
expect default abort "\n250 "
send "QUIT\r"
On executing this script, I get the following output:
../..
DATA
354 Enter message, ending with "." on a line by itself
Subject: Mensaje de prueba de Microsoft Outlook
This is a multipart message in MIME format.
.
Nonetheless, if I send the same message using AUTH PLAIN, it works:
#!/usr/bin/expect
set timeout 30
proc abort {} { exit 2 }
spawn nc tornavacas.domain.com 587
expect default abort "220 "
send "EHLO mypc\r"
expect default abort "\n250 "
send "AUTH PLAIN AGRpc2d1aXNlZEBkb21haW4uY29tAHBhc3N3b3Jk\r"
expect default abort "\n235 "
send "MAIL FROM:[email protected]\r"
expect default abort "\n250 "
send "RCPT TO:[email protected]\r"
expect default abort "\n250 "
send "DATA\r"
expect default abort "\n354 "
send "Subject: Mensaje de prueba de Microsoft Outlook\r"
send "\r"
send "This is a multipart message in MIME format.\r"
send ".\r"
expect default abort "\n250 "
send "QUIT\r"
The output for the above command is this:
DATA
354 Enter message, ending with "." on a line by itself
Subject: Mensaje de prueba de Microsoft Outlook
This is a multipart message in MIME format.
.
250 OK id=1mPk9v-0004O2-Bp
As you can see, the email server now replies with the 250 code, whereas before it did not reply at all because it died.
The thing is, authentication works in both cases, but something changes when the user authenticates with the LOGIN method instead of the PLAIN one.
I would like to support both methods. Do you have any idea what could be causing the segfault after AUTH LOGIN?
Update
I have been investigating a little bit more, and I have found that the cause of the problem is in the check_data ACL, particularly in the following snippet:
warn add_header = :at_start: ${authresults {$primary_hostname}}
Theoretically, that line should only add a header with the authresults expansion item to the email. However, with it commented out the segfault does not happen, whereas it does when the warn directive is active.
|
I found that the cause of the problem was in the check_data ACL, particularly in the following snippet:
warn add_header = :at_start: ${authresults {$primary_hostname}}
Theoretically, that line should only add a header with the authresults expansion item to the email. However, with it commented out the segfault does not happen, whereas it does when the warn directive is active.
| Exim4 segfault using AUTH LOGIN |
1,382,410,950,000 |
My setup includes a script that sends a mail to a local user via the exim command line. This script is called as root. (Reality is of course more complicated, but this seems to be a minimal working example.)
/home/jens/send_mail:
#!/bin/sh
cat /home/jens/testmail | /usr/bin/exim -bm jens
Running this script from a root shell works fine. The mail is delivered without problems.
Now I try to automate this script and call it from a systemd service:
/etc/systemd/system/send_mail.service:
[Unit]
Description=Send mail to jens
[Service]
Type=oneshot
ExecStart=/home/jens/send_mail
[Install]
WantedBy=multi-user.target
Running systemctl start send_mail.service does not deliver the mail, but places it in the exim queue to be delivered later. In my real setup, I find lines reading
... exim[275968]: 2020-07-16 23:09:40 1jwB8O-0019n4-Lj failed to write to main log: length=91 result=-1 errno=9 (Bad file descriptor)
... exim[275968]: write failed on panic log: length=116 result=-1 errno=9 (Bad file descriptor)
in my journal. To my knowledge, I have no exim-specific environment variables for my root shell. What could be the cause of this different behavior?
I am using exim 4.94 on Arch Linux. Please ask if you need further details.
|
This issue seems to be caused by systemd killing the spawned exim process as soon as send_mail has finished executing.
It can be solved by either waiting an appropriate time at the end of send_mail, or setting the KillMode option in the systemd unit to process or none (which the manual recommends against).
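As a sketch, the unit from the question with that option added (KillMode=process tells systemd to kill only the unit's main process on stop, leaving the forked exim delivery process running):
[Unit]
Description=Send mail to jens
[Service]
Type=oneshot
ExecStart=/home/jens/send_mail
# keep systemd from killing the forked exim child when the main process exits
KillMode=process
[Install]
WantedBy=multi-user.target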
Sources:
https://systemd-devel.freedesktop.narkive.com/nV1QMO8j/exim4-only-queues-mails-sent-by-systemd-service
https://www.freedesktop.org/software/systemd/man/systemd.kill.html
| Difference between oneshot systemd unit and root command line? |
1,535,930,122,000 |
I've got local_acl_check_data to reject the typical spammer tactic of using the same address as From: and To:, but since some less-spammy sources, such as Yahoo Groups, do this, I'm using a whitelist as well. Here is the ACL:
# block spammers who use the same "from" and "to" address
accept
senders = ${if exists{CONFDIR/local_sender_whitelist}\
{CONFDIR/local_sender_whitelist}\
{}}
deny
condition = ${if eqi{${address:$h_from:}}{${address:$h_to:}}{true}{false}}
log_message = rejecting spam with to:${address:$h_to:} and from:${address:$h_from:}
message = Message identified as spam. If you think this is wrong, get in touch with postmaster
Problem is, when I test with:
jcomeau@tektonic:~$ cat bin/testacl
exim4 -bh 66.163.168.186 <<EOT
helo tester
mail from: [email protected]
rcpt to: [email protected]
data
from: [email protected]
to: [email protected]
subject: should be ok
this one should not reject
.
mail from: [email protected]
rcpt to: [email protected]
data
from: [email protected]
to: [email protected]
subject: should reject
this one should be rejected
.
quit
EOT
It works as expected: the first message is accepted because it found yahoogroups.com in the whitelist, and the second was rejected. But in real operation, the yahoogroups.com emails are rejected by that ACL along with the spammers. I'm using 4.72-6, and this has happened for all the versions I've been using for the last few years. I've run out of ideas.
As requested, the log of exim4 rejecting a message which should have passed:
jcomeau@tektonic:~$ grep -C2 Freecycle /var/log/exim4/rejectlog
2011-02-25 09:52:00 1Psz1U-00020g-79 H=n52c.bullet.mail.sp1.yahoo.com [66.163.168.186] F=<sentto-15991578-2122-1298645513-jc=example.com@returns.groups.yahoo.com> rejected after DATA: rejecting spam with to:[email protected] and from:[email protected]
Envelope-from: <sentto-15991578-2122-1298645513-jc=example.com@returns.groups.yahoo.com>
Envelope-to: <[email protected]>
--
MIME-Version: 1.0
I Message-ID:
Mailing-List: list [email protected]; contact [email protected]
Delivered-To: mailing list [email protected]
List-Id: <PetalumaFreecycle.yahoogroups.com>
Precedence: bulk
List-Unsubscribe: <mailto:[email protected]>
Date: 25 Feb 2011 14:51:53 -0000
F From: [email protected]
T To: [email protected]
Subject: [Petaluma Freecycle] Digest Number 2122
X-Yahoo-Newman-Property: groups-digest-trad-m
R Reply-To: "No Reply"<[email protected]>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
And here's what my testacl script shows for the first test:
>>> using ACL "acl_check_data"
>>> processing "accept"
>>> check senders = ${if exists{/etc/exim4/local_sender_whitelist}{/etc/exim4/local_sender_whitelist}{}}
>>> yahoogroups.com in "yahoogroups.com"? yes (matched "yahoogroups.com")
>>> [email protected] in "/etc/exim4/local_sender_whitelist"? yes (matched "yahoogroups.com" in /etc/exim4/local_sender_whitelist)
>>> accept: condition test succeeded
LOG: 1PuxAz-0005jZ-B0 <= [email protected] H=n52c.bullet.mail.sp1.yahoo.com (tester) [66.163.168.186] P=smtp S=380
250 OK id=1PuxAz-0005jZ-B0
|
The "sender", as Exim sees it is the envelope-from address, and that was in domain returns.groups.yahoo.com. Once I put that domain (completely; groups.yahoo.com doesn't work, neither does yahoo.com) into my local_sender_whitelist, the ACL worked.
It had worked during testing because I had used the envelope-from address of yahoogroups.com, the same as the From: address. Never bothered to check if that was the case in the emails from yahoo groups.
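For illustration, the whitelist is a plain file with one pattern per line (as the -bh trace above shows it being matched); a bare domain matches any address in that domain:
# /etc/exim4/local_sender_whitelist (illustrative contents)
returns.groups.yahoo.com
yahoogroups.com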
| exim4 on debian: why does this ACL work when testing with -bh but not in actual use? |
1,535,930,122,000 |
I would like to know where to find the source code of exim4 for the specific version and revision 4.89-2+deb9u4.
I have executed apt-get source but got an error message:
apt-get source exim4=4.89-2+deb9u4
E: Can not find version '4.89-2+deb9u4' of package 'exim4'
E: Unable to find a source package for exim4
For various other revisions of exim4, I was able to find the source code in this way.
I also found the source repository but it only has the source code of exim4 version 4.89-2+deb9u3~bpo8+1 and 4.89-2+deb9u5, not for 4.89-2+deb9u4.
Does it mean the source code of 4.89-2+deb9u4 is no longer available? Or is there any way to get it?
|
The source code for that particular version is available on snapshot.debian.org. If you install devscripts you can retrieve it by running
dget http://snapshot.debian.org/archive/debian-security/20190605T153608Z/pool/updates/main/e/exim4/exim4_4.89-2%2Bdeb9u4.dsc
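If devscripts is not installed yet, it is available from the standard repositories:
sudo apt-get install devscripts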
You can also use the corresponding tag in the Debian package’s repository.
| Where is the source code of exim4-4.89-2+deb9u4? |
1,535,930,122,000 |
I am using a Debian Jessie server and have setup exim4 to send me emails instead of postfix or sendmail.
That's when I started getting loads of emails as follows:
First:
Title: * SECURITY information for vultr.guest *
Body: vultr.guest: Dec 7 12:13:29 : root : unable to resolve host vultr.guest
Second:
Title: Cron test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp
Body:
This is an automatically generated Delivery Status Notification
THIS IS A WARNING MESSAGE ONLY.
YOU DO NOT NEED TO RESEND YOUR MESSAGE.
Delivery to the following recipient has been delayed:
[email protected]
Message will be retried for 2 more day(s)
Technical details of temporary failure:
The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720
[(10) example.com. [xxx.xxx.xxx.90]:25: socket error]
----- Original message -----
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=20120113;
h=from:to:subject:mime-version:content-type:content-transfer-encoding
:message-id:date;
bh=k/8GlT8DBvBIJzBOOfw8qR0kGPzj7m9ZR/aj+JOKBhg=;
b=eA6kpVtS0eNBO0CFBfLzlnaYwZ9/GubMaWTGUkG4MaxbNy55YxY2jZAuh3RHI2mo8Q
qp5OmKihchYTgCxcAx0xvJaXuuxDhoT9dCJ6YEIzqjmypWjpUEqoXkNu7uKU4Cd1vTfS
5/dSvE7zVE6TYe4L18vrOiYBEUNrJQ3lTdv//RrlHZs/f62GorIyMHgVL4XvkVNLWF/K
lK9SSybf9ee3KTKUxurBm1Tyah62Gk4/869Hynr1QEAjSAzM8sSKDyKH/KOZ06sDWtPQ
jE0Agxffk8RkhsFkEtIbpZBfS/zagGZ8+CXsGqR9541ylMAHGOGeYtRp4oiB8tVP2Sbv
h4Rw==
X-Received: by 10.129.114.10 with SMTP id n10mr3081975ywc.0.1449600002717;
Tue, 08 Dec 2015 10:40:02 -0800 (PST)
Return-Path: <[email protected]>
Received: from vultr.guest ([104.156.246.90])
by smtp.gmail.com with ESMTPSA id f203sm2998216ywf.45.2015.12.08.10.40.02
for <[email protected]>
(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Tue, 08 Dec 2015 10:40:02 -0800 (PST)
From: Cron Daemon <[email protected]>
X-Google-Original-From: [email protected] (Cron Daemon)
Received: from smmsp by vultr.guest with local (Exim 4.84)
(envelope-from <[email protected]>)
id 1a6NBB-0007Vu-Os
for [email protected]; Tue, 08 Dec 2015 13:40:01 -0500
To: [email protected]
Subject: Cron <smmsp@vultr> test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <MAILTO=root>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/var/lib/sendmail>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=smmsp>
Message-Id: <[email protected]>
Date: Tue, 08 Dec 2015 13:40:01 -0500
I edited /etc/cron.d/sendmail and first tried changing the MAILTO= line from root to my Gmail address. That did not help, so I commented it out, along with the line
*/20 * * * * smmsp test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp
That stopped the junk emails arriving every 20 minutes, but I still get frequent emails with the
Subject: * SECURITY information for vultr.guest *
Body:
This is an automatically generated Delivery Status Notification
THIS IS A WARNING MESSAGE ONLY.
YOU DO NOT NEED TO RESEND YOUR MESSAGE.
Delivery to the following recipient has been delayed:
[email protected]
Message will be retried for 1 more day(s)
Technical details of temporary failure:
The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720
[(10) example.com. [xxx.xxx.xxx.90]:25: socket error]
I did modify /etc/hostname and removed vultr.guest and replaced it with example.com. And in /etc/hosts I only have:
127.0.0.1 localhost
127.0.1.1 install.install install
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
What did I configure wrong? And how can I fix it?
UPDATE: The /etc/exim4/update-exim4.conf.conf contents are:
dc_eximconfig_configtype='satellite'
dc_other_hostnames=''
dc_local_interfaces='127.0.0.1 ; ::1'
dc_readhost=''
dc_relay_domains=''
dc_minimaldns='false'
dc_relay_nets=''
dc_smarthost='smtp.gmail.com::587'
CFILEMODE='644'
dc_use_split_config='false'
dc_hide_mailname='true'
dc_mailname_in_oh='true'
dc_localdelivery='mail_spool'
|
It seems to me that VULTR's tutorial is not correct. When you configure exim4 by dpkg-reconfigure exim4-config, they tell you to choose mail sent by smarthost; no local mail, and configure it as follows:
System mail name: YOUR_HOSTNAME
IP-addresses to listen on for incoming SMTP connections: 127.0.0.1 ; ::1
Other destinations for which mail is accepted: <BLANK>
Visible domain name for local users: <BLANK>
IP address or host name of the outgoing smarthost: smtp.gmail.com::587
Keep number of DNS-queries minimal (Dial-on-Demand)? No
Split configuration into small files? No
Root and postmaster mail recipient: <BLANK>
But I doubt that Other destinations for which mail is accepted should be left blank. When you configure it, add your email address or your example.com domain there.
Otherwise, try to edit the following locations:
/etc/aliases:
root: [email protected]
mailer-daemon: [email protected]
postmaster: [email protected]
nobody: [email protected]
hostmaster: [email protected]
usenet: [email protected]
news: [email protected]
webmaster: [email protected]
www: [email protected]
www-data: [email protected]
ftp: [email protected]
abuse: [email protected]
noc: [email protected]
security: [email protected]
*: [email protected]
and edit /etc/email-addresses to include the user:email combo:
root: [email protected]
mail: [email protected]
*: [email protected]
Then restart the services:
service sendmail restart
service exim4 restart
| Delivery Status Notification (Delay) emails from my server? |
1,535,930,122,000 |
I successfully installed and configured Exim4 on my Debian/Squeeze machine, so now I am able to send outgoing emails with a command like this:
exim4 -v [email protected]
From: [email protected]
To: [email protected]
Subject: Test email
Body of the email
.
Is there a similar command to RETRIEVE emails into the Maildir folder?
NOTE: The emails I want to retrieve are from another email server on the same network. Typically, I use a regular email client to connect to the server via IMAP and SSL.
|
While it is quite viable to use exim to send emails, your question reads like you are using the wrong tools for whatever your overall goal is. exim cannot retrieve emails from another server, because exim is a mail transfer agent, cf. RFC 821. Accessing a user's mailbox and retrieving email (what you want to do) is a completely different thing from sending and relaying emails (what exim was developed for). To sync mailboxes you can use, for example, imapsync or offlineimap.
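As a minimal sketch of the offlineimap approach (hostname, user and repository names are placeholders; the keys are standard offlineimap options), a ~/.offlineimaprc pulling mail from a remote IMAP server into a local Maildir could look like:
[general]
accounts = main

[Account main]
localrepository = localMaildir
remoterepository = remoteIMAP

[Repository localMaildir]
type = Maildir
# deliver into the Maildir your local tools already read
localfolders = ~/Maildir

[Repository remoteIMAP]
type = IMAP
remotehost = imap.example.com
remoteuser = username
ssl = yes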
| How do I retrieve email with Exim4? |
1,535,930,122,000 |
I have a suspicious log line where someone logged in to the client webmail, and there is no remote or local IP logged; it just says ::1. What does that mean?
The line is like this:
H=(webmail.domain.com) [::1]:33260
|
::1 is just the IPv6 address for localhost. Therefore, someone (probably you?) logged in to the webmail interface from the server itself.
| What does ::1 stand for in the rip and lip fields of the Exim mainlog? |
1,535,930,122,000 |
I need to update exim on one of my servers to at least version 4.86 to use it with rspamd, but the latest version provided by the OS is 4.82.
Is there any comfortable way to get the latest version, besides building it from source?
|
This is a somewhat generic answer on installing newer software on an older version of a Debian derivative.
The first thing is to make sure that you actually want a newer version. Contrary to a popular misconception, newer isn't always better. The newer version usually has bug fixes but it also has new bugs. Distributions apply fixes for major bugs and especially for security issues, so if all you care about is bug fixes in general, you should stick with your distribution's package. In your case, you need a new feature, so this warning does not apply to you.
The easiest way to get a newer version is if someone has already done the work for you. Check whether a backport package is available for your distribution. For Ubuntu, backports are listed on the package page on the website. For exim4, there is no backport.
Also check if the application developer has packages available. This doesn't seem to be the case for Exim.
Lacking an official package, check if there's an unofficial package. With an unofficial package, there's more of a risk that the maintainer of that package won't make timely updates to fix security issues and major bugs, so evaluate the source and decide whether you want to take the risk. For Ubuntu, and sometimes for other Debian derivatives, check whether a PPA is available. For exim with rspamd support, you're half in luck. There's an exim-rspamd PPA but it doesn't seem to be actively maintained so it probably has security holes by now.
A radically different approach is to install a more recent distribution in a chroot environment, and run the program from this more recent distribution. This consumes a lot of disk space and bandwidth compared to just installing one application, but those are cheap compared with human labor, and this method is very light on labor, especially for Debian derivatives thanks to schroot. See my guide on using schroot on Debian derivatives. This is a good method for “end user” applications, but for a system service like exim4, it might not be so easy.
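As a rough sketch of that approach on an Ubuntu host (release name and target path are assumptions; schroot setups are typically built on a tree created with debootstrap):
sudo debootstrap xenial /srv/chroot/xenial http://archive.ubuntu.com/ubuntu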
For server-side software, you may be able to find a chroot-like package in the form of a Docker container. Many Docker images with exim are available. I have no idea about their quality, reliability and trustworthiness.
For open source software, installing from source is always a possibility. It may be more or less painful depending on what other software (typically libraries) the program depends on. For GUI programs that require a couple dozen libraries and keep updating their minimum requirements, it can be very difficult to keep up. For a program like exim that has very few dependencies, it should be pretty easy. The main constraint is that you have to watch for, and apply, security updates as they come out. This can introduce risk if the application developer only provides security updates for the latest version (which may introduce bugs that affect you). Check if a long-term-support version is available (there isn't one for Exim).
In your case, I'd either go for a Docker container if there's a reliable one, or build your own deb package starting from the work that was done for the exim-rspamd PPA.
| Need exim >=4.86 on Ubuntu 14.04 LTS |
1,535,930,122,000 |
Does anyone know how to change the default error messages generated by exim for Unrouteable address or quota exceeded?
I found this "Customizing error messages" , but I don't know how to use it...
Where to save these files?
What is the meaning of >>>>>>> .linelength 80em? Can/should I change it?
-- My exim version is 4.84.2 on Debian 8
|
I had been looking at the wrong version of the documentation.
I set bounce_message_file to a file containing a template in the format described in the exim 4.84 documentation, and it's working perfectly.
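For reference, the option goes in the main section of the configuration; a hedged example with an illustrative path:
bounce_message_file = /etc/exim4/bounce_message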
| Custom Error messages in Exim |
1,535,930,122,000 |
I am attempting to configure Exim in such a way that clients who wish to relay email through the server must supply a single passphrase. The file /etc/exim4/conf.d/auth/30_exim4-config_examples contains the following configuration lines commented out:
# plain_server:
# driver = plaintext
# public_name = PLAIN
# server_condition = "${if crypteq{$auth3}{${extract{1}{:}{${lookup{$auth2}lsearch{CONFDIR/passwd}{$value}{*:*}}}}}{1}{0}}"
# server_set_id = $auth2
# server_prompts = :
# .ifndef AUTH_SERVER_ALLOW_NOTLS_PASSWORDS
# server_advertise_condition = ${if eq{$tls_cipher}{}{}{*}}
# .endif
I'm not sure I fully understand exactly what is going on here.
Why is server_prompts empty when the login_server example includes prompts for both a username and password? Shouldn't there be a prompt for a password here?
Where is the password actually set?
I fully intend to use TLS to secure communication between client and server. From what I understand, the last three lines in the snippet above cause the authentication method to be advertised only if TLS is enabled or AUTH_SERVER_ALLOW_NOTLS_PASSWORDS is set.
|
Leaving server_prompts as-is gives you the default (RFC compliant) behaviour, otherwise you might need to modify your clients to supply additional values.
The password is looked up in the CONFDIR/passwd file, CONFDIR is equal to /etc/exim4 on Debian.
Is your intention that all users use a common password? Then you could change the server_condition. Something like:
server_condition = ${if eq{$auth3}{mysecret}{yes}{no}}
(With the PLAIN authenticator, $auth2 is the username and $auth3 the password, so this accepts any username that supplies the shared passphrase.)
Do check out the excellent exim documentation, e.g. here
| Configuring Exim on Debian to authenticate using only a password? |
1,535,930,122,000 |
Some processes on my server send mail to various system accounts which all goes to root on the local machine. I want the root account to be an alias for my (external) email address. I'm using exim4 version 4.86_2
I have the following in /etc/aliases:
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
root: [email protected]
I've run the "newaliases" command, but when I send a mail to "root" it goes to root@localdomain.
How can I make the server read /etc/aliases or send system mail out to an external email address?
|
It turns out that the host didn't know what the canonical name of the machine was, so it was assuming all local mail was in fact remote. I've fixed it now as per this answer.
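The usual shape of that fix (names are placeholders) is making the fully qualified name resolvable, for example with an /etc/hosts entry:
127.0.1.1 myhost.example.com myhost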
| exim4 not using /etc/aliases |
1,535,930,122,000 |
Not so long ago I found that exim is delivering emails via the mail4root router into /var/mail/mail. Example from the exim log:
2016-07-19 09:39:02 1bPOgI-000370-1Q <= [email protected] U=root P=local S=78459
2016-07-19 09:39:02 1bPOgI-000370-1Q => /var/mail/mail <[email protected]> R=mail4root T=address_file
2016-07-19 09:39:02 1bPOgI-000370-1Q Completed
2016-07-19 09:40:18 Start queue run: pid=12117
2016-07-19 09:40:18 End queue run: pid=12117
2016-07-19 10:09:02 1bPP9K-00042T-LK <= [email protected] U=root P=local S=78459
2016-07-19 10:09:02 1bPP9K-00042T-LK => /var/mail/mail <[email protected]> R=mail4root T=address_file
2016-07-19 10:09:02 1bPP9K-00042T-LK Completed
2016-07-19 10:10:18 Start queue run: pid=15678
2016-07-19 10:10:18 End queue run: pid=15678
Can someone explain what causes it?
|
As a security measure Exim will not deliver email to root. The mail4root router is a last ditch handler to deliver mail for root to the mailbox for mail.
Normally, an alias for root would be configured in /etc/aliases to deliver to the system administrator's personal mailbox. A number of other aliases redirect to root, since those addresses should be handled by the system administrator; they too end up wherever the root alias points, if one exists.
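So the usual fix is to point the root alias at a deliverable mailbox, e.g. in /etc/aliases (the address is a placeholder):
root: admin@example.com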
| Exim4 sends strange emails on root |
1,535,930,122,000 |
I installed exim4 on a Debian server, configured it to use Dovecot LMTP delivery, and everything works nicely. But I'm having problems with a bit of a spam attack right now. I installed fail2ban but it's a bit slow to catch up. Also, I was looking at the actions for the exim4 jail and I saw that there can be error messages with 535, Sender verify failed and unknown users, and I think all of those sound like better ban reasons than the current relay not permitted:
2015-11-23 09:03:25 H=118-160-211-95.dynamic.hinet.net (xxx.xxx.xxx.xxx) [118.160.211.95] F=<[email protected]> rejected RCPT <[email protected]>: relay not permitted
So I wanted to ask if there is a way to force authentication, given that this is not even a mail server I am responsible for (163.com; I'm responsible for example.com only), and to give a different error message (like not authenticated)?
Also, as a side note, would this mean that I somehow forgot to add some configuration somewhere, like a missing auth check in some ACL? (Every actual person from example.com needs to enter the real password before sending an email, and if it's wrong they get an error message saying wrong password.)
|
You can't force a remote client to attempt to authenticate, because you don't know until the RCPT TO: whether the client is attempting to deliver an email to your server (which doesn't require authentication unless you have a very unusual configuration like only accepting mail from known mail servers) or it is trying to relay through your mail server without authorisation.
The RCPT TO stage of an SMTP session comes well after any AUTH negotiation (if any).
| How to force valid authentication in exim before sending at all? |
1,535,930,122,000 |
I want to use exim to send emails via my ISP's SMTP server. However, the Arch wiki is quite confusing (exim is much simpler on a Debian system). I followed instructions in the final section, modifying the SMTP address from mail.internode.on.net to my SMTP server, and modifying *@* [email protected] Ffr to *@* $1@my_emaildomain.com Ffr. This worked when I was connected to the internet via my ISP.
However, to use this on my work network, I need to authenticate. I tried to follow the instructions listed for Gmail, while changing the url, but this failed with
authenticator iinet_route: cannot find authenticator driver "manualroute"
How can I set up exim for authentication? (FWIW I'm with iinet.)
EDIT
I realised I had been putting the "Gmail"-like settings in the wrong parts. I moved them around, and am no longer getting the error messages. However, exim now fails silently. I get no error message, but no mail is delivered.
Here are the changes I made to the factory default:
--- exim.conf.factory_default 2015-08-03 02:14:31.000000000 +1000
+++ exim.conf 2015-11-10 08:09:54.196287461 +1100
@@ -402,7 +402,7 @@
# Deny unless the sender address can be verified.
- require verify = sender
+ #require verify = sender
# Accept if the message comes from one of the hosts for which we are an
# outgoing relay. It is assumed that such hosts are most likely to be MUAs,
@@ -552,14 +552,19 @@
# If the DNS lookup fails, no further routers are tried because of the no_more
# setting, and consequently the address is unrouteable.
-dnslookup:
- driver = dnslookup
- domains = ! +local_domains
- transport = remote_smtp
- ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
+#dnslookup:
+# driver = dnslookup
+# domains = ! +local_domains
+# transport = remote_smtp
+# ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
# if ipv6-enabled then instead use:
# ignore_target_hosts = <; 0.0.0.0 ; 127.0.0.0/8 ; ::1
- no_more
+# no_more
+
+iinet_route:
+ driver = manualroute
+ transport = iinet_relay
+ route_list = * mail.iinet.net.au
# This alternative router can be used when you want to send all mail to a
@@ -735,6 +746,12 @@
address_reply:
driver = autoreply
+iinet_relay:
+ driver = smtp
+ port = 587
+ hosts_require_auth = <; $host_address
+ hosts_require_tls = <; $host_address
+
######################################################################
@@ -769,6 +786,7 @@
# There are no rewriting specifications in this default configuration file.
begin rewrite
+*@* [email protected] Ffr
@@ -821,6 +839,12 @@
# server_advertise_condition = ${if def:tls_in_cipher }
+iinet_login:
+ driver = plaintext
+ public_name = LOGIN
+ hide client_send = : [email protected] : PASSWORD_HERE
+
+
######################################################################
# CONFIGURATION FOR local_scan() #
######################################################################
And here is my full configuration file.
EDIT 2
I also tried changing the port to 465, which also fails silently. (FWIW 587 works fine in msmtp.)
EDIT 3
Here is the information on a failed email, using exim -Mvl. The original attempt to send used echo body | /usr/bin/mail -s subject -r [email protected] [email protected]
2015-11-10 11:53:39 Received from [email protected] U=sparhawk P=local S=428 id=20151110005339.ag4kfrHaJ%[email protected]
2015-11-10 11:53:41 [email protected] R=iinet_route T=iinet_relay defer (-42): authentication required but authentication attempt(s) failed
EDIT 4
I ran the mail command again (as per edit 3), and got a slightly different error. I've also linked to the full output of exim -d+all -M messageID <ID>
$ sudo exim -Mvl 1ZwMHr-0008I4-92
2015-11-11 14:41:31 Received from [email protected] U=lee P=local S=426 id=20151111034131.VRuQn__aN%[email protected]
2015-11-11 14:41:31 [email protected] R=iinet_route T=iinet_relay defer (-53): retry time not reached for any host
Full debug output is here.
|
According to the error you get, you have put the stanzas from the gmail example in the wiki in the wrong sections. The exim config is built up in distinct parts, in order:
main: contains global definitions and settings
acl: access control lists
routers: how to handle an address; the first hit is used, so order is important
transports: defines ways of disposing of a message; these are referenced from the routers above; order is not important
retry: how often to retry delivery
rewrite: changing addresses, e.g. to map internal addresses to globally usable addresses
authenticators: defines ways of authenticating, both as server and as client
The error message authenticator iinet_route: cannot find authenticator driver "manualroute" clearly indicates that you have put a router stanza in the authenticators section.
Put each stanza in the relevant section (i.e. the router definition after the line begin routers and before the line begin transports, taking order into account; etc.) and the error should go away.
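As a skeleton (the stanza placements follow the answer above; the main section is everything before the first begin line):
# main options (global settings) go here, before any "begin" line

begin acl

begin routers
# iinet_route goes here

begin transports
# iinet_relay goes here

begin retry

begin rewrite
# the *@* rewrite rule goes here

begin authenticators
# iinet_login goes here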
| How can I set up exim to use my ISP's SMTP server (on a non-Debian system)? |
1,535,930,122,000 |
Summary: I have a mail server (exim 4, Debian 10) in an LXC container. The host is running Debian 11. Since yesterday evening spam traffic has been coming in that appears to come from the LXC Host. However, tcpdump logs show that it is actually remote traffic. What is going on?
This is an example of an exim4 log entry on the mail server, for a spam mail seemingly coming from the lxc host:
2023-07-23 11:15:51 1qNX42-009wSW-VR <= [email protected] H=LXCHOST (prvzvtrfnh) [LXCHOSTIPV4] P=esmtp S=615 [email protected]
Yet on the tcpdump logs on the host I see corresponding entries like this:
14:06:07.165374 IP 39.170.36.149.34307 > MAILSERVERCONTAINER.smtp: Flags [P.], seq 5672:5702, ack 1397, win 27, options [nop,nop,TS val 1151815058 ecr 475541370], length 30: SMTP: MAIL FROM:<[email protected]>
So the traffic actually comes from the (Chinese) IP 39.170.36.149. (This IP does not appear at all in the container logs.) Why, then, does this traffic appear to come from the host when it reaches the mail server?
The relevant network interfaces on the host are:
eno1, the physical interface
br0, a bridge connecting the physical interface with several lxc containers
The tcpdump command on the host that shows the spammy traffic is:
tcpdump -i br0 port 25 and dst host [MAILSERVERIPV4]
The bridge interface is setup like this in /etc/network/interfaces:
auto br0
iface br0 inet static
bridge_ports regex eth.* regex eno.*
bridge_fd 0
address HOSTADDRES
netmask 255.255.255.192
gateway HOSTGATEWAY
Both container and host are up to date with security updates. But the host's uptime is 248 days, so it is possible that it is running outdated binaries.
UPDATE
I think the problem was caused by an iptables rule on the host, -t nat -A POSTROUTING -o br0 -j MASQUERADE. This rule is intended for containers without an external IP to reach the internet. I have apparently misunderstood what this does. Shouldn't it only masquerade traffic that is routed from internal IPs to the internet? As I understand it, external traffic to the mail server is bridged and not routed at all. Also, it's only one particular spammer that was able to exploit my setup. The normal traffic to my mail server shows external IPs. How did the spammer do this?
UPDATE 2: The problems started after installing docker on the host. Could it be that docker and lxc interact in a way to create these problems?
|
I think the problem was caused by an iptables rule on the host
iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
This rule is intended for containers without an external IP to reach the internet.
What this rule does is masquerade any traffic going out through br0. It could be traffic going out from the host to a container, or it could (as intended) be traffic leaving the host and heading off to the wider Internet.
The problems started after installing docker on the host. Could it be that docker and lxc interact in a way to create these problems?
Yes, I would say that's quite likely. You will need to modify the rule to avoid masquerading local traffic.
As an example, let's assume your host is 192.168.1.1 (and maybe also has a public IPv4 address), and you have a hidden container subnet of 192.168.1.0/24. Docker has come along and grabbed 172.17.0.0/16.
We might suppose that this rule is intended to masquerade anything leaving the Docker subnet,
iptables -t nat -A POSTROUTING -o br0 --src 172.17.0.0/16 -j MASQUERADE
| Remote SMTP traffic appears to come from LXC Host to container |
1,535,930,122,000 |
I have exigrep output like this.
2019-02-02 17:03:00 1gpxky-0005ky-Mk <= [email protected] U=XXXXX P=local S=14529 [email protected] T="XXXXXXXXX" for [email protected]
2019-02-02 17:03:00 1gpxky-0005ky-Mk Sender identification U=XXXXX D=XXXXX.com [email protected]
2019-02-02 17:03:00 1gpxky-0005ky-Mk SMTP connection outbound 1549123380 1gpxky-0005ky-Mk XXXXX.com [email protected]
2019-02-02 17:03:01 1gpxky-0005ky-Mk => [email protected] R=dkim_lookuphost T=dkim_remote_smtp H=gmail-smtp-in.l.google.com [XXX.XXX.XXX.XXX] X=TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=yes C="250 2.0.0 OK 1549123381 m21si11695854lfc.90 - gsmtp"
2019-02-02 17:03:01 1gpxky-0005ky-Mk Completed
2019-02-02 15:48:22 1gpwaj-00081N-5J H=mx2.XXXXX.pl [XX.XX.XX.XX]:15240 Warning: "SpamAssassin as takapara detected message as NOT spam (2.4)"
2019-02-02 15:48:22 1gpwaj-00081N-5J H=mx2.XXXXX.pl [XX.XX.XX.XX]:15240 Warning: Message has been scanned: no virus or other harmful content was found
2019-02-02 15:48:22 1gpwaj-00081N-5J <= [email protected] H=mx2.XXXX.pl [XX.XX.XX.XX]:15240 P=esmtp S=72014 id=9c38a455-1b57-404a-ae68-87ed816473a8 T="XXXXXXXXXX" for [email protected]
2019-02-02 15:48:23 1gpwaj-00081N-5J => XXXX <[email protected]> R=virtual_user T=dovecot_virtual_delivery C="250 2.0.0 <[email protected]> +A/zNratVVyfaQAADQHPYA Saved"
2019-02-02 15:48:23 1gpwaj-00081N-5J Completed
And I have a number of these. But after doing an awk regex-style "grep", I got all the mail addresses, even those in the middle of a single block's output (see the second block in the example above).
I want to grep just the first line of each block and print its fifth field with awk (the sender mail address, which is on my server), but \n doesn't work.
I have code like this:
# cat /var/log/exim_mainlog | grep 2019-02-02 | exigrep {user_name} | awk '/^([0-9]*-[0-9]*-[0-9]*) ([0-9]*:[0-9]*:[0-9]*) ([0-9a-zA-Z]*-[0-9a-zA-Z]*-[0-9a-zA-Z]*) (<=).*\n/ {print $5}'
How do I match the end of line here?
|
awk and grep use '$' for the end-of-line marker (a feature of POSIX regular expressions). \n isn't part of that, whether basic regular expressions (the default for grep) or extended regular expressions (an option with grep, standard with awk).
See 9.3.8 BRE Expression Anchoring:
A <dollar-sign> ( '$' ) shall be an anchor when used as the last character of an entire BRE. The implementation may treat a <dollar-sign> as an anchor when used as the last character of a subexpression. The <dollar-sign> shall anchor the expression (or optionally subexpression) to the end of the string being matched; the <dollar-sign> can be said to match the end-of-string following the last character.
From the comment: if you want to print only the first match in awk, you could replace the action
{print $5}
with
{if (found) next; found = 1; print $5; }
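Putting it together, a sketch of the whole pipeline (the pattern is simplified; this prints the sender field of the first <= line only):
exigrep user_name /var/log/exim_mainlog | awk '/<=/ { if (found) next; found = 1; print $5 }'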
| Exim - What's the mark of EOL in exigrep output? |
1,535,930,122,000 |
I am trying to install Postfix and remove Exim4. How can I do this safely?
|
You can do this by using yum:
sudo yum remove exim4
| How to safely remove Exim4 on CentOS? [closed] |
1,535,930,122,000 |
I would like to configure Exim to allow SMTP AUTH logins using the same credentials as for a Dovecot IMAP server on the same machine, but I would also like to allow additional sets of credentials so I can allow network devices to send e-mails through the Exim server without giving them credentials to an IMAP mailbox.
So I have implemented a PLAIN and LOGIN auth for Exim, using driver = dovecot, and I can use SMTP AUTH with my Dovecot credentials, great.
However, if I add another two PLAIN and LOGIN auth methods, this time using driver = plaintext instead (to look up the AUTH credentials from a local file), I get an error saying:
two server authenticators (dovecot_login and file_login) have the same public name (LOGIN)
Is it correct that you can only have one SMTP AUTH method, and it is not possible to fall back and try any others if they are available?
|
I ended up asking the Exim devs about this and the answer is unfortunately that it cannot be done directly, as although the plaintext authenticator can be extended, the dovecot one cannot.
The only solution is to move to an external authentication method like SASL that both Exim and Dovecot can use.
| How to use multiple Exim SMTP AUTH methods (dovecot and plaintext) |
1,535,930,122,000 |
I would like to enable system-wide filtering so I can define some custom spam filtering. I am using the Ubuntu/Debian split configuration for Exim but cannot see where to define the system filter.
In a normal configuration, I would just add the following to the main configuration:
system_filter = /etc/mail/exim.filter
system_filter_user = Debian-exim
system_filter_group = Debian-exim
system_filter_file_transport = address_file
system_filter_pipe_transport = address_pipe
However, I am unsure as to where to add these in the split configuration setup.
|
Create a file called 30_exim4-config_system_filter in /etc/exim4/conf.d/main which contains the following:
# System wide filter:
# http://exim.org/exim-html-current/doc/html/spec_html/ch-systemwide_message_filtering.html
system_filter = /etc/mail/exim.filter
system_filter_user = Debian-exim
system_filter_group = Debian-exim
system_filter_file_transport = address_file
system_filter_pipe_transport = address_pipe
# System wide filter end.
Then run the following commands:
sudo exim -bF /etc/mail/exim.filter < /etc/mail/spam-test
sudo update-exim4.conf
sudo service exim4 restart
Your new filter should be working...
| Exim system filter with split configuration |
1,535,930,122,000 |
I can't find a guide for this (the ones I found are old and don't work) and I can't seem to manage the installation myself. I can't install Postfix because of its dependencies, so I'm going with Exim, which I installed through yum install exim (it was the latest version). However, I have no idea where to go from here. I know that I need to install Dovecot or Cyrus, and I want to install Horde (not Squirrelmail). I also want to keep MariaDB 10, which I installed from its repository, as well as PHP 5.5.
|
Dovecot 2.2.10 is now in the updates repo and can be installed via yum install dovecot. Horde can be obtained from remi's PHP repository, which features not only an up-to-date version of PHP but also packages for Horde and various of its modules. After enabling remi's repo, a simple yum install php-horde-imp should be sufficient to give you a starter for a webmail installation based on Horde and IMP. You'll still have to configure it accordingly, though.
As for MariaDB: The MariaDB folks haven't set up a repo for CentOS 7 and 10.x, yet. You can still use the CentOS 6 repositories for 7, but I'd advise caution as the packages for 6 don't fit that well into 7. E.g. they do not come with unit files for systemd, which is forcing systemd to use the shipped init scripts. Even worse, MariaDB-server 10.x is clashing heavily with mariadb-libs, which in turn is being pulled in as a dependency by a lot of packages such as exim-mysql. CentOS base repo is currently shipping MariaDB 5.5.37, which is the most current of the 5.5 branch. If you're content with that, use 5.5 for now and upgrade to 10.x once a repository for CentOS 7 is available. The alternative would be to compile a dummy rpm deprecating the mariadb-libs package, which essentially amounts to a dirty hack I cannot really recommend.
If you decide to use 5.5 now and upgrade to 10.x later, be warned that this is everything but hassle-free in my experience. Safest way I found has been to create a complete database dump, clear /var/lib/mysql, upgrade to 10, feed the dump to the new version and run mysql_upgrade.
Update: Upon closer inspection, it appears I've been talking rubbish. The MariaDB-shared package is satisfying the dependencies on mariadb-libs just fine. Install it and you'll be good to go.
| How do I install Exim and Horde on Centos 7 with MariaDB 10? |
1,357,727,051,000 |
I just formatted some disks. One I formatted as ext2; the other I want to format as ext4, to test how they perform.
Now, how do I know what kind of file system is in a partition?
|
How do I tell what sort of data (what data format) is in a file?
→ Use the file utility.
Here, you want to know the format of data in a device file, so you need to pass the -s flag to tell file not just to say that it's a device file but look at the content. Sometimes you'll need the -L flag as well, if the device file name is a symbolic link. You'll see output like this:
# file -sL /dev/sd*
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=63fa0104-4aab-4dc8-a50d-e2c1bf0fb188 (extents) (large files) (huge files)
/dev/sdb1: Linux rev 1.0 ext2 filesystem data, UUID=b3c82023-78e1-4ad4-b6e0-62355b272166
/dev/sdb2: Linux/i386 swap file (new style), version 1 (4K pages), size 4194303 pages, no label, UUID=3f64308c-19db-4da5-a9a0-db4d7defb80f
Given this sample output, the first disk has one partition and the second disk has two partitions. /dev/sda1 is an ext4 filesystem, /dev/sdb1 is an ext2 filesystem, and /dev/sdb2 is some swap space (about 4GB).
You must run this command as root, because ordinary users may not read disk partitions directly: if needed, add sudo in front.
| How do I know if a partition is ext2, ext3, or ext4? |
1,357,727,051,000 |
Is there a way to tell the kernel to give back the free disk space now? Like a write to something in /proc/ ? Using Ubuntu 11.10 with ext4.
This is probably an old and oft-repeated theme.
After hitting 0 free space (which I only noticed when my editor couldn't save the source code files I had open, which to my horror now show 0 bytes in the folder listing), I went on a deleting spree.
I deleted 100's of MB of large files both from user and from root, and did some hardlinking too.
Just before I did apt-get clean there was over 900MB in /var/cache/apt/archives, now there is only 108KB:
# du
108 /var/cache/apt/archives
An hour later there is still no free space and I cannot save my precious files open in the editor; but notice the disparity below:
# sync; df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda4 13915072 13304004 0 100% /
Any suggestions? I shut off some services/processes but not sure how to check who might be actively eating disk space.
More info
# dumpe2fs /dev/sda4
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 884736
Block count: 3534300
Reserved block count: 176715
Free blocks: 422679
Free inodes: 520239
First block: 0
Block size: 4096
Fragment size: 4096
|
Check with lsof to see if there are files held open. Space will not be freed until they are closed.
sudo /usr/sbin/lsof | grep deleted
will tell you which deleted files are still held open.
| Tell fs to free space from deleted files NOW |
1,357,727,051,000 |
I've got a question concerning block size and cluster size. Based on what I have read about them, I assume the following:
The block size is the physical size of a block, mostly 512 bytes. There is no way to change this.
The cluster size is the minimal size of a block that is readable and writable by the OS. If I create a new filesystem, e.g. ext3, I can specify this minimal block size with the switch -b. Almost all programs like dumpe2fs and mke2fs use block size as a name for cluster size.
If I have got the following output:
$ stat test
File: `test'
Size: 13 Blocks: 4 IO Block: 2048 regular file
Device: 700h/1792d Inode: 15 Links: 1
Is it correct that the size is the actual space in bytes, blocks are the physically used blocks (512 bytes each), and IO block relates to the block size specified when creating the FS?
|
I think you're confused, possibly because you've read several documents that use different terminology. Terms like “block size” and “cluster size” don't have a universal meaning, even within the context of filesystem literature.
Filesystems
For ext2 or ext3, the situation is relatively simple: each file occupies a certain number of blocks. All blocks on a given filesystem have the same size, usually one of 1024, 2048 or 4096 bytes. A file¹ whose size is between N blocks plus one byte and N+1 blocks occupies N+1 blocks. That block size is what you specify with mke2fs -b. There is no separate notion of clusters.
The FAT filesystem used in particular by MS-DOS and early versions of Windows has a similarly simple space allocation. What ext2 calls blocks, FAT calls clusters; the concept is the same.
Some filesystems have a more sophisticated allocation scheme: they have fixed-size blocks, but can use the same block to store the last few bytes of more than one file. This is known as block suballocation; Reiserfs and Btrfs do it, but not ext3 or even ext4.
Utilities
Unix utilities often use the word “block” to mean an arbitrarily-sized unit, typically 512 bytes or 1kB. This usage is unrelated to any particular filesystem or disk hardware. Historically, the 512B block did come about because disks and filesystems at the time often operated in 512B chunks, but the modern usage is just arbitrary. Traditional unix utilities and interfaces still use 512B blocks sometimes, though 1kB blocks are now often preferred. You need to check the documentation of each utility to know what size of block it's using (some have a switch, e.g. du -B or df -B on Linux).
In the GNU/Linux stat utility, the blocks figure is the number of 512B blocks used by the file. The IO Block figure is the preferred size for file input-output, which is in principle unrelated but usually an indication of the underlying filesystem's block size (or cluster size if that's what you want to call it). Here, you have a 13-byte file, which is occupying one block on the ext3 filesystem with a block size of 2048; therefore the file occupies 4 512-byte units (called “blocks” by stat).
Disks
Most disks present an interface that shows the disk as a bunch of sectors. The disk can only write or read a whole sector, not individual bits or bytes. Most hard disks have 512-byte sectors, though 4kB-sector disks started appearing a couple of years ago.
The disk sector size is not directly related to the filesystem block size, but having a block be a whole number of sectors is better for performance.
¹ Exception: sparse files save space.
| Difference between block size and cluster size |
1,357,727,051,000 |
For config auditing reasons, I want to be able to search my ext3 filesystem for files which have the immutable attribute set (via chattr +i). I can't find any options for find or similar that do this. At this point, I'm afraid I'll have to write my own script to parse lsattr output for each directory. Is there a standard utility that provides a better way?
|
Thanks to Ramesh, slm and Stéphane for pointing me in the right direction (I was missing the -R switch for lsattr). Unfortunately, none of the answers so far worked correctly for me.
I came up with the following:
lsattr -aR .//. | sed -rn '/i.+\.\/\/\./s/\.\/\///p'
This protects against newlines being used to make a file appear as being immutable when it is not. It does not protect against files that are set as immutable and have newlines in their filenames. But since such a file would have to be made that way by root, I can be confident that such files don't exist on my filesystem for my use case. (This method is not suitable for intrusion detection in cases where the root user may be compromised, but then neither is using the same system's lsattr utility which is also owned by the same root user.)
| How to search for files with immutable attribute set? |
1,357,727,051,000 |
Is it useful to use the -T largefile flag when creating a file system for a partition holding big files, like video and audio in FLAC format?
I tested the same partition with that flag and without it, and using tune2fs -l [partition] I checked under "Filesystem features" that both have "large_file" enabled. So, is it not necessary to use the -T largefile flag?
|
The -T largefile flag adjusts the number of inodes that are allocated at the creation of the file system. Once allocated, their number cannot be adjusted (at least for ext2/3, not fully sure about ext4). The default is one inode for every 16 KB of disk space. -T largefile makes it one inode for every megabyte.
Each file requires one inode. If you don't have any inodes left, you cannot create new files. But these statically allocated inodes take space, too. You can expect to save around 1.5 gigabytes for every 100 GB of disk by setting -T largefile, as opposed to the default. -T largefile4 (one inode per 4 MB) does not have such a dramatic effect.
If you are certain that the average size of the files stored on the device will be above 1 megabyte, then by all means, set -T largefile. I'm happily using it on my storage partitions, and think that it is not too radical of a setting.
However, if you unpack a very large source tarball of many files (think hundreds of thousands) to that partition, you have a chance of running out of inodes for that partition. There is little you can do in that situation, apart from choosing another partition to untar to.
You can check how many inodes you have available on a live filesystem with the dumpe2fs command:
# dumpe2fs /dev/hda5
[...]
Inode count: 98784
Block count: 1574362
Reserved block count: 78718
Free blocks: 395001
Free inodes: 34750
Here, I can still create 34 thousand files.
Here's what I got after doing mkfs.ext3 -T largefile -m 0 on a 100-GB partition:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/loop1 102369 188 102181 1% /mnt/largefile
/dev/loop2 100794 188 100606 1% /mnt/normal
The largefile version has 102 400 inodes while the normal one created 6 553 600 inodes, and saved 1.5 GB in the process.
If you have a good clue on what size files you are going to put on the file system, you can fine-tune the amount of inodes directly with the -i switch. It sets the bytes per inode ratio. You would gain 75% of the space savings if you used -i 65536 while still being able to create over a million files. I generally calculate to keep at least 100 000 inodes spare.
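As a sketch, assuming /dev/sdb1 is the target partition, creating the filesystem with one inode per 64 KiB and verifying the resulting inode count might look like:
$ mkfs.ext3 -i 65536 /dev/sdb1
$ tune2fs -l /dev/sdb1 | grep -i 'inode count'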
| largefile feature at creating file-system |
1,357,727,051,000 |
I have done many obscure system optimizations in the past, but I got rid of most of them after powertop told me I should set my USB ports to autosuspend, which forced them to an eternal sleep, and also after I realized the benefits of a higher swappiness.
But today, while looking at /etc/fstab, I noticed I had set the option commit=60 for / and /home. I remember that this was an optimization for laptops, to reduce the amount of writes to the disk, thus saving battery. But then I became concerned that this might cause data loss (sometimes my battery gets disconnected, and then on boot fsck tells me about a couple of orphan inodes).
While searching for an explanation for this option, I came to the following explanations (the second seems to contradict my previous understanding):
$ man mount | awk '/commit=/,/^$/'
commit=nrsec
Sync all data and metadata every nrsec seconds. The default value is 5 seconds.
Zero means default.
https://forums.gentoo.org/viewtopic-p-4088752.html
commit=60 stops the "immediate" (default of 5 seconds) prioritization of writes of over reads, caching the writes for a few more seconds later. This is good in the situation of heavy reads and writes mixed together, where the user wants the reads to take priority, so that the processor can be kept busy rather than pause while waiting for the writes to finish before it can continue reading.
A real-world example I have seen is waiting several seconds for the Gnome pull-down menu to appear, for seemingly no reason. The reason was that the disk was busy writing, so the CPU had to wait for the writing to finish before it could get all the data from the disk to be able to show the menu.
What does commit really do? Are there really advantages of increasing it (like responsiveness and power savings)? May it actually cause data loss?
|
What does commit really do?
I think one of the best explanations was given here by allquixotic.
Are there really advantages of increasing it (like responsiveness and
power savings)? May it actually cause data loss?
As per the ext4 official documentation:
Ext4 can be told to sync all its data and metadata every 'nrsec' seconds. The default value is 5 seconds. This means that if you lose
your power, you will lose as much as the latest 5 seconds of work
(your filesystem will not be damaged though, thanks to the
journaling). This default value (or any low value) will hurt
performance, but it's good for data-safety. Setting it to 0 will have
the same effect as leaving it at the default (5 seconds). Setting it
to very large values will improve performance.
Increasing commit value means you might lose as much as the latest N seconds of work (where N = commit interval) though most of the time this won't happen as software can still call fsync() and get its data written to disk, overriding the commit setting. You could look at it as "write everything to disk at least this often".
On the other hand, it means less writes (which makes it quite popular among ssd users) and better performance (multiple writes are combined into one single larger write, updates to previous writes within the commit time frame are cancelled out).
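If you want to experiment with a larger interval before committing it to /etc/fstab, a remount is enough (shown here for the root filesystem; the value is just an example):
$ sudo mount -o remount,commit=60 /
$ grep ' / ' /proc/mounts    # commit=60 should now appear in the options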
As to the power savings, according to this page, it turns out that nowadays increasing commit value does not save power.
| Advantages/disadvantages of increasing "commit" in fstab |
1,357,727,051,000 |
If you run ls -l on a file that contains one letter, it will list as 2B in size. If your file system is in 4k blocks, I thought it rounded files up to the block size? Is it because ls -l actually reads the byte count from the inode? In what circumstances do you get rounded up to block answers vs actual byte count answers in Linux 2.6 Kernel GNU utils?
|
I guess you got that one letter into the file with echo a > file or vim file, which means, you'll have that letter and an additional newline in it (two characters, thus two bytes). ls -l shows file size in bytes, not blocks (to be more specific: file length):
$ echo a > testfile
$ ls -l testfile
-rw-r--r-- 1 user user 2 Apr 28 22:08 testfile
$ cat -A testfile
a$
(note that cat -A displays newlines as $ character)
In contrast to ls -l, du will show the real size occupied on disk:
$ du testfile
4
(actually, du shows size in 1kiB units, so here the size is 4×1024 bytes = 4096 bytes = 4 kiB, which is the block size on this file system)
To have ls show this, you'll have to use the -s option instead of/in addition to -l:
$ ls -ls testfile
4 -rw-r--r-- 1 user user 2 Apr 28 22:08 testfile
The first column is the allocated size, again in units of 1kiB. The latter can be changed by specifying --block-size, e.g.
$ ls -ls --block-size=1 testfile
4096 -rw-r--r-- 1 aw aw 2 Apr 28 22:08 testfile
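The gap between the two columns becomes dramatic with sparse files, where the allocated size can be far below the file length. A quick demonstration, assuming GNU truncate is available:
$ truncate -s 1M sparsefile
$ ls -ls sparsefile    # allocated size 0, but a length of 1048576 bytes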
| EXT3: If block size is 4K, why does ls -l show file sizes below that? |
1,357,727,051,000 |
I have an embedded setup using an initramfs for the root file system but using a custom ext3 partition mounted on a compact flash IDE drive. Because data integrity in the face of power loss is the most important factor in the entire setup, I have used the following options to mount (below is the entry from my /etc/fstab file
<file system> <mount pt> <type> <options> <dump><pass>
/dev/sda2 /data ext3 auto,exec,relatime,sync,barrier=1 0 2
I came by these options from reading around on the internet. What I am worried about is that the content of /proc/mounts give the following:
/dev/sda2 /data ext3 rw,sync,relatime,errors=continue,user_xattr,acl,
barrier=1,data=writeback 0 0
From what I understand from reading around is that I want to use data=journal option for my mount as this offers the best protection against data corruption. However, from the man page for specific ext3 options for mount it says the
following about the writeback option:
Data ordering is not preserved - data may be written into the main
filesystem after its metadata has been committed to the journal.
This is rumoured to be the highest-throughput option. It guarantees
internal filesystem integrity, however it can allow old data to appear
in files after a crash and journal recovery.
I am very confused about this - the man page seems to suggest that for file system integrity I want to specify data=writeback option to mount but most other references I have found (including some published books on embedded linux) suggest that I should be using data=journal. What would be the best approach for me to use? Write speed is not an issue at all - data integrity is though.
|
Don't get misled by the fact that only writeback mentions internal filesystem integrity.
With ext3, whether you use journal, ordered or writeback, file system metadata is always journalled and that means internal file system integrity.
The data modes offer a way of control over how ordinary data is written to the file system.
In writeback mode, metadata changes are first recorded in the journal and a commit block is written. After the journal has been updated, metadata and data write-outs may proceed. data=writeback can be a severe security risk: if the system crashes while appending to a file, after the metadata has been committed (and additional data blocks allocated), but before the data has been written (data blocks overwritten with new data), then after journal recovery that file may contain blocks filled with data from previously deleted files – from any user.
So, if data integrity is your main concern and speed is not important, data=journal is the way to go.
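As a sketch, the fstab entry from the question would then become something like the following (all other options kept as-is):
/dev/sda2 /data ext3 auto,exec,relatime,sync,barrier=1,data=journal 0 2
After remounting, /proc/mounts should report data=journal instead of data=writeback.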
| What mount option to use for ext3 file system to minimise data loss or corruption? |
1,357,727,051,000 |
I have an external HDD which I formatted as NTFS partition in Windows. Now, I formatted this HDD in my linux system using the below command.
mkfs.ext3 /dev/sdb1
It was formatted successfully. However, when I run the fdisk -l command, it gives me the system as NTFS/HPFS.
Device Boot Start End Blocks Id System
/dev/sdb1 1 121601 976760001 83 HPFS/NTFS
However, the command df -T /dev/sdb1 was still giving me the file system type as ext3.
Why is it not showing me the system as Linux when I run the fdisk -l command?
|
When setting up a disk or partition there are 2 aspects to doing this. The first is the act of laying down a partition table scheme on the disk using typically either MBR (Master Boot Record) or GPT (GUID Partitioning Table) formats. Both of these lay down a "structure" on the disk.
MBR
If you take a look at the structure of an MBR you'll notice that there is a section allotted for defining a partition's "type".
The valid partition types for MBR:
Command (m for help): l
0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 55 EZ-Drive a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 5c Priam Edisk a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f2 DOS secondary
16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 75 PC/IX be Solaris boot ff BBT
1e Hidden W95 FAT1 80 Old Minix
So in your case the partition is identified as being of type 17.
Filesystem format
The second aspect to this is the formatting of the space within the partition itself (the filesystem). These are the filesystems that most are more familiar with when dealing with EXT3/4, etc.
So in your case you've mixed a partition type and a filesystem that generally don't go together. I should mention here that tools such as fdisk are "dumb" in the sense that they'll generally let you do whatever you want whether it makes sense to do so or not.
Changing the partition's type
So to resolve your issue you'll need to change the partition type to 83 if it's a bare partition formatted with an ext filesystem (ext3/ext4), or 8e if it's an LVM partition. The good news is you can use fdisk to change the partition's type through the t function:
t change a partition's system id
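A minimal interactive session might look like this (the device name and partition number are assumptions; prompts vary slightly between fdisk versions):
$ sudo fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83 (Linux)
Command (m for help): w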
After successfully doing this your partitions should look something like this:
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 976773119 487873536 8e Linux LVM
What I would do!
However in your case, since the partition type appears to be listed already as 83 yet the partition is reported as being HPFS/NTFS, I think I'd be inclined to delete the partition(s) altogether and start over with a clean slate.
| fdisk -l shows ext3 file system as HPFS/NTFS |
1,357,727,051,000 |
I'm trying to mount an ext3 file system from another Linux installation so that the user, not root, will have full access to all the files. (I really do need the user to have access to those files, because I would like to use them from another computer via sshfs, and sshfs will only give the user's access rights to the files.)
If I run mount /dev/sda1 /mnt/whatever all files are only accessible by root.
I've also tried mount -o nosuid,uid=1000,gid=1000 /dev/sda1 /mnt/whatever as instructed by a SuperUser question discussing ext4 but that fails with an error, and dmesg reports:
EXT3-fs: Unrecognized mount option "uid=1000" or missing value
How can I mount the filesystem?
|
On an ext4 filesystem (like ext2, ext3, and most other Unix-originating filesystems), the effective file permissions don't depend on who mounted the filesystem or on mount options, only on the metadata stored within the filesystem.
If you have a removable filesystem that uses different user IDs from your system, you can use bindfs to provide a view of any filesystem with different ownership or permissions. The removable filesystem must be mounted already, e.g. on /mnt/sda1; then, if you want a particular user to appear as the owner of all files, you can run something like
mkdir /home/$user/sda1
bindfs --no-allow-other -u $user -g $group /mnt/sda1 /home/$user/sda1
| Mounting an ext3 filesystem with user privileges |
1,357,727,051,000 |
I know that this feature dates back 20 years but I still would like to find out
What is the purpose of the reserved blocks in ext2/3/4 filesystems?
|
The man page of tune2fs gives you an explanation:
Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem.
It also acts as a failsafe; if for some reason the normal users and their programs fill up the disk up to 100%, you might not even be able to login and/or sync files before deleting them. By reserving some blocks to root, the system ensures you can always correct the situation.
In practice, 5% is an old default and may be too much if your hard drive is big enough. You can change that value using the previously mentioned tune2fs tool, but be sure to read its man page first!
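For example, to lower the reservation to 1% on a large data partition and confirm the change (the device name is an assumption):
$ sudo tune2fs -m 1 /dev/sdb1
$ sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block count'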
| ext2/3/4 reserved blocks percentage purpose [duplicate] |
1,357,727,051,000 |
The ext2/3/4 filesystem checker has two options that seem to be very similar, -p and -y.
Both seem to perform an automatic repair, but the manpage states that -p can exit when it encounters certain errors while for -y no such thing is mentioned. Is this the only difference?
|
There is a specific difference which makes more sense on a second reading.
-p - Automatically repair the file system without any questions.
-y - Assume an answer of `yes' to all questions.
So fsck -p will try to fix the file system automatically, without any user intervention, but it only applies repairs that are safe to make unattended. If it finds a problem that requires an administrator's judgement, it prints a description of the problem and exits.
However, fsck -y will just assume yes for all questions.
As an example: if some changes need to be made to a partition, fsck -y will just go ahead, assume yes and make the changes, however drastic they are. fsck -p, by contrast, only makes the changes it can safely decide on its own, and bails out for anything else.
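In practice both are invoked the same way, and the difference only shows when a non-trivial problem turns up (device name assumed; the filesystem must be unmounted):
$ fsck -p /dev/sdc1    # safe automatic fixes only; prints the problem and exits otherwise
$ fsck -y /dev/sdc1    # answers yes to every repair question, however drastic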
| What is the difference between fsck options -y and -p? |
1,357,727,051,000 |
zerofree -v /dev/sda1 returned
123642/1860888/3327744.
The man page does not explain what those numbers are:
http://manpages.ubuntu.com/manpages/natty/man8/zerofree.8.html
I found some code on github:
https://github.com/haggaie/zerofree/blob/master/zerofree.c
And there's this line:
if ( verbose ) {
printf("\r%u/%u/%u\n", modified, free, fs->super->s_blocks_count);
}
So I guess the middle number was the free space (in kB?), the first one might be the amount that was written over with zeros, and the last one lost me.
What do you think?
|
I have the same tool installed on Fedora 19, and I noticed in the .spec file a URL which led to this page titled: Keeping filesystem images sparse. This page included some examples for creating test data, so I ran the commands to create the corresponding files.
Example
$ dd if=/dev/zero of=fs.image bs=1024 seek=2000000 count=0
$ /sbin/mke2fs fs.image
$ ls -l fs.image
-rw-rw-r--. 1 saml saml 2048000000 Jan 4 21:42 fs.image
$ du -s fs.image
32052 fs.image
When I ran the zerofree -v command I got the following:
$ zerofree -v fs.image
...counting up percentages 0%-100%...
0/491394/500000
Interrogating with filefrag
When I used the tool filefrag to interrogate the fs.image file I got the following.
$ filefrag -v fs.image
Filesystem type is: ef53
File size of fs.image is 2048000000 (500000 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 620: 11714560.. 11715180: 621:
1: 32768.. 32769: 11716608.. 11716609: 2: 11715181:
2: 32892.. 33382: 11716732.. 11717222: 491: 11716610:
3: 65536.. 66026: 11722752.. 11723242: 491: 11717223:
...
The s_blocks_count referenced in your source code also coincided with the source code for my version of zerofree.c.
if ( verbose ) {
printf("\r%u/%u/%u\n", nonzero, free,
current_fs->super->s_blocks_count) ;
}
So we now know that s_blocks_count is the 500,000 blocks of 4096 bytes.
Interrogating with tune2fs
We can also query the image file fs.image using tune2fs.
$ sudo tune2fs -l fs.image | grep -i "block"
Block count: 500000
Reserved block count: 25000
Free blocks: 491394
First block: 0
Block size: 4096
Reserved GDT blocks: 122
Blocks per group: 32768
Inode blocks per group: 489
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
From this output we can definitely see that the 2nd and 3rd numbers being reported by zerofree are in fact:
Free blocks: 491394
Block count: 500000
Back to the source code
The 1st number being reported is in fact the number of blocks that are found that are not zero. This can be confirmed by looking at the actual source code for zerofree.
There is a counter called nonzero which is incremented in the main loop that analyzes the free blocks.
if ( i == current_fs->blocksize ) {
continue ;
}
++nonzero ;
if ( !dryrun ) {
ret = io_channel_write_blk(current_fs->io, blk, 1, empty) ;
if ( ret ) {
fprintf(stderr, "%s: error while writing block\n", argv[0]) ;
return 1 ;
}
}
Conclusion
So after some detailed analysis it would look like those numbers are as follows:
number of nonzero free blocks encountered (which were subsequently zeroed)
number of free blocks within the filesystem
total number of blocks within the filesystem
| zerofree verbose returns what? |
1,357,727,051,000 |
How many bits on a Linux file system are taken up by the permissions of a file?
|
To add to the other answers:
Traditional Unix permissions are broken down into:
read (r)
write (w)
execute file/access directory (x)
Each of those is stored as a bit, where 1 means permitted and 0 means not permitted.
For example, read only access, typically written r--, is stored as binary 100, or octal 4.
There are 3 sets of those permissions, which determines the allowed access for:
the owner of the file
the group of the file
all other users
They are all stored together in the same variable, e.g. rw-r-----, meaning read-write for the owner, read-only for the group, and no access for others, is stored as 110100000 binary, 640 octal.
So that makes 9 bits.
Then, there are 3 other special bits:
setuid
setgid
sticky
See man 1 chmod for details of those.
And finally, the file's type is stored using 4 bits, e.g. whether it is a regular file, or a directory, or a pipe, or a device, or whatever.
These are all stored together in the inode, and together it makes 16 bits.
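With GNU stat you can see all 16 bits at once: the %f format prints the raw mode word in hexadecimal, i.e. the file type bits combined with the permission bits:
$ touch f; chmod 640 f
$ stat -c '%f' f    # 0x8000 (regular file) + 0x1a0 (octal 640)
81a0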
| How many bits is the access flags of a file? |
1,357,727,051,000 |
I have created an 200MB ext3 using the following commands.
dd if=/dev/zero of=./system.img bs=1000000 count=200
mkfs.ext2 ./system.img
tune2fs -j ./system.img
How can I resize it to 50MB and 300MB? The problem is I have only some binaries on my system. They are: badblocks, e2fsck, mke2fs, mke2fs.bin, parted, resize2fs, tune2fs.
|
First, run a filesystem check, e2fsck -f ./system.img. Without this, it may proceed to enlarge the raw file, but fail to make any meaningful changes to the filesystem.
To reduce the size of the file system:
resize2fs ./system.img 50M
To enlarge:
resize2fs ./system.img 300M
resize2fs automatically adjusts the file size for you.
| How to resize ext3 image files |
1,357,727,051,000 |
I have a ReadyNAS box named "storage" that I believe is based on Debian. I can ssh into it as root. I'm trying to reconfigure the webserver, but I'm running into a file permissions problem that I just don't understand. I can't do anything with /etc/frontview/apache/apache.pem even as root! It doesn't appear to have any special permissions compared to other files in the same directory and I can work with those.
storage:~# whoami
root
storage:~# cd /etc/frontview/apache/
storage:/etc/frontview/apache# ls -lah apache.pem*
-rw------- 1 admin admin 4.0k Jul 10 2013 apache.pem
-rw------- 1 admin admin 4.0k Jun 9 05:57 apache.pem.2017-02-04
-rw------- 1 admin admin 1.5k Jun 9 05:57 apache.pem.orig
storage:/etc/frontview/apache# touch apache.pem
touch: creating `apache.pem': Permission denied
storage:/etc/frontview/apache# touch apache.pem.2017-02-04
storage:/etc/frontview/apache# rm -f apache.pem
rm: cannot unlink `apache.pem': Operation not permitted
What is so special about this file that it can't be touched? I can't delete it. I can't change the permissions on it. I can't change the owner of it.
The directory seems to be fine. It has space left, it isn't mounted read-only. In fact I can edit other files in the same directory.
# ls -ld /etc/frontview/apache
drwxr-xr-x 8 admin admin 4096 Jun 9 05:44 /etc/frontview/apache
# df /etc/frontview/apache
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hdc1 2015824 504944 1510880 26% /
|
I just found the problem. The "immutable" attribute was set on that file. ls doesn't show it. You need a different command to see it:
# lsattr apache.pem*
----i--------- apache.pem
-------------- apache.pem.2017-02-04
-------------- apache.pem.orig
Once I remove the immutable bit, I can edit that file:
# chattr -i apache.pem
# touch apache.pem
| Permission denied for only a single file in a directory as root user on an ext3 filesystem under RAIDiator OS |
1,357,727,051,000 |
I am building a disk image for an embedded system (to be placed on a 4GB SD card). I want the system to have two partitions: a 'Root' (200MB) and a 'Data' partition (800MB).
I create an empty 1GB file with dd.
Then I use parted to set up the partitions.
I mount them each in a loop device then format them; ext2 for 'Root' ext4 for 'Data'. Add my root file system to the 'Root' partition and leave 'Data' empty.
Here's where the problem is. I am now stuck with a 1GB image, with only 200MB of data on it. Shouldn't I, in theory, be able to truncate the image down to say.. 201MB and still have the file system mountable? Unfortunately I have not found this to be the case.
I recall in the past having used a build environment from Freescale that used to create 30Mb images, that would have partitions for utilizing an entire 4GB sdcard. Unfortunately, at this time, I can not find how they were doing that.
I have read the on-disk format for the ext file system, and if there is no data in anything past the first super block (except for backup super blocks, and unused block tables) I thought I could truncate there.
Unfortunately, when I do this, the mounting system freaks out. I can then run FSCK, restore the super blocks, and block tables, and can mount it then no problem. I just don't think that should be necessary.
Perhaps a different file system could work? Any ideas?
thanks,
edit
Changed 'partition' to read 'file system'. The partition is still there and doesn't change, but the file system is getting destroyed after truncating the image.
edit
I have found that when I truncate the file to a size just larger than the first set of the 'Data' partition's superblocks and inode/block tables (somewhere in the data-block range), the file system becomes unmountable without doing a fsck to restore the rest of the superblocks and block/inode tables.
|
Firstly, writing a sparse image to a disk will not result in anything but the whole of the size of that image file - holes and all - covering the disk. This is because handling of sparse files is a quality of the filesystem - and a raw device (such as the one to which you write the image) has no such thing yet. A sparse file can be stored safely and securely on a medium controlled by a filesystem which understands sparse files (such as an ext4 device) but as soon as you write it out it will envelop all that you intend it to. And so what you should do is either:
Simply store it on an fs which understands sparse files until you are prepared to write it.
Make it two layers deep...
Which is to say, write out your main image to a file, create another parent image with an fs which understands sparse files, then copy your image to the parent image, and...
When it comes time to write the image, first write your parent image, then write your main image.
Here's how to do 2:
Create a 1GB sparse file...
dd bs=1kx1k seek=1k of=img </dev/null
Write two ext4 partitions to its partition table: 1 200MB, 2 800MB...
printf '%b\n\n\n\n' n '+200M\nn\n' 'w\n\c' | fdisk img
Create two ext4 filesystems on a -Partitioned loop device and put a copy of the second on the first...
sudo sh -c '
for p in "$(losetup --show -Pf img)p"* ### the for loop will iterate
do mkfs.ext4 "$p" ### over fdisks two partitions
mkdir -p ./mnt/"${p##*/}" ### and mkfs, then mount each
mount "$p" ./mnt/"${p##*/}" ### on dirs created for them
done; sync; cd ./mnt/*/ ### next we cp a sparse image
cp --sparse=always "$p" ./part2 ### of part2 onto part1
dd bs=1kx1k count=175 </dev/zero >./zero_fill ### fill out part1 w/ zeroes
sync; cd ..; ls -Rhls . ### sync, and list contents
umount */; losetup -d "${p%p*}" ### last umount, destroy
rm -rf loop*p[12]/ ' ### loop devs and mount dirs
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 204800 1k blocks and 51200 inodes
Filesystem UUID: 2f8ae02f-4422-4456-9a8b-8056a40fab32
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 210688 4k blocks and 52752 inodes
Filesystem UUID: fa14171c-f591-4067-a39a-e5d0dac1b806
Superblock backups stored on blocks:
32768, 98304, 163840
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
175+0 records in
175+0 records out
183500800 bytes (184 MB) copied, 0.365576 s, 502 MB/s
./:
total 1.0K
1.0K drwxr-xr-x 3 root root 1.0K Jul 16 20:49 loop0p1
0 drwxr-xr-x 2 root root 40 Jul 16 20:42 loop0p2
./loop0p1:
total 176M
12K drwx------ 2 root root 12K Jul 16 20:49 lost+found
79K -rw-r----- 1 root root 823M Jul 16 20:49 part2
176M -rw-r--r-- 1 root root 175M Jul 16 20:49 zero_fill
./loop0p1/lost+found:
total 0
./loop0p2:
total 0
Now that's a lot of output - mostly from mkfs.ext4 - but notice especially the ls bits at the bottom. ls -s will show the actual allocated size of a file on disk - and it is always displayed in the first column.
Now we can basically reduce our image to only the first partition...
fdisk -l img
Disk img: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc455ed35
Device Boot Start End Sectors Size Id Type
img1 2048 411647 409600 200M 83 Linux
img2 411648 2097151 1685504 823M 83 Linux
There fdisk tells us there are 411647 +1 512 byte sectors in the first partition of img...
dd seek=411648 of=img </dev/null
That truncates the img file to only its first partition. See?
ls -hls img
181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 21:37 img
...but we can still mount that partition...
sudo mount "$(sudo losetup -Pf --show img)p"*1 ./mnt
...and here are its contents...
ls -hls ./mnt
total 176M
12K drwx------ 2 root root 12K Jul 16 21:34 lost+found
79K -rw-r----- 1 root root 823M Jul 16 21:34 part2
176M -rw-r--r-- 1 root root 175M Jul 16 21:34 zero_fill
And we can append the stored image of the second partition to the first...
sudo sh -c '
dd seek=411648 if=./mnt/part2 of=img
umount ./mnt; losetup -D
mount "$(losetup -Pf --show img)p"*2 ./mnt
ls ./mnt; umount ./mnt; losetup -D'
1685504+0 records in
1685504+0 records out
862978048 bytes (863 MB) copied, 1.96805 s, 438 MB/s
lost+found
Now that has grown our img file: it's no longer sparse...
ls -hls img
1004M -rw-r--r-- 1 mikeserv mikeserv 1.0G Jul 16 21:58 img
...but removing that is as simple the second time as it was the first, of course...
dd seek=411648 of=img </dev/null
ls -hls img
181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 22:01 img
| How do I create small disk image with large partitions |
1,357,727,051,000 |
I need to detect a filesystem type from a C/C++ program using the filesystem superblock. However, I don't see much differences between superblocks for ext2 and ext4. The s_rev_level field is the same (=1), the s_minor_rev_level is the same (=0).
I could check some features from s_feature_compat (and other feature fields) and try to locate features, which aren't supported by ext2. But - the person, formatting a partition, could disable some features on purpose. So, this method can detect the ext4, but it can't distinguish between the ext2 and the ext4 with disabled ext4-specific features.
So, how to do that?
|
After looking at the code for various utilities and the kernel code for some time, it does seem that what @Hauke suggested is true - whether a filesystem is ext2/ext3/ext4 is purely defined by the options that are enabled.
From the Wikipedia page on ext4:
Backward compatibility
ext4 is backward compatible with ext3 and ext2, making it possible to mount ext3 and ext2 as ext4. This will slightly improve performance, because certain new features of ext4 can also be used with ext3 and ext2, such as the new block allocation algorithm.
ext3 is partially forward compatible with ext4. That is, ext4 can be mounted as ext3 (using "ext3" as the filesystem type when mounting). However, if the ext4 partition uses extents (a major new feature of ext4), then the ability to mount as ext3 is lost.
As most probably already know, there is similar compatibility between ext2 and ext3.
After looking at the code which blkid uses to distinguish different ext filesystems, I was able to turn an ext4 filesystem into something recognised as ext3 (and from there to ext2). You should be able to repeat this with:
truncate -s 100M testfs
mkfs.ext4 -O ^64bit,^extent,^flex_bg testfs <<<y
blkid testfs
tune2fs -O ^huge_file,^dir_nlink,^extra_isize,^mmp testfs
e2fsck testfs
tune2fs -O metadata_csum testfs
tune2fs -O ^metadata_csum testfs
blkid testfs
./e2fsprogs/misc/tune2fs -O ^has_journal testfs
blkid testfs
First blkid output is:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" SEC_TYPE="ext2" TYPE="ext4"
Second is:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" SEC_TYPE="ext2" TYPE="ext3"
And the final one:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" TYPE="ext2"
Note that I had to use a new version of e2fsprogs than was available in my distro to get the metadata_csum flag. The reason for setting, then clearing this was because I found no other way to affect the underlying EXT4_FEATURE_RO_COMPAT_GDT_CSUM flag. The underlying flag for metadata_csum (EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) and EXT4_FEATURE_RO_COMPAT_GDT_CSUM are mutually exclusive. Setting metadata_csum disables EXT4_FEATURE_RO_COMPAT_GDT_CSUM, but un-setting metadata_csum does not re-enable the latter.
Conclusions
Lacking a deep knowledge of the filesystem internals, it seems either:
Journal checksumming is meant to be a defining feature of a filesystem created as ext4 that you are really not supposed to disable and that fact that I have managed this is really a bug in e2fsprogs. Or,
All ext4 features were always designed to be disabled, and disabling them does make the filesystem, to all intents and purposes, an ext3 filesystem.
Either way a high level of compatibility between the filesystems is clearly a design goal; compare this to ReiserFS and Reiser4, where Reiser4 is a complete redesign. What really matters is whether the features present are supported by the driver that is used to mount the system. As the Wikipedia article notes, the ext4 driver can be used with ext3 and ext2 as well (in fact there is a kernel option to always use the ext4 driver and ditch the others). Disabling features just means that the earlier drivers will have no problems with the filesystem, and so there is no reason to stop them from mounting the filesystem.
Recommendations
To distinguish between the different ext filesystems in a C program, libblkid seems to be the best thing to use. It is part of util-linux and this is what the mount command uses to try to determine the filesystem type. API documentation is here.
If you have to do your own implementation of the check, then testing the same flags as libblkid seems to be the right way to go. Although notably the file linked has no mention of the EXT4_FEATURE_RO_COMPAT_METADATA_CSUM flag which appears to be tested in practice.
If you really wanted to go the whole hog, then looking for journal checksums might be a surefire way of finding out whether a filesystem without these flags is (or perhaps was) ext4.
Update
It is actually somewhat easier to go in the opposite direction and promote an ext2 filesystem to ext4:
truncate -s 100M test
mkfs.ext2 test
blkid test
tune2fs -O has_journal test
blkid test
tune2fs -O huge_file test
blkid test
The three blkid outputs:
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" TYPE="ext2"
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" SEC_TYPE="ext2" TYPE="ext3"
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" SEC_TYPE="ext2" TYPE="ext4"
The fact that ext3/ext4 features can so easily by enabled on a filesystem that started out as ext2 is probably the best demonstration that the filesystem type really is defined by the features.
| Reliable way to detect ext2 or ext3 or ext4? |
1,357,727,051,000 |
I am looking for a way to fragment an existing file in order to evaluate the performance of some tools. I found a solution for the NTFS file system called MyFragmenter, as described in this thread. However I can't find anything for ext2/3/4... I guess I could develop my own file fragmenter, but due to time constraints I would like to find a faster solution. I found tools like HJ-Split which split a file into smaller bits, but I doubt this will simulate file fragmentation.
Is there any solution available for my problem?
|
If you want to ensure fragmentation but not prevent it (so you only have partial control over what happens), and you don't care about the specifics of the fragmentation, here's a quick & dirty way of doing things.
To create a file of n blocks in at least two fragments:
Open the file with synchronous writes, write m < n blocks.
Open another file. Add to it until there are at most n - m blocks free on disk. Don't make it sparse by mistake!
Write the remaining n - m blocks to the first file.
Close and unlink the second file.
You can fragment in more pieces by interlacing more files.
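A rough shell sketch of the recipe above (block size, counts and filenames are all assumptions; only run this on a scratch filesystem):
$ dd if=/dev/urandom of=target bs=4k count=64 oflag=sync    # write the first m blocks
$ dd if=/dev/zero of=filler bs=4k                           # fill the disk; stops at ENOSPC
$ truncate -s -1M filler                                    # free just enough room for the tail
$ dd if=/dev/urandom of=target bs=4k count=64 seek=64 oflag=sync conv=notrunc
$ rm filler                                                 # release the gap again
$ filefrag target                                           # should now report several extents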
This assumes the filesystem is available for this sort of torture, i.e. not in a multi-user or mission-critical environment. It also assumes the filesystem has no reserved blocks, or the reserved blocks are reserved for your UID, or you're root.
There's no direct way to ensure fragmentation, because Unix systems employ filesystem abstraction, so you never talk to the raw filesystem.
Also, ensuring filesystem-level fragmentation tells you nothing about what happens at lower levels. LVM, software and hardware RAID, hardware-level sector remapping and other abstraction layers can play havoc with your expectations (and measurements).
| How to deliberately fragment a file |
1,357,727,051,000 |
I understand that I can list the location of a filesystem's superblocks using the following commands.
Example
First get the device handle for the current directory.
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_greeneggs-home 402G 146G 236G 39% /home
Then use this command to list the superblocks for handle /dev/mapper/fedora_greeneggs-home.
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | grep -i superblock
dumpe2fs 1.42.7 (21-Jan-2013)
Primary superblock at 0, Group descriptors at 1-26
Backup superblock at 32768, Group descriptors at 32769-32794
Backup superblock at 98304, Group descriptors at 98305-98330
Backup superblock at 163840, Group descriptors at 163841-163866
Backup superblock at 229376, Group descriptors at 229377-229402
Backup superblock at 294912, Group descriptors at 294913-294938
Backup superblock at 819200, Group descriptors at 819201-819226
Backup superblock at 884736, Group descriptors at 884737-884762
Backup superblock at 1605632, Group descriptors at 1605633-1605658
Backup superblock at 2654208, Group descriptors at 2654209-2654234
Backup superblock at 4096000, Group descriptors at 4096001-4096026
Backup superblock at 7962624, Group descriptors at 7962625-7962650
Backup superblock at 11239424, Group descriptors at 11239425-11239450
Backup superblock at 20480000, Group descriptors at 20480001-20480026
Backup superblock at 23887872, Group descriptors at 23887873-23887898
Backup superblock at 71663616, Group descriptors at 71663617-71663642
Backup superblock at 78675968, Group descriptors at 78675969-78675994
Backup superblock at 102400000, Group descriptors at 102400001-102400026
But how does one actually examine the contents of one of these superblocks?
|
I'm not sure how you can examine any particular superblock, but you can use this command to examine the general contents that all the superblocks share like so, using dumpe2fs.
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | less
Example
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | less
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: xxxxxxx-xxxx-xxxx-xxxx-88c06ecdd872
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 26722304
Block count: 106857472
Reserved block count: 5342873
Free blocks: 67134450
Free inodes: 25815736
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 998
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Dec 7 20:41:58 2013
Last mount time: Sun Dec 22 21:31:01 2013
...
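That said, if you want to read one of the backup copies instead of the primary, dumpe2fs accepts -o superblock= and -o blocksize= to point it at a specific location, e.g. the first backup from the question's listing:
$ sudo dumpe2fs -o superblock=32768 -o blocksize=4096 /dev/mapper/fedora_greeneggs-home | less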
References
Superblock Definition
| How can I dump the contents of a filesystem's superblock? |
1,357,727,051,000 |
Recently I installed Debian Squeeze, first using ext3 and then again using ext4 on the same machine. The automatic fsck done after a certain number of mounts is much faster using ext4 (about 1 min) than ext3 (about 5 min).
What are the reasons for this significant difference in speed? If ext4 is much faster why does the Debian installer default to using ext3?
|
That's one of the most advertised benefits of ext4 (see it mentioned in the Features on Wikipedia).
The reason? Filesystem developers worked hard to achieve this.
Here's a short summary quoted from Wikipedia:
Faster file system checking
In ext4, unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely on a check and greatly reduces the time it takes to check a file system of the size ext4 is built to support.
| Significant difference in speed between fsck using ext3 and ext4 on Debian Squeeze |
1,357,727,051,000 |
I often need to move files between two Linux computers via USB. I use gparted to format the USB sticks. When I formatted the USB to use FAT32, it was unable to store symlinks, so I had to recreate the symlinks on the other computer after copying the files. When I formatted the USB to use EXT3, it created a lost+found directory on the USB and prevented me from copying files to it unless I became root.
Is there a preferred file system to use when transferring files between two Linux computers?
How can I copy files without running into the problems presented by the FAT32 and EXT3 filesystems?
|
What I do is to store tarballs on the USB drive (formatted as VFAT). I'm wary of reformatting USB drives; they are built/optimized for VFAT so as to level wear, and I'm afraid the drive will die much sooner with other filesystems. Besides, formatting another way will make it useless for ThatOtherSystem...
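A minimal sketch of that workflow, assuming the stick is mounted at /media/usb; tar preserves symlinks, ownership and permissions inside the archive, so VFAT's limitations stop mattering:
$ tar -cf /media/usb/project.tar project/    # on the source machine
$ tar -xpf /media/usb/project.tar -C ~/      # on the destination machine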
| What filesystem should be used when transferring files between Linux systems? |
1,357,727,051,000 |
If the block size of a file system is 4KB, then for a 1KB file, 3KB of space (which is internal fragmentation) is wasted. So, under a directory, is there any command to summarize how much disk space is wasted due to internal fragmentation?
|
Unless you have sparse files, it sounds like you're looking for du -s «dir» vs. du -s --apparent-size «dir».
Or, in stat output, the difference between size and blocks × block size:
anthony@Zia:/tmp$ echo -n 1 > foo
anthony@Zia:/tmp$ stat -c '%s %b × %B' foo
1 8 × 512
And with du (which defaults to kilobytes, add -B 1 if you want bytes):
anthony@Zia:/tmp$ du foo
4 foo
anthony@Zia:/tmp$ du --apparent-size foo
1 foo
du will of course count entire directory trees, not just individual files.
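To get a single number for the overhead you can subtract the two totals (a rough sketch, substituting your directory for «dir»; sparse files can make the result misleading):
$ echo $(( $(du -sB1 «dir» | cut -f1) - $(du -sB1 --apparent-size «dir» | cut -f1) ))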
| Any command to view the file system internal fragmentation size under a directory? |
1,357,727,051,000 |
/var is showing as full to many apps, like Nagios, Puppet, and the LVM tools (pvs, vgs, etc.)
df -h output
6.0G 4.3G 1.4G 77% /var
vgs output
/var/lock/lvm/V_rootvg:aux: open failed: No space left on device
Can't get lock for rootvg
Skipping volume group rootvg
lsof +L1 shows nothing under var anymore, so I don't think there are unlinked files which have yet to be cleared from the /var filesystem. I don't understand why 1.4G free on a 6G filesystem is considered full. I know some space is reserved by the system on each filesystem but that can't be it, it's too much space. The filesystem is ext3 on Red Hat 5.
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: c8f44510-e8f7-4e2e-950a-1410b069910e
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 393216
Block count: 1572864
Reserved block count: 78627
Free blocks: 1183083
Free inodes: 388144
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 63
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Filesystem created: Mon Apr 29 13:12:02 2013
Last mount time: Wed Oct 23 19:10:44 2013
Last write time: Wed Oct 23 19:10:44 2013
Mount count: 6
Maximum mount count: -1
Last checked: Mon Apr 29 13:12:02 2013
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 8766dfd5-c802-4bc3-81cc-21869e810656
Journal backup: inode blocks
Journal features: journal_incompat_revoke
Journal size: 32M
Journal length: 8192
Journal sequence: 0x0112568e
Journal start: 3334
|
Looking at the comments, others have helped you diagnose that you're out of inodes. If you need to make a few available so you can get some basic access back to your system, then you could delete the following files on a CentOS 5 install, assuming you can live without them.
Example
$ sudo rm -fr /var/log/*.[1-9]?(.gz)
This will remove any of the previously backed up files in /var/log. This should buy you a few dozen inodes to start.
Counting inodes
using df
I usually use the command df to determine the number available.
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00
59932672 807492 59125180 2% /
using tune2fs
You can also use tune2fs. With it you'll need to give it the path to the LVM LV mapper.
$ tune2fs -l /dev/mapper/VolGroup00-LogVol00 | grep -i inode
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Inode count: 59932672
Free inodes: 59126861
Inodes per group: 32768
Inode blocks per group: 1024
First inode: 11
Inode size: 128
Journal inode: 8
First orphan inode: 21561629
Journal backup: inode blocks
Freed up some inodes, now what?
With some breathing room you basically have a couple of options.
I would start by trying to quickly get a list together of files that can be targeted for deletion, so you can begin to get more headroom. I'd focus on /tmp and /var for more potential files to remove.
If you have old versions of Java or anything installed under /usr/local or /opt I'd pick on those next.
I'd start formulating a list of installed RPMs that can be uninstalled
If you've been using YUM to do updates on this server you can clear out its cache.
$ sudo yum clean all
Look into adding additional space.
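To see where the inodes are actually going, a rough one-liner that counts files per top-level directory under /var can help (adjust the path as needed):
$ for d in /var/*/; do printf '%s\t%s\n' "$(find "$d" -xdev | wc -l)" "$d"; done | sort -rn | head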
| ext3 Filesystem shows full to most apps, but only 77% full to DF |
1,357,727,051,000 |
I recently moved from a hardware RAID1 enclosure to using two eSATA drives with md. Everything seems to be working fine, except for the fact that directory traversals/listings sometimes crawl (on the order of 10s of seconds). I am using an ext3 filesystem, with the block size set to 4K.
Here is some relevant output from commands that should be important:
mdadm --detail:
/dev/md127:
Version : 1.2
Creation Time : Sat Nov 16 09:46:52 2013
Raid Level : raid1
Array Size : 976630336 (931.39 GiB 1000.07 GB)
Used Dev Size : 976630336 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Nov 19 01:07:59 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Events : 19691
Number Major Minor RaidDevice State
2 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
fdisk -l /dev/sd{a,b}:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xb410a639
Device Boot Start End Blocks Id System
/dev/sda1 2048 1953525167 976761560 83 Linux
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x261c8b44
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1953525167 976761560 83 Linux
time dumpe2fs /dev/md127 |grep size:
dumpe2fs 1.42.7 (21-Jan-2013)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Block size: 4096
Fragment size: 4096
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal size: 128M
real 2m14.242s
user 0m2.286s
sys 0m0.352s
The way I understand it, I've got 4K sectors on these drives (recent WD reds), but the partitions/filesystems appear to be properly aligned. Since it looks like I'm using md metadata version 1.2, I think I'm also good (based on mdadm raid1 and what chunksize (or blocksize) on 4k drives?). The one thing I haven't found an answer for online is whether or not having an inode size of 256 would cause problems. Not all operations are slow, it seems that the buffer cache does a great job of keeping things zippy (as it should).
My kernel version is 3.11.2
EDIT: new info, 2013-11-19
mdadm --examine /dev/sd{a,b}1 | grep -i offset
Data Offset : 262144 sectors
Super Offset : 8 sectors
Data Offset : 262144 sectors
Super Offset : 8 sectors
Also, I am mounting the filesystem with noatime,nodiratime I'm not really willing to mess with journaling much since if I care enough to have RAID1, it might be self-defeating. I am tempted to turn on directory indexing
EDIT 2013-11-20
Yesterday I tried turning on directory indexing for ext3 and ran e2fsck -D -f to see if that would help. Unfortunately, it hasn't. I am starting to suspect it may be a hardware issue (or is md raid1 over eSATA just really dumb to do?). I'm thinking of taking each of the drives offline and seeing how they perform alone.
EDIT 2013-11-21
iostat -kx 10 |grep -P "(sda|sdb|Device)":
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.37 1.17 0.06 0.11 1.80 5.10 84.44 0.03 165.91 64.66 221.40 100.61 1.64
sdb 13.72 1.17 2.46 0.11 110.89 5.10 90.34 0.08 32.02 6.46 628.90 9.94 2.55
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
I truncated the output past this since it was all 0.00%
I really feel like it should be irrespective of ext4 vs. ext3 because this isn't just feeling a little slower, it can take on the order of tens of seconds to a minute and some change to tab auto-complete or run an ls
EDIT: Likely a hardware issue, will close question when confirmed
The more I think of it, the more I wonder if it's my eSATA card. I'm currently using this one: http://www.amazon.com/StarTech-PEXESAT32-Express-eSATA-Controller/dp/B003GSGMPU
However, I just checked dmesg and it's littered with these messages:
[363802.847117] ata1.00: status: { DRDY }
[363802.847121] ata1: hard resetting link
[363804.979044] ata2: softreset failed (SRST command error)
[363804.979047] ata2: reset failed (errno=-5), retrying in 8 secs
[363804.979059] ata1: softreset failed (SRST command error)
[363804.979064] ata1: reset failed (errno=-5), retrying in 8 secs
[363812.847047] ata1: hard resetting link
[363812.847061] ata2: hard resetting link
[363814.979063] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 10)
[363814.979106] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 10)
....
[364598.751086] ata2.00: status: { DRDY }
[364598.751091] ata2: hard resetting link
[364600.883031] ata2: softreset failed (SRST command error)
[364600.883038] ata2: reset failed (errno=-5), retrying in 8 secs
[364608.751043] ata2: hard resetting link
[364610.883050] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 10)
[364610.884328] ata2.00: configured for UDMA/100
[364610.884336] ata2.00: device reported invalid CHS sector 0
[364610.884342] ata2: EH complete
I am also going to buy shorter shielded eSATA cables as I'm wondering if there is some interference going on.
|
THIS ENDED UP BEING A HARDWARE ISSUE
Switching to the new shielded cables did not help, but replacing the old card with this one: http://www.amazon.com/gp/product/B000NTM9SY did get rid of the error messages and the strange behavior. Will post something new if anything changes.
IMPORTANT NOTE FOR SATA ENCLOSURES:
Even after doing the above, any drive operation would be incredibly slow (just halt for 10-30 seconds) whenever the drive was idle for a while. The enclosure I'm using has an eSATA interface, but is powered by USB. I determined this was because it didn't have enough power to spin up, so I tried a couple of things:
Using an external higher-current USB power source (in case the ports were only doing the 500mA minimum)
Disabling spin-down with hdparm -S 0 /dev/sdX (this alleviated the problem greatly, but did not resolve it completely)
Disabled advanced power management via hdparm -B 255 /dev/sdX (again, did not fully resolve)
Eventually, I discovered that Western Digital has a jumper setting for Reduced Power Spinup - designed especially for this use case!
The drives I am using are: WD Red WD10JFCX 1TB IntelliPower 2.5"
http://support.wdc.com/images/kb/scrp_connect.jpg
Note that I am still operating without all the power management and spin down features (Still -B 255 and -S 0 on hdparm).
Final Verdict
Unfortunately, the RPS did not solve all of my problems, just reduced the magnitude and frequency. I believe the issues were ultimately due to the fact that the enclosure could not provide enough power (even when I use an AC-USB adapter). I eventually bought this enclosure:
http://www.amazon.com/MiniPro-eSATA-6Gbps-External-Enclosure/dp/B003XEZ33Y
and everything has been working flawlessly for the last three weeks.
| md raid1 ext3 and 4k sectors slow with directory operations |
1,357,727,051,000 |
Is the quota approach still in use to limit disk-space usage and/or the competition between users?
Quota works with aquota.user files in the concerned directories AND some settings in /etc/fstab with options like usrquota…
But sometimes, with journaled filesystems, these options change to usrjquota=aquota.user,jqfmt=vfsv1.
Is this summary still correct?
https://wiki.archlinux.org/index.php/Disk_quota
I'm very surprised to see both quota and jquota set of options. Are they backward compatible, deprecated, replaced???
Could another approach use cgroups to limit space access? It seems not: How to set per process disk quota?
Are there other methods nowadays?
|
Is the quota approach still in use?
Yes it is. Since disks have grown in size, quotas might not be of much worth to common users, but they still find their use in multi-user environments, e.g. on servers. Android uses quotas on ext4 and f2fs to clear caches and control per-app disk usage. In-kernel implementations as well as userspace tools are up-to-date.
Quota works with aquota.user files in the concerned directories AND some settings in /etc/fstab with options like usrquota.
Linux disk quota works on per-filesystem basis, so aquota.user (and aquota.group) files are created in the root of concerned filesystem. usrquota (or usrjquota=) mount option has to be passed when mounting filesystem. Or quota filessytem feature has to be enabled when formatting or later using tune2fs.
I'm very surprised to see both quota and jquota set of options
jquota is evolution of quota. From ext4(5): "Journaled quotas have the advantage that even after a crash no quota check is required." jqfmt= specifies quota database file format. See more details in Difference between journaled and plain quota.
Are they backward compatible, deprecated, replaced?
No they are two different sets of mount options, not deprecated or replaced. Mount options are different and not compatible, either one of the two can be used. Journaled quota is only supported by version 2 quota files (vfsv0 and vfsv1), which can also be hidden files (associated to reserved inodes 3 and 4 on ext4) if quota filesystem feature is enabled. Version 1 quota file format (vfsold) works with both. Also upgrading to journaled quota is not very complex, so backward compatibility doesn't matter much.
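A minimal journaled-quota setup might look like this; the device, mount point and user name are assumptions:
# /etc/fstab entry:
/dev/sda2  /home  ext4  defaults,usrjquota=aquota.user,jqfmt=vfsv1  0  2
$ sudo mount -o remount /home
$ sudo quotacheck -cum /home    # create aquota.user and scan current usage
$ sudo quotaon /home
$ sudo edquota -u alice         # edit alice's soft/hard limits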
Could another approach use cgroups to limit space access?
No. Control groups limit resource usage (e.g. processor, RAM, disk I/O, network traffic) on per process basis while files are saved on filesystems with UID/GID information. When a process accesses a file for reading or writing, kernel enforces DAC to allow or deny access by comparing process UID/GID with filesystem UID/GID. So it's quite simple to enforce quota limits at the same time as the filesystem always maintains total space usage on per-UID basis (when quota is enabled).
Are there other methods nowadays?
No, or at least none that are very commonly known.
| What is the most recent technique to implement quotas? |
1,357,727,051,000 |
I created two partitions on a 1.5 TB drive, the first was 1 TB, the latter was the remaining .5 TB. Both were formatted as ext3. I don't mind the automatic filesystem checks occurring every so often, so I never bother configuring the frequency of it. What I found odd was that it decided to make the automatic check occur every 39 mounts for the 1 TB, and 27 mounts for the .5 TB partition. I attempted to look in the man pages as well as various forums, but I couldn't find any mention about how it determines the frequency for file system checks. I assume it is a simple formula, so does anyone know what it is?
|
The good thing about Linux is that the source is always somewhere. You can download or view the base e2fsprogs sources on kernel.org. This can also depend on your specific version and distribution, though...
From the current code it looks like it's some value added to 20, based on the UUID of the partition, if you have enable_periodic_fsck = 1 in your mke2fs.conf
mke2fs.c
if (get_bool_from_profile(fs_types, "enable_periodic_fsck", 0)) {
fs->super->s_checkinterval = EXT2_DFL_CHECKINTERVAL;
fs->super->s_max_mnt_count = EXT2_DFL_MAX_MNT_COUNT;
/*
* Add "jitter" to the superblock's check interval so that we
* don't check all the filesystems at the same time. We use a
* kludgy hack of using the UUID to derive a random jitter value
*/
for (i = 0, val = 0 ; i < sizeof(fs->super->s_uuid); i++)
val += fs->super->s_uuid[i];
fs->super->s_max_mnt_count += val % EXT2_DFL_MAX_MNT_COUNT;
} else
fs->super->s_max_mnt_count = -1;
mke2fs.h
#define EXT2_DFL_MAX_MNT_COUNT 20
Always good to see the words 'kludgy' and 'hack' in code =)
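Out of curiosity, the jitter formula can be reproduced against a real filesystem with a short bash sketch (the device name is a placeholder, and the result is only meaningful for filesystems created with enable_periodic_fsck = 1):
# sum the bytes of the filesystem UUID, then add (sum mod 20) to the base 20
uuid=$(tune2fs -l /dev/sda1 | awk '/^Filesystem UUID:/ {print $3}')
sum=0
for byte in $(echo "$uuid" | tr -d '-' | sed 's/../& /g'); do
    sum=$(( sum + 0x$byte ))
done
echo "expected maximum mount count: $(( 20 + sum % 20 ))"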
| What makes ext3 determine how frequently to perform file system checks when no options are specified? |
1,357,727,051,000 |
I wondered about some missing space on my ext3 partition and, after some googling, found that debian based ubuntu reserves 5% of the size for root.
I also found posts describing how to change that size via tune2fs utility.
Now I've got 2 questions, that I didn't find clear answers for:
Should I unmount the partition before changing the reserved space? What could happen if I don't?
How much space should I reserve for the filesystem, so that it can operate efficiently?
Thank you!
|
You don't need to unmount the partition prior to doing this. Regarding question two, it depends. As HDDs have grown in size, so has the total amount of disk space that's reserved for root. If you have a 2 TB HDD and it's totally used for /, then I would say you could quite safely tune it down to 1% by doing this:
$ sudo tune2fs -m 1 /dev/sdaX
A smaller drive in the region of 320 GB I'd probably leave as is.
Keep in mind that drives that are for data storage purposes don't really need all this space reserved for root. In this case you can change the number of reserved blocks like this:
$ sudo tune2fs -r 20000 /dev/sdbX
Hope that helps.
EDIT: Regarding fragmentation issues, ext file systems are inherently immune to fragmentation issues. To quote Theodore Ts'o:
If you set the reserved block count to zero, it won't affect performance much except if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems. Ext4's multi-block allocator is much more fragmentation resistant, because it tries much harder to find contiguous blocks, so even if you don't enable the other ext4 features, you'll see better results simply mounting an ext3 filesystem using ext4 before the filesystem gets completely full.
| how much space to reserve on ext3 filesystem to prevent fragmentation issues? |
1,357,727,051,000 |
I ran fsck -c on the (unmounted) partition in question a while ago. The process was unattended and the results were not stored anywhere (except in the badblocks inode).
Now I'd like to get the badblock information to know if there are any problems with the hard drive.
Unfortunately, the partition is used in the production system and can't be unmounted.
I see two ways to get what I want:
Run badblocks in read-only mode. This will probably take a lot of time and cause an unnecessary burden on the system.
Somehow extract information about badblocks from the filesystem itself.
How can I view the known badblocks registered in a mounted filesystem?
|
Try
dumpe2fs -b /dev/<WHATEVER>
| How to view bad blocks on mounted ext3 filesystem? |
1,357,727,051,000 |
I have my /home partition formatted as ext3. Occasionally, some program that is part of GNOME gives notifications about there only being 700 MB of space left. Nautilus tells me I have 5.6 GB. Disk Usage Analyzer tells me I have 10 GB.
Which of these is most accurate, or is there another program that is more accurate?
What accounts for these different figures?
|
Try a different program; maybe this will be more accurate:
df -h
| How to check accurately the remaining disk space on a partition? |
1,357,727,051,000 |
On Ubuntu I'm using the following to create an ext3 filesystem image system.img.
dd if=/dev/zero of=./system.img bs=1000000 count=200
mkfs.ext3 ./system.img
I'm attempting to do the same on an Android platform. But the problem is I can't find a mkfs.ext3 binary for armv7 Android. I do have mkfs.ext2, mke2fs, tune2fs and e2fsck. So my question is: is it possible to achieve the same result with the available binaries? If yes, how to do it?
|
Actually yes. Since ext2 and ext3 are fairly similar, with the major difference being that ext3 supports journalling, you should be able to:
tune2fs -j ./system.img
Which enables journalling. The conversion process is detailed here with the usual disclaimers about important information, messing with filesystems etc.
You can actually go back to ext2 too, if needed, by removing the journal. Also note - you'll need to change any auto-mount options to ext3 to actually enable use of the journal.
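For completeness, the reverse direction mentioned above would look like this (a sketch; checking the filesystem afterwards is recommended):
tune2fs -O ^has_journal ./system.img    # drop the journal, back to ext2
e2fsck -f ./system.img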
| Making an ext3 filesystem image without mkfs.ext3 |
1,357,727,051,000 |
Say I want to observe how the flow from file name to data blocks on the hard disk goes.
I get the inode number of a file (which is mapped in the directory data):
1863 autorun.inf
So, now I know that I have to look for the inode numbered 1863, which will contain the pointers to the data on the hard disc.
Where is the inode data located, and how does the OS know where to find it?
|
Inode data are usually scattered around the disk (in order to cut down on seeks). Being able to tell where the inode structures are is the core functionality of a filesystem driver - check LXR for the current implementation of ext3 in Linux, or the e2fsprogs sources, if you are interested in the details.
From a user's perspective you might want to take a look at dumpe2fs which will give you some information about a ext2-based (ext3/ext4) filesystem structure.
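For example, debugfs can tell you exactly where a given inode lives; using the inode number from the question (the device name is a placeholder):
# prints the block group, block number and byte offset of the inode record
debugfs -R 'imap <1863>' /dev/sdXN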
| Location of inodes (ext)? |
1,357,727,051,000 |
E.g.: I need to know when a pen drive was last mounted. Where could I see that? The pen drive has e.g. a FAT32 or ext3 filesystem.
|
ext3 stores the last mount time, which can be retrieved with:
dumpe2fs -h /dev/node
I'm not sure that FAT stores this information.
| Where can I see the last mount time? |
1,386,093,579,000 |
I have a partition which contains MySQL data which is constantly growing. My LVM PV has precious little free space remaining and therefore I find I'm frequently adding additional space to my /var partition using lvextend and resize2fs in smallish increments (250-500 MB at a time) so as not to give too much space to /var and then be unable to allocate those PEs to other partitions should I need to later.
I'm concerned about reaching some limit or causing a problem by calling resize2fs too often to grow this filesystem. Is there a limit to how often resize2fs can be used to grow an Ext3 filesystem? Is it better to do one large Ext3 resize rather than many small ones? Does resizing using resize2fs too often carry a potential for problems or data loss?
|
Beyond the wear and tear on the HDDs I can't see any reason why this would be dangerous. I've never come across an EXT3/EXT4 parameter that limits the number of times you can do this. There isn't any counter I've seen either.
In looking through the output from tune2fs I see nothing that I would find alarming which would lead me to believe that performing many resizes would be harmful to the filesystem or the device, beyond the wear and tear.
Example
$ sudo tune2fs -l /dev/mapper/vg_grinchy-lv_root
tune2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: 74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX
Filesystem magic number: 0x1234
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 3276800
Block count: 13107200
Reserved block count: 655360
Free blocks: 5842058
Free inodes: 2651019
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1020
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Dec 18 19:05:48 2010
Last mount time: Mon Dec 2 09:15:34 2013
Last write time: Thu Nov 21 01:06:03 2013
Mount count: 4
Maximum mount count: -1
Last checked: Thu Nov 21 01:06:03 2013
Check interval: 0 (<none>)
Lifetime writes: 930 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 1973835
Default directory hash: half_md4
Directory Hash Seed: 74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX
Journal backup: inode blocks
dumpe2fs
You can also poke at the EXT3/EXT4 filesystems using dumpe2fs which essentially shows the same info as tune2fs. The output from that command is too much to include here, mainly because it includes information about the groups of inodes within the filesystem. But when I went through the output, again I saw no mention of any counters that were inherent within the EXT3/EXT4 filesystems.
| Is there a problem using resize2fs too often? |
1,386,093,579,000 |
Some preamble: I'm taking bitwise copies of disk devices (via the dd command) from twin hosts (i.e. with the same virtualized hardware layout and software packages, but with different histories of usage). To optimize image size I filled all empty space on the partitions with zeroes (e.g. from /dev/zero). I'm also aware of the reserved blocks per partition and temporarily downgraded that value to 0% before zero-filling.
But I'm curious about the discrepancy between the final compressed (by bzip2) images. All hosts have almost the same tar-gzipped size of files, but the compressed dd images vary significantly (up to 20%). So how could that be? Could the cause lie in filesystem journal data which I was unable to purge? There are over ten partitions on each host and each reports a 128 MB journal size. (I also checked fragmentation, it's all OK: 0 or 1 according to the e4defrag tool's report.)
So, my question is it possible somehow to clean ext3/ext4 filesystem journals? (safely for stored data of course :)
CLARIFICATION
I definitely asked a question about how to clean (purge/refresh) the journals of an ext3/ext4 filesystem, if that is possible; or maybe I'm mistaken and there is no such feature as reclaiming the disk space occupied by filesystem journals, so all solutions are welcome. The motivation for the question is given as a premise in the preamble, and the answer would help me investigate the issue I encountered.
|
You can purge the journal by either un-mounting, or remounting read-only (arguably a good idea when cloning). With ext4 you can also turn off the journal altogether (tune2fs -O ^has_journal), the .journal magic immutable file will be removed automatically. The journal data will still be on the underlying disk of course, so removing the journal and then zero-filling free space might get the best results.
The comments above hit the nail on the head though: dd sees the bits underneath the filesystem; how they came to be in any particular arrangement depends on all the things that have happened to the filesystem, rather than just the final contents of files. Features such as pre-allocation, delayed allocation, multi-block allocation, nanosecond timestamps and of course the journal itself all contribute to this. Also, there is one potentially random allocation strategy: the Orlov allocator can fall back to random allocation (see fs/ext4/ialloc.c).
For completeness the secure deletion feature with random scrubbing would also contribute to differences (assuming you deleted your zero-filled ballast files), though that feature is not (yet) mainline.
On many systems the dump and restore commands can be used for a similar cloning method, for various reasons it never quite caught on in Linux.
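To make the journal-removal route concrete, here is a rough sketch, assuming an ext4 filesystem on the hypothetical /dev/sdX1 mounted at /mnt/clone:
umount /mnt/clone                              # stop all writes first
tune2fs -O ^has_journal /dev/sdX1              # drop the journal
e2fsck -f /dev/sdX1                            # sanity check before imaging
mount /dev/sdX1 /mnt/clone
dd if=/dev/zero of=/mnt/clone/ballast bs=1M    # fill free space with zeroes
rm /mnt/clone/ballast
umount /mnt/clone                              # take the dd image at this point
tune2fs -O has_journal /dev/sdX1               # recreate the journal afterwards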
| How to clean journals in ext3/ext4 filesystem? [closed] |
1,386,093,579,000 |
Short version: ext3 root filesystem on rackspace (xen) VM detects aborted journal on boot and mounts read-only. I've attempted to repair this from a rescue environment with tune2fs and e2fsck as prescribed in many articles I read, but the error continues to happen.
UPDATE: So based on this article I added "barrier=0" to the /etc/fstab entry for this filesystem and it mounted r/w fine at the next boot. I'm led to believe this is a paravirtualization thing, but would love it if anyone who fully understands what is going on here can explain.
Long version:
Rackspace VM just upgraded from Ubuntu 11.10 to 12.04.2
dmesg output with the error:
[ 14.701446] blkfront: barrier: empty write xvda op failed
[ 14.701452] blkfront: xvda: barrier or flush: disabled
[ 14.701460] end_request: I/O error, dev xvda, sector 28175816
[ 14.701473] end_request: I/O error, dev xvda, sector 28175816
[ 14.701487] Aborting journal on device xvda1.
[ 14.704186] EXT3-fs (xvda1): error: ext3_journal_start_sb: Detected aborted journal
[ 14.704199] EXT3-fs (xvda1): error: remounting filesystem read-only
[ 14.940734] init: dmesg main process (763) terminated with status 7
[ 18.425994] init: mongodb main process (769) terminated with status 1
[ 21.940032] eth1: no IPv6 routers present
[ 23.612044] eth0: no IPv6 routers present
[ 27.147759] [UFW BLOCK] IN=eth0 OUT= MAC=40:40:73:00:ea:12:c4:71:fe:f1:e1:3f:08:00 SRC=98.143.36.192 DST=50.56.240.11 LEN=40 TOS=0x00 PREC=0x00 TTL=242 ID=37934 DF PROTO=TCP SPT=30269 DPT=8123 WINDOW=512 RES=0x00 SYN URGP=0
[ 31.025920] [UFW BLOCK] IN=eth0 OUT= MAC=40:40:73:00:ea:12:c4:71:fe:f1:e1:3f:08:00 SRC=116.6.60.9 DST=50.56.240.11 LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=1433 WINDOW=16384 RES=0x00 SYN URGP=0
[ 493.974612] EXT3-fs (xvda1): error: ext3_remount: Abort forced by user
[ 505.887555] EXT3-fs (xvda1): error: ext3_remount: Abort forced by user
In a rescue OS, I've tried:
tune2fs -O ^has_journal /dev/xvdb1 # Device is xvdb1 in rescue, but xvda1 in the real OS
e2fsck -f /dev/xvdb1
tune2fs -j /dev/xvdb1
I've also run e2fsck -p, e2fsck -f, and tune2fs -e continue. Here's the output of tune2fs -l.
tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 68910771-4026-4588-a62a-54eb992f4c6e
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1245184
Block count: 4980480
Reserved block count: 199219
Free blocks: 2550830
Free inodes: 1025001
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 606
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Filesystem created: Thu Oct 20 21:34:53 2011
Last mount time: Mon Apr 8 23:01:13 2013
Last write time: Mon Apr 8 23:08:09 2013
Mount count: 0
Maximum mount count: 29
Last checked: Mon Apr 8 23:04:49 2013
Check interval: 15552000 (6 months)
Next check after: Sat Oct 5 23:04:49 2013
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 1e07317a-6301-41d9-8885-0e3e837f2a38
Journal backup: inode blocks
I also grepped some lines from /var/log/syslog while in the rescue mode with some additional error info:
Apr 8 19:47:06 dev kernel: [26504959.895754] blkfront: barrier: empty write xvda op failed
Apr 8 19:47:06 dev kernel: [26504959.895763] blkfront: xvda: barrier or flush: disabled
Apr 8 20:19:33 dev kernel: [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 ro quiet splash
Apr 8 20:19:33 dev kernel: [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 ro quiet splash
Apr 8 20:19:33 dev kernel: [ 0.240303] blkfront: xvda: barrier: enabled
Apr 8 20:19:33 dev kernel: [ 0.249960] xvda: xvda1
Apr 8 20:19:33 dev kernel: [ 0.250356] xvda: detected capacity change from 0 to 20401094656
Apr 8 20:19:33 dev kernel: [ 5.684101] EXT3-fs (xvda1): mounted filesystem with ordered data mode
Apr 8 20:19:33 dev kernel: [ 140.547468] blkfront: barrier: empty write xvda op failed
Apr 8 20:19:33 dev kernel: [ 140.547477] blkfront: xvda: barrier or flush: disabled
Apr 8 20:19:33 dev kernel: [ 140.709985] EXT3-fs (xvda1): using internal journal
Apr 8 21:18:12 dev kernel: [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 ro quiet splash
Apr 8 21:18:12 dev kernel: [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 ro quiet splash
Apr 8 21:18:12 dev kernel: [ 1.439023] blkfront: xvda: barrier: enabled
Apr 8 21:18:12 dev kernel: [ 1.454307] xvda: xvda1
Apr 8 21:18:12 dev kernel: [ 6.799014] EXT3-fs (xvda1): recovery required on readonly filesystem
Apr 8 21:18:12 dev kernel: [ 6.799020] EXT3-fs (xvda1): write access will be enabled during recovery
Apr 8 21:18:12 dev kernel: [ 6.839498] blkfront: barrier: empty write xvda op failed
Apr 8 21:18:12 dev kernel: [ 6.839505] blkfront: xvda: barrier or flush: disabled
Apr 8 21:18:12 dev kernel: [ 6.854814] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Filesystem error recorded from previous mount: IO failure
Apr 8 21:18:12 dev kernel: [ 6.854820] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Marking fs in need of filesystem check.
Apr 8 21:18:12 dev kernel: [ 6.855247] EXT3-fs (xvda1): recovery complete
Apr 8 21:18:12 dev kernel: [ 6.855902] EXT3-fs (xvda1): mounted filesystem with ordered data mode
Apr 8 21:18:12 dev kernel: [ 143.505890] EXT3-fs (xvda1): using internal journal
|
At this point I'm thinking this is very likely an instance of Debian Bug 637234. As this is a cloud VM, the hypervisor kernel is outside of my control. The workaround is using barrier=0 in /etc/fstab for the root filesystem. The long-term fix is to rebuild the box as a next-gen rackspace cloud instance instead of a first-gen Xen-based instance.
| ext3 root filesystems goes read-only with aborted journal even after repairs |
1,386,093,579,000 |
I have an old /home partition that dates back to former Linux systems, and it is still in ext3 format, whereas the rest of my system, / and some other mount points, are devices formatted in ext4.
I have found some sites on the net that describe how to convert an ext3 partition to ext4.
In this UL.SE question Can I convert an ext3 partition into ext4 without formatting?, there are also warnings recommending a backup of the data before the conversion... just in case...
So I wonder if it is generally a good idea to convert an existing ext3 partition to ext4. I know it's possible, and I know there is a small risk, which calls for a backup. Are there enough benefits that I should do it?
|
Both ext3 and ext4 are journaling filesystems; in addition, this list gives several differences, the most relevant being:
Maximum individual file size can be from 16 GB to 16 TB
Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksums, fast fsck, etc. All you need to know is that these new features have improved the performance and reliability of the filesystem when compared to ext3.
The interesting thing for you might be the faster fsck; the others are probably of less significance in this particular situation (unless your disk gets a growth spurt and magically can contain much larger files).
If you are not going to use that partition intensively I would not recommend converting (at least not without a backup).
| Convert old /home from ext3 to ext4 |
1,386,093,579,000 |
I tried looking at the differences; the main ones seem to be that ext4 supports more subdirectories per directory, supports larger files, and has delayed writes, which I don't prefer as I don't want data loss. I also see that timestamps are more accurate, but it is also mentioned that there is no support in glibc, so no apps would use it. Also, I just need it to be as accurate as NTFS; I don't need anything more accurate.
I'm thinking I should go with ext3 because it's more likely to be stable. What should I look at when choosing between the two?
|
These days ext4 is considered the stable standard, and you should use it. Also, all filesystems use delayed writing; ext4 just delays deciding where the blocks go until they are actually written, which helps reduce fragmentation. It also uses extents to track the blocks, which makes it more efficient.
| How do I choose between ext 3 and 4? |
1,386,093,579,000 |
Suppose you have an ext3 partition which was unfortunately formatted as an ext4 partition (and on which there are now some, but not a lot of, new files). Is there any way to recover (some) files from the old ext3 partition?
|
You can use a tool like PhotoRec to read the blocks and try to recover files. It actually recovers a lot of file types, not just images like the name may suggest.
http://www.cgsecurity.org/wiki/PhotoRec
| Recover formatted ext3 partition |
1,386,093,579,000 |
I have an ext3 filesystem in a .img file. After mounting and unmounting it, I noticed that the md5sum has changed, even though no file inside was changed!
md5sum myfilesystem.img
XXXX myfilesystem.img
mount -t ext3 myfilesystem.img temp/
umount temp/
md5sum myfilesystem.img
YYYY myfilesystem.img
Why does XXXX differ from YYYY? I clearly didn't touch anything inside myfilesystem.img.
|
Because, if you mount the ext3 filesystem in writable mode, there are a few things that get updated, like the last mount date. Check whether this also happens when you mount with -o ro.
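A quick experiment along those lines (assuming the image was unmounted cleanly, so there is no dirty journal to replay):
md5sum myfilesystem.img
mount -o ro,loop myfilesystem.img temp/
umount temp/
md5sum myfilesystem.img    # should now match the first checksum
Note that if the journal is dirty, ext3 may replay it even on a read-only mount; adding noload to the mount options suppresses that as well.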
| md5sum change after mount? |
1,386,093,579,000 |
Related to this question: having a root file system that had to be mounted read-only (say it's completely broken), can I reformat the partition or dd an older backup image on top of it (and then reboot)?
I guess the file system won't like those radical changes while mounted (even if read-only).
Or, as a similar question: does one have other possibilities to fix a bad root fs other than running fsck interactively (i.e. perform permanent changes on the partition data with other tools)?
|
Well, other than running it interactively, you can try fsck -y like my answer in the other question :-P
If you want to dd an image on top of the rootfs, your best bet is going to be to do that from your initramfs before mounting the rootfs.
You can do it with the system booted to that rootfs, but this is one of those things where Unix gives you the rope (with the loop already nicely tied for you). The filesystem will not like it at all ("hey, I expected an inode there, what is this junk?!"). Make sure that it's truly read-only, e.g., no journal replay going on.
If you avoid filesystem access, you'll probably get away with it. This implies that your source image can't be on the rootfs. That'd be a really bad idea.
After running dd, shutdown -r now is not going to work (nor is much else, including ls and cat). Instead, I'd suggest that you either use a watchdog (even softdog) to force a reset, or alternatively use /proc/sysrq-trigger; echo is normally a shell builtin, so you should still be able to run echo.
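For example, assuming the magic SysRq interface is compiled into the kernel:
echo 1 > /proc/sys/kernel/sysrq    # enable SysRq functions if needed
echo b > /proc/sysrq-trigger       # immediate reset: no sync, no unmount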
I'm not sure what you're doing, but it sounds like you may be building some sort of appliance. You ought to consider keeping a read-only rootfs, and using an overlay (union mounts, aufs, etc.) to make your changes, similar to how a livecd works. Or, alternatively, have a backup or recovery-only rootfs (similar to how many Android phones work).
| Can I make low-level changes on a root fs mounted RO? |
1,386,093,579,000 |
Let's say I want to set one or more attributes (in the chattr sense) on every file created in a given directory.
Is there a way to achieve this automatically, like umask does for file permissions?
In other words, is there a way to omit the chattr step in:
$ cp file /path/to/backup/
$ chattr +i /path/to/backup/file
for every file created in /path/to/backup/ ?
Note : My system is Debian and my filesystem is ext3.
|
You can run inoticoming to watch for files placed in the directory and automatically run any command, in this case chattr. (Note: Linux-specific.)
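If inoticoming is not available, a rough equivalent can be assembled from inotifywait (in the inotify-tools package); a sketch using the path from the question:
# make every file immutable as soon as it lands in the directory
inotifywait -m -e create -e moved_to --format '%w%f' /path/to/backup/ |
while IFS= read -r file; do
    chattr +i "$file"
done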
| Automatically set file attributes in a given directory |
1,386,093,579,000 |
For benchmark and testing purposes I need to be able to allocate a file at a specific offset from the start of the partition. When I create a new file normally, its blocks are placed wherever the file system decides, but I want to control that. In other words, I want to manually pick which blocks are assigned to a file.
I've looked at debugfs, but I can't find any way to do what I want. Though I can mark blocks as allocated and modify the inode, this only works for the first 12 blocks. After that I need to be able to create indirect and double indirect blocks as well, which it doesn't look like debugfs has any capability for.
Is there any way to do this? Any tool that could help me? You may assume that the file system is either ext3 or ext4 and that it has been freshly formatted (no other files exist).
Thanks in advance.
|
I have managed to find a way to do this. It uses a Python script which first uses debugfs to find the necessary number of blocks (including indirect blocks) that the file will need. It then manually writes the indirect blocks to the disk, and invokes debugfs again to mark the blocks as used and to update the file's inode.
The only issue is that debugfs apparently doesn't update the free block count of the block group when you use setb. Although I can set that parameter manually, there doesn't appear to be any way to print the current value so I can't calculate the correct value. As far as I can tell it doesn't have any real negative consequences, and fsck.ext3 can be used to correct the values if needed, so for benchmark purposes it'll do.
If there's any other file system consistency issue I've missed, please let me know, but since fsck.ext3 reports nothing besides the incorrect free block count I should be safe.
import sys
import tempfile
import struct
import subprocess
SECTOR_SIZE = 512
BLOCK_SIZE = 4096
DIRECT_BLOCKS = 12
BLOCKS_PER_INDIRECT_BLOCK = BLOCK_SIZE / 4
def write_indirect_block(device, indirect_block, blocks):
print "writing indirect block ", indirect_block
dev = open(device, "wb")
dev.seek(indirect_block * BLOCK_SIZE)
# Write blocks
for block in blocks:
bin_block = struct.pack("<I", int(block))
dev.write(bin_block)
zero = struct.pack("<I", 0)
# Zero out the rest of the block
for x in range(len(blocks), BLOCKS_PER_INDIRECT_BLOCK):
dev.write(zero)
dev.close()
def main(argv):
if len(argv) < 5:
print "Usage: ext3allocfile.py [device] [file] [sizeInMB] [offsetInMB]"
return
device = argv[1] # device containing the ext3 file system, e.g. "/dev/sdb1"
file = argv[2] # file name relative to the root of the device, e.g. "/myfile"
size = int(argv[3]) * 1024 * 1024 # Size in MB
offset = int(argv[4]) * 1024 * 1024 # Offset from the start of the device in MB
if size > 0xFFFFFFFF:
# Supporting this requires two things: triple indirect block support, and proper handling of size_high when changing the inode
print "Unable to allocate files over 4GB."
return
# Because size is specified in MB, it should always be exactly divisable by BLOCK_SIZE.
size_blocks = size / BLOCK_SIZE
# We need 1 indirect block for each 1024 blocks over 12 blocks.
ind_blocks = (size_blocks - DIRECT_BLOCKS) / BLOCKS_PER_INDIRECT_BLOCK
if (size_blocks - DIRECT_BLOCKS) % BLOCKS_PER_INDIRECT_BLOCK != 0:
ind_blocks += 1
# We need a double indirect block if we have more than one indirect block
has_dind_block = ind_blocks > 1
total_blocks = size_blocks + ind_blocks
if has_dind_block:
total_blocks += 1
# Find free blocks we can use at the offset
offset_block = offset / BLOCK_SIZE
print "Finding ", total_blocks, " free blocks from block ", offset_block
process = subprocess.Popen(["debugfs", device, "-R", "ffb %d %d" % (total_blocks, offset_block)], stdout=subprocess.PIPE)
output = process.stdout
# The first three entries after splitting are "Free", "blocks", "found:", so we skip those.
blocks = output.readline().split(" ")[3:]
output.close()
# The last entry may contain a line-break. Removing it this way to be safe.
blocks = filter(lambda x: len(x.strip(" \n")) > 0, blocks)
if len(blocks) != total_blocks:
print "Not enough free blocks found for the file."
return
# The direct blocks in the inode are blocks 0-11
# Write the first indirect block, listing the blocks for file blocks 12-1035 (inclusive)
if ind_blocks > 0:
write_indirect_block(device, int(blocks[DIRECT_BLOCKS]), blocks[DIRECT_BLOCKS + 1 : DIRECT_BLOCKS + 1 + BLOCKS_PER_INDIRECT_BLOCK])
if has_dind_block:
dind_block_index = DIRECT_BLOCKS + 1 + BLOCKS_PER_INDIRECT_BLOCK
dind_block = blocks[dind_block_index]
ind_block_indices = [dind_block_index+1+(i*(BLOCKS_PER_INDIRECT_BLOCK+1)) for i in range(ind_blocks-1)]
# Write the double indirect block, listing the blocks for the remaining indirect block
write_indirect_block(device, int(dind_block), [blocks[i] for i in ind_block_indices])
# Write the remaining indirect blocks, listing the relevant file blocks
for i in ind_block_indices:
write_indirect_block(device, int(blocks[i]), blocks[i+1:i+1+BLOCKS_PER_INDIRECT_BLOCK])
# Time to generate a script for debugfs
script = tempfile.NamedTemporaryFile(mode = "w", delete = False)
# Mark all the blocks as in-use
for block in blocks:
script.write("setb %s\n" % (block,))
# Change direct blocks in the inode
for i in range(DIRECT_BLOCKS):
script.write("sif %s block[%d] %s\n" % (file, i, blocks[i]))
# Change indirect block in the inode
if size_blocks > DIRECT_BLOCKS:
script.write("sif %s block[IND] %s\n" % (file, blocks[DIRECT_BLOCKS]))
# Change double indirect block in the inode
if has_dind_block:
script.write("sif %s block[DIND] %s\n" % (file, dind_block))
# Set total number of blocks in the inode (this value seems to actually be sectors)
script.write("sif %s blocks %d\n" % (file, total_blocks * (BLOCK_SIZE / SECTOR_SIZE)))
# Set file size in the inode
# TODO: Need support of size_high for large files
script.write("sif %s size %d\n" % (file, size))
script.close()
# execute the script
print "Modifying file"
subprocess.call(["debugfs", "-w", device, "-f", script.name])
script.unlink(script.name)
if __name__ == "__main__":
main(sys.argv)
The script can be used as follows to create a 1GB file at offset 200GB (you need to be root):
touch /mount/point/myfile
sync
python ext3allocfile.py /dev/sdb1 /myfile 1024 204800
umount /dev/sdb1
mount /dev/sdb1
The umount/mount combo is necessary to get the system to recognize the change. You can unmount before invoking the script but that makes invoking debugfs slower.
If anyone wants to use this: I don't guarantee it'll work right, I don't take responsibility if you lose any data. In general, don't use it on a file system that contains anything important.
| Allocate file at a specific offset in ext3/4 |
1,386,093,579,000 |
This is actually a CTF game: Enigma 2017 practice at hackcenter.com
We have to recover a deleted file on ext3.
I am following this tutorial.
The inode is 1036.
istat gives Group 0
fsstat undelete.img
Group: 0:
Inode Range: 1 - 1280
...
Inode Table: 24 - 183
...
From here, the inode table has a size of 160 blocks, and each block holds 8 inodes.
Inode 1036 is in block 153 and is the 4th entry.
This is confirmed by
debugfs -R 'imap <1036>' undelete.img
debugfs 1.43.4 (31-Jan-2017)
Inode 1036 is part of block group 0
located at block 153, offset 0x0180
jls undelete.img | grep 153$
46: Unallocated FS Block 2153
206: Unallocated FS Block 153
214: Unallocated FS Block 153
224: Unallocated FS Block 153
680: Unallocated FS Block 4153
jcat undelete.img 8 206 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000010: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
128 bytes copied, 0,00719467 s, 17,8 kB/s
jcat undelete.img 8 214 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: a481 0000 2000 0000 4d70 8b58 4d70 8b58 .... ...Mp.XMp.X
00000010: 4d70 8b58 0000 0000 0000 0100 0200 0000 Mp.X............
00000020: 0000 0000 0100 0000 ef08 0000 0000 0000 ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000060: 0000 0000 17ea 60e7 0000 0000 0000 0000 ......`.........
00000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
128 bytes copied, 0,00714798 s, 17,9 kB/s
jcat undelete.img 8 224 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: a481 0000 0000 0000 4d70 8b58 4d70 8b58 ........Mp.XMp.X
00000010: 4d70 8b58 4d70 8b58 0000 0000 0000 0000 Mp.XMp.X........
00000020: 0000 0000 0100 0000 0000 0000 0000 0000 ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000060: 0000 0000 17ea 60e7 0000 0000 0000 0000 ......`.........
00000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
128 bytes copied, 0,00556548 s, 23,0 kB/s
The only direct block pointer I got is 0x8ef at offset 40. The block size was reported by fsstat. But
dd bs=1024 skip=2287 count=1 if=undelete.img | xxd
gives only zeros.
I do not know what is wrong.
|
You conveniently forgot to mention the URL of the filesystem image, but after registering on hackcenter.com it wasn't that hard to find. (I'm not going to repeat the URL here).
Instead of blindly following a recipe, let's look at the image and figure out what happens. fls shows that there are lots of files named filler-0, filler-1 etc. up to filler-1023, then there's a file key, and that has been deleted.
Looking for commits
jls undelete.img | grep Commit
...
228: Unallocated Commit Block (seq: 9, sec: 1485533263.2387673088)
...
finds that 9 is the last commit. Let's look at what happens before that commit (I've annotated the block numbers)
205: Unallocated FS Block 3112
206: Unallocated FS Block 153 # our inode
207: Unallocated FS Block 3113 # data
208: Unallocated FS Block 3114 # data
209: Unallocated FS Block 3115 # data
210: Unallocated Commit Block (seq: 7, sec: 1485533262.1970733056)
211: Unallocated Descriptor Block (seq: 8)
212: Unallocated FS Block 23 # inode bitmap
213: Unallocated FS Block 2 # group desc
214: Unallocated FS Block 153 # our inode blk
215: Unallocated FS Block 24 # first inode blk
216: Unallocated FS Block 5118
217: Unallocated FS Block 22 # data bitmap
218: Unallocated FS Block 3116 # data
219: Unallocated Commit Block (seq: 8, sec: 1485533262.2227109888)
220: Unallocated Descriptor Block (seq: 9)
221: Unallocated FS Block 5118
222: Unallocated FS Block 24 # first inode blk
223: Unallocated FS Block 1 # super blk
224: Unallocated FS Block 153 # our inode blk
225: Unallocated FS Block 22 # data bitmap
226: Unallocated FS Block 2 # group desc
227: Unallocated FS Block 23 # inode bitmap
228: Unallocated Commit Block (seq: 9, sec: 1485533263.2387673088)
229: Unallocated FS Block Unknown
So in commit #7, our inode block and three data blocks were written. In commit #8, some allocation and inode touching is going on, and a single data block is written. In commit #9, it's nearly the same, but no data block is written.
So the guess is that in commit #7, we see the last of our filler files being created, in commit #8, key is created and written, and in commit #9, it's deleted again.
Now let's look at the copies of inode block 153 in the journal. 224 (inode after deletion) and 206 (inode before creation) have an empty direct block pointer list. I don't know what happened when you looked at 214, but I do get:
$ jcat undelete.img 8 214 | dd bs=128 skip=3 count=1 | xxd
00000000: a481 0000 2000 0000 4e70 8b58 4e70 8b58 .... ...Np.XNp.X
00000010: 4e70 8b58 0000 0000 0000 0100 0200 0000 Np.X............
00000020: 0000 0000 0100 0000 2c0c 0000 0000 0000 ........,.......
00000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000060: 0000 0000 8682 a674 0000 0000 0000 0000 .......t........
00000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
So in the direct block list at 0x28, we have one block at 0x0c2c or 3116, as guessed before.
Let's verify that we are not off by looking at some contents:
$ fcat filler-1022 undelete.img
f1755813fae6d0f542f962f50ff37184
$ dd if=undelete.img bs=1024 skip=3114 count=1 2> /dev/null ; echo
f1755813fae6d0f542f962f50ff37184
$ fcat filler-1023 undelete.img
aa08cba3462555833ffed443474bd133
$ dd if=undelete.img bs=1024 skip=3115 count=1 2> /dev/null ; echo
aa08cba3462555833ffed443474bd133
Yes, that's the data in filler written, as guessed. So what's in block 3116? Turns out to be only zeroes, which means that block never was updated. But we do have copies in the journal. In case of our two filler files:
$ jcat undelete.img 208
f1755813fae6d0f542f962f50ff37184
$ jcat undelete.img 209
aa08cba3462555833ffed443474bd133
And now finding the key should be easy (I won't do it publicly, for obvious reasons).
| Recovering a file on ext3 |
1,386,093,579,000 |
Why does it show 0 in the available column?
[root@server log]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 4128448 4096484 0 100% /
It's an ext3 filesystem.
|
df reports the percentage of used blocks relative to the blocks not reserved for root use (by default I think it's 5% of the drive on ext3). It can be changed using the -m option of tune2fs, e.g. to set it to 2%:
tune2fs -m 2 /dev/sdXY
The reserved blocks allow system daemons to keep going even when the disk is full, while non-root processes will not be able to write to it. It also helps reduce drive fragmentation.
| Why does df show Available 0 when 1K-blocks minus Used is greater than 0? [duplicate] |
1,386,093,579,000 |
I just saw an answer to a question about filesystems for embedded hardware on another Stack Exchange site. The question was "What file system format should I use on flash memory?" and the answer suggested the ext2 filesystem, or the ext3 filesystem with journaling disabled à la tune2fs -O ^has_journal /dev/sdbX
This made me wonder... What would the advantage be to using ext3 (with journaling disabled) over ext2? As far as I understood, the only real difference between the two was the journal. What other differences between ext2 and ext3 are there?
|
The journal is the difference. You cannot have an ext3 filesystem without a journal. If you disable the journal, it becomes an ext2 filesystem again.
ext4 has a number of beneficial features and can run without a journal, making it a much better choice.
| Besides the journal, what are the differences between ext2 and ext3? |
1,386,093,579,000 |
I have a large, frequently read, ext3 file system mounted read-only on a system that is generally always hard power cycled about 2-3 times per day.
Because the device is usually powered off by cutting the power, fsck runs on boot on that file system, but for this application fast boot times are important (to the second).
I can disable boot time checks on the file system in fstab, but my question is, is it safe to do this? Given that the file system is mounted read-only but is never unmounted properly, is there any risk of accumulating file system corruption over a long period of time if I disable the boot time check?
|
From the mount manpage,
-r, --read-only
Mount the filesystem read-only. A synonym is -o ro.
Note that, depending on the filesystem type, state and kernel
behavior, the system may still write to the device. For example,
Ext3 or ext4 will replay its journal if the filesystem is dirty.
To prevent this kind of write access, you may want to mount ext3
or ext4 filesystem with "ro,noload" mount options or set the
block device to read-only mode, see command blockdev(8).
If ro,noload should prove to be insufficient, I know of no way to set up a read only device with just an fstab entry; you may need to call blockdev --setro or create a read-only loop device (losetup --read-only) by some other means before your filesystem is mounted.
If you make it truly read-only, it won't even know it was mounted. Thus no mount count updates and no forced fsck and especially no corruption possible, as long as nothing ever writes to the device...
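For instance, a hypothetical fstab entry combining these ideas (the final 0 in the last field also disables the boot-time fsck pass the question asks about):
# read-only, no journal replay, never fsck'ed at boot
/dev/sda3  /data  ext3  ro,noload  0  0
Belt and braces, the underlying device itself can be made read-only before mounting:
blockdev --setro /dev/sda3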
| Safe to disable boot fsck on read-only ext3 file system? |
1,386,093,579,000 |
I'm trying to install Debian Squeeze 6.0.5 on a new HP Proliant Microserver N40L with 4 GB RAM and 1.5 GHZ and 2 new Seagate BARRACUDA 2TB HDD (delivered yesterday).
The installation got stuck at the point of formatting the 2 TB HDD, at 33%. I gave it 16 hours, then I aborted the formatting and started the SEATOOLS HDD utility to check for errors. The HDDs passed the check and I freshly erased both of them with zeros using the SEATOOLS utility.
Now I restarted the installation and it got stuck again at 33%. How long do I need to wait for this formatting?
Partitions of Software-Raid 1:
/boot 500MB ext2
/ 1995.9 GB ext3
swap 4 GB
With Alt+F4 I can't see any errors in the terminal, because my USB devices keep appearing the whole time. Switching back to Alt+F1, it keeps staying at 33%.
Is there any way to speed up this formatting? The HDDs are new, so there is no need for secure deletion, etc.
Update:
I have now found out that the resync is incredibly slow: http://up.picr.de/11594927qr.jpg writing at 700 kB/s. How could this happen? Estimated time: 30 days!
I already set echo 50000 > /proc/sys/dev/raid/speed_limit_min, without any change...
Thank you very much!
|
Thank you very much for your answers.
The solution was to prepare the installation for RAID 1, but to include just one HDD in the array.
(Active devices: 2, reserved devices: 0, but just sda and not sda + sdb)
This solved the problem with resync and the installer worked normally.
After the installation of debian, I simply added the second hdd to my raid:
mdadm --add /dev/md0 /dev/sdb1 (repeat this for every partition)
Result: the resync speed increased from 750 kB/s (in the installer) to 70,000 kB/s on the running system.
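For reference, the progress and current speed of a resync can be followed with:
watch -n 5 cat /proc/mdstat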
| Debian stucks at formatting 33% |
1,386,093,579,000 |
I had a hard drive with two partitions: one was 460 GB NTFS and the other was 5 GB ext3 Ubuntu 10.10.
I wanted to extend the Ubuntu partition, so I was going to shrink the NTFS partition by 15GB, but I accidentally right-clicked the NTFS partition and chose "Make Partition Active".
It actually made the whole ext3 partition become "Unallocated". It seems I can't boot from it anymore.
My question is: how can I undo it? Because it took like a millisecond to complete, I'm almost sure the data is still there.
Thanks.
|
The program calling the Linux partition "unallocated" sounds like the Windows Disk Management tool. Microsoft could make it recognize non-Microsoft partition types, but they haven't. It may be that your Ubuntu partition is still there and unharmed.
If that is the case, you may just have to mark the Ubuntu /boot partition active. The Windows tool will probably refuse to mark any non-Microsoft partition active, so you'll have to use another tool. I recommend booting your system with the Ubuntu install disk and telling it to use rescue mode. I haven't used the Ubuntu rescue mode recently; it may have a menu option for fixing this sort of thing automatically. If not, you will have to get to a command prompt, then say something like this:
# fdisk /dev/sda
Command (m for help): p
...partition list; /boot will be the smallest one you see in all likelihood
Command (m for help): a
Partition number (1-8): 1
That sets /dev/sda1 to be active. That's the most likely one to be /boot, but isn't necessarily it. You can try rebooting now.
If that didn't work, try repairing your GRUB boot loader.
If that also fails, go back into rescue mode, get into fdisk and look at the partition table again. If you find a 5 GB partition and it isn't marked as NTFS, Linux, or Linux swap, you may have found the "unallocated" partition. Say it's /dev/sda3. Then in fdisk:
Command (m for help): t
Partition number (1-8): 3
Hex code (type L to list codes): 83
Command (m for help): w
That sets /dev/sda3 to partition type 83, which says it contains one of several Linux-compatible filesystems: ext[234], XFS, ReiserFS...
Again, try booting.
If that's still not doing it, there are other steps you can take, but we've run out of easy ones. It sounds like this was just a hobby install, so it's probably not worth going to heroic measures to fix it.
In older versions of Ubuntu, you could have chosen to switch to Wubi to reduce the chances of a conflict with Windows. Unfortunately, UEFI conflicts with Wubi and it looks too difficult to work around the problems, so it was removed from Ubuntu, starting in 13.04.
| How can I restore my linux? |
1,386,093,579,000 |
I'm curious, what is the smallest size a file can really be on Linux? (Assuming Ext3 fs, so why not ext4 fs as well).
Sure you can write a file that only contains one byte, or maybe even less; but surely that'll allocate a minimum, reasonable amount of space for convenience.
So what is the minimum allocation / block size that can be allocated on ext3, and or ext4?
|
The smallest possible allocation size for a file's data is 0 (none at all): on ext4 with the inline_data feature, files smaller than about 60 bytes can be stored completely inside the inode itself. (ext3 has no such feature, so there a non-empty regular file occupies at least one block.)
Of course, every file, whether it's a regular file, symlink, directory (which can contain data), or a character device, block device or named pipe (none of which possess the concept of "contents"), still occupies an inode. You can read about the size of the inode itself.
| Smallest file block size (ext 3, 4) |
1,386,093,579,000 |
I just read this article about the virtually non-existent disk fragmentation on *nix filesystems.
It was mentioned that due to the way ext handles writing data to the disks, fragmentation may only begin manifesting on hard drives that are at least 80%, where the free space between the files starts to run out.
On how to deal with this fragmentation, the final paragraph reads:
If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition. The file system will intelligently allocate the files as you copy them back onto the disk.
That sounds illogical to me, because as far as I understand, when copying all the files back to the erased drive, a similar process should take place, where files are written with gradually decreasing portions of free space between them, to the point where fragmentation will manifest again.
Am I right on this one?
|
What you have read is true. File systems become fragmented over time - as you write more of your epic screenplay, or add to your music collection, or upload more photos, etc, so free space runs low and the system has to split files up to fit on the disk. In the process described in the excerpt you posted, the final stage, copying the files back onto the recently cleaned disk, is done sequentially - so files are written to the file system, one after another, allowing the system to allocate disk space in a manner that avoids the conditions that led to fragmentation in the first place.
On some UNIX file systems, fragmentation is actually a good thing - it helps to save space, by allocating data from two files to a single disk block, rather than using up two blocks that would each be less than half filled with the data.
UNIX file systems don't start to suffer from fragmentation until nearly full, when the system no longer has sufficient free space to use as it attempts to shuffle files around to keep them occupying contiguous blocks. Similarly, the Windows defragmenter needs around 15% of the disk to be unused to be able to effectively perform its duty.
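If you want to check whether a particular file is actually fragmented, filefrag from e2fsprogs reports the extent count (a single extent means the file is contiguous):
filefrag -v /path/to/some/large/file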
| How to fix a fragmented ext disk - myth or truth? |
1,386,093,579,000 |
Leaving out many details, I need to create a read/write file system on a device with the following main goals:
Eliminate all writes while data is not being explicitly written.
Reduce all indirect writes when data is written.
Run fsck on boot after unclean unmount.
Currently I am using ext3, mounted with noatime. I am not familiar with the details of ext3. In particular, is data written to an ext3 system during "idle" time when no programs are explicitly writing data (specifically, I'm thinking of kjournald and the commit= mount option)?
If I switch to ext2, will that meet all the above requirements? In particular, do I have to set anything up to force an fsck after a sudden power cut?
My options are fat32, ext, ext2, and ext3, plus all of the settings available via mount. Performance is not critical, neither is robustness wrt bad sectors developing over time.
|
You don't need to switch to ext2, you can tune ext3.
You can change fsck requirements of a filesystem using tune2fs. A quick look tells me the correct command is tune2fs -c <mount-count>, but see the man page for the details.
You can change how data will be written to the ext3 filesystem during mounting. You want either data=journal or data=ordered. You can further optimize journal commits via other options. Please see this page.
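As a sketch, a hypothetical fstab entry tying these options together (the device, mount point and ten-minute commit interval are illustrative, not prescriptive):
# ordered journal mode, no atime updates, commit only every 600 seconds
/dev/sda2  /data  ext3  noatime,data=ordered,commit=600  0  2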
Last but not least, on big drives fsck can take a long time while using ext3. Why don't you consider ext4 as an option?
Please comment on this answer if I left anything in the dark.
| Minimizing "idle" writes on a file system |
1,386,093,579,000 |
I have created a virtual ext3 partition on an armv7 machine with:
dd if=/dev/zero of=./system.img bs=1000000 count=200
mkfs.ext2 ./system.img
tune2fs -j ./system.img
Now I need to get info about this fs, like total space, free space and used space. How can I do this without mounting the fs? Is it possible?
|
tune2fs will display filesystem information with the -l option.
> /sbin/tune2fs -l ./tmpfile
tune2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: da61d942-4e9f-4c29-9f20-ab809fb90fbf
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: resize_inode dir_index filetype sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 128
Block count: 1024
Reserved block count: 51
Free blocks: 986 # free space
Free inodes: 117
First block: 1
Block size: 1024
Fragment size: 1024
Reserved GDT blocks: 3
Blocks per group: 8192
Fragments per group: 8192
Inodes per group: 128
Inode blocks per group: 16
...
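The free space in bytes can then be derived from two of those fields, e.g. with a small shell sketch against the image from the question:
bs=$(tune2fs -l ./system.img | awk '/^Block size:/ {print $3}')
free=$(tune2fs -l ./system.img | awk '/^Free blocks:/ {print $3}')
echo "$(( bs * free )) bytes free"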
| Getting info about a Virtual file system |
1,386,093,579,000 |
I am just trying to salvage files off the disk I pulled from a dying Maxtor Shared Storage enclosure (failed to come back up after powering it off, presumably because the OS image on the disk got corrupted, no files on shares were in use at the time). The firmware of the MSS is Linux-based.
I took out the disk, placed into a SATA USB enclosure and plugged it into my laptop, which runs Ubuntu MATE 16.04.
I've been able to mount the partition that holds all user data and can see the files on it. It appears to be an ext3/ext4 filesystem – Linux recognizes it as such, and I can browse the directory tree.
However, there are issues with certain files/directories. Example below:
$ ls -la Photos/
ls: cannot access 'Photos/2012-06 Königssee': No such file or directory
ls: cannot access 'Photos/2003-08 Fußballspiel': No such file or directory
ls: cannot access 'Photos/2013-06 München': No such file or directory
total 8
drwxrwxrwx 6 michael michael 12288 Nov 19 21:05 .
drwxrwxrwx 3 michael michael 4096 Nov 19 21:05 ..
d????????? ? ? ? ? ? 2003-08 Fußballspiel
d????????? ? ? ? ? ? 2012-06 Königssee
d????????? ? ? ? ? ? 2013-06 München
This seems to affect only files with characters beyond the 7-bit limit. Some files with such characters work, however – I should mention that at some point I restored some files from backup (using the appliance's backup/recovery feature), while others were created from a client machine via Samba.
It is reproducible in that it's the same files leading to this error on every attempt. Other operations, such as chown -R . on the whole dir, also give the same error for the same files. When I try to move the parent dir to a different filesystem, I get the same error and the parent dirs of such problematic objects don't get removed because they are not empty. The MSS had been able to read these files/dirs with no problem.
What's happening here, and how can I regain access to these files?
|
After trying everything else (mounting the drive on a different machine, restoring old backups), I eventually decided to risk it and fsck the partition.
fsck -Dfp complained about errors and requested to be run again without the -p option.
fsck -Df then found a couple of errors:
Pass 2 (directory structure) found a few errors like:
Problem in HTREE directory inode 4997425: block #1 has bad max hash
Problem in HTREE directory inode 4997425: block #2 has bad min hash
Invalid HTREE directory inode 4997425 (/misc/Downloads). Clear HTree index<y>? yes
Pass 3 discovered some non-unique filenames and suggested to create a copy. IIRC these were files which I'd recreated because they were not visible through Samba.
I allowed fsck to fix all these errors, then mounted the partition again.
lost+found contains nothing. The offending objects are present in their original locations. I'm now happily copying the remainder of my files off the partition.
| No such file or directory for files with accented characters |
1,412,342,467,000 |
I recently partitioned a new drive in Windows as NTFS. I want to make it ext3 so that I can transfer a WUBI Ubuntu installation onto it. I don't care about the data on this partition.
Is there a simple way to do this in either ubuntu or windows 7?
|
You can't convert, but can reformat the partition. Boot into Ubuntu or from a live CD and format the partition from there. Be careful not to format the wrong partition.
mkfs.ext3 /dev/hdx1
| Converting NTFS to Ext3 |
1,412,342,467,000 |
In Windows, when a file/folder is created, it is associated with an FRN (a unique number which is like an index into the MFT). All the file metadata is stored in the MFT, which occupies a reserved size of 12.5% of the disk.
How and where is the metadata stored on Unix filesystems like ext2, ext3 etc.?
An inode is a unique value for every single file/folder on Unix, but where does this information get stored?
In other words:
What is the size occupied by an inode?
Where is the metadata for a file/folder on Unix stored? Is there something similar to the MFT on Unix?
|
What Windows (or more precisely NTFS) calls MFT is what typical Unix filesystems call the inode table, and what Windows calls FRN is the inode number. It contains the metadata for a file (permissions, timestamps, etc.), but not the file name (that's part of the directory entries). It also contains the address of the first few blocks of the file, or the blocks containing the addresses of the blocks of the file.
Run tune2fs -l /dev/sdz99 (replace sdz99 by the proper path to the block device you're interested in) to get some information about an ext2/ext3/ext4 filesystem, including the “Inode count” (number of inodes) and the “Inode size” (in bytes). For these filesystems, the number of inodes is chosen when the filesystem is created; it doesn't grow dynamically with the number of files. You can run df -i to see how many inodes are in use on a mounted filesystem.
There are filesystems that have different data structures. Although the concept of inode is universal on Unix, because the filesystem APIs associate a unique inode number to every file, implementations can differ. For example Btrfs doesn't reserve space for inodes, they're allocated as needed.
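For example, reusing the placeholder device name from above:
tune2fs -l /dev/sdz99 | grep -i inode    # Inode count, Free inodes, Inode size
df -i                                    # inode usage for each mounted filesystem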
| How much space does an inode occupy? |
1,412,342,467,000 |
What's the command to format my external 2.5 TB USB HDD to ext3? Using mkfs.ext3 /dev/sdc1 works, but only gives me 300 GB of allocated space; where am I failing?
|
The existing disklabel is almost certainly MBR (msdos), which cannot address sectors beyond 2 TiB; 2.5 TB minus roughly 2.2 TB would explain the ~300 GB you ended up with. Recreate the label as GPT instead. Run parted:
parted /dev/sdc
and do the following in it:
mklabel gpt
mkpart primary ext3 4MiB -1MiB
quit
Only then try to format it:
mkfs.ext3 /dev/sdc1
As a side note: fsck on a 2.5TB partition will take a long long time, use ext4 if you can, jfs or xfs otherwise.
| Formatting hdd to ext3 fails? |
1,412,342,467,000 |
We have servers which have been running for a long time. When they reboot, we see this message:
kernel: EXT4-fs (sda3): warning: maximal mount count reached, running e2fsck is recommended
My question is: what if you never ever run e2fsck? Man page does not shed enough light. The warning message says "is recommended" - but does not say it is mandatory.
What are consequences of not running it?
What does it mean to have maximal count reached?
|
An ext* filesystem keeps a couple of values in its metadata: how many times the filesystem can be mounted before it should be checked, and how much time between checks is allowed.
These values can be checked with the dumpe2fs command;
eg
% sudo dumpe2fs -h /dev/vdb | egrep -i 'check|mount count'
dumpe2fs 1.42.9 (28-Dec-2013)
Mount count: 15
Maximum mount count: 25
Last checked: Sun Jan 2 22:03:00 2022
Check interval: 15552000 (6 months)
Next check after: Fri Jul 1 23:03:00 2022
This says the filesystem has been mounted 15 times and needs to be checked after 25 mounts; a check should be run every 6 months; the last check was Jan 2022, so the next check should be Jul 2022.
These values can be changed with the tune2fs command (the -i and -c options).
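For example (a sketch; substitute your own device):
% sudo tune2fs -c 25 -i 6m /dev/vdb     # check after 25 mounts or 6 months, whichever comes first
% sudo tune2fs -c -1 -i 0 /dev/vdb      # disable both the mount-count and time-based checks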
And they can be turned off. eg
% sudo dumpe2fs -h /dev/vda3 | egrep -i 'check|mount count'
dumpe2fs 1.42.9 (28-Dec-2013)
Mount count: 138
Maximum mount count: -1
Last checked: Sun Jul 12 17:23:17 2015
Check interval: 0 (<none>)
This basically says "the disks never should be checked".
So now the question; should we run it regularly?
Essentially the rationale for regular-ish checking is to try and discover filesystem inconsistencies and try and fix them. On a modern system that doesn't shut down abnormally (eg crash, power failure) there's little risk, so it may not need to be done.
Indeed, on large filesystems or ones with a large number of files this could take a long time! Potentially hours!
Contrariwise, on small filesystems with the correct entries in /etc/fstab it can happen automatically on reboot and only slows the reboot down a small amount.
So you might want to let small filesystems be checked via fstab but not allow large ones or ones with lots of files.
Red Hat, for example, recommends "In general Red Hat does not suggest disabling the fsck except in situations where the machine does not boot, the file system is extremely large or the file system is on remote storage." (https://access.redhat.com/solutions/281123)
| What happens if you never ever run e2fsck? |
1,412,342,467,000 |
I use partimage to backup my ext4 partition, but during backup, the partition was detected as an ext3 partition. So I'm wondering if this can cause something bad.
|
http://www.partimage.org/Main_Page
Limitations - Partimage does not support ext4 or btrfs filesystems.
It is unwise to use it for ext4 as long as that message is on their website.
| is it safe to backup ext4 partition with partimage , which is detected as a ext3 partition |
1,412,342,467,000 |
anyfs-tools promises to convert an NTFS partition into ext3. That's what I want to do. (I have backed up what was necessary.)
But when I try to compile anyfs-tools I get a make error complaining about the ext2fs library. So I suppose I do not have the version that lets this executable compile.
What should I do when such a case happens?
Should I search for a compatible version of e2fslibs and install it so that make can succeed? Could that break my current distro?
Or should I try to modify the code of anyfs-tools (which has not been updated since 2010-06-19)?
This is the output of make:
gcc -O3 -Wall -Winline --param inline-unit-growth=1000 --param large-function-growth=10000 -std=gnu99 -I../../include -I/usr/include/ext2fs -I/usr/include/et -g -O2 -o anysurrect anysurrect.o -rdynamic -L../../lib -lany -ldl -lext2fs -L. -lanysurrect
./libanysurrect.so: undefined reference to `ext2fs_unmark_block_bitmap_range2'
./libanysurrect.so: undefined reference to `ext2fs_inode_data_blocks2'
./libanysurrect.so: undefined reference to `ext2fs_mark_block_bitmap_range'
./libanysurrect.so: undefined reference to `ext2fs_unmark_generic_bitmap'
./libanysurrect.so: undefined reference to `ext2fs_group_last_block2'
./libanysurrect.so: undefined reference to `ext2fs_get_generic_bitmap_end'
./libanysurrect.so: undefined reference to `ext2fs_test_block_bitmap_range2'
./libanysurrect.so: undefined reference to `ext2fs_group_of_blk2'
./libanysurrect.so: undefined reference to `ext2fs_get_generic_bmap_end'
./libanysurrect.so: undefined reference to `com_err'
./libanysurrect.so: undefined reference to `ext2fs_test_generic_bitmap'
./libanysurrect.so: undefined reference to `ext2fs_mark_generic_bitmap'
./libanysurrect.so: undefined reference to `ext2fs_group_first_block2'
./libanysurrect.so: undefined reference to `ext2fs_unmark_block_bitmap_range'
./libanysurrect.so: undefined reference to `ext2fs_get_generic_bmap_start'
./libanysurrect.so: undefined reference to `ext2fs_unmark_generic_bmap'
./libanysurrect.so: undefined reference to `ext2fs_test_generic_bmap'
./libanysurrect.so: undefined reference to `ext2fs_mark_block_bitmap_range2'
./libanysurrect.so: undefined reference to `ext2fs_test_block_bitmap_range'
./libanysurrect.so: undefined reference to `ext2fs_mark_generic_bmap'
./libanysurrect.so: undefined reference to `ext2fs_get_generic_bitmap_start'
collect2: ld returned 1 exit status
make[2]: *** [anysurrect] Error 1
make[2]: Leaving directory `/usr/local/src/anyfs-tools-0.85.1c/src/anysurrect'
make[1]: *** [anysurrect_util] Error 2
make[1]: Leaving directory `/usr/local/src/anyfs-tools-0.85.1c/src'
make: *** [progs] Error 2
|
Looking here - https://launchpad.net/~develop7/+archive/ppa/+build/1545234 - it looks like anyfs-tools failed to build for them as well. The manual (http://anyfs-tools.sourceforge.net/) is a recommended read, especially this snippet: "anyfs-tools allows a user to convert filesystems. There is only one requirement for the existing source filesystem: there must be FIBMAP system call ioctl(2) support in the filesystem driver (maybe read-only) for Linux OS. Currently anyfs-tools supports filesystem conversion to ext2fs/ext3fs or xfs, [...]" NTFS and ext* are way too incompatible to even hope to convert.
| Cannot make anyfs-tools. My e2fslibs package seems not to be the compatible version |
1,412,342,467,000 |
My Debian vmware image has run out of space. I've expanded the disk image but now need to increase my root partition to see the additional space. My volume is setup as follows
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x37ce2932
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 48236543 48234496 23G 83 Linux
/dev/sda2 48238590 52426751 4188162 2G 5 Extended
/dev/sda5 48238592 52426751 4188160 2G 82 Linux swap / Solaris
I understand that in order to expand sda1, any new space has to be directly after it. All the examples I've read either a) use LVM or b) don't have an extended sda2 partition directly after sda1. Can anyone point me to a reference that shows how to expand sda1 in this scenario? I know I will have to switch off/remove swap on sda5, but what do I do about sda2?
|
UPDATE - I found this answer, and the others, to be quite helpful. You may want to compare those too.
You need to do like this:
swapoff, thus "freeing" the swap partition
fdisk, and delete both the logical swap partition (/dev/sda5) and the extended partition (/dev/sda2).
You are now left with just /dev/sda1.
You can now enlarge /dev/sda1 using fdisk again, up to the maximum "physical" size offered by VMware less the new swap size. Either use a partition-resizing tool, or delete /dev/sda1 and recreate it with the same starting point (and the same type and boot flag). If you can't do so, do not save changes and exit fdisk immediately, then find a tool such as growpart, or a different fdisk (e.g. cfdisk), which can.
After exiting fdisk, run partprobe /dev/sda (or kpartx -u /dev/sda) to inform the kernel of the partition table change. I'm sure I must have forgotten this more often than not, and nothing bad ever happened to me, but that might just have been luck on my part.
Once the partition has been enlarged, you can add a new physical partition /dev/sda2. Leave it type 82h; there's no need to create an extended partition and then another swap partition inside. Keep the swap on /dev/sda2.
Then run mkswap on /dev/sda2 and verify/recreate its UUID, because you want it to be correct in /etc/fstab if the entry is UUID-based (if it's referred to as /dev/sda5, just correct it to /dev/sda2).
Finally you can run resize2fs to make the FS grow to fill the new /dev/sda1.
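Putting it all together, a hedged sketch of the sequence (device names as in the question; double-check the start sector of /dev/sda1 before writing anything):
swapoff -a                # free the swap partition
fdisk /dev/sda            # delete sda5, sda2, sda1; recreate sda1 (same start, bigger) and sda2 (type 82)
partprobe /dev/sda        # tell the kernel about the new partition table
mkswap /dev/sda2          # reinitialize swap, then fix /etc/fstab
resize2fs /dev/sda1       # grow the filesystem to fill the partition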
| How to resize root ext3 file system without LVM |
1,412,342,467,000 |
Is it possible to convert the / and /boot file systems from ext3 to btrfs? I have no previous experience with converting, but I have read that an ext3 filesystem needs to be unmounted for it.
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_system-lv_root ext3 4.8G 3.6G 1.1G 78% /
/dev/sda1 ext3 266M 92M 161M 37% /boot
SUSE Linux Enterprise Server 12 (x86_64)
VERSION = 12
PATCHLEVEL = 4
4.12.14-95.16-default
|
Yes, it's possible to convert an ext3 filesystem to BTRFS. Use btrfs-convert.
Yes, the filesystem needs to be unmounted; btrfs-convert uses the filesystem's free space to perform the conversion, so you can't have the free space being modified (by ext3) during this process.
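A minimal sketch, run from a rescue or live system so the filesystem is unmounted (replace /dev/sdXN with the actual device):
fsck.ext3 -f /dev/sdXN      # running fsck first is recommended; convert will refuse an unclean filesystem
btrfs-convert /dev/sdXN     # reversible with btrfs-convert -r, until you delete the ext2_saved image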
WARNING about LVM
I see you're using LVM to contain the filesystem. It would be best to put the BTRFS filesystem directly on a partition rather than on an LVM logical volume, due to a potentially catastrophic gotcha: an LVM snapshot of a BTRFS volume creates a second block device carrying the same BTRFS filesystem UUID, which can confuse the kernel into corrupting the filesystem. In short, if you promise to never, ever, ever take an LVM snapshot of the BTRFS filesystem, you should be OK.
| Convert / and /boot from ext3 to btrfs |
1,412,342,467,000 |
I have a volume group, which has a size of approximately 30 TByte. It has an EXT3 File System on it:
SERVER:/home/usfman # dumpe2fs -h /dev/mapper/datavg-foolv
dumpe2fs 1.41.9 (22-Aug-2009)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: censored
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 289669120
Block count: 1158676480
Reserved block count: 57920704
Free blocks: 216859296
Free inodes: 289592213
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 747
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Filesystem created: censored
Last mount time: censored
Last write time: censored
Mount count: 5
Maximum mount count: 33
Last checked: censored
Check interval: 15552000 (6 months)
Next check after: censored
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: censored
Journal backup: inode blocks
Journal size: 128M
SERVER:/home/usfman #
Question: Can we increase this FS from 4.3TB to ~13 TByte?
Or will there be some limitations on the maximum size? The EXT3 Wikipedia page suggests the maximum size could be anywhere from 4 to 32 TByte. Can someone clarify?
https://en.wikipedia.org/wiki/Ext3
UPDATE:
-->> Block size: 4096
So 16 TByte is the maximum FS size with 4K blocks. I just need someone to confirm what the wiki says.
SLES 11, 64 bit kernel
|
Yes. With a 4 KiB block size, ext3 uses 32-bit block numbers, so the maximum filesystem size is 2^32 * 4 KiB = 16 TiB, which confirms the upper end of the range the wiki gives for 4K blocks. Growing from 4.3 TB to ~13 TB stays within that limit, provided your kernel is not too old (SLES 11 SP4 is fine). Red Hat gives an overview of the limits at https://access.redhat.com/solutions/1532 . You may also want to think about upgrading ext3 to ext4: https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/ext4converting.html
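The resize itself is then straightforward; a hedged sketch, assuming the volume group has enough free extents (check with vgs first):
lvextend -L 13T /dev/mapper/datavg-foolv
resize2fs /dev/mapper/datavg-foolv
resize2fs can grow ext3 online since your filesystem has the resize_inode feature (visible in the dumpe2fs output); otherwise unmount first.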
| Are there any size restrictions when increasing an EXT3 File System? |
1,412,342,467,000 |
I was creating a new file system on my external HDD. While formatting, I gave this partition all the remaining available space, which is somewhere around 850 GB. I then created an ext3 file system in this partition. This is the output of my mkfs.ext3 command.
mkfs.ext3 /dev/sdb3
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
52060160 inodes, 208234530 blocks
10411726 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
6355 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Can someone help me interpret this information, as I am not clear on what these values actually represent?
|
First, let us use the bytes notation to understand the concepts. Now, the actual size of the external HDD was 850GB which translates to 912680550400 bytes.
Block size and fragment size
The block size specifies the size that the file system will use to read and write data. Here the default block size of 4096 bytes is used. The ext3 file system doesn't support block fragmentation, so a one-byte file will still use a whole 4096-byte block. This can be modified by specifying -f in the mkfs command, but that is not recommended since file systems today have enough capacity.
Total blocks possible = 912680550400/4096 = 222822400 blocks
So in our command output we actually got 208234530 blocks, which is reasonably close to our calculation; there will always be some blocks that cannot be used.
Total inodes in this example = 208234530/4 = 52058632.5 inodes
As per derobert's comment, the total inode count is the number that mkfs actually creates; inodes on ext2/3/4 are created at mkfs time. We can change how many it creates with several options (-i, -N), and different -T options do so implicitly.
The blocks/4 figure is only a heuristic; the actual total as per our command output is 52060160 inodes.
Maximum file system size possible = 4294967296 * 4096 (block size)
So theoretically the file system size can be up to 16 TiB, although in practice kernel and tool limits can keep it lower.
The size of a block group is specified in sb.s_blocks_per_group blocks, though it can also be calculated as 8 * block_size_in_bytes. So the total number of block groups is,
total block groups = 208234530/32768 = 6354.81
So it is close to 6355 groups as per our command output.
Total inodes per group = 32768/4 = 8192 inodes
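These figures are easy to reproduce with shell arithmetic:
echo $(( 912680550400 / 4096 ))    # total blocks possible -> 222822400
echo $(( 208234530 / 32768 ))      # block groups (integer part) -> 6354
echo $(( 32768 / 4 ))              # inodes per group -> 8192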
References
http://www.redhat.com/archives/ext3-users/2004-December/msg00001.html
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout
https://serverfault.com/a/117598
What is a fragment size in an ext3 filesystem?
| debug mkfs.ext3 command output |
1,412,342,467,000 |
I understand that any files being written to during power loss can get corrupted, but is it possible for an entire ext3 filesystem to become corrupted during a power loss event? If so, how?
Thanks!
|
TL;DR: it is not likely with the default mount options but it still may happen. If you tune the mount options and set unsafe flags, yes it is possible.
ext3 is a journaled filesystem meaning that it is less likely to be corrupted by a hard power-off than ext2 for instance which is not using journaling.
That being said, it is not impossible for an ext3 partition to be corrupted. In particular, data still sitting in the cache when the power-off happens will be lost. Since ext3 does no checksumming of its journal, this can still lead to significant problems; see Wikipedia for more information (references 32, 33 and 34).
Also, ext3 mode can be changed at mount time, some options being more dangerous than others, see ext3 documentation. If journaling is disabled, of course, the file system will be vulnerable to corruption on power-off.
One last note: corruption of an "entire filesystem" is very unlikely for any filesystem. If you exclude particular and pathological cases (power-off during a filesystem check, etc.), no filesystem ever performs manipulations on the whole filesystem at once. Therefore, the usual corruption issues affect some inodes of your partition, not all the data.
Related:
What mount option to use for ext3 file system to minimise data loss or corruption?
| Can an entire ext3 filesystem be corrupted if the system loses power? |
1,412,342,467,000 |
I know that it isn't possible to change the inode count of an ext filesystem after its creation, but I haven't been able to find any explanation on why it isn't.
Can anyone enlighten me?
|
Why? Because no one has written a tool that does it. And that's probably because it's a not entirely trivial change to the filesystem metadata.
There are other issues like this; for example you can't resize ext4 to >16TB. That needs 64bit structures which aren't used by default.
Same with other filesystems, for example you can't shrink XFS.
None of these things are impossible, but it seems that no tools exist to do it either, at least not directly. Someone would have to develop them... and that usually requires in depth knowledge of the specific filesystem.
| Why is it impossible to change the inode count of an ext filesystem? |
1,412,342,467,000 |
I am installing crunchbang linux (#!) on my eeePC and it is unable to start the disk partitioner. I traced the problem to partman and partman-lvm, which report
No volume groups found.
So I have done some snooping, and I can get around that part of the installer (that just hangs) if I can mount my future root partition to /target and then go from there.
However, I'm having a lot of trouble with the mount command.
I want to mount /dev/sda1 to /target. /dev/sda1 is ext3.
When I try
mount -t ext3 /dev/sda1 /target
it states:
mount -t ext3 /dev/sda1 /target/ failed: Invalid argument.
To get a place (/target) I simply did mkdir /target. Perhaps this is not the proper way to do this?
Thanks =)
|
You're doing it the right way. It may be that the device /dev/sda1 doesn't exist yet. You also probably don't need to specify -t ext3 since that should be default. I don't expect having it would cause any problem though.
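As a general debugging tip, the kernel log usually explains a failed mount; right after the error, run:
dmesg | tail    # look for an EXT3-fs line explaining the Invalid argument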
| mount root fs to /target |
1,412,342,467,000 |
uname -a gives:
Linux devuan 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1 (2018-04-29) x86_64 GNU/Linux
All filesystems on all disks in this box are ext3 (~15T worth over six disks)
ps -A gives:
...
14684 ? 00:00:00 jbd2/sdc1-8
14685 ? 00:00:00 ext4-rsv-conver
14688 ? 00:00:00 jbd2/sdc2-8
14689 ? 00:00:00 ext4-rsv-conver
14692 ? 00:00:00 jbd2/sdc3-8
14693 ? 00:00:00 ext4-rsv-conver
14696 ? 00:00:00 jbd2/sdd1-8
14697 ? 00:00:00 ext4-rsv-conver
14700 ? 00:00:00 jbd2/sdd2-8
14701 ? 00:00:00 ext4-rsv-conver
14704 ? 00:00:00 jbd2/sdd3-8
14705 ? 00:00:00 ext4-rsv-conver
14708 ? 00:00:00 jbd2/sdd4-8
14709 ? 00:00:00 ext4-rsv-conver
14712 ? 00:00:00 jbd2/sdf1-8
14713 ? 00:00:00 ext4-rsv-conver
...
Googling doesn't find an explanation for why "ext4-rsv-conver" exists, especially since all my filesystems are ext3.
Why does this exist here, is it really needed & can I get rid of it?
|
Since version 4.3 of the kernel, Ext3 file systems are handled by the Ext4 driver. That driver uses workqueues named ext4-rsv-conversion (truncated to ext4-rsv-conver in the ps output), one per file system; there is no way to get rid of them.
| Can I get rid of "ext4-rsv-conversion" process? |
1,412,342,467,000 |
I am in need of a tool that would run on an Ubuntu system that would be able to report the following:
Bad physical locations on a disk (cylinders, sectors)
Files that are affected by these bad locations.
Filesystem I currently have is NTFS but it would be good to have for ext2/3/4 as well.
|
Won't work nowadays. Modern disks "hide" bad blocks (even the most carefully manufactured new disks have them; they are unavoidable with current data densities) by remapping them to spares. You'll "see" bad blocks only when the disk runs out of spares, and in my experience that means that 99% of the time the disk has hours (at best) left before joining the big RAID in the sky.
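What you can do is ask the disk itself how many sectors it has already remapped, via SMART (smartmontools package); for example:
smartctl -A /dev/sda | grep -i -e realloc -e pending
A growing Reallocated_Sector_Ct or a non-zero Current_Pending_Sector is the usual early warning that the spares are being used up.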
| Tool to create a bad physical location report on disk |
1,412,342,467,000 |
Possible duplicate of: Recover formatted ext3 partition
I have a folder of about 5 GB that suddenly disappeared. When I checked the hard disk, I found it has about 2-3 MB of bad sectors in the area of this folder; maybe they hit the directory's metadata.
The partition is ext3, and the operating system is Debian.
I tried the fsck command, but it hasn't worked.
What should I do? How can I recover the data? Any program or command?
|
Maybe testdisk will handle this.
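A hedged sketch of how that might look (the device name is a placeholder; with a disk that has bad sectors, it is safer to image it first with GNU ddrescue and run the recovery against the image):
ddrescue /dev/sdX disk.img rescue.log    # copy whatever is still readable
testdisk disk.img                        # or: testdisk /dev/sdX
# in the interactive menus, select the disk and partition table type,
# then use the Advanced option on the ext3 partition to list and copy files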
| Recover ext3 files from hard disk with bad sector [duplicate] |
1,412,342,467,000 |
This USB drive has two partitions, one ext3 and the other NTFS. Now I want to convert the ext3 partition to ext2; is that possible?
The partition holds around 200 GB of data and I have no spare disk or space to store it temporarily.
|
Simply remove journaling:
# tune2fs -O ^has_journal /dev/sdbX
# fsck.ext2 /dev/sdbX
Then you can simply remove the .journal file, if one is visible at the filesystem root (often the journal lives in a hidden inode instead, in which case there is nothing to delete).
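To confirm the journal is really gone, a quick check:
# tune2fs -l /dev/sdbX | grep -i features
has_journal should no longer be listed, and the partition can then be mounted as plain ext2.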
| How to convert USB drive from ext3 to ext2 without losing data? |
1,412,342,467,000 |
Earlier I encountered this error.
lv_root: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY
Which may be caused by the constant power failure here in our office.
I fixed it by inserting a CentOS 6.4 disc and running e2fsck from there.
I followed this blog post to fix it.
It worked but after rebooting I encountered another error
modprobe fatal could not open lib/modules/.../kernel/fs/ext3/ext3.ko
no such file or directory
I tried this blog post but when I run the insmod mbcache, it says that the file exists.
I checked the blkid and the fstab.
-- blkid
/dev/sda1: UUID="22cda703-e846-4f35-894e-144aed40ebf2" TYPE="ext4"
/dev/sda2: UUID="W9xhJS-mFKO-Nxfr-DbkI-zPJt-M1Km-kMKe4B" TYPE="LVM2_member"
/dev/sdb1: UUID="71d748c9-e894-4b5d-9c9d-2a93ec6a9161" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/VolGroup-lv_root: UUID="d988536f-62c8-4a42-8142-9ae6a3292bdc" TYPE="ext4"
/dev/mapper/VolGroup-lv_swap: UUID="925b8d63-cd64-42f1-9c06-1f9a4cff4b05" TYPE="swap"
-- fstab
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
UUID=22cda703-e846-4f35-894e-144aed40ebf2 /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sdb1 /mnt/ext ext3 defaults 0 0
The LV is supposed to be mounted automatically in the /mnt/ext directory
Here is the result of my lsmod
Module Size Used by
vboxsf 37129 0
nf_conntrack_ftp 10475 0
ipt_REJECT 1867 2
nf_conntrack_ipv4 7694 14
nf_defrag_ipv4 1039 1 nf_conntrack_ipv4
iptable_filter 2173 1
ip_tables 9567 1 iptable_filter
ip6t_REJECT 3987 2
nf_conntrack_ipv6 6940 2
nf_defrag_ipv6 8839 1 nf_conntrack_ipv6
xt_state 1064 16
nf_conntrack 65661 4 nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter 2245 1
ip6_tables 10301 1 ip6table_filter
ipv6 261676 25 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
jbd 65369 0
ppdev 7297 0
parport_pc 19086 0
parport 29925 2 ppdev,parport_pc
i2c_piix4 11156 0
vboxguest 209345 2 vboxsf
pcnet32 29202 0
mii 4476 1 pcnet32
vboxvideo 1352 0
drm 227439 1 vboxvideo
i2c_core 25632 2 i2c_piix4,drm
sg 24038 0
ext4 335766 2
jbd2 76054 1 ext4
mbcache 6017 1 ext4
sd_mod 34952 3
crc_t10dif 1217 1 sd_mod
sr_mod 13282 0
cdrom 33416 1 sr_mod
ahci 35561 2
pata_acpi 2513 0
ata_generic 2805 0
ata_piix 20861 0
dm_mirror 11969 0
dm_region_hash 9644 1 dm_mirror
dm_log 8322 2 dm_mirror,dm_region_hash
dm_mod 70099 8 dm_mirror,dm_log
Here are the list of kernels installed
-bash-4.1$ rpm -qa kernel
kernel-2.6.32-358.23.2.el6.i686
kernel-2.6.32-431.20.3.el6.i686
kernel-2.6.32-358.18.1.el6.i686
kernel-2.6.32-358.11.1.el6.i686
kernel-2.6.32-431.17.1.el6.i686
I tried accessing the volume via the rescue disk and it worked. Other kernels seem to have ext3.ko, but not the one being loaded, kernel-2.6.32-431.20.3.el6.i686.
|
To verify everything in the kernel package that might be missing or damaged, run
# rpm -V kernel-2.6.32-431.20.3.el6.i686
missing /lib/modules/2.6.32-431.20.3.el6.i686/kernel/fs/ext3/ext3.ko
The missing file may be in /lost+found. Run modinfo /lost+found/* and look for a file with fields
vermagic: 2.6.32-431.20.3.el6.i686 SMP mod_unload modversions 686
description: Second Extended Filesystem with journaling extensions
If it's not there, reinstall the kernel package
# yum reinstall kernel-2.6.32-431.20.3.el6.i686
I would boot from a different, known-good kernel before running that.
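Once the package is reinstalled, a quick way to confirm the module is usable again:
# depmod -a            # rebuild the module dependency lists
# modprobe ext3        # should now load without errors
# lsmod | grep ext3    # verify it is loaded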
| Centos missing ext3.ko |
1,303,114,106,000 |
According to the Filesystem Hierarchy Standard, /opt is for "the installation of add-on application software packages". /usr/local is "for use by the system administrator when installing software locally". These use cases seem pretty similar. Software not included with distributions usually is configured by default to install in either /usr/local or /opt with no particular rhyme or reason as to which they chose.
Is there some difference I'm missing, or do both do the same thing, but exist for historical reasons?
|
While both are designed to contain files not belonging to the operating system, /opt and /usr/local are not intended to contain the same set of files.
/usr/local is a place to install files built by the administrator, typically by using the make command (e.g., ./configure; make; make install). The idea is to avoid clashes with files that are part of the operating system, which would either be overwritten or overwrite the local ones otherwise (e.g., /usr/bin/foo is part of the OS while /usr/local/bin/foo is a local alternative).
All files under /usr are shareable between OS instances, although this is rarely done with Linux. This is a part where the FHS is slightly self-contradictory, as /usr is defined to be read-only, but /usr/local/bin needs to be read-write for local installation of software to succeed. The SVR4 file system standard, which was the FHS' main source of inspiration, recommends avoiding /usr/local and using /opt/local instead to overcome this issue.
/usr/local is a legacy from the original BSD. At that time, the source code of the /usr/bin OS commands was in /usr/src/bin and /usr/src/usr.bin, while the source of locally developed commands was in /usr/local/src and their binaries in /usr/local/bin. There was no notion of packaging (outside of tarballs).
On the other hand, /opt is a directory for installing unbundled packages (i.e. packages not part of the Operating System distribution, but provided by an independent source), each one in its own subdirectory. They are already built whole packages provided by an independent third party software distributor. Unlike /usr/local stuff, these packages follow the directory conventions (or at least they should). For example, someapp would be installed in /opt/someapp, with one of its command being /opt/someapp/bin/foo, its configuration file would be in /etc/opt/someapp/foo.conf, and its log files in /var/opt/someapp/logs/foo.access.
| What is the difference between /opt and /usr/local? |
1,303,114,106,000 |
On most FHS systems, there is a /tmp folder as well as a /var/tmp folder. What is the functional difference between the two?
|
/tmp is meant as fast (possibly small) storage with a short lifetime. Many systems clean /tmp very fast - on some systems it is even mounted as RAM-disk. /var/tmp is normally located on a physical disk, is larger and can hold temporary files for a longer time. Some systems also clean /var/tmp, but less often.
Also note that /var/tmp might not be available in the early boot-process, as /var and/or /var/tmp may be mountpoints. Thus it is a little bit comparable to the difference between /bin and /usr/bin. The first is available during early boot - the latter after the system has mounted everything. So most boot-scripts will use /tmp and not /var/tmp for temporary files.
Another (upcoming) location on Linux for temporary files is /dev/shm.
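You can check how these directories are backed on your own system, for example:
df -h /tmp /var/tmp    # shows which filesystem (e.g. tmpfs) is behind each
findmnt /tmp           # prints a line only if /tmp is a separate mount point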
| What is the difference between /tmp and /var/tmp? |
1,303,114,106,000 |
I have an executable for the perforce version control client (p4). I can't place it in /opt/local because I don't have root privileges. Is there a standard location where it needs to be placed under $HOME?
Does the File System Hierarchy have a convention that says that local executables/binaries need to be placed in $HOME/bin?
I couldn't find such a convention mentioned on the Wikipedia article for the FHS.
Also, if there indeed is a convention, would I have to explicitly include the path to the $HOME/bin directory or whatever the location of the bin directory is?
|
In general, if a non-system installed and maintained binary needs to be accessible system-wide to multiple users, it should be placed by an administrator into /usr/local/bin. There is a complete hierarchy under /usr/local that is generally used for locally compiled and installed software packages.
If you are the only user of a binary, installing into $HOME/bin or $HOME/.local/bin is the appropriate location since you can install it yourself and you will be the only consumer. If you compile a software package from source, it's also appropriate to create a partial or full local hierarchy in your $HOME or $HOME/.local directory. Using $HOME, the full local hierarchy would look like this.
$HOME/bin Local binaries
$HOME/etc Host-specific system configuration for local binaries
$HOME/games Local game binaries
$HOME/include Local C header files
$HOME/lib Local libraries
$HOME/lib64 Local 64-bit libraries
$HOME/man Local online manuals
$HOME/sbin Local system binaries
$HOME/share Local architecture-independent hierarchy
$HOME/src Local source code
When running configure, you should define your local hierarchy for installation by specifying $HOME as the prefix for the installation defaults.
./configure --prefix=$HOME
Now when make && make install are run, the compiled binaries, packages, man pages, and libraries will be installed into your $HOME local hierarchy. If you have not manually created a $HOME local hierarchy, make install will create the directories needed by the software package.
Once installed in $HOME/bin, you can either add $HOME/bin to your $PATH or call the binary by its absolute path. Some distributions include $HOME/bin in your $PATH by default. You can test this by running echo $PATH and checking whether $HOME/bin is there, or by putting the binary in $HOME/bin and executing which binaryname. If that returns $HOME/bin/binaryname, then it is in your $PATH by default.
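If it is not in your $PATH by default, one line in ~/.profile (or ~/.bashrc) takes care of it; a sketch for Bourne-style shells:
export PATH="$HOME/bin:$PATH"
Log out and back in, or source the file, for the change to take effect.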
| Where should a local user executable be placed (under $HOME)? |
1,303,114,106,000 |
I know many directories with .d in their name:
init.d
yum.repos.d
conf.d
Does it mean directory? If yes, from what does this disambiguate?
UPDATE: I've had many interesting answers about what the .d means, but the title of my question was not well chosen. I changed "mean" to "stand for".
|
The .d suffix here means directory. Of course, this would be unnecessary as Unix doesn't require a suffix to denote a file type but in that specific case, something was necessary to disambiguate the commands (/etc/init, /etc/rc0, /etc/rc1 and so on) and the directories they use (/etc/init.d, /etc/rc0.d, /etc/rc1.d, ...)
This convention was introduced at least with Unix System V but possibly earlier. The init command used to be located in /etc but is generally now in /sbin on modern System V OSes.
Note that this convention has been adopted by many applications moving from a single file configuration file to multiple configuration files located in a single directory, eg: /etc/sudoers.d
Here again, the goal is to avoid name clashing, not between the executable and the configuration file but between the former monolithic configuration file and the directory containing them.
| What does the .d stand for in directory names? |
1,303,114,106,000 |
I need to compile some software on my Fedora machine. Where's the best place to put it so not to interfere with the packaged software?
|
Rule of thumb, at least on Debian-flavoured systems:
/usr/local for stuff which is "system-wide"—i.e. /usr/local tends to be in a distro's default $PATH, and follows a standard UNIX directory hierarchy with /usr/local/bin, /usr/local/lib, etc.
/opt for stuff you don't trust to make system-wide, with per-app prefixes—i.e. /opt/firefox-3.6.8, /opt/mono-2.6.7, and so on. Stuff in here requires more careful management, but is also less likely to break your system—and is easier to remove since you just delete the folder and it's gone.
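For a typical autotools-based package the difference is just the prefix; myapp and its version number are hypothetical here:
./configure --prefix=/usr/local         # system-wide, already on the default $PATH
./configure --prefix=/opt/myapp-1.0     # self-contained, removable with a single rm -r
make && sudo make install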
| Where should I put software I compile myself? |
1,303,114,106,000 |
Or: where can I put files belonging to a group?
Suppose there are two users on a Unix system: joe and sarah. They are both members of the movies-enthusiast group. Where should I put their movie files?
/home/{joe,sarah}/movies are not appropriate because those directories belong to joe / sarah, not to their group;
/home/movies-enthusiast is not appropriate too, because movies-enthusiast is a group, not a user;
/var/movies-enthusiast might be an option, but I'm not sure this is allowed by the FHS;
/srv/movies-enthusiast might be an option too, however movies are not files required by system services.
|
Don't use
/usr is for sharable read-only data. Data here should only change for administrative reasons (e.g. the installation of new packages.)
/opt is generally for programs that are self-contained or need to be isolated from the rest of the system for some reason (low and medium interaction honeypot programs, for example).
/var is for "files whose content is expected to continually change during normal operation of the system---such as logs, spool files, and temporary e-mail files." I like to think of it like this: if your data wouldn't look right summarized in a list, it generally doesn't belong in /var (though, there are exceptions to this.)
Use
/home is for user home directories. Some see this directory as being an area for group files as well. The FHS actually notes that, "on large systems (especially when the /home directories are shared amongst many hosts using NFS) it is useful to subdivide user home directories. Subdivision may be accomplished by using subdirectories such as /home/staff, /home/guests, /home/students, etc."
/srv is an acceptable and often-preferred location for group files. I generally use this directory for group-shared files for the reason mentioned in Chris Down's answer; I see group file sharing as being a service that the server provides.
See the hier(7) man page (man hier) for more information of the purpose of each directory described by the FHS.
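Whichever directory you choose, the usual trick is to make it setgid so new files inherit the group; a sketch using the group from the question:
mkdir -p /srv/movies-enthusiast
chgrp movies-enthusiast /srv/movies-enthusiast
chmod 2775 /srv/movies-enthusiast    # the leading 2 is the setgid bit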
| What's the most appropriate directory where to place files shared between users? |