
List:       gentoo-desktop
Subject:    [gentoo-desktop] Re: System problems - some progress
From:       Duncan <1i5t5.duncan () cox ! net>
Date:       2011-04-01 3:22:07
Message-ID: pan.2011.04.01.03.22.06 () cox ! net

Lindsay Haisley posted on Sat, 26 Mar 2011 10:57:33 -0500 as excerpted:

> Yep, I know where you're coming from there.  Iptables isn't all that
> hard to understand, and I've become pretty conversant with it in the
> process of using it for my own and others' systems.  I'd always rather
> with the "under the hood" CLI tools than with some GUI tool that does
> little more than obfuscate the real issue.  That way lies Windows!

Indeed, the MSWindows way is the GUI way.  But I wasn't even thinking 
about that.  I was thinking about the so-called "easier" firewalling CLI/
text-editing tools that have you initially answer a number of questions to 
set up the basics, then have you edit files to do any "advanced" tweaking 
the questions didn't have the foresight to cover.

But my (first) problem was that while I could answer the questions easily 
enough, I lacked sufficient understanding of the real implementation to 
properly do the advanced editing.  And if I were to properly dig into 
that, I might as well have mastered the IPTables/Netfilter stuff on which 
it was ultimately based in the first place.

The other problem, when building your own kernel, was that the so-called 
simpler tools apparently expect all the necessary netfilter/iptables kernel 
options to be available as pre-built modules (or built-in) -- IOW, they're 
designed for the binary distributions where that's the case.  Neither the 
questions nor the underlying config file comments mentioned their kernel 
module dependencies.  One either had to pre-build them all and hope they 
either got auto-loaded as needed, or delve into the scripts to figure out 
the dependencies and build/load the required modules.
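
For the record, the short list for a basic stateful IPv4 filter on a 
reasonably current kernel is something like the below -- option names 
have moved around between kernel versions, so treat this as a sketch to 
check against your own .config, not gospel:

  CONFIG_NETFILTER=y
  CONFIG_NF_CONNTRACK=m               (connection tracking)
  CONFIG_NF_CONNTRACK_IPV4=m
  CONFIG_IP_NF_IPTABLES=m             (the iptables core)
  CONFIG_IP_NF_FILTER=m               (the filter table)
  CONFIG_NETFILTER_XT_MATCH_STATE=m   (the -m state match)

Built-in or module as you prefer, but if modules, they have to be 
loaded before the rules referencing them will take.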

Now keep in mind that I first tried this on Mandrake, where I was building 
my own kernel within 90 days of first undertaking the switch, while I was 
still booting to MS to do mail and news in MSOE, because I hadn't yet had 
time to look at user level apps well enough to make my choices and set 
them up.  So it's certainly NOT just a Gentoo thing.  It's a build-your-
own-kernel thing, regardless of the distro.

The problem ultimately boiled down to having to understand IPTables itself 
well enough to know what kernel options to enable, either built-in or as 
modules which would then need to be loaded.  But if I were to do that, why 
would I need the so-called "easier" tool, which only complicated things?  
Honestly, the tools made me feel like I was trying to remote-operate some 
NASA probe from half-way-across-the-solar-system, latency and all, instead 
of using the direct-drive, since what I was operating on was actually 
right there next to me!
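
To illustrate the point, the direct-drive itself really is only a 
handful of commands for the basic deny-inbound case.  A sketch, adjust 
to taste:

  iptables -P INPUT DROP
  iptables -P FORWARD DROP
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  # ...plus one ACCEPT per deliberately opened port, for example ssh:
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT

Five or six lines, versus a question-and-answer session PLUS config 
files PLUS undocumented kernel module dependencies...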

At that time I simply punted.  I had (or could have and did have, by 
(wise) choice on MS) a NAPT based router between me and the net anyway, 
and already knew how to configure /it/.  So I just kept it and ran the 
computer itself without a firewall for a number of years.  Several years 
later, after switching to Gentoo, when I was quite comfortable on Linux in 
general, I /did/ actually learn netfilter/iptables, configure my computer 
firewall accordingly, and direct-connect for a year or two -- until my 
local config changed and I actually had the need for a NAPT device as I 
had multiple local devices to connect to the net.

Which brings up a nice point about Gentoo.  With Mandrake (and most other 
distributions of the era, from what I read), there were enough ports open 
by default that having a firewall of /some/ sort, either on-lan NAPT 
device or well configured on-computer IPChains/IPTables based, was wise.  
IOW, keeping that NAPT device was a good choice, even if it /was/ an MS-
based view of things, because the Linux distros of the time still ran with 
various open ports (whether they still do or not I don't know, I suspect 
most do, tho they probably do it with an IPTables firewall up now too).

Gentoo's policy by contrast has always (well, since before early 2004, 
when I switched to it) been:

1) Just because it's installed does NOT mean it should have its initscript 
activated so it runs automatically in the default runlevel -- Gentoo ships 
by default with the initscripts for net-active services in /etc/init.d, 
but does NOT automatically add them to the default runlevel.

2) Even when a net-active service IS activated, Gentoo's default 
configuration normally has it active on the loopback localhost address 
only.

3) Gentoo ships X itself with TCP listening disabled (the -nolisten tcp 
flag), only the local Unix domain socket active.
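
All three are easy enough to verify on a running box, too.  Nothing 
Gentoo-specific below except rc-update, tho the exact netstat switches 
may vary with your net-tools version:

  rc-update show             # which initscripts are in which runlevel
  netstat -tlnup             # open TCP/UDP ports, and on which address
  ps ax | grep -- -nolisten  # X's command line should show -nolisten tcp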

As such, by the time I actually got around to learning IPTables/netfilter 
and setting it up on my Gentoo box, it really wasn't as necessary as it 
would be on other distributions, anyway, because firewall or no firewall, 
the only open ports were ports I had deliberately opened myself and thus 
already knew about.

But of course defense in depth is a VERY important security principle, 
correlating as it does with the parallel "never trust yourself not to fat-
finger SOMETHING!"  (Now, if only the so-called security services HBGary, 
et al., practiced it!  I think that's what galled the world most: not 
that they screwed up a couple of things so badly, but that they so 
blatantly violated basic defense-in-depth.  Had the proper layers of 
defense been there, the screw-ups would have amounted to nothing and 
we'd never have read about them in the first place... and for a 
SECURITY firm, no less, to so utterly and completely miss it!)  So 
regardless of the fact that in theory I didn't actually need the firewall 
by then since the only open ports were the ones I intended to be open, I 
wasn't going to run direct-connected without /some/ sort of firewall, and 
I learned and activated IPTables/netfilter before I did direct-connect.  
And now that I have NAPT again, I still keep the computer's own firewall 
running, as that's simply another layer of that defense in depth.  The 
NAPT router I use for multiplexing several devices onto a single IP, not 
for its originally accidental side-effect of inbound firewalling -- tho 
again, I keep that too as yet another layer of defense in depth; I just 
don't /count/ on it.

>> Bottom line, yeah I believe ext4 is safe, but ext3 or ext4, unless you
>> really do /not/ care about your data integrity or are going to the
>> extreme and already have data=journal, DEFINITELY specify data=ordered,
>> both in your mount options, and by setting the defaults via tune2fs.
> 
> So does this turn off journaling?  What's a good reference on the
> advantages of ext4 over ext3, or can you just summarize them for me?

No, this doesn't turn off journaling.

Briefly...

There's the actual data, the stuff in the files we care about, and 
metadata, the stuff the filesystem tracks behind the scenes so we don't 
have to worry about it.  Metadata includes stuff like the filename, the 
dates (create/modify/access, the latter of which isn't used that much any 
more and is often disabled), permissions (both traditional *ix set*/user/
group/world and if active SELinux perms, etc), INODE AND DIRECTORY TABLES 
(most important in this context, thus the CAPS, as without them, your data 
is effectively reduced to semi-random binary sequences), etc.

It's the metadata, in particular the inode and directory tables, that's 
potentially damaged in the event of a chaotic shutdown, and that fsck 
concerns itself with, checking and trying to repair on remount after 
such a shutdown, etc.

Because the original purpose of journaling was to shortcut the long fscks 
after a chaotic shutdown, traditionally it concerns itself only with 
metadata.  In practice, however, due to reordered disk operations at both 
the OS and disk hardware/firmware level, the result of a recovery with 
strict metadata-only journaling on a filesystem can be perfectly restored 
filesystem metadata, but with incorrect real DATA in those files, because 
the metadata was already written to disk but the data itself hadn't been, 
at the time of the chaotic shutdown.

Due to important security implications (it's possible that the previous 
contents of those data blocks were an unlinked but not secure-erased file 
belonging to another user -- UNAUTHORIZED DATA LEAK!!!), such restored 
metadata-only files, where the data itself is questionable, are normally 
truncated to zero length -- thus the post-restore zero-length "empty" 
file phenomenon common with early journaled filesystems and still 
occasionally seen today.

The data= journaling option controls data/metadata handling.

data=writeback is "bare" metadata journaling.  It's the fastest but 
riskiest in terms of real data integrity for the reasons explained above.  
As such, it's often used where performance matters more than strict data 
integrity in the event of chaotic shutdown -- where data is backed up and 
changes since the backup tend to be trivial and/or easy to recover, where 
the data's easily redownloaded from the net (think the gentoo packages 
tree, source tarballs, etc), and/or where the filesystem is wiped at boot 
anyway (as /tmp is in many installations).  Zeroed out files on recovery 
can and do happen in writeback mode.

data=ordered is the middle ground, "good enough" for most people, both in 
performance and in data integrity.  The system ensures that the commit of 
the real data itself is "ordered" before the metadata that indexes it, 
telling the filesystem where it's located.  This comes at a slight 
performance cost as some write-order-optimization must be skipped, but it 
GREATLY enhances the integrity of the data in the event of a chaotic 
shutdown and subsequent recovery.  There are corner-cases where it's still 
possible at least in theory to get the wrong behavior, but in practice, 
these don't happen very often, and when they do, the loss tends to be that 
of reverting to the pre-update version of the file, losing only the 
current edit, rather than zeroing out of the file (or worse yet, data 
leakage) entirely.

data=journal is the paranoid option.  With this you'll want a much larger 
journal, because not only the metadata, but the data itself, is 
journaled.  (And here most people thought that's what journaling did /all/ 
the time!)  Because ALL data is ultimately written TWICE in this mode, 
first to the journal and then from there to its ultimate location, by 
definition it's a factor of two slower, but provided the hardware is 
working correctly, the worst-case in a chaotic shutdown is loss of the 
current edit, reverting to the previous edition of the file.

FWIW and rather ironically, my original understanding of all this came 
from a series of IBM DeveloperWorks articles written in the early kernel 
2.4 series era, explaining the main filesystem choices, many of them then 
new, available in kernel 2.4.  While the performance data and some 
filesystem implementation detail (plus lack of mention of ext4 and btrfs 
as this was before their time) is now somewhat dated, the theory and 
general filesystem descriptions remain solid, and as such, the series 
remains a reasonably good intro to Linux filesystems to this day.  Parts 
of it are still available, linked from the Gentoo Documentation archived 
copy of those IBM DeveloperWorks articles.  In particular, two parts 
covering ext3 and the data= options remain available:

http://www.gentoo.org/doc/en/articles/afig-ct-ext3-intro.xml
http://www.gentoo.org/doc/en/articles/l-afig-p8.xml

The ironic bit is who the author was: one Daniel Robbins, the same 
DRobbins who founded what was then Enoch Linux, now Gentoo.  But I read 
them long before
I ever considered Gentoo, when I was first switching to Linux and using 
Mandrake.  It was thus with quite some amazement a number of years later, 
after I'd been on Gentoo for a while, that I discovered that the *SAME* 
DRobbins who founded Gentoo (and was still active, tho on his way out, in 
early 2004 when I started on Gentoo) was the guy who wrote the Advanced 
Filesystem Implementor's Guide on IBM DeveloperWorks, the guide I'd found 
so *INCREDIBLY* helpful years before, when I hadn't a /clue/ who he was 
or what distribution I'd choose years later, as I was just starting with 
Mandrake and trying to figure out what filesystems to choose.

As to the ext3/ext4 differences... AFAIK the (second) biggest one is that 
ext4 uses extents by default, thus fragmenting files somewhat less over 
time.  (Extents are a subject worth their own post, which I won't attempt 
as while I understand the basics I don't understand all the implications 
thereof myself.  But one effect is better efficiency in filesystem layout, 
when the filesystem was created with them anyway... it won't help old 
files on upgraded-to-ext4-from-ext2/3 filesystems that much.  Google's 
available for
more. =:^)
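
(If you do convert, the usual recipe is a couple of tune2fs feature 
flags plus a forced fsck.  This is from memory, so double-check the 
current docs before pointing it at anything you care about, and again, 
only files written *after* the conversion get extents:

  umount /dev/sdXN
  tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
  e2fsck -fD /dev/sdXN

...where sdXN is of course a placeholder for your actual partition.)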

There are a lot of smaller improvements as well.  ext4 is native large-
filesystem by default.  A number of optimizations discovered since ext3 
are implemented in ext4 that can't be in ext3 for stability and/or old-
kernel backward compatibility reasons.  ext4 has a no-journal option 
that's far better on flash-based thumb-drives, etc.  There are a number of 
options that can make it better on SSDs and flash in general than ext3.

And the biggest advantage is that ext4 is actively supported in the kernel 
and supports ext2/3 as well, while ext2/3, as separate buildable kernel 
options, are definitely considered legacy, with talk, as I believe I 
mentioned, of removing them as separate implementations entirely, relying 
on ext4's backward compatibility for ext2/3 support.  In that regard, ext3 
as a separate option is in worse shape than reiserfs, since it's clearly 
legacy and targeted for removal.  As part of ext4, support will 
*DEFINITELY* continue for YEARS, more likely DECADES, so ext3 is in no 
danger in that regard (more so than reiserfs, which will remain supported 
as well for at least years), but the focus is definitely on ext4 
now, and as ext3 becomes more and more legacy, the chances of corner-case 
bugs appearing in ext3-only code in the ext4 driver do logically 
increase.  In that regard, reiserfs could actually be argued to be in 
better shape, since it's not implemented as a now out-of-focus older-
brother to a current filesystem, so while it has less focus in general, it 
also has less chances of being accidentally affected by a change to the 
current-focus code.

Which can be argued to have already happened with the default ext3 
switching to data=writeback for a number of kernels, before being switched 
back to the data=ordered it always had before.  A number of kernels ago 
(2.6.29 IIRC), ext4 was either officially just out of or being discussed 
for bringing out of experimental.  I believe it was Ubuntu that first made 
it a rootfs system install option, in that same time period.  Shortly 
thereafter, a whole slew of Ubuntu-on-ext4 users began experiencing the 
classic "zeroed out file" problem on reboot after chaotic shutdowns.  
Most of them, it later turned out, were using the closed nVidia driver, 
which was unstable against that Ubuntu version and kernel, thus provoking 
many such "chaotic shutdowns" -- a classic worst-case trial-by-fire test 
for the then-still-coming-out-of-experimental ext4.

*Greatly* compounding the problem were some seriously ill-advised Gnome 
config-file behaviors.  Apparently, they were opening config-files for 
read-write simply to READ them and get the config in the process of 
initializing GNOME.  Of course, the unstable nVidia driver was 
initializing in parallel to all this, with the predictable-in-hindsight 
results...  As gnome was only READING the config values, it SHOULD have 
opened those files READ-ONLY, if necessary later opening them read-write 
to write new values to them.  As with the security defense-in-depth 
mentioned in the HBGary parenthetical above, these are pretty basic 
filesystem principles, but the gnome folks had it wrong.  They were opening
the files read/write when they only needed read, and the system was 
crashing with them in that state.  As a result, these files were open for 
writing in the crash, and as is standard security practice as explained 
above, the ext4 journaling system, defaulting to write-back mode, restored 
them as zeroed out files to prevent any possibility of data leak.  
Actually, there were a few other technicalities involved as well (file 
renaming on write, failure to call fsync, due in part to ext3's historic 
bad behavior on fsync, which it treated as whole-filesystem-sync, etc), 
but that's the gist of things.
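
For the programmers reading along, the pattern the investigation pushed 
app developers toward is write-to-temp, fsync, rename -- and of course, 
opening files plain read-only when reading is all that's needed.  A 
minimal sketch in Python, illustrative only, NOT gnome's actual code:

  import os, tempfile

  def safe_write(path, data):
      # Temp file in the target's own directory, so the final rename
      # stays on one filesystem (rename can't cross filesystems).
      dirname = os.path.dirname(path) or "."
      fd, tmp = tempfile.mkstemp(dir=dirname)
      try:
          with os.fdopen(fd, "w") as f:
              f.write(data)
              f.flush()
              os.fsync(f.fileno())  # data on disk BEFORE the rename
          os.rename(tmp, path)  # atomic: old contents or new, never empty
      except BaseException:
          os.unlink(tmp)  # don't leave the temp file behind on failure
          raise

Done that way, a crash at any point leaves either the old file or the 
new one, never a zero-length husk.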

So due to ext4's data=writeback and the immaturity of the filesystem such 
that it didn't take additional precautions, these folks were getting 
critical parts of their gnome config zeroed out every time they crashed, 
and due to the unstable nVidia drivers, they were crashing frequently!!

*NOT* a good situation, and that's a classic understatement!!

The resulting investigation discovered not only the obvious gnome problem, 
but several code tweaks that could be done to ext4 to reduce the 
likelihood of this sort of situation in the future.

All fine and good, so far.  But they quickly realized that the same sort 
of code tweak issues existed with ext3, except that because ext3 defaulted 
to data=ordered, only those specifically setting data=writeback were 
having problems, and because those using data=writeback were expected to 
have /some/ problems anyway, the issues had been attributed to that and 
thus hadn't been fully investigated and fixed, all these years.

So they fixed the problems in ext3 as well.  Again, all fine and good -- 
the problems NEEDED fixing.  *BUT*, and here's where the controversy comes 
in, they decided that data=writeback was now dependable enough for BOTH 
ext3 and ext4, thus changing the default for ext3.

To say that was hugely controversial is an understatement (multiple 
threads on LKML, LWN, elsewhere where the issue was covered at the time, 
often several hundred posts long each), and my feelings on data=writeback 
should be transparent by now, so where I stand on the issue should be 
equally transparent, but Linus nevertheless merged the commit 
that switched ext3 to data=writeback by default, AFAIK in 2.6.31.  (AFAIK, 
they discovered the problem in 2.6.29, 2.6.30 contained temporary work-
around-fixes, 2.6.31 contained the permanent fixes and switched ext3 to 
data=writeback.)

Here's the critical point.  Because reiserfs isn't so closely related to 
the ext* family, it retained the data=ordered default it had gotten years 
earlier, in the same kernel in which Chris Mason committed the code for 
reiserfs to do data=ordered at all.  ext3 got the change due to its 
relationship with ext4, despite the fact that it's officially an old and 
stable filesystem where arguably such major policy changes should not 
occur.  If the separate kernel option for ext3 is removed in order to 
remove the duplicate functionality already included in ext4 for backward 
compatibility reasons, then by definition this sort of change to ext4 
*WILL* change the ext3 it also supports, unless deliberate action is 
taken to avoid it.  That makes such issues far more likely to occur again 
in ext3 than in the relatively obscure reiserfs.

Meanwhile, as mentioned, with newer kernels (2.6.36, 37, or 38, IDR 
which, tho it won't matter for those specifying the data= option either 
via filesystem defaults using tune2fs, or via a specific mount option), 
ext3 
reverted again to the older and safer default, data=ordered.

And as I said, it's my firm opinion that the data= option has a stronger 
effect on filesystem stability than any possibly remaining issues with 
ext4, which is really quite stable by now.  Thus, ext3, ext4, or reiserfs, 
I'd **STRONGLY** recommend data=ordered, regardless of whether it's the 
default, as it is on ext3 (old and new, with the gap noted above) and as 
it has been for years on reiserfs, or not, as I believe ext4 still 
defaults to data=writeback.  If you value your data, "just do it!"
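
"Just doing it" amounts to a tune2fs one-liner for the filesystem 
default, and/or the mount option in fstab (sdXN and the mount point 
being placeholders, obviously):

  tune2fs -o journal_data_ordered /dev/sdXN

  # and/or the fstab line, something like:
  /dev/sdXN  /home  ext4  noatime,data=ordered  0 2

Note that the kernel refuses to change the data= mode on a plain 
remount, so expect an unmount/remount (or reboot) for it to take.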

Meanwhile, I believe the default on the definitely still experimental 
btrfs is data=writeback too.  While I plan on switching to it eventually, 
you can be quite sure I'll be examining that default, and as of this 
point I have no intention of leaving it at data=writeback when I do.

....

> The problem with Gentoo was that because EVMS was an orphaned project, I
> believe the ebuild wasn't updated.  The initrd file was specific for
> EVMS.

That's quite likely, indeed.

> Of course.  I like technology that _lasts_!  We have a clock in our
> house that's about 190 years old [...] turned me on to the Connecticut
> Clock and Watch museum, run by one George Bruno [who] also makes working
> replicas [and] was able to send me an exact replacement part!  Try
> _THAT_ with your 1990's era computer ;-)

That reminds me...  I skipped it as irrelevant to the topic at hand, but 
due to kernel sensors and ACPI changes, I decided to try the last BIOS 
upgrade available for this Tyan, after having run an earlier BIOS for some 
years.  Along about 2.6.27, I had to start using a special boot parameter 
to keep the sensors working, as apparently the sensor address regions 
overlap ACPI address regions (not an uncommon issue in boards of that era, 
the kernel folks say).  The comments on the kernel bug I filed suggested 
that a BIOS update might straighten that out, so I decided to try it.  
(It didn't; even the latest BIOS is still too old, and the board's 
EOLed, even if it is still working.)

The problem was that I had a bad memory stick.  Now the kernel has 
detectors for that, and I had them active, but those drivers were 
introduced long after I got the hardware.  While the kernel was logging 
an issue with the memory, it had been doing so ever since I activated 
the drivers, so I misinterpreted the logging as simply how they worked, 
and wasn't aware of the bad memory they were trying to tell me about.
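
(Lesson learned: those detectors are the kernel's EDAC drivers, and the 
error counts they keep are worth an occasional glance.  Something like 
the below shows the corrected/uncorrected counts, tho exact paths vary 
a bit by kernel version:

  grep . /sys/devices/system/edac/mc/mc*/csrow*/[cu]e_count

A steadily climbing ce_count means the hardware is correcting memory 
errors for you... while it still can.)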

So I booted to the FreeDOS floppy I used for BIOS upgrades (I've used 
FreeDOS for BIOS upgrades for years, without incident before this) and 
began the process.

It crashed half-way thru the flash-burn, apparently when it hit that bad 
memory!!

Bad situation, but there's supposed to be a failsafe direct-read-recover 
mode built-in, that probably would have worked had I known about it.  
Unfortunately I didn't, and by the time I figured it out, I'd screwed that 
up as well.

But luckily I have a netbook that I had intended to put Gentoo on but had 
never gotten around to at that point (tho it's running Gentoo now: 2.6.38 
kernel, kde 4.6.1, fully updated as of mid-March).  It was still running 
the Linpus Linux it shipped with.  (It's the first full system I've 
bought since my original 486SX25 w/ 2MB memory and 130 MB hard drive in 
1993 or so, and since I'd have sooner done without the netbook than pay 
the MS tax, I DID have to order it from Canada and have it shipped to 
the US.)  I was able to get 
online with that, grab a yahoo webmail account since my mail logins were 
stuck on the main system without a BIOS, and use that to order a new BIOS 
chip, shipped to me with the target BIOS pre-installed.

That new BIOS chip rescued my system!

I suspect my feelings after that BIOS chip did the trick rather mirror 
yours after that gear did the trick for your clock.  The computer might 
not be 190 years old, but 2003 is old enough in computer years, and I 
suspect I have rather more of my life wound up in that computer than you 
do in that clock, 190 years old or not.

Regardless, tho, you'll surely agree,

WHAT A RELIEF TO SEE IT RUNNING AGAIN!  =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


