Another case of WONTFIX
Duncan
1i5t5.duncan at cox.net
Mon Apr 18 03:12:39 BST 2011
Kevin Krammer posted on Sun, 17 Apr 2011 19:16:10 +0200 as excerpted:
> On Sunday, 2011-04-17, Duncan wrote:
>
>> Plus, because unlike md/raid, lvm must be configured at every run from
>> userspace, either lvm can't be run on root itself, rather limiting its
>> flexibility which is much of its point, or if it is to be run on root,
>> it requires an initr*, adding yet MORE complexity to an otherwise
>> reasonably simple initr*-less boot.
>
> What do you mean with "must be configured at every run"?
>
> I never have any interaction with LVM tools unless I want to
> add/remove/resize LVM volumes.
What I mean is that the config lives in userspace, on the disk, and must
be read and applied by the userspace helper at every boot, to tell lvm
what devices to operate on, what devices to create out of them, and
what their physical and logical limits are. This job MUST be done by
userspace, as that data (static or not) cannot be handed to the kernel
on the kernel command line (whether from grub/lilo/whatever, or via the
compile-time specified kernel command line).
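For illustration, here's roughly what that userspace helper (or an init
script calling it) has to run before an lvm-managed filesystem can even be
mounted -- a sketch only; exact invocations vary by distribution, and the
volume names are made up:

    # scan all block devices for lvm physical-volume labels
    # and build up the volume-group metadata from what's found
    vgscan
    # activate the volume groups, creating the /dev/mapper/*
    # nodes for their logical volumes via device-mapper
    vgchange -ay
    # only now can a logical volume actually be mounted
    mount /dev/mapper/vg0-data /data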
This contrasts with traditional disk layouts and with md/raid, for
instance, which in many cases is smart enough to scan and assemble itself
without any outside config, or can read its config directly from the
kernel command line.
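For example, assuming an in-kernel md driver and hypothetical device
names, a bootloader entry like this assembles the array and mounts root
from it with no userspace help at all (the md= boot parameter is
documented with the kernel's md documentation):

    # grub-legacy kernel line: assemble /dev/md0 from two
    # partitions, then use the result as the root device
    kernel /boot/bzImage md=0,/dev/sda1,/dev/sdb1 root=/dev/md0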
As a consequence, were the rootfs itself to be managed by lvm, there'd
be the proverbial chicken-and-egg problem: the kernel could not mount
the rootfs without knowing about the lvm it's on, and could not load the
lvm configuration and userspace binary to give it that info, since those
are on the rootfs.
Now binary distributions tend to handle this the same way they handle
the kernel modules necessary to load the rootfs itself -- they put it all
in the "hack" of a solution called an initr* (early on, initrd, the
separate-image init-ramdisk; now often initramfs, an early-userspace
stub filesystem that can be appended to the kernel file itself).
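The appended variant, for the curious, is what the kernel's own
CONFIG_INITRAMFS_SOURCE build option produces. A sketch of the relevant
.config lines (the path is hypothetical):

    CONFIG_BLK_DEV_INITRD=y
    # a cpio archive, a directory, or a file-list describing
    # the early-userspace stub to build into the kernel image
    CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs-files"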
But an initr* is normally optional (quite a complex option, to the point
it's really a hack, tho a practically necessary one at times!) for those
who build a custom kernel with at least all the drivers necessary to
load the real root on their hardware built directly into the kernel
itself. And because the ability to load modules at all means a cracker
could load who knows /what/ sort of code directly into kernel mode,
those running public servers exposed to cracking attempts in particular
should be STRONGLY considering built-ins for ALL kernel code, at which
point they can disable the module loader (and all the complexity that
comes with it) entirely, avoiding that security issue altogether.
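Concretely, the thorough option is a monolithic kernel config; and on
reasonably recent modular kernels there's also a runtime one-way switch.
A sketch of both:

    # in .config: no module loader compiled in at all
    # CONFIG_MODULES is not set

    # or, on a modular kernel, a one-way switch until reboot
    sysctl -w kernel.modules_disabled=1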
So where it's an option, there are a number of quite strong reasons NOT
to run an initr*:

(1) Given that code /will/ contain bugs, complexity is the enemy of
reliability.

(2) Complexity also decreases the sysadmin's ability to understand
what's going on, thus MARKEDLY increasing the chances of "fat-fingering"
something -- and this factor gets MANY TIMES WORSE when a sysadmin is
operating under the pressures of recovery mode, when stress is at max,
information and time are often limited, and the normal usermode tools
may be unavailable or themselves limited.

(3) Module loading (as from an initr*) is a non-trivial security issue,
particularly for those running public servers.
In this thread I've been stressing point two the most, as it was one of
the biggest deciding factors here. Since it wasn't necessary, I chose
simply not to have the piled-on pressure of dealing with either an
initr* or lvm, should I ever face a recovery scenario. It's that sort
of decision that can make a very real difference between successful
recovery and failure, whether due to simply not understanding the
situation, or to fat-fingering from working with unfamiliar tools under
the most difficult and stressful scenario most admins will ever deal
with: trying to recover a failed system, often under extreme time
pressure (perhaps millions of dollars of sales a day go thru that
system; don't ask me why the bosses won't authorize redundant backup
systems!). Now I'm a hobbyist, but I spend enough time on my system and
value what's on it enough that I don't want to foolishly put myself
into situations that make a bad recovery situation even worse!
Now people who simply trust the tools to do what's right probably don't
even notice the issue on many binary distributions, as lvm and its
config are simply built into the initr* along with the modules necessary
to boot the real root. But at least to me, that's the sort of person
I'd expect not to have actually tested his recovery scenario either, or
perhaps to have hardly even thought about it, simply trusting the
distribution and its tools.
Unfortunately, when push comes to shove and it's time to actually /use/
those recovery tools, such people are often at a loss, and may not even
have the ability to recover at all. (Bad backups come to mind here --
or the problem Gene mentioned earlier that he's actually having:
probably good backups, but backed up with a custom-built tool that's
now lost as well, making recovery far more difficult than it should
have been!)
But some people like to have a decent enough understanding of not only
the tools, but the principles and layers involved, that they can have
reasonable confidence of actually pulling off that recovery, even under
the stressful conditions we're talking about. And they test it, too,
and know exactly how to, for instance, rebuild that raid after pulling
a disk from a running system, and what alternative command-line options
to feed the kernel to boot from the backup root partition on an
entirely separate md/raid, should the working root be unbootable (bad
update of a critical package), or the working root's filesystem and/or
raid get scrambled beyond recovery (as, perhaps, by the live-git
kernels they might test!).
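A sketch of those two drills, with hypothetical device names and layout:

    # replace a pulled disk: mark the old member failed and
    # removed, add the new one, and md rebuilds the mirror
    mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
    mdadm /dev/md0 --add /dev/sdb2

    # and at the bootloader, point root at the backup
    # partition on the entirely separate md/raid instead
    kernel /boot/bzImage root=/dev/md1 ro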
These are situations I've both tested and in some cases actually dealt
with, "This is not a drill!"
> The latter is why I primarily use it, i.e. in order to distribute the
> rather limited space of my SSD to the respective partitions on demand.
>
> Occasionally adding and later removing additionally encrypted volumes
> when working on something you need to safely dispose of when finished.
Now that scenario actually does make a decent amount of sense.
Consider: much of the system data (the generally encrypted bit), and
any temporary volumes, encrypted or not, can go somewhere other than
the rootfs, so having LVM manage them does make some sense. But of
course you either can't put root on it, or you must use an initr* if
you do (which you may well do anyway, but which I didn't want to do).
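Presumably the cycle looks something like this sketch (made-up names;
luksFormat and luksOpen prompt for the passphrase):

    # carve a throwaway volume from the group's free space
    lvcreate -L 10G -n scratch vg0
    cryptsetup luksFormat /dev/vg0/scratch
    cryptsetup luksOpen /dev/vg0/scratch scratch
    mkfs.ext4 /dev/mapper/scratch
    mount /dev/mapper/scratch /mnt/scratch

    # ... do the sensitive work ...

    # tear it down when finished; with the volume removed
    # and the key gone, the data is safely disposed of
    umount /mnt/scratch
    cryptsetup luksClose scratch
    lvremove -f vg0/scratch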
FWIW, here, I'd probably throw hardware at the problem. I took the
120-gig conventional-disk option on my netbook, but IIRC it'd have been
an 8- or 16-gig SSD otherwise, and it has two memory card slots and of
course several usb ports (plus the ability to wire one internally if
desired), for SSD storage upgrades if one wishes. All of those are
cheap enough, at least in comparison to the netbook itself, that I
expect I'd use them for the temporary use you outlined, keeping the
space on the SSD for other uses.
But as it's available and convenient for you to use, why not, right? =:^)
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman