Location of some executables

briaeros007 briaeros007 at gmail.com
Wed May 11 11:49:15 EDT 2011


2011/5/11 briaeros007 <briaeros007 at gmail.com>:
> 2011/5/11 Gordan Bobic <gordan.bobic at gmail.com>:
>> On 05/11/2011 02:40 PM, briaeros007 wrote:
>>
>>>>>> Apps are generally small. Data is generally big. I am all for
>>>>>> putting data onto separate volumes, but when the entire OS install
>>>>>> + apps is smaller than the amount of RAM on a mid-range graphics
>>>>>> card, I don't really see the gain of splitting it up in the
>>>>>> general case.
>>>>>>
>>>>> Apps aren't small.
>>>>> If I install Business Objects, I need 5 GB of free space to
>>>>> install it. Yes, 5 GB.
>>>>> If I play with BMC Remedy, it's the same idea. And the data is in
>>>>> a database.
>>>>
>>>> So separate that app's data store somewhere more sensible, not the
>>>> entirety of /usr.
>>>
>>> Hmm...
>>> The data store is ALREADY in a database, so not with the apps (and
>>> on a separate server).
>>
>> Are you saying that the entire 5GB of bloat is in the binaries/libraries? I
>> don't (want to) believe that.
>>
> Well, if you don't want to believe it, don't believe it.
> It's just a fact.
> And perhaps it's not "only bin and lib", but it is only the apps,
> with strictly no data from a functional point of view.
>
>>>>> Not everybody uses their system only as a desktop.
>>>>
>>>> Desktops can actually call for, or at least justify, _more_
>>>> partitioning than servers, if you are using solid state storage.
>>>>
>>> Sure...
>>> A desktop with, 99% of the time, only one session, no critical apps
>>> and only one "nine" of availability (i.e. 90%) needs more care than
>>> critical servers?
>>> I don't agree, but that's only me after all ;)
>>
>> This has nothing to do with availability - if you have lost your /usr,
>> you've lost all your 9s of availability.
>>
> So you're saying that you have exactly the same constraints on a
> desktop and a server?
> I wasn't speaking specifically about /usr in that sentence.
>
>>>>>>> - PXE boot for the system (one NFS share for the core system,
>>>>>>> and another for applications, which are loaded when the system
>>>>>>> initializes itself).
>>>>>>> - etc...
>>>>>>
>>>>>> Sounds like an administrative nightmare. What commonly available
>>>>>> package management system will cope with that?
>>>>>>
>>>>> If you can't keep track of multiple NFS shares, well, PXE isn't
>>>>> for you.
>>>>> What commonly available package management system manages PXE?
>>>>> None.
>>>>> I'm talking about a specific need due to a boot procedure, and you
>>>>> talk about package management systems. I don't see the link.
>>>>
>>>> So you are complaining that a completely custom brew setup you are
>>>> running doesn't agree with the default file locations in the ZFS
>>>> package? And if you are PXE booting with NFS root, how does that
>>>> relate to ZFS, exactly?
>>>>
>>>
>>> I'm not complaining.
>>> I never complained on this ML.
>>> You wanted use cases where /usr is not on the same partition. I gave
>>> you one.
>>
>> By citing NFS roots? Sorry, that doesn't wash. The situation is no different
>> whether you are PXE booting a kernel off NFS or using local disks. The
>> examples are analogous as far as separating /usr is concerned.
>>
> I do think that having a share with only the "low level" system on
> NFS, and after init choosing what to mount (local fs or not), is a
> more proper way than booting a full distro and afterwards hiding
> directories or whatnot depending on the configuration of the machine.
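> A rough sketch of what I mean (server address and paths are
> hypothetical): the pxelinux entry mounts only the core system over
> NFS,
>
>     kernel vmlinuz
>     append initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/exports/core ip=dhcp ro
>
> and an init script afterwards mounts the application share, or a
> local fs, depending on the machine's role.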
>
>
>>>>>>> /bin and /sbin are here for a reason: to provide a lightweight
>>>>>>> environment which can be used to manage the server, even if
>>>>>>> other partitions can't be mounted.
>>>>>>> I think it's a best practice to separate things on a server. If
>>>>>>> something bad happens, the other things aren't impacted.
>>>>>>
>>>>>> The chances are that you're not going to get very far in fixing things
>>>>>> without the things in /usr/bin and /usr/sbin.
>>>>>>
>>>>> So you're just saying that generations of sysadmins are just
>>>>> dumbasses and that protection/compartmentalization isn't useful?
>>>>
>>>> I'm saying that times and resources have changed since this was a
>>>> good idea. Compartmentalization is a good thing if used
>>>> appropriately. I do not believe splitting every directory under /
>>>> to a separate volume is appropriate for any sane use-case I can
>>>> think of.
>>>>
>>>> What was a good idea on SunOS 4.1.1 running off 40MB 9in SMD disks
>>>> in 1990 isn't necessarily a good idea on the current generation of
>>>> systems. Blindly following pragmas deprecated by technology might
>>>> imply some dumbassness (your choice of words, not mine).
>>>
>>> I'd love to be a dumbass who makes choices which simplify
>>> administration ;)
>>> And having separate partitions instead of one big / simplifies
>>> administration when you must act on, or diagnose, the system
>>> filesystems.
>>> (And you can do parallel fsck ^^)
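>>> For instance (a hypothetical /etc/fstab sketch; fsck checks the
>>> pass-2 entries in parallel when they sit on different disks):
>>>
>>>     /dev/sda1  /     ext4  defaults  1 1
>>>     /dev/sdb1  /usr  ext4  defaults  1 2
>>>     /dev/sdc1  /var  ext4  defaults  1 2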
>>
>> You are _almost_ half way to having a useful feature of splitting
>> things up with parallel fscking. But in the case of ZFS we have no
>> fsck (for better or worse), so for the starting point of this
>> argument, that's not a valid argument.
>>
> And you're forced to use ZFS for all your filesystems?
>
> And the main "advice" with ZFS is to create filesystems in place of
> directories (for example, one filesystem per user for their home).
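> Something like (pool and user names hypothetical):
>
>     zfs create tank/home
>     zfs create tank/home/alice
>     zfs create tank/home/bob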
>
>
>> In all other cases, I still haven't heard an argument for why a 100MB root
>> wouldn't be corrupted if the 5GB /usr is or vice versa.
>>
> For a really simple reason:
> corruption can come from multiple factors.
> -> Hardware factors. These tend to be proportional to the number of
> sectors, since the main trouble is corrupted sectors.
> So the fewer sectors you have, the fewer corrupted sectors you get.
> -> Modification of the fs.
> The less the fs is modified, the lower the risk of corruption.
>
> If your system fs lives in the same fs as one which is really
> stressed, you have a much greater chance of screwing something up
> than if you just leave it alone.
> And /usr is updated much more often than /bin and /sbin.
>
>
>
>>>>> Let's just put everything inside one big partition, and if
>>>>> something goes wrong, well, cry.
>>>>> Example: the fs crashes, and you lose all the data in the
>>>>> partition (and the backup, as Murphy's law dictates, wasn't
>>>>> working that day).
>>>>> Would you prefer to lose just /usr, which is easily recreated, or
>>>>> /etc, /boot (can't boot without it), /usr, /var and so on?
>>>>
>>>> So by that logic, are you going to put /etc on a separate
>>>> partition? Can you list me a sensible use case where losing one
>>>> wouldn't usually lead to losing the lot? You can't argue that a
>>>> solution to poor backups is partitioning /usr off to a different
>>>> partition (especially on the same backing storage, be it disk,
>>>> SAN, array, or whatever).
>>>>
>>> I love you.
>>> No, sincerely.
>>> I gave you just a simple example, and yet you treat this example as
>>> if it were a whole new paradigm of system administration.
>>> Well, it was just what I said at first: an example. Perhaps not the
>>> best one, but only an example.
>>
>> A _bogus_ example, dismantled as such, since it didn't support the point you
>> were making.
>>
>>> Poor backups ARE a problem that I've found in nearly ALL the
>>> organizations I've come across. And big organizations!
>>> Multiple problems impacting one critical server at the same time do
>>> happen.
>>
>> I'm not disagreeing on either of those points. :)
>>
>>> Hopefully these problems represent less than 1% of cases on a
>>> normal platform, but they do exist.
>>> And when one strikes, you don't want to add more trouble to the
>>> incident.
>>
>>>
>>>>
>>>> Since /usr isn't user-writable any more than / is, I don't really
>>>> see why losing one is more likely than the other. And re-creating
>>>> /usr without a backup is going to mean a
>>>> reinstall/rebuild/restore whatever you do. So IMO your example is
>>>> completely bogus.
>>>>
>>>
>>> /usr and /etc are "app-writable" in some configurations.
>>> And /usr is more about "everyday" or application management, while /
>>> is more about system management.
>>> The two are not always handled by the same team.
>>
>>
>>
>>>>> Another example: due to unforeseen events, you must add 4 GB to
>>>>> /usr or /var.
>>>>> Would you rather take the risk of a resize2fs on the root
>>>>> partition plus a modification of the physical partition, or just
>>>>> an lvextend and resize2fs on a non-root partition?
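>>>>> On a non-root LV that's just a two-liner (a sketch; VG/LV names
>>>>> hypothetical, and ext3/ext4 can be grown while mounted):
>>>>>
>>>>>     lvextend -L +4G /dev/vg00/usr
>>>>>     resize2fs /dev/vg00/usr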
>>>>
>>>> Why would you possibly be rationing space that tightly? Disk space
>>>> is worth £0.04/GB. So be extravagant - splash out an extra £4 in
>>>> disk space at installation time and have a 100GB /. More than
>>>> you'll ever sanely use up for an app installation that you hadn't
>>>> anticipated at the time when you built the system.
>>>>
>>> Excuse me?
>>> When you request a server, do you always work
>>> - without recommendations or technical procedures
>>> - able to order any hardware which exists
>>> - able to add any options to it
>>> or are you constrained in your choice of hardware, software AND
>>> configuration?
>>
>> Usually the constraints are not such that they cause bad design decisions.
>> Most tasks fall into two categories:
>>
>> 1) Keeping existing systems running
>>
>> This usually doesn't involve dumping gigabytes of apps on an existing system
>> in production.
>>
> There are other systems than "official production" th
>
>> 2) Deploying and designing new systems
>>
>> If that is what I am doing, then yes, I get to spec the hardware and write
>> the procedures.
>>
>> What you are suggesting is akin to saying that it is OK to try to fix every
>> problem with a hammer because it's the only tool you have been given.
>>
>>> (And SAS disks aren't £0.04/GB, but budget isn't really the point
>>> here. When I've got a blade with a 70 GB disk on which to set up an
>>> Oracle server with 50 GB of datafiles ... I wouldn't create a
>>> 100 GB /usr. And experience has taught me that you must always
>>> leave some unallocated free space in LVM for subsequent demands.)
>>
>> Personally, I think LVM is _evil_ when it comes to any kind of
>> performance optimization. It destroys in one fell swoop any careful
>> disk layout optimization you may have carried out if you are on any
>> kind of a RAID setup. If you carefully pick your chunk size,
>> stripe-width and block-group size, you can get a lot of extra
>> performance out of your disks (if you get it wrong you can reduce
>> the random file read performance of your array down to the
>> performance of a single disk). LVM headers upset this, but that is
>> manageable. What isn't manageable is when you start using LVM to
>> stretch a volume to another disk, since you can no longer know how
>> everything lines up underneath (note RAID reshaping will throw
>> things out in the same way).
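>> For illustration, with hypothetical numbers: a 4-disk RAID5 (3 data
>> disks) with 64KiB chunks and 4KiB blocks gives stride = 64/4 = 16
>> and stripe-width = 16 * 3 = 48, i.e.
>>
>>     mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md0
>>
>> and stretching the volume with LVM breaks exactly this alignment.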
>>
>>> Actually, the technical procedure I've got for RHEL is:
>>> / : 1G
>>> /home : 1G
>>> /opt : 1G
>>> /tmp : 1G
>>> /var : 2G
>>> /usr : 2G
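>>>
>>> In kickstart terms that comes out roughly as (a sketch; fs type
>>> assumed):
>>>
>>>     part /     --fstype=ext4 --size=1024
>>>     part /home --fstype=ext4 --size=1024
>>>     part /opt  --fstype=ext4 --size=1024
>>>     part /tmp  --fstype=ext4 --size=1024
>>>     part /var  --fstype=ext4 --size=2048
>>>     part /usr  --fstype=ext4 --size=2048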
>>>
>>> I don't get to decide whether to make them bigger or smaller. My
>>> company says "you create the system with these settings, and only
>>> afterwards do you adapt".
>>
>> Right - so we have gone from "I think this is the best way to do
>> things" to "this is the way I have to do things due to a diktat".
>> The latter is a political issue I'm not going to discuss. It is the
>> former that I was arguing against.
>>
>> But with the numbers you list above, you'd still be in trouble with
>> your 4GB app on /usr, even though it is separate. So the point is
>> moot.
>>
>>>>> In all the corporations where I've worked, there was never a
>>>>> "big /". And that goes for Linux, AIX, Solaris and HP-UX.
>>>>
>>>> My experience is varied - on very limited-purpose hardware (server
>>>> farms) a small root is relatively common (then again, on a thin
>>>> majority of such setups I've worked on, Solaris is in use, and as
>>>> mentioned previously, that has /usr implicitly on /).
>>>>
>>>> But I have _never_ seen a call to suddenly and unexpectedly dump
>>>> an extra 4GB of _apps_ on an existing system. That's just poor
>>>> planning. And regardless, if you are making a change of that
>>>> magnitude, you'll typically jumpstart/kickstart a new server, test
>>>> the new app, then migrate to live, not just randomly dump
>>>> gigabytes of apps onto a system that is sufficiently
>>>> narrow-purposed to have a tiny root fs.
>>>>
>>>
>>> I have already seen this one.
>>>
>>> You've got a server which was correctly sized for an app.
>>> You install the app, you do all the configuration with the specific
>>> procedures and so on.
>>> Pretty tough stuff. All works OK.
>>> You manage the server normally and follow what the client asks. And
>>> they asked to add this app, this account and so on to the webserver.
>>
>>>
>>>
>>> And one day you must apply a security/... update to the app (or
>>> another app, or ...).
>>> The update asks for 5 GB of free space.
>>> But you don't have 5 GB. What do you do?
>>> 1) Recreate a new server and lose one week of work?
>>
>> If it takes you a week to build a (replacement) server, you're doing it
>> wrong.
>>
>> And you haven't actually mentioned the testing of such massive changes on a
>> production system. You wouldn't expect them to "just work", would you?
>>
>> You can't argue the constraints of enterprise-level setups without
>> having enterprise-level procedures.
>>
>>> 2) Say to the client: "I can't do this; technically I could, but I
>>> don't want to since it's not up to my standards"?
>>> 3) Say to the client: "Well, it's not optimal, but I could do this
>>> and that to make it work. Are you OK with that?"
>>
>> Most of my work comes from clients who have to deal with the fallout
>> and consequences of somebody having done option 3.
>>
>>>>> And the fact that it isn't a "text install" but a GUI one doesn't
>>>>> seem relevant to me.
>>>>> Any server now can use a GUI, and it's been some time since we did
>>>>> console installs (except on specific hardware).
>>>>
>>>> So what do the ancient text-only installers you mentioned have to
>>>> do with anything, then?
>>>>
>>> I'm afraid my explanation wasn't clear.
>>> If I cited the "true (plain old?) installers", it was to show two
>>> things:
>>> 1) that it's a policy which was well established and wasn't just
>>> some folks' fancy.
>>
>> Well established based on what was useful and appropriate 20 years ago. Not
>> necessarily based on what is useful and appropriate today.
>>
>>> 2) to not take desktop-centric installers into account.
>>
>> RHEL6 (what I mentioned in the example) isn't a desktop-centric
>> distro.
>>
>> And from what you said about your distro - there being no default distro
>> kernel - well, that implies a distro that isn't even a desktop one, let
>> alone a server one.
>>
>> Gordan
>>
>



Sh**.
Gmail just decided to send the email I was writing. So, sorry for the
sentences which were cut.

I think each of us has seen the other's arguments, so it isn't useful
to continue this discussion.
But if you think otherwise, I will be happy to continue it, perhaps
off the ML since it doesn't really concern ZFS after all ;)


Just one thing: you seem to think that there is one and only one
person who speaks in favor of /usr on a separate partition.
That isn't true.

I wrote to the ML only to add a voice saying that yes, some people
want /usr on a different partition, and it is not bad in every
situation.

But I didn't say that my distro doesn't contain a default distro
kernel ;)


I think the tolerant approach of Patrick Hahn is the correct one, and
I have personally always seen free software as a "try first, ask
afterwards" affair.

Cordially


