
BLU Discuss list archive



Ksplice



On 09/15/2010 03:34 PM, Jarod Wilson wrote:
> On Wed, Sep 15, 2010 at 2:56 PM, Richard Pieri
> <richard.pieri-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>
>> On Sep 15, 2010, at 2:31 PM, Jerry Feldman wrote:
>>
>>> I agree, this was the consensus, but the OP was just looking for some
>>> outside guidance. IMHO, if there is no reason to reboot, then don't. In
>>> the very near future, we will be able to patch running kernels.
>>>
>> I'm not sold on Ksplice.  I see it as a solution looking for a problem.
>> If you're that concerned about uptime then you're running a high
>> availability cluster.  That degree of redundancy gives you the ability
>> to perform rolling updates across the cluster.  You can install and
>> test your updates without any impact on your users.
>>
>> Ksplice removes testing from the loop.  Without rebooting you have to
>> take it on faith that the in-memory kernel matches the on-disk kernel,
>> that the on-disk kernel and associated initrd actually work, and that
>> the on-disk kernel in question is actually the one that gets loaded at
>> boot.  I for one don't take anything on faith with my production
>> systems.  I insist on seeing it work for real before signing off on
>> the work.
>>
> Also not sold on Ksplice, despite having a discussion w/their CTO
> about it a few months back. Neat trick, but what Rich said. And think
> about it from your vendor's (Red Hat's) viewpoint if you're running a
> paid-support distro like Red Hat Enterprise Linux... You're NOT
> running Red Hat's official bits; you're running something that
> resembles them and, if done right, *probably* runs the same as the
> latest errata kernel that your ksplice patches were extracted
> from... But not exactly. Modifying a live-running kernel can never be
> quite the same as booting a new kernel, because some code only runs
> at boot time or device initialization time. Is Red Hat supposed to
> support that mutant combo it hasn't vetted itself? I think not.
>
> The way I've heard it spun, though, is that it's so you can apply
> critical hotfixes on the fly, then boot the proper updated kernel when
> it's more convenient. So not really so much an uptime thing as a
> convenience thing. If bringing your server(s) down during peak
> business hours costs you tons of money and you can patch a critical
> vulnerability on the fly, then reboot after hours, maybe it's worth
> it, dunno. But that goes back to the issue Rich brought up: if keeping
> the systems up is *that* critical, then you should already have fully
> redundant systems capable of handling rolling updates. Maybe
> your main admin is away at a seminar and can only ssh in and apply the
> hotfix now, and will do the full reboot later in the evening. Grasping
> at straws here trying to find a reason I'd really want to use Ksplice
> other than "because it's a neat trick!".
>
I think the jury is still out on Ksplice. I agree with you both on the
full-redundancy point. The only advantage I see is being able to apply
critical fixes during a period when you need the uptime and then reboot
later. There are not many situations where you need to apply a fix
immediately, and there are plenty of times when a fix itself can cause
problems and bring a system down.

Richard brings up one of the most important points: testing.
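
For what it's worth, here is a minimal sketch of the kind of sanity check
Richard is describing. It is not a Ksplice tool; it assumes the usual
/boot layout with vmlinuz-<release> image names and uses a naive lexical
sort rather than proper package version ordering. All it does is compare
the release string of the running kernel with the kernel images on disk,
which is exactly the mismatch a live patch can hide until the next boot.

#!/usr/bin/env python
# Sketch only: compare the in-memory kernel release with on-disk images.
# A live-patched kernel still reports its original release via uname, so
# this is the part you would otherwise be taking on faith.

import glob
import os

def running_kernel():
    # Same value as `uname -r`: the release of the currently running kernel.
    return os.uname()[2]

def on_disk_kernels(boot_dir="/boot"):
    # Assumes conventional vmlinuz-<release> naming under /boot.
    images = glob.glob(os.path.join(boot_dir, "vmlinuz-*"))
    # Naive lexical sort; a real check would use rpm/dpkg version ordering.
    return sorted(os.path.basename(p)[len("vmlinuz-"):] for p in images)

if __name__ == "__main__":
    running = running_kernel()
    installed = on_disk_kernels()
    print("running kernel : " + running)
    print("on-disk kernels: " + (", ".join(installed) or "(none found)"))
    if installed and running != installed[-1]:
        print("NOTE: the running kernel is not the newest on-disk kernel;")
        print("a live patch may be masking the difference until the next boot.")

Even then, all this checks is a version string; actually booting the new
kernel is the only real test, which is Richard's point.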

-- 
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846







