Re: Contingency planning for move



On Thu, 2009-12-10 at 14:42 -0800, Sriram Ramkrishna wrote:
> 
> On Thu, Dec 10, 2009 at 2:28 PM, Owen Taylor <otaylor redhat com> wrote:
> Ha, yes indeed.
> 
> > We need to work with the GNOME board and advisory board soon to get
> > a couple of servers to replace the old ones, and it's going to take
> > a bit of figuring out exactly how to handle that - do we want to
> > limit ourselves to asking for help from the tiny fraction of the
> > advisory board that actually is server companies? or ask for cash
> > and have the foundation procure the hardware, or...?
> 
> 
> You know what might be interesting is to get much larger machines and
> virtualize the services.  This would reduce the cost of supporting the
> hardware while keeping the same number of machines and simplify your
> hardware contract if you go with a server company like HP.

We're definitely moving in that direction - vbox.gnome.org is a 32G
server with 4 VMs running on it.

My ideal architecture for gnome.org looks something like:

    vbox: git, bugzilla, etc.
drawable: databases
   label: LDAP and Mango
  "hbox": replaces menubar, window, similar machine to vbox
   "bin": storage server replacing container

So we'd drop down from the current 8 servers to 5. I don't see a reason
to have more than that unless we start doing more ambitious things
than we do now (e.g., end-user-oriented infrastructure).
 
> > (Hmm, I'm sure there's something cool we could do with an X25-E or
> > two if you have an inside track for a donation there.... :-) Well,
> > actually, no, not so sure, I would want to see someone come up with
> > a real proposal first.)

> Well, I'll ask around, but white boxes are going to be a bitch to get
> someone to do a support contract for. 

My thought was that we might be able to use a small fast SSD in an
accelerator role - put speed critical stuff on it, if it fails, then
it fails, and we fall back to the system drives without any major
disruption.
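
To make the accelerator idea concrete, here's a minimal sketch in
Python (the class, method names, and paths are hypothetical, nothing
deployed): reads try the fast SSD copy first and silently fall back to
the authoritative copy on the system drives if the SSD is gone.

```python
import os

class AcceleratedStore:
    """Sketch of an SSD 'accelerator' layer: the system drives hold
    the authoritative data; the SSD holds an expendable fast copy."""

    def __init__(self, ssd_root, disk_root):
        self.ssd_root = ssd_root    # fast but allowed to fail
        self.disk_root = disk_root  # authoritative system drives

    def read(self, name):
        try:
            with open(os.path.join(self.ssd_root, name), "rb") as f:
                return f.read()
        except OSError:
            # SSD dead or copy missing: fall back, no major disruption
            with open(os.path.join(self.disk_root, name), "rb") as f:
                return f.read()

    def write(self, name, data):
        # the authoritative copy always lands on the system drives
        with open(os.path.join(self.disk_root, name), "wb") as f:
            f.write(data)
        try:
            with open(os.path.join(self.ssd_root, name), "wb") as f:
                f.write(data)
        except OSError:
            pass  # losing the SSD copy is acceptable by design
```

If the SSD disappears mid-flight you only lose the speedup, not the
data - which is the whole point of using it purely as an accelerator.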

But it was really just a random thought; a real proposal would require:

 A) Where are we currently bottlenecked on random-access IO?
    (I'm not aware of any of our current services where that is the
    case, but I may be missing something.)

 A') What services could we provide that we aren't currently providing
     that could be made to scale out with 32-64G of really fast
     storage?

 B) How *would* we handle failure? (I don't have much of an idea about
    the failure rates of SSDs. Probably nobody does. :-)

Without a concrete proposal in hand, I don't think it's really worth
investigating.

- Owen




