Re: Contingency planning for move

On Thu, Dec 10, 2009 at 3:03 PM, Owen Taylor <otaylor redhat com> wrote:

We're definitely moving in that direction - one of our machines is
already a 32G server with 4 VMs running on it.

Excellent!  You'll save Red Hat a lot of money on electricity, heat, and cooling.

My ideal architecture looks something like:

   vbox: git, bugzilla, etc.
drawable: databases
  label: LDAP and Mango
 "hbox": replaces menubar, window, similar machine to vbox
  "bin": storage server replacing container

So we'd drop down from the current 8 servers to 5. I don't see a reason
to have more than that unless we start doing more ambitious things
than we do now (e.g., end-user-oriented infrastructure).
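A rough sketch of what carving one of those boxes into guests could look like, assuming KVM/libvirt hosts and modern virt-install; the guest names, memory sizes, and disk size are invented for illustration, and the script only assembles the commands rather than running them, since actually creating guests needs a libvirt host:

```shell
#!/bin/sh
# Hypothetical sketch: build virt-install invocations for a few guests
# on a consolidated 32G host. Nothing here talks to libvirt.

make_cmd() {
    # $1 = guest name, $2 = memory in MiB, $3 = vcpus (all made up below)
    echo "virt-install --name $1 --memory $2 --vcpus $3" \
         "--disk size=20 --import --noautoconsole"
}

# spec format: name:MiB:vcpus
for spec in git:8192:4 bugzilla:4096:2 ldap:2048:1; do
    IFS=: read name mem cpus <<EOF
$spec
EOF
    make_cmd "$name" "$mem" "$cpus"
done
```

You'd pipe the output through `sh` (or just run the commands by hand) once the sizes were agreed on.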

Well, pie-in-the-sky stuff is always fun... hopefully we can do something like that.

> Well, I'll ask around, but white boxes are going to be a bitch to get
> someone to do a support contract for.

My thought was that we might be able to use a small, fast SSD in an
accelerator role - put speed-critical stuff on it; if it fails, then
it fails, and we fall back to the system drives without any major
disruption.
Yeah, let me see what I can do.
But it was really just a random thought; a real proposal would require:

 A) Where are we currently bottlenecked on random-access IO?
   (I'm not aware of any of our current services where that is the
   case, but I may be missing something.)
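One way to start answering (A) would be to watch device utilisation from `iostat -x` (sysstat) and flag anything persistently pegged. The 80% threshold and the helper below are my invention; the numbers fed in are made up, since real figures would come from the live servers.

```shell
#!/bin/sh
# Hedged sketch: decide whether a device looks IO-bound from its %util
# figure as reported by `iostat -x`. Threshold is an arbitrary guess.

UTIL_THRESHOLD=80   # %util above which we suspect an IO bottleneck

io_bound() {
    # $1 = device name, $2 = %util from iostat -x
    # awk does the comparison because sh arithmetic is integer-only.
    awk -v u="$2" -v t="$UTIL_THRESHOLD" 'BEGIN { exit !(u >= t) }'
}

# Invented sample readings:
if io_bound sda 93.5; then
    echo "sda looks IO-bound"
fi
if io_bound sdb 12.0; then
    echo "sdb looks IO-bound"
fi
```

Run against a week of sar/iostat history, that would tell us whether any service actually has the random-IO problem an SSD would solve.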

 A') What services could we provide that we aren't currently providing
    that could be made to scale out with 32-64G of really fast
    storage?

 B) How *would* we handle failure? (I don't have much of an idea about
   failure rates of SSDs. Probably nobody does. :-))
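Even without good failure-rate data for (B), many SSDs expose a wear indicator through SMART; attribute names vary by vendor (`Media_Wearout_Indicator` is Intel's). A sketch of pulling that number out, run here against a canned sample line so the parsing is visible; in real use you'd pipe in `smartctl -A /dev/sdX` from smartmontools:

```shell
#!/bin/sh
# Hedged sketch: extract the normalised wear value from smartctl's
# attribute table. The sample line and the 20% threshold are invented.

wear_remaining() {
    # Column 4 of `smartctl -A` output is the normalised VALUE field.
    awk '/Media_Wearout_Indicator/ { print $4 }'
}

# Invented sample line in smartctl's attribute-table format:
sample='233 Media_Wearout_Indicator 0x0032 097 097 000 Old_age Always - 0'
left=$(echo "$sample" | wear_remaining)
echo "wear remaining: $left"
if [ "$left" -lt 20 ]; then
    echo "replace soon"
fi
```

Polling that from cron would at least give us advance warning rather than a surprise failure.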

Maybe we should put that on the agenda at some point and see if we can investigate such things.  It would be a learning experience for me - I'm more of a storage/network/fileserver kind of guy than a strictly client person, so I'm pretty rusty, but it might be fun. :)

