[lug] Hosting Question

Sean Reifschneider jafo at tummy.com
Sat Sep 30 02:25:03 MDT 2006

On Fri, Sep 29, 2006 at 01:29:27PM -0600, dio2002 at indra.com wrote:
>i'm wondering how feasible it is to host a *responsive* 24x7 server from

It really depends on what your definition of "responsive" is.  Most DSL
services have pathetic upstream bandwidth.  You're lucky if you can get a
megabit.  DSL's 768kbps works out to around 75KB/sec.  I think Comcast
offers services at 45KB/sec and 80KB/sec.  Speakeasy in some areas offers a
$110 T1 equivalent which should run around 180KB/sec.  However, by that
point you probably should be looking at outsourcing, as I've mentioned
before.

If your services can fit within that small a pipe, and you don't have
serious uptime requirements, then you'll probably be pretty happy with DSL
or similar.  Comcast I'd worry about because I've gone years without IPs
changing and then had them change several times in the last few months.

Hosting in your own home can be convenient, when you're there, but if you
ever want to vacation or otherwise leave home it can be problematic.  If
it's your personal site and it's just an inconvenience, maybe you live with
it.  Or maybe you give someone your keys to check in on the pets and
servers.  :-)

It depends how sticky the issue is, though; whether a pet-sitter can
recover from a trashed boot-sector depends on your pet-sitter.  :-)

That said, we've had one client who hosted their services off a QWest DSL
line for 5 years and was very happy.  The only outage they had that I
recall was a fiber cut that took down all QWest services in Fort Collins
for 4 hours.  I don't think they had a single other issue in 5 years with
the line.

They recently moved and decided to switch to one of those T1 voice plus
data plans, and at that time also moved the hosting of their web presence
to us.  About the only difference they mentioned was improved performance
from being on such a fast pipe.

When you're serving up 50 or 100KB of data, being on a 75KB/sec pipe means
that page will take a solid second or two for a client to load, no matter
what the end user has...

So, while your average traffic may not add up to much, if you are pushing
much data across in a single page, the ability to burst may significantly
impact the performance your users see.
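The arithmetic above is easy to sanity-check; here's a quick sketch using
the same numbers (a 100KB page over the ~75KB/sec upstream of 768kbps DSL):

```shell
#!/bin/sh
# Back-of-the-envelope: seconds ~= page size (KB) / upstream rate (KB/sec).
page_kb=100     # a 100KB page, as above
rate=75         # ~768kbps DSL upstream, as above
tenths=$(( page_kb * 10 / rate ))   # keep one decimal using integer math
echo "${page_kb}KB over ${rate}KB/sec: about $(( tenths / 10 )).$(( tenths % 10 )) seconds"
```

That prints "about 1.3 seconds", and that's before TCP overhead or any
concurrent visitors sharing the same pipe.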

>- do you run a separate mail server or on the same box?
>- do you run a separate dns server or on the same box?

These really depend on what you're doing...  For most people, running DNS
and mail and web on the same box makes sense.  For our hosting, we
typically recommend DNS on your machine, but firewalled off so that only
our DNS server can reach it.  We replicate that out of state, so end users
don't touch your box, and you don't have to worry about exploits against
your DNS server.
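A minimal iptables sketch of that kind of restriction, assuming a
placeholder address (192.0.2.53) for the one DNS server allowed in:

```shell
# Allow DNS queries/transfers only from the designated secondary
# (192.0.2.53 is a placeholder, not a real server), drop everyone else.
iptables -A INPUT -p udp --dport 53 -s 192.0.2.53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -s 192.0.2.53 -j ACCEPT
iptables -A INPUT -p udp --dport 53 -j DROP
iptables -A INPUT -p tcp --dport 53 -j DROP
```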

>- firewall setup/config (same box - i know not the best approach vs
>separate box/device)?

I recommend running a firewall on all boxes.  Whether you add an additional
firewall beyond that is up to you, but realize that you're introducing
another point of potential failure.
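As a sketch of what a per-box firewall might look like, here's a minimal
default-deny iptables policy; the allowed ports are assumptions about what
the box actually runs:

```shell
# Default-deny inbound; permit loopback, established traffic, and the
# services this box is assumed to offer (ssh and web).
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # ssh
iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # web
```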

>- once it's setup and locked down, how much admin is involved on a regular

That depends.  If you're running only one distribution, using entirely
stock packages, maybe expect to spend 30 minutes a week on applying
updates, reviewing and managing monitoring.  You may even get lucky and be
able to respond to security events within that time.  If you are building a
lot of things from scratch, without using packaging, or are running a
number of different distributions, expect to spend maybe double that for up
to 5 servers.

We try to apply security errata within 24 hours of their release, and it
seems like we're doing updates the majority of the days of the week, but
we're also supporting a bunch of different distribution flavors (Debian,
Fedora, CentOS, RHEL, Ubuntu) and versions.  However, that does tend to
scale pretty well: we look at the available updates every day on each of
the machines, which on average only takes something around a minute per
machine.  Doing it for just a single machine isn't likely to scale down
to a minute a day, though.  ;-/
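A rough sketch of that kind of daily check, with a helper that picks the
update command per distribution; the host names and the distro mapping are
made up for illustration, not our actual tooling:

```shell
#!/bin/sh
# Map a distribution name to its update command (2006-era tools).
update_cmd() {
    case "$1" in
        centos|fedora|rhel) echo "yum update" ;;
        debian|ubuntu)      echo "apt-get update && apt-get upgrade" ;;
        *)                  echo "unknown distribution: $1" ;;
    esac
}

# Hypothetical host/distro pairs; in practice you'd run the command
# on each host over ssh.
for host_distro in "web1 centos" "mail1 debian" "dns1 ubuntu"; do
    set -- $host_distro          # split the "host distro" pair
    echo "$1: would run '$(update_cmd "$2")'"
done
```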

>- what distribution you use & possibly why?

We recommend and the vast majority of our clients use CentOS.  It's a
community rebuild of Red Hat Enterprise, and once you deploy it you are
able to continue using it while getting errata for up to 7 years.

Ubuntu with Dapper also has "LTS" (Long Term Support) where they will
provide errata for up to 5 years, so it's probably also a good choice.

Right now, CentOS version 4 is fairly old, so it may not have some of the
versions of software you'd like to see.  I'm expecting to see a new CentOS
in the next 6 months though.  Ubuntu Dapper, on the other hand, was
released 3 months ago, so it's pretty fresh.

>- minimum hardware requirements?

8 way CPUs, 16GB RAM, a few TB of storage should do it...

There's no way somebody else can answer this for you...  Depends on what
you're wanting to do.

>- where you see the most return in terms of performance and hw component -
>ram, disk, etc..

Depends on what your application needs most...  On average I'd say a
Celeron 2.5GHz with 1GB of RAM and 80GB ATA disc is a good place to start.
Lots of applications will work well in that sort of footprint, unless you
know that you need more in one or more areas.

>- backup / restore strategy

Having one is definitely recommended.
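Even something simple beats nothing.  For example, a dated rsync snapshot
to a second disk or remote mount; the /mnt/backup destination and the
directory list here are assumptions, adjust for what you actually run:

```shell
#!/bin/sh
# Copy a few key directories into a dated snapshot directory.
dest="/mnt/backup/$(date +%Y-%m-%d)"
mkdir -p "$dest"
for dir in /etc /home /var/www; do
    rsync -a "$dir" "$dest/"
done
```

And actually test a restore once in a while; a backup you've never
restored from is unproven.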

>on the other hand, maybe it's just easier/better to host with an ISP.  if

Since we do hosting, I'll avoid making any recommendations.

>2) i will be updating and adding more sites/functionality over the course
>of a year so i have more flexibility in what i can do in my sandbox versus
>having to haggle with an isp all the time to change configs etc

It may make sense for you to start off hosting it off your DSL, nail things
down, and then outsource at a later time.

>3) my sense is that administering multiple domains with an ISP on a box i
>don't own is likely to be more challenging than me just owning my own box

I don't know.  We have lots of clients that host many domains on their
dedicated or virtual machines, and it doesn't seem to be very problematic.
With name-based virtuals, it's really easy.  I'm not sure where you expect the
difficulty to be when running on a hosted box.  But, I guess we target a
fairly savvy client.  A lot of the lower-cost shared hosting environments
don't give you access to httpd.conf files or the like, so I imagine that
could be quite a challenge.  All of our clients get root access, so if
you're comfortable editing an httpd.conf, it's really easy to add a virtual.
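For what it's worth, a name-based virtual in Apache is just a short stanza
in httpd.conf; the domain names and paths here are placeholders:

```apache
# Enable name-based virtual hosting on port 80 (Apache 1.3/2.x syntax).
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```

Adding another domain is just another VirtualHost block plus a DNS record
pointing at the same IP.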

 Give me immortality or give me death!
Sean Reifschneider, Member of Technical Staff <jafo at tummy.com>
tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
      Back off man. I'm a scientist.   http://HackingSociety.org/
