
Upcoming maintenance

One of the hard drives on this 'ere server has died and needs to be replaced, so there may be some downtime on my blog on Tuesday or Wednesday. (It's a bank holiday back in Blighty this Monday and a dead drive in a RAID 1 system is not an emergency.)

We're also likely to be switching to a new server in mid to late September. I'll keep you posted when it's going to happen.

(In case you're wondering why the warnings are necessary, this blog appears to be turning into a community hub these days; if it goes down for any length of time I get email, and lots of it ...)

21 Comments

1:

Whyn't you just switch over to WordPress or Typepad or somesuch and have done with it?

You know, 'the cloud', and all?

;>

2:

Why don't you just hand your business to someone else and hope like hell that they don't go bust or decide to jack up their prices 500% on impulse once they've got you trapped in their cloud?

3:

(Incidentally, performance-wise, we've just weathered 242 spams on the blog in around 24 hours, up from 125 in the previous 24 hour period and a long-term average of around 10 per day. Weird and annoying, but it's mostly not getting through.)

4:

WordPress and Typepad have been around for a while, and so far haven't jacked up prices for hosted weblogs. And of course, whether you host your own servers/apps/services or whether you go the out-tasked route, you're still at the mercy of your ISP(s).

Given the fact that one can always make more money, but can't make more time, it seems to me that out-tasking the weblogging and comment system to someone who'll do it for a nominal fee, with some level of redundancy and so on, makes sense.

Of course, perhaps you enjoy tinkering with your own server, which means that there's more to it than mere economics. There are plenty of folks who offer VMs with optional HA capabilities at reasonable prices, which gives you total control of your (virtual) box, whilst out-tasking worries about things like failing disk drives.

5:

"We used to have a different word for 'cloud computing, we called it 'a Mainframe'"

(-- Vint Cerf ?)

6:

We have at least four domains hanging off the server. I already know, from my vegan blog, that Wordpress ain't up to it (and the "buy my books" links would violate the ToS). The costs for Typepad for those domains would exceed the server costs, even assuming their hosted version offers the flexibility we have here.

7:

What makes you think I'm going to replace a hard drive (on the other side of the planet) tomorrow? I've already outsourced the hardware. As for Typepad, (a) it's Six Apart (bad history -- ask anyone with a Livejournal lifetime account) and (b) given what we've got running on my colo server it'd be more expensive to shift to Typepad, not cheaper (hint: multiple blogs in multiple domains).

And that's before you consider that this (the main) blog gets around 9000-13,000 unique visitors a day, handles roughly 30,000 pages and 90,000-100,000 http requests a day, and is just slightly busier (and a higher-profile target) than your usual hosted blog. Which makes it unsuitable for Wordpress. Or that if some ass-hat serves a spurious/false DMCA takedown notice on Typepad they take down the blog first and investigate later.

8:

Non hotswap drives? Just say no to shoddy hardware.

9:

"the "buy my books" links would violate the ToS"

No, they wouldn't. The ToS say "does not contain unethical or unwanted commercial content designed to drive traffic to third party sites or boost the search engine rankings of third party sites." They allow links to, for example, the Amazon affiliate program. My own wp.com hosted blog has links both to an Amazon 'astore' where I put any book, DVD, etc. I mention, and to places you can purchase my own music, and it's not caused any problems.

The other issues mentioned are real ones - in general, if you have the time/money/technical skills, hosting your own blog yourself beats trusting to 'the cloud' - especially if it's a big part of the way you earn a living - but WP are pretty reasonable when it comes to ToS...

10:

Here in Virginia, the Department of Motor Vehicles (and other Departments) have been dysfunctional for about a week because a batch of servers are having problems. Northrop Grumman are the contractors who built and are running the system.

11:

Want hotswap drives? That'll be an extra £50 a month, thank you very much. I'll take 15 minutes' downtime once every year or two, thanks.

12:

For which we both thank and congratulate you.

13:

Regrettably, the use of non-hotswap drives probably identifies the hardware vendor in question.

It would have cost them an extra $20/server (on $2-3k servers) to put in hot swap. But apparently that just wasn't worth it.

Sigh. We used to have standards in this industry...

14:

The servers aren't $2-3K each -- more like $0.5-1K each. They're bespoke Atom boxes assembled to order by the ISP themselves, fractional-width 1U so they can put lots of them in the datacentre and lease them cheap. Think in terms of a netbook without a screen. Of course, even a cheap netbook today could kick sand in the face of a Cray-1A (as the hobbyist in that link demonstrates) ...

15:

Oh, those things. Those little cases seem to have a lot of cooling problems; not so bad on an Atom laptop/desktop, but with server quantity RAM and multiple HDs and the power supply internal rather than a brick...

Crays used to be status symbols. Why, there was the time we mumble mumble borrowed mumble...

16:

It's amazing when we stop and consider just how far we've come. A modern day computer can emulate, in software, at full speed, hundreds of instances of a computer that was incredibly popular in its day, and still have enough processing power left over to render POV-Ray images pretty damn quickly. The cache on a CPU nowadays is larger than the total amount of RAM installed in typical high end systems in the 80s. And what's more, you get all of this capability for just a couple of hundred bucks, compared with millions of dollars.

Hell, the only reason I'm about to upgrade my home desktop computer is because I loathe the built-in glossy screen with a passion normally reserved for Redmondware. The new system I'm getting should last me for a good ten years, very comfortably ... where are all those Pentium III systems now?

17:

Hm, so this shows some of the (potential) advantages of hosting virtual servers instead of dedicated hardware boxes. Assuming that your vendor gets four of these per rack unit, that's 32 servers plus a switch in 9U.

On the other hand, that 9U can get you an IBM BladeCenter loaded with 14 HS22s. Assuming that the Atom is about 1/2 of a 3GHz Xeon core (it's probably less), that's 15 virtual servers per blade (with up to 12GB RAM per server, but you'd probably allocate more like 1GB), or 210 servers and two switches in 9U, pulling no more than 3000W. Of course, you don't have the same expansion capability for network I/O (only 4 gigE ports per blade) and disk (1TB/blade, halve that for RAID-1, giving you 32GB/server, or use external fibre channel disks with all that entails). You're probably looking at something like $12K for the chassis, PSUs and switches, and $5K/blade, giving you a hardware cost of $82K, or about $400/server, though.
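The arithmetic in the paragraph above can be checked with a few lines of Python. All the figures here (blade count, VM density, prices) are the commenter's own estimates, not vendor list prices:

```python
# Re-running the back-of-the-envelope blade-chassis numbers from the
# comment above. Every constant is an assumption taken from the text.

CHASSIS_COST = 12_000     # chassis, PSUs and switches (estimate)
BLADE_COST = 5_000        # per HS22 blade (estimate)
BLADES = 14               # blades in a 9U BladeCenter
VMS_PER_BLADE = 15        # assuming an Atom ~ 1/2 of a 3GHz Xeon core

servers = BLADES * VMS_PER_BLADE               # 210 virtual servers
hardware = CHASSIS_COST + BLADES * BLADE_COST  # $82,000 total
per_server = hardware / servers                # ~$390, i.e. "about $400"

print(servers, hardware, round(per_server))
```

This confirms the figures quoted: 210 servers, $82K in hardware, and roughly $400 per virtual server.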

What are the failure modes here? Drive failure: hot swap the drive, no downtime. PSU failure: swap the PSU, no downtime. Blade failure: swap blade, move drives while 15 servers are down. Switch or chassis failure: swap the whole thing, while 210 servers are down.

And of course you can allocate "larger" servers as necessary. Seems to me that the economics make stunning sense, except for servers requiring a lot of disk or relatively enormous amounts of network I/O that are still cheap on processing power. (And that would be for serving static content, where by my calculations you usually run out of disk bandwidth before you run out of network bandwidth on a host.)

Note, Charlie, that I'm not suggesting that you or your hosting provider do this; I've seen enough suggestions here from people that didn't know the full circumstances that I'm not about to say anything about that. :-) I'm just putting this up for the fun of the calculation, since if you run your own server you probably enjoy this sort of thing as much as I do.

18:

You're probably thinking in terms of a large, well-capitalized national-scale ISP. I'm thinking in terms of a small, poorly-capitalized local ISP ... with a senior sysadmin I drink with and who is willing to hold my hand when I run into problems. NB: we went with the cheap Atom solution last time I researched optimal solutions to my needs, which was 2-3 years ago. Another year or so and it'll be time to revisit the question. But for now, while the hardware is low grade, I'm more than happy with the bandwidth and service level (which, these days, is more important: hardware is a commodity but peopleware is priceless).

19:

Mmm. The main reason we went with a certain hosting provider at the last contract job I did was because the guy we had a meeting with was a) very clueful, b) the CEO, and c) I had his cellphone number.

And I'm familiar with the poorly capitalized thing, too, and not only because I don't run hardware like this at home. Back in the mid '90s I was one of those guys with rows and rows of USR Sportsters nailed to the wall of the server room.

The biggest issue with getting out of that is the ironic loss of redundancy: at the point when you want to move up from homebuilt machines to a blade chassis with a couple of blades, usually the cost of spares (an extra blade, switch, PSU and chassis) is a good fraction of the cost of the initial equipment, and that's usually prohibitive. Yeah, there is the option of support contracts, but those are not cheap either, and often don't give you the support you were hoping for. (Now that Sun's dead I guess I'm allowed to mention how a Sun field engineer, coming in with a replacement CPU board for an E5000 or something similar, actually broke the new CPU board when he was inserting it.)

But cheers for supporting the little guy. Even if you might end up with more downtime (which, surprisingly to many, you don't--small hosting providers often have better uptime records than large ones) at least you have the comfort of knowing that someone's working his ass off fixing the problem, rather than the "issue" being mired in the ticketing system of some faceless bureaucracy somewhere. That can make all the difference in the world when it comes to one's frustration level.

20:

"Drive failure: hot swap the drive, no downtime. PSU failure: swap the PSU, no downtime. Blade failure: swap blade, move drives while 15 servers are down. Switch or chassis failure: swap the whole thing, while 210 servers are down."

How much are these failures going to cost in cash money? I think you said a blade was 5k, whereas replacing half a dozen of the little Atom servers over a period of a year or so would only cost 2.5k or so. They can probably be repaired on site (not something anyone could reasonably do with high-end IBM kit). It's a lot cheaper to stock the spares cupboard with a few Atom units than it is to keep a couple of replacement IBM blades on-site in shrink-wrap, depreciating in value whether they ever get used or not.

Charlie's hosting service isn't aimed at 4-9s uptime as it doesn't need to be. The virtual server idea is better aimed at large flexible operations that do need the 4-9s SLA and that have the deep pockets to pay for it.

21:

Virginia's DMV, Tax Department, and Elections Board are still offline; the other departments are back. Not only did the EMC DMX-3 storage array die, but the backup system did, too.


About this Entry

This page contains a single entry by Charlie Stross published on August 30, 2010 8:00 AM.
