
Building Servers for Fun and Prof... OK, Maybe Just for Fun


#43

Apologies in advance - I am that lowest of orders: an end-user and RentaServer renter.

I have been getting screwed by Hosts ever since the Web was invented. (literally)

Numerous SERIOUS marketing efforts have been started over the years and all have ended with crashed Sites.
To explain: I am a serious marketer and learned to drive traffic the hard way in the hard world with pay-in-advance ads.
The Web should have been a paradise for me as a small operator with minuscule costs compared to the “real” world.
Almost EVERY promotion I’ve run - and they are still expensive even out in the Cyberbog - has worked, thus resulting in Server crashes from even minor peak traffic.

We aren’t talking Markus Frind figures here, just a few thousand hits.

After all these years, I have never got a straight answer and never had a Server service stay up for one month without downtime.

I don’t need PeerOne. I keep hearing of guys running Servers from home, like Plentyoffish.com did, and all these years later, with traffic that would make ME a billionaire on 10% of it, he still runs everything almost as a one-man band, on literally 1/300th of the number of Servers his competition uses.

I had high hopes for the “Cloud” with its lies of distributed loading, but from 1&1, who promoted it heavily, to all the others, they fall over - so where is this redundancy?
My three current “trials” have all fallen over in the last 3 months - all “cloud-based”…

  1. Run a few Forums.
  2. No high-demand music/flash/video downloads, mainly text.
  3. Have 5000 concurrent users.
  4. Handle spikes of 10,000 visitors per hour - per hour, NOT per second or minute (see the quick arithmetic below).
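
For scale, that stated peak averages out to only a handful of requests per second - a rough, illustrative calculation, assuming the 10,000 figure is spread evenly over the hour:

```python
# Back-of-the-envelope: 10,000 visitors per hour, assumed evenly spread.
peak_per_hour = 10_000
per_second = peak_per_hour / 3600
print(f"{per_second:.1f} requests/second on average")  # ~2.8 req/s
```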

plentyoffish.com was handling 100 times that with the colossal demands of a dating service system, on a home PC. Running Windows as the final insult! :-}
I don’t even need big data pipes. No videos, no music.

With all the tech expertise I’ve seen on this Board here, someone must be able to tell me the secret.
Or, at least how Markus did it.
Why do others need 600 Servers and 500 staff and he needs a couple of renta-boxes and his girlfriend?


#45

Thanks a lot for this blog. I bookmarked it last year knowing that this year I would be building a new server. Now I am ready to build it.

Would you still use the same specs? Or would you move to an Intel E5-2620 (6 core) platform?

My server will be mainly file storage, but I would like to leave the opportunity to expand into Virtualization in the future.


#47

Here’s what we built in 2013

  • Intel Xeon E3-1280 V2 Ivy Bridge 3.6 GHz / 4.0 GHz turbo quad-core ($640)
  • SuperMicro X9SCM-F-O mobo ($190)
  • 32 GB DDR3-1600 ($292)
  • SuperMicro SC111LT-330CB 1U rackmount chassis ($200)
  • Two Samsung 830 512GB SSD ($1080)
  • 1U Heatsink ($25)

$2,427

vs. what we’re building in 2016

  • Intel i7-6700k Skylake 4.0 GHz / 4.2 GHz turbo quad-core ($370)
  • Supermicro X11SSZ-QF-O mobo ($230)
  • 64 GB DDR4-2133 ($680)
  • Supermicro CSE-111LT-330CB 1U rackmount chassis ($215)
  • Two Samsung 850 Pro 1TB SSD ($886)
  • 1U Heatsink ($20)

$2,401

About the same price, but twice as much memory, twice as much (and probably 50-100% faster) storage, and ~33% faster CPU.

Some load numbers:

  • 2015 Skylake build – 14w (!) at idle, 81w full CPU load
  • 2012 Ivy Bridge build – 31w at idle, 87w full CPU load
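
For a rough sense of what that idle-power gap is worth over a year, here’s a back-of-the-envelope calculation; the $0.10/kWh rate and the 24/7-at-idle assumption are illustrative guesses, not figures from the thread:

```python
# Annual electricity cost at idle for each build, using the idle wattages above.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10  # USD per kWh, assumed for illustration

def annual_idle_cost(idle_watts):
    kwh_per_year = idle_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * RATE_PER_KWH

print(f"Ivy Bridge (31w idle): ${annual_idle_cost(31):.2f}/year")  # ~$27
print(f"Skylake (14w idle):    ${annual_idle_cost(14):.2f}/year")  # ~$12
```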

#48

No concerns about not using ECC memory in the new build?

I like racking servers and the price savings myself, but the reason AWS is killing it in the market is instant provisioning. Getting quotes back and forth with data center people was the most annoying part of the whole thing to me. The hardware folks are great at getting you hardware quickly, but getting it into a rack under a new contract was always a huge PITA and involved a lot of waiting around.


#49

New blog post related to the ECC issue going up today! Keep :eyeglasses: out for it.


#50

Over the past few years, to leverage the benefits of instant provisioning that AWS provides, I have implemented VM environments with Salt or Puppet to empower the development teams to provision their own servers. As previous updates mention, it is far cheaper to co-lo than to use AWS - it's up to the imagination of your SysAdmin or DevOps guys to turn wish lists into reality.
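
As a minimal sketch of that kind of self-service tooling - assuming Salt Cloud is already configured against the colo VM hosts, and with profile names invented purely for illustration:

```python
# Hypothetical wrapper that lets developers provision their own VMs via
# salt-cloud, restricted to a pre-approved set of profiles.
import subprocess
import sys

APPROVED_PROFILES = {"dev-vm-small", "dev-vm-large"}  # assumed profile names

def provision(vm_name, profile="dev-vm-small"):
    if profile not in APPROVED_PROFILES:
        raise ValueError(f"{profile!r} is not an approved profile")
    # salt-cloud creates the VM from the named profile and applies its states
    subprocess.run(["salt-cloud", "-p", profile, vm_name], check=True)

if __name__ == "__main__":
    # usage: provision.py <vm_name> [profile]
    provision(*sys.argv[1:3])
```

Puppet with a provisioning front end (Foreman, for example) can fill the same role; the point is that the request-to-running-VM loop stays in-house instead of going through a data center quote cycle.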


#51

@codinghorror: Thanks for the write-up but ESPECIALLY thanks for continuing to post newer builds long after the original article. This is very helpful. Thank you.