Bootin’ Up The Beast!

Alrighty, I’m back! This past month has been a whirlwind of sleepless nights, with no shortage of frustration, organizational growing pains being one of the major sources (for me, at least). One thing I’ve come to realize about our cloud-based society is how quickly low-cost monthly services rack up when they’re billed per person. As an example, two services we use daily are Slack and Taiga. Slack is our go-to, user-friendly communications platform, and Taiga is our project management platform.

As both services offer a free version, we were initially unconcerned. However, once we got into heavy development, we realized how much we were communicating back and forth, and we quickly surpassed the limits of Slack’s free tier. On top of that, we had Taiga to worry about: not being able to set our project, Radio Violence, to the so-called “private mode” was also a topic of discussion. After a brief period of testing out how much we enjoyed the premium features, we decided to move forward with commissioning a server to self-host both the project management platform and a new chat platform.

The key deciding factor was cost. Currently, our studio has 7 main employees, plus a few interns (praise be to the art gods!). With Slack, that would be $6.67/user/month, for a total of about $560 USD per year. Then let’s say, conservatively, that we have 3 extra people on at any given time, year round. For those 3 people (paid at the monthly rate of $8/mo.) it would be an additional $288 USD per year. On top of that, we have Taiga to factor in as well. A little friendlier on the wallet, Taiga is only $5/user/month. Again, with our 7 + 3 additional contractors, that’s another $600 USD, for a grand total of $1,448.00 USD (or $1,909.16 Canadian dollars). Whaat, that’s crazy!
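For fun, here’s that math as a quick sanity check in Python (rates as quoted above; I’m assuming the 3 contractors sit on Slack’s month-to-month pricing, per the $8/mo. figure):

# Yearly cost estimate for the hosted plans described above.
SLACK_ANNUAL = 6.67    # USD/user/month, annual plan
SLACK_MONTHLY = 8.00   # USD/user/month, month-to-month (contractors)
TAIGA = 5.00           # USD/user/month

core_team, contractors = 7, 3

slack = SLACK_ANNUAL * core_team * 12 + SLACK_MONTHLY * contractors * 12
taiga = TAIGA * (core_team + contractors) * 12

print(f"Slack: ${slack:,.2f}/yr  Taiga: ${taiga:,.2f}/yr  Total: ${slack + taiga:,.2f}/yr")
# -> Slack: $848.28/yr  Taiga: $600.00/yr  Total: $1,448.28/yr (rounds to $1,448)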

So here we are, much later, with me deciding what to build for a server. I knew we wanted something we could expand on, something that would be reliable and have a relatively short ROI time (about 1 year), and most importantly, something I could run virtual machines on so I could have my own private media server, haha!

So, here’s what I’ve come up with:

Motherboard:
Gigabyte GA-7PESH2
– 2 x 10Gb Ethernet ports
– Onboard SAS controller (so I can expand up to 15 drives total!!)
– Dual Xeon LGA2011 CPU sockets
– RAM support up to 512GB
CPU(s):
2 x Intel Xeon E5-2620 w/ 2 x Arctic Freezer 33 CO
– Hexacore processors @ 2.0GHz
RAM:
2 x 8GB Hynix Low Profile DDR3 ECC REG
Drives:
6 x 2TB Seagate IronWolf Pro NAS drives (storage)
1 x 120GB Kingston SSD (boot)
2 x 4TB WD Blue drives (yet to get, for backup drives)
Case:
Phanteks Enthoo Pro Full ATX
PSU:
EVGA 750W G3 80+ Gold Fully Modular

Whooowee! That’s a mighty sight for sore eyes! With this beast of a home server we have plenty of resources available to us. Right, so now that we have all of this, what is the best way to use it?

I spent quite some time before purchasing the hardware agonizing over which platform to use as the hypervisor, as I knew I wanted a virtualized environment. I had originally chosen Windows Server for this role due to my familiarity with the platform. However, one drawback was that I would have had to run a full hardware RAID configuration under Hyper-V, and I wanted to avoid the hassle of managing a full-blown RAID setup and the potential for failure of the SAS controller.

Cue FreeNAS! I ended up deciding to use FreeNAS due to its ZFS support. With ZFS, the software controls the “RAID” setup. The reason I say “RAID” instead of RAID is that FreeNAS doesn’t actually use RAID: you create what are called pools, which contain your “zvols” and other datasets. A zvol, or ZFS volume, is what you would actually use for storage. I won’t go further into the specifics of ZFS, as it really could be a post all on its own (you can find the details here). Because this is software “RAID”, I wouldn’t need to hunt down a matching SAS controller just to restore and recover the data on the drives; aside from mass drive failure, the only way to destroy that data would be to somehow lose the dataset off the drives themselves. So, with this in mind, I decided to configure the drives as follows:
– I used the SSD for the server boot drive (way overkill, I know, but I can use it for other things down the road)
– 2 x “RAID 5” arrays (RAIDZ1, in ZFS terms) across the 6 NAS drives (two separate arrays because I wanted more than a partition separating my media library from the platforms we were about to install)
– 2 x 4TB backup drives, one for each array (when we can save up for them :’) )

As stated above, I wanted to maintain a totally separate array for each server. With the two “RAID 5” setups, we have 4TB of usable storage on each array, with the remaining 2TB in each being used for parity. That way, if any one drive goes down, we have the remaining two to rebuild the array with.
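If it helps to picture the layout, here’s a rough sketch of the two pools and their capacity math in Python (the pool and device names are made up for illustration; FreeNAS actually builds all of this through its web UI):

# Hypothetical layout of the two RAIDZ1 ("RAID 5") pools described above.
DRIVE_TB = 2

pools = {
    "media": ["da0", "da1", "da2"],     # my media library
    "nosleep": ["da3", "da4", "da5"],   # chat + project management platforms
}

for name, drives in pools.items():
    raw = len(drives) * DRIVE_TB
    usable = (len(drives) - 1) * DRIVE_TB  # RAIDZ1 keeps one drive's worth of parity
    print(f"{name}: {raw}TB raw, {usable}TB usable, survives one drive failure")
# -> media: 6TB raw, 4TB usable, survives one drive failure
# -> nosleep: 6TB raw, 4TB usable, survives one drive failure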

Now, with the “RAID” configured, it was time to decide which platform to virtualize. …Of course I chose Linux; given the nature of each server, I couldn’t justify ‘spending’ the extra system resources on the overhead of a Windows-based VM. So, each server is running Ubuntu, with 5 CPU cores and 4GB of RAM dedicated to each. This leaves 8GB of RAM and the two remaining CPU cores for the base server to use as it sees fit!
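A quick tally of where that leaves the host (the VM names below are placeholders, not what I actually called them):

# Resource split between the two Ubuntu VMs and the FreeNAS host.
TOTAL_CORES = 2 * 6    # two hexacore Xeons
TOTAL_RAM_GB = 2 * 8   # two 8GB ECC DIMMs

vms = {"nosleep-vm": (5, 4), "media-vm": (5, 4)}  # (cores, GB of RAM) each

cores_left = TOTAL_CORES - sum(c for c, _ in vms.values())
ram_left = TOTAL_RAM_GB - sum(r for _, r in vms.values())
print(f"Host keeps {cores_left} cores and {ram_left}GB of RAM")
# -> Host keeps 2 cores and 8GB of RAM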

So there we have it! I get my personal media server to play with (Plex, if you’re interested), and No Sleep gets a new chat platform (Rocket.Chat) with a self-hosted version of Taiga to boot! Though it does cost something to host the server, it definitely won’t cost the roughly $120 USD per month those two services would have otherwise. Perhaps the best thing to come out of all of this is that our server can grow with us as our needs require.

Also, I’ve affectionately named the server Snowball…

Okay, bye!

Andrew MacMillan

Co-Founder, Developer of No Sleep Software Ltd.