Our new server will run Ubuntu Server 13.04
In an effort to stay on top of the latest technology, I'm migrating our sites to a new server and also switching from CentOS 6.4 to Ubuntu 13.04.
CentOS 6.4 still uses the Linux 2.6 kernel, which has received mostly security fixes and few new features since 2009. Most of the work on the Linux kernel over the last 4 years has gone into newer versions; the current stable kernel is 3.10 (see the development chart on Wikipedia). Ubuntu 13.04 ships with kernel 3.8, which is much closer to the latest version. Four years' worth of kernel development surely brings better support for modern hardware and a variety of new software features that make the system easier, safer, and faster.
Switching from CentOS to Ubuntu was easy
Yesterday, it took me about 5 hours to build (including writing documentation) a working virtual machine running Ubuntu with all the software we're using. I'm now doing all my development on Ubuntu after just 1 day. Ubuntu can install many things for you, but I chose to do most of it manually so that I know exactly what is installed.
It was a great experience getting used to Ubuntu's package system.
Many of the packages were available directly from Ubuntu, including very recent versions, which is not true on CentOS. I did have to find a custom PPA for PHP 5.4 and some other software, but that was easy to do. I was able to quickly find resources online to install the latest versions of all the software I use.
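For readers new to Ubuntu, adding a third-party PPA only takes a couple of commands. This is a minimal sketch; ppa:ondrej/php5 is an example of a PPA that carried PHP 5.4 builds at the time, not necessarily the one I used, so substitute whichever PPA you decide to trust:

```shell
# Install the helper that provides the add-apt-repository command
# (this is the package name on Ubuntu 13.04)
sudo apt-get install -y software-properties-common

# Register the third-party PPA with apt (example PPA; verify before trusting)
sudo add-apt-repository ppa:ondrej/php5

# Refresh the package index so the new source is visible, then install
sudo apt-get update
sudo apt-get install -y php5
```

After the PPA is added, the packages it provides upgrade through the normal apt-get upgrade cycle like everything else on the system.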
It seemed easier than CentOS for many things because it takes some of the steps out of building the system. Ubuntu's core install has many of the necessary packages pre-installed compared to a minimal CentOS install. While some of the system is organized differently and uses different commands for management, it's not completely alien, and Ubuntu Server is more user-friendly than the CentOS/Fedora/RHEL distros even without a desktop installed.
Ubuntu's space requirements are nearly the same as CentOS's
My CentOS virtual machine is limited to a 4 GB fixed-size disk and has 1 GB of free space after being fully configured, with the yum cache cleaned and only 2 kernels installed. I found that Ubuntu uses nearly the same amount of space once fully configured, after running "apt-get clean". To make my VM smaller and easier to distribute, I put the 2 GB swap disk on a dynamic VDI file so that it would initially use 0 bytes. This allows the system to have a large enough swap disk without making the download significantly larger. I also keep the virtual machines small by storing the source code and database on the host system and using Samba over the network to connect to the host from the virtual machine. This way they don't grow over time.
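The dynamic swap disk trick can be sketched with VirtualBox's VBoxManage CLI. This assumes VirtualBox 4.x-era command names, and the VM name "Ubuntu-Server" and controller name "SATA" are hypothetical placeholders for your own setup:

```shell
# Create a dynamically allocated 2 GB VDI. A dynamic ("Standard" variant)
# VDI occupies almost no space on the host until the guest writes to it,
# so the distributed VM image stays small.
VBoxManage createhd --filename swap.vdi --size 2048 --variant Standard

# Attach it to the VM as a second disk on the existing SATA controller
# (VM and controller names below are examples; match your own configuration)
VBoxManage storageattach "Ubuntu-Server" --storagectl "SATA" \
    --port 1 --device 0 --type hdd --medium swap.vdi
```

Inside the guest, the new disk is then formatted as swap with mkswap and listed in /etc/fstab so it is activated at boot.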
Additional security hardening this time
I also spent a few more hours studying the NSA's Linux hardening guides to better understand how to partition the system before installation. Many of the recommended security changes can be made after a system is installed, but when it comes to partitioning and mounting the Linux file system, it is a lot easier to do that before installation, since mistakes could cause data loss and downtime. Specifically, I learned how to use the fstab options noexec, nosuid, and nodev on various Linux directories so that if there is a system breach, it is better contained. Having separate partitions also better ensures that logging continues to function if a user fills up all available space on another partition.
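As an illustration, here is what those mount options look like in /etc/fstab. The device names and the choice of which directories get which options are hypothetical; the hardening guides cover the appropriate layout for a given system:

```shell
# /etc/fstab excerpt -- illustrative partitions only; adjust to your layout.
#   noexec: binaries cannot be executed from this mount
#   nosuid: setuid/setgid bits are ignored on this mount
#   nodev:  device special files are not interpreted on this mount
/dev/sda3  /tmp      ext4  defaults,noexec,nosuid,nodev  0  2
/dev/sda5  /var/log  ext4  defaults,noexec,nosuid,nodev  0  2
/dev/sda6  /home     ext4  defaults,nosuid,nodev         0  2
```

One caveat worth testing before deploying: noexec on /tmp can interfere with some package installers that run scripts from temporary directories.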
Why do I self-install the Operating System (OS)?
Soon I'll be setting up Ubuntu on our new server. I actually install the OS myself so that I can document the entire process and have total control over the system.
Many web developers rely on hosting companies to provide the OS installation, and they use third-party control panel software such as cPanel or Plesk to manage their server. This is inferior in some ways because it is harder to upgrade and understand those systems when something goes wrong. For example, an old version of cPanel may not be compatible with a newer version of some service. That leaves you running old versions longer, which may compromise your security or reduce performance. Having full documentation of how my system is configured helps me a great deal in maintaining a reliable test configuration that I can reproduce on the live server exactly the same way. I don't want third-party software or companies automatically deciding when to make changes to my system. Over the last 2 years, this has worked out great, since I've been able to run the latest software and security updates typically within 1 to 3 days of their release.
I also plan on distributing my Ubuntu virtual machine as part of the open source Jetendo release. This will give other web developers a fully working system when first exploring my application. It will save them dozens of hours learning how to configure Linux and let them jump right into building great web sites. If my platform were based on proprietary third-party software, I wouldn't be able to do this.
Thoughts on the new server hardware
The new server specs are as follows:
- 1 x Intel E3-1230V2 Ivy Bridge quad-core CPU (3.4 GHz)
- 16 GB system memory
- 1 x Intel S3500 480 GB SSD
- 1 x 7200 RPM SATA 3 hard drive
- 100 Mbps port
The Intel S3500 (~75,000 IOPS) has approximately 5 times the random read performance of the Intel 520 series drive we are currently using for the OS and database. High-end hard drives only achieve around 200 IOPS. Since the current server already had an SSD capable of around 10,000 IOPS, our customers never had to suffer that kind of slow performance, but I think it's amazing that we can have performance up to 300 times faster than cheaper hosting without much additional cost. This should make disk access incredibly fast, which is important because we handle millions of files and hundreds of database tables. Additionally, the SSD's storage space is much larger, so ALL the files will now be stored on the SSD, making performance more consistent and shortening maintenance-related downtime, since services will restart faster.
According to cpubenchmark.net, the new CPU is about 20% faster than the current one (Intel E3-1230 Sandy Bridge, 3.3 GHz). It is also slightly cheaper despite being faster, since hardware prices continue to drop and our hosting company offers great pricing. www.hivelocity.com in Tampa, FL hosts both the new and old servers. I've really enjoyed their excellent service and affordable pricing.
Premium hosting services for our customers
It takes a lot of work to move to a new server, but this kind of proactive service is what my customers are willing to pay extra for with premium-quality hosting. I try to move to a new server every 1 to 2 years to reduce the risk of downtime from running older hardware and operating systems. It has only been about 16 months since the last hosting migration, but the hardware has improved a lot in that time, so now is the right time to move.