Sunday, April 3, 2011

Linux Administrators: Support and End of Life Cycles for Linux Server OS

Linux Server OS Life Cycles
When choosing a Linux distro for a production server, there are many things to consider.  One of the more important ones is the length of time the version will be supported with timely updates. 
My personal preference is a 3-4 year life cycle for a server distro.  To me, it makes sense to downgrade server hardware from mission-critical to non-mission-critical duty after about 3 years.

Disclaimer:  I realize this is not an exhaustive list, but it does cover most of the major server distros.
 
With that in mind, here is a list with notes and links of the major distros and their respective support schedules.  Read carefully, do your own research, and let the information work for you.
Have fun with Linux.
Jim
--------------------------------------------------------------
SuSE and openSuSE

Note: At the end of 2010, Novell, the owner of SuSE, agreed to be acquired by another company (Attachmate).
http://en.wikipedia.org/wiki/SUSE_Linux_distributions
openSUSE currently has a release cycle of 8 months.
openSUSE releases have a lifetime of 2 releases + 2 months overlap;
with a release cycle of 8 months, that works out to 18 months.
Dear openSUSE users,
SUSE Security announces that the SUSE Security Team will stop releasing updates for
openSUSE 11.1 soon. Having provided security-relevant fixes for the last two years, we will stop
releasing updates after December 31st 2010.
As a consequence, the openSUSE 11.1 distribution directory on our server
download.opensuse.org will be removed from /distribution/11.1/ to free space on our mirror
sites. The 11.1 directory in the update tree /update/11.1 will follow, as soon as all updates have
been published.
Also the openSUSE buildservice repositories building openSUSE 11.1 will be removed.
The discontinuation of openSUSE 11.1 enables us to focus on the openSUSE distributions with
newer release dates, to ensure that our users can continuously take advantage of the quality that
they are used to with openSUSE products.
This announcement holds true for openSUSE 11.1 only. As usual, the openSUSE project will
continue to provide update packages for the following products:
* openSUSE 11.2 (supported until approximately May 12th 2011)
* openSUSE 11.3 (supported until approximately Jan 15th 2012)
* openSUSE 11.4 (currently in development, to be released in March 2011)

SLES (SuSE Linux Enterprise Server)
SLED (SuSE Linux Enterprise Desktop)
http://support.novell.com/lifecycle/
Up to 10 years life cycle.

-----------------------------------------------------------------------------------------------

RedHat Enterprise
CentOS
Fedora
http://en.wikipedia.org/wiki/CentOS
https://access.redhat.com/support/policy/updates/errata/
http://wiki.centos.org/FAQ/General#head-fe8a0be91ee3e7dea812e8694491e1dde5b75e6d
http://fedoraproject.org/wiki/Releases/Schedule
The RHEL Life Cycle identifies the various levels of maintenance for each major release
of RHEL over a total period of up to ten years from the initial release date, which is often referred to as
the general availability (GA) date.




19. What is the support "end of life" for each CentOS release?

CentOS-3 updates until Oct 31, 2010
CentOS-4 updates until Feb 29, 2012
CentOS-5 updates until Mar 31, 2014

The Fedora Project releases a new version of Fedora approximately every 6 months and provides
updated packages (maintenance) for approximately 13 months after each release.  This allows users to
"skip a release" while still having a system that receives updates.

-------------------------------------------------------------------------------

DEBIAN
http://en.wikipedia.org/wiki/Debian
Two (2) Year Planned Release Cycle
Security Policy:
The Debian Project, being free software, handles security policy through public disclosure rather than
security through obscurity. Many advisories are coordinated with other free software vendors
(Debian is a member of vendor-sec) and are published the same day a vulnerability is made public.
Debian has a security audit team that reviews the archive looking for new or unfixed security bugs.
Debian also participates in security standardization efforts: the Debian security advisories are
compatible with the Common Vulnerabilities and Exposures (CVE) dictionary, and Debian is
represented in the Board of the Open Vulnerability and Assessment Language (OVAL) project.[53]
The Debian Project offers extensive documentation and tools to harden a Debian installation both
manually and automatically.[54] SELinux (Security-Enhanced Linux) packages are installed by default
though not enabled.[55] Debian provides an optional hardening wrapper but does not compile their
packages by default using gcc features such as PIE and Buffer overflow protection to harden their
software, unlike Ubuntu, Fedora and Hardened Gentoo among others.[56] These extra features greatly
increase security at the performance expense of 1% in 32 bit and 0.01% in 64 bit.[57]



Ubuntu
http://en.wikipedia.org/wiki/List_of_Ubuntu_releases
http://www.ubuntu.com/server
Up to 5 years for LTS (Long Term Support) server releases; standard releases are supported for 18 months.
--------------------------
End of this Post.

Wednesday, March 30, 2011

New Posts are On the Way

Some new posts are on the way.  I was sidetracked with some other stuff.
Soon to be posted.

Linux Administrators Guide to Life Cycles for Linux Server OS

Why Linux Client Machines Need the Latest Linux Version

Little Tips that Make Linux Support Easier

and more.

Jim

Sunday, February 13, 2011

Tar, Zip, and Rsync #2

Rsync:
According to wikipedia, rsync was first announced in June of 1996.
See link: http://en.wikipedia.org/wiki/Rsync
In its basic form, rsync copies files from one folder to another and keeps the destination synchronized with the source each time it runs.
The folders can be on the same computer, on different computers on the same network, or on another network entirely.  Rsync uses a delta-transfer algorithm: it compares the new files against the old ones and transfers only the bytes that have changed, deleting destination files only when told to.  There are many options when using rsync.  Read the man pages, read the internet forums, and do what works best for you.

I use ssh and rsync to maintain a copy of about 100GB of important data that changes on a weekly basis.  Rsync is installed on both the source and destination machines.  I ssh between the machines and initiate rsync on the destination machine.  Bandwidth is not an issue for me, as both machines are on a gigabit router, so I do not use compression.  If you are going across the internet, use compression to make better use of bandwidth.
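As a local illustration of the compression flag, here is a minimal sketch; the /tmp paths are throwaway examples, not paths from my setup.  Compressing a local-to-local copy is pointless in practice, but it shows the syntax:

```shell
# Throwaway demo paths -- replace with your own source/destination.
mkdir -p /tmp/rsz/src /tmp/rsz/dst
seq 1 1000 > /tmp/rsz/src/data.txt
# -r recursive, -z compress file data in transit.
# Over a remote link the same flags apply, e.g.:
#   rsync -rz root@<server>:/folder1/ /home/Rsync/folder1/
rsync -rz /tmp/rsz/src/ /tmp/rsz/dst/
wc -l < /tmp/rsz/dst/data.txt
```

The -z flag only pays off when the link is slower than the CPU can compress; on a gigabit LAN it usually just adds overhead.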

My rsync command looks something like this:
rsync -r root@<server>:/folder1/folder2/folder3/ /home/Rsync/folder1/

Run from the destination server, the command starts rsync, logs in as root on the source server,
and starts rsync on the source server as well.  (Rsync must be installed on both servers.)
The "-r" flag means recursive: get all the underlying folders and files.
The first path ending in folder3/ is my source directory. 
This is followed by a space and the path to my destination folder.
I end each path in a "slash"; note that a trailing slash on the source path tells rsync to copy the contents of the folder rather than the folder itself.

I don't save the owners/rights because of the type of data I am saving. 
If you are backing up user folders, you should save owners/rights to simplify any restore you might need to make.

I have just briefly touched on rsync and how it works for me.  Study it, read the man pages, read the forums on the net, and let it work for you.

NOTE:
This post was later than I expected.  We had record 100-year snowfall and low temperatures in my area of Oklahoma, and several things were put on hold for a few days.  Through the use of Linux and ssh, we were able to maintain servers, computer support, backups, etc., from our homes, as the power and internet stayed up.

Have fun computing and until next time,
Jim

    

Sunday, January 30, 2011

Tar, Zip, and Rsync #1 Linux

A little history of TAR and Zip.  TAR, or tar, is actually a Tape ARchive program.  For those of you younger than 45 or so, we actually backed up our data on tape drives; serial and slow.  If you go back far enough, the tapes were reel-to-reel and we had to specify the tape length, starting point, compression (if any), etc.   When used with compression (tar's -z flag, which is actually gzip), tar becomes a serial backup program with compression.  On today's computers and media, this process is many times faster than the old tape drives and is still very reliable.  Zip is basically a compression program that works well.  Most of us have used Zip in either Windows or Unix/Linux based backups.

There are many websites with information on tar zip backups.  Used with ssh,
you can safely backup across networks, the internet, etc.  Bandwidth becomes the slowdown obstacle.  Personally, I use tar zip and ssh to make those daily backups of small databases and other stuff.  I use rsync for the really big stuff.
More on rsync next week.

OK.  For the actual command line using ssh, I do the following:

tar -cvzpf - "/folder to backup/"* | ssh jim@BackupStorageServer1 "cat > '/destination folder/DestinationFileName.$1.tar.gz'"

Read the man pages on tar and you will understand the flags: -c create, -v verbose, -z gzip compression, -p preserve permissions, -f - write the archive to standard output.  (Quote any paths that contain spaces, as above, or the shells will split them.)
I usually put the date in the $1 place.  
Practice extracting a file in a location on your computer that is not vital. 
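For that practice run, here is a hedged sketch that stays entirely under /tmp; the paths and file names are made up for the example:

```shell
# A safe round trip with throwaway paths -- not the real backup paths.
mkdir -p /tmp/tardemo/source /tmp/tardemo/restore
echo "important data" > /tmp/tardemo/source/notes.txt
stamp=$(date +%F)   # e.g. 2011-01-30 -- plays the role of $1 above
# -c create, -z gzip, -p preserve permissions, -f name the archive
tar -czpf /tmp/tardemo/backup.$stamp.tar.gz -C /tmp/tardemo source
# -x extract, -C pick where the restored files land
tar -xzpf /tmp/tardemo/backup.$stamp.tar.gz -C /tmp/tardemo/restore
cat /tmp/tardemo/restore/source/notes.txt
```

Once the local round trip works, substitute the ssh pipeline from above for the real thing.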
That is all for now.  Next time: rsync. 
Have fun computing.
Jim

Wednesday, January 19, 2011

Panic, Paranoia, and Planning

I said I was going to write about tar, zip, and rsync, but I decided this needed to go first.  I used to ask my clients, "how long can you afford to be down?"
They always said never.   Wrong answer.  The truth is, everyone can afford to be down for a given period of time, if you know and plan ahead for that specific time.
Whether it is 1 minute, 1 hour, or 1 day, all network/sysadmins plan for a down time, sooner or later, for a specific server or group of servers, or applications.
Backups are for those unplanned times when hardware fails, or a worm attacks the system, or data gets accidentally overwritten. 

Backups are an insurance policy.  You only need it when things go wrong.
Very large corporations use multiple server farms, (cloud computing), to store and backup data.  Most of us use multiple servers, RAID systems, off site data mirroring, and/or other stuff.  Each one of these is a form of backup.

So, let us start with a common sense approach. 
How much gross income did the company make in 2010?
There are approximately 250 work days in a year. 
If the gross was $250000, then the average is $1000 per day gross income.
If the gross was $1 million, then the average is $4000 per day gross income.
If it costs you about $4000 per day for your business server to be down, then you can easily justify a $4000 backup plan.   
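The same arithmetic as a quick shell check; the gross figure is the hypothetical one from above, not real data:

```shell
workdays=250            # approximate work days per year
gross=1000000           # hypothetical yearly gross in dollars
echo "$(( gross / workdays )) dollars per day"
```

Swap in your own gross income to get your daily downtime cost.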

So, let's start at the basic hardware level.
Does the company lose time and money if a certain hard drive fails?
If the answer is yes, then mirror (RAID 1) or add parity (RAID 5/6) to that drive.
(Note that plain striping, RAID 0, adds speed but no redundancy.)
If the drive is just a convenient temporary bucket for non-critical data,
the answer might be no.  Just keep a spare drive on hand.


What about at the next level: whole servers.
If your server mainboard fails unexpectedly, how much would it cost you in down time? (Real dollars!)
Maybe it is time to mirror that server with another complete server, on site or off site.  Or will a copy of all critical data, updated every 24 hours to a secondary server, keep you going? 


What about a router failure, web page server, or email system?
What if your UPS battery fails, causes a short in the system, and downs everything attached to it? (Rare, but I have experienced it.)

And, I have seen new hardware fail.  Just because the box is new does not guarantee 100% success.  


Write down (on paper and in red ink) the time lost in hours and days, and the cost in real dollars.  Be logical and think it through.  This will help you make your decision.

Much better to plan now than to be in the middle of a panic attack because a mission-critical server is down and there are no spare parts. 
Plan for the best and worst case scenarios, and sleep well.

Jim

Tuesday, January 18, 2011

Save Your Thoughts

Next posts: backups in Linux.  Tar, zip, rsync, and more.

Saturday, January 15, 2011

Open Source -- Why It Works

Wikipedia defines open source as "practices in production and development that promote access to the end product's source materials." 
"http://en.wikipedia.org/wiki/Open_source"

I like that definition.  Notice that it says "practices that promote".  Open source is not an accident, but a decision.  Open source is a concept put into practice. 

It is a great concept AND practice.  When companies lock down their code, and sometimes there are legitimate reasons to do so, they restrict the development and debugging of that code to their own code writers.
Open source takes advantage of good code writers all over the world.
Code and programs can be tweaked, altered, and configured to run better under certain situations, or overall.  Thousands of people contribute to open source code.  The internet, email, web pages, and blogs made this possible.  It is a natural development in the sharing of information.  

So, open source is here to stay.  Take advantage of it and contribute to it, whether in good code, good testing, buying products, or donating money to those legitimate web sites and companies that you feel are doing a good job.  We all benefit.