Sunday, April 3, 2011

Linux Administrators: Support and End of Life Cycles for Linux Server OS

Linux Server OS Life Cycles
When choosing a Linux distro for a production server, there are many things to consider.  One of the more important is how long the version will be supported with timely updates.
My personal preference is a 3-4 year life cycle for a server distro.  To me, it makes sense to downgrade a server hardware unit from mission critical to non-mission critical after 3 years.

Disclaimer:  I realize this is not an exhaustive list, but it does cover most of the major server distros.
 
With that in mind, here is a list with notes and links of the major distros and their respective support schedules.  Read carefully, do your own research, and let the information work for you.
Have fun with Linux.
Jim
--------------------------------------------------------------
SuSE and openSuSE

Note: At the end of 2010, Novell, the owner of SuSE, agreed to be acquired by another company.
http://en.wikipedia.org/wiki/SUSE_Linux_distributions
openSUSE currently has a release cycle of 8 months.
openSUSE releases have a lifetime of 2 releases plus a 2-month overlap.
With a release cycle of 8 months, that works out to 18 months of support.
Dear openSUSE users,

SUSE Security announces that the SUSE Security Team will stop releasing updates for openSUSE 11.1 soon. Having provided security-relevant fixes for the last two years, we will stop releasing updates after December 31st 2010.

As a consequence, the openSUSE 11.1 distribution directory on our server download.opensuse.org will be removed from /distribution/11.1/ to free space on our mirror sites. The 11.1 directory in the update tree /update/11.1 will follow, as soon as all updates have been published. Also, the openSUSE build service repositories building openSUSE 11.1 will be removed.

The discontinuation of openSUSE 11.1 enables us to focus on the openSUSE distributions with newer release dates, to ensure that our users can continuously take advantage of the quality that they are used to with openSUSE products.

This announcement holds true for openSUSE 11.1 only. As usual, the openSUSE project will continue to provide update packages for the following products:

* openSUSE 11.2 (supported until approximately May 12th 2011)
* openSUSE 11.3 (supported until approximately Jan 15th 2012)
* openSUSE 11.4 (currently in development, to be released in March 2011)

SLES (SuSE Linux Enterprise Server)
SLED (SuSE Linux Enterprise Desktop)
http://support.novell.com/lifecycle/
Up to a 10-year life cycle.

-----------------------------------------------------------------------------------------------

RedHat Enterprise
CentOS
Fedora
http://en.wikipedia.org/wiki/CentOS
https://access.redhat.com/support/policy/updates/errata/
http://wiki.centos.org/FAQ/General#head-fe8a0be91ee3e7dea812e8694491e1dde5b75e6d
http://fedoraproject.org/wiki/Releases/Schedule
The RHEL Life Cycle identifies the various levels of maintenance for each major release of RHEL over a total period of up to ten years from the initial release date, which is often referred to as the general availability (GA) date.




What is the support "end of life" for each CentOS release?

CentOS-3 updates until Oct 31, 2010
CentOS-4 updates until Feb 29, 2012
CentOS-5 updates until Mar 31, 2014

The Fedora Project releases a new version of Fedora approximately every 6 months and provides updated packages (maintenance) for each release for approximately 13 months. This allows users to "skip a release" while still having a system that receives updates.

-------------------------------------------------------------------------------

DEBIAN
http://en.wikipedia.org/wiki/Debian
Two (2) year planned release cycle.
Security Policy:
The Debian Project handles security through public disclosure rather than security through obscurity. Many advisories are coordinated with other free software vendors (Debian is a member of vendor-sec) and are published the same day a vulnerability is made public. Debian has a security audit team that reviews the archive looking for new or unfixed security bugs. Debian also participates in security standardization efforts: Debian security advisories are compatible with the Common Vulnerabilities and Exposures (CVE) dictionary, and Debian is represented on the board of the Open Vulnerability and Assessment Language (OVAL) project.
The Debian Project offers extensive documentation and tools to harden a Debian installation both manually and automatically. SELinux (Security-Enhanced Linux) packages are installed by default, though not enabled. Debian provides an optional hardening wrapper but, unlike Ubuntu, Fedora, and Hardened Gentoo among others, does not compile its packages by default with gcc features such as PIE and buffer overflow protection. These extra features greatly increase security at a performance cost of about 1% on 32-bit and 0.01% on 64-bit systems.



Ubuntu
http://en.wikipedia.org/wiki/List_of_Ubuntu_releases
http://www.ubuntu.com/server
Up to 5 years for LTS (Long Term Support) server releases.
--------------------------
End of this Post.

Wednesday, March 30, 2011

New Posts are On the Way

Some new posts are on the way.  I was sidetracked with some other stuff.
Soon to be posted.

Linux Administrators Guide to Life Cycles for Linux Server OS

Why Linux Client Machines Need the Latest Linux Version

Little Tips that Make Linux Support Easier

and more.

Jim

Sunday, February 13, 2011

Tar, Zip, and Rsync #2

Rsync:
According to Wikipedia, rsync was first announced in June of 1996.
See link: http://en.wikipedia.org/wiki/Rsync
In its basic form, rsync copies files from one folder to another and, each time it is run, maintains a synchronized copy of the files.
The folders can be on the same computer, on different computers on the same network, or on another network.  Rsync uses a delta-transfer algorithm: it compares the new files to the old and transfers only the bytes needed to make the changes, and it can delete files on the destination when asked to.  There are many options when using rsync.  Read the man pages, read the internet forums, and do what works best for you.

I use ssh and rsync to maintain a copy of about 100GB of important data that changes on a weekly basis.  Rsync is loaded on both the source and destination machines.  I ssh between the machines and initiate rsync on the destination machine.  Bandwidth is not an issue for me, as both machines are on a gigabit router, so I do not use compression.  If you are going across the internet, use compression to make more efficient use of your bandwidth.

My rsync command looks something like this:
rsync -r root@<server>:/folder1/folder2/folder3/ /home/Rsync/folder1/

Run from the destination server, the command starts rsync, logs in as root on the source server, and begins rsync on the source server.  (You should have rsync loaded on both servers.)
The "-r" says recursive (get all the underlying folders and files).
The first path, ending in folder3/, is my source directory.
This is followed by a space and the path to my destination folder.
I end each path in a slash; note that on the source path the trailing slash matters: it means "the contents of the folder" rather than the folder itself.
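A quick illustration of the trailing-slash difference (server name and paths are hypothetical):

# With the slash: the CONTENTS of folder3 land directly in /home/Rsync/folder1/
rsync -r root@server1:/folder1/folder2/folder3/ /home/Rsync/folder1/
# Without the slash: folder3 itself is created inside /home/Rsync/folder1/
rsync -r root@server1:/folder1/folder2/folder3 /home/Rsync/folder1/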

I don't save the owners/rights because of the type of data I am saving. 
If you are backing up user folders, you should save owners/rights to simplify any restore you might need to make.
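If you do need to keep owners/rights, archive mode handles it.  A minimal sketch, with a hypothetical host name and paths, and compression added as you would for an internet link:

# -a (archive) implies -r and preserves owners, permissions, symlinks, and timestamps
# -z compresses the data stream in transit; skip it on a fast local network
rsync -az root@server1:/home/users/ /home/Rsync/users/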

I have just briefly touched on rsync and how it works for me.  Study it, read the man pages, read the forums on the net, and let it work for you.

NOTE:
This post was later than I expected.  We had record snowfall (a 100-year event) and record low temperatures in my area of Oklahoma, and several things were put on hold for a few days.  Through the use of Linux and ssh, we were able to maintain servers, computer support, backups, etc., from our homes, as the power and internet stayed up.

Have fun computing and until next time,
Jim

    

Sunday, January 30, 2011

Tar, Zip, and Rsync #1 Linux

A little history of TAR and Zip.  TAR, or tar, actually is a Tape ARchive program.  For those of you younger than 45 or so, we actually backed up our data on tape drives; serial and slow.  If you go back far enough, the tapes were reel-to-reel and we had to specify the tape length, starting point, compression (if any), etc.   When combined with compression (gzip, via the "z" option you will see below), tar becomes a serial backup program with compression.  On today's computers and media, this process is many times faster than the old tape drives and is still very reliable.  Zip, and its Unix/Linux cousin gzip, are basically compression programs that work well.  Most of us have used Zip in either Windows or Unix/Linux based backups.

There are many websites with information on tar/gzip backups.  Used with ssh, you can safely back up across networks, the internet, etc.  Bandwidth becomes the slowdown obstacle.  Personally, I use tar, gzip, and ssh to make daily backups of small databases and other stuff.  I use rsync for the really big stuff.
More on rsync next week.

OK.  For the actual command line using ssh, I do the following:

tar -cvzpf - "/folder to backup/" | ssh jim@BackupStorageServer1 "cat > '/destination folder/DestinationFileName.$1.tar.gz'"

Read the man pages on tar and you will understand the -cvzpf: c creates an archive, v is verbose, z compresses with gzip, p preserves permissions, and f - writes the archive to standard output, which the pipe hands to ssh.  The quotes protect the spaces in the folder names.
I usually put the date in the $1 place, which means the command lives in a small script that takes the date as its first argument.
Practice extracting a file in a location on your computer that is not vital.
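To make the $1 substitution concrete, here is a minimal sketch of that wrapper script (file name and paths are hypothetical), run as: ./nightly_backup.sh $(date +%F)

#!/bin/sh
# $1 is the first command line argument (the date) and becomes part of the file name
tar -cvzpf - "/folder to backup/" | ssh jim@BackupStorageServer1 "cat > '/destination folder/DestinationFileName.$1.tar.gz'"
#end

And a safe way to practice the restore:

mkdir /tmp/restore-test
tar -xvzpf DestinationFileName.2011-01-30.tar.gz -C /tmp/restore-test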
That is all for now.  Next time: rsync. 
Have fun computing.
Jim

Wednesday, January 19, 2011

Panic, Paranoia, and Planning

I said I was going to write about tar, zip, and rsync, but I decided this needed to go first.  I used to ask my clients, "how long can you afford to be down?"
They always said never.   Wrong answer.  The truth is, everyone can afford to be down for a given period of time, if you know and plan ahead for that specific time.
Whether it is 1 minute, 1 hour, or 1 day, all network/sysadmins plan for a down time, sooner or later, for a specific server or group of servers, or applications.
Backups are for those unplanned times when hardware fails, or a worm attacks the system, or data gets accidentally overwritten. 

Backups are an insurance policy.  You only need it when things go wrong.
Very large corporations use multiple server farms, (cloud computing), to store and backup data.  Most of us use multiple servers, RAID systems, off site data mirroring, and/or other stuff.  Each one of these is a form of backup.

So, let us start with a common sense approach. 
How much gross income did the company make in 2010?
There are approximately 250 work days in a year. 
If the gross was $250,000, then the average is $1,000 per day gross income.
If the gross was $1 million, then the average is $4,000 per day gross income.
If it costs you about $4,000 per day for your business server to be down, then you can easily justify a $4,000 backup plan.

So, let's start at the basic hardware level.
Does the company lose time and money if a certain hard drive fails?
If the answer is yes, then mirror, stripe, etc .....  that drive.
If the drive is just a convenient temporary bucket for non-critical data,
the answer might be no.  Just keep a spare drive on hand.


What about the next level: whole servers?
If your server mainboard fails unexpectedly, how much would it cost you in down time? (Real dollars!)
Maybe it is time to mirror that server with another complete server, on site or off site. Or will a copy of all critical data, updated every 24 hours to a secondary server, keep you going?


What about a router failure, web page server, or email system?
What if your UPS battery fails, causes a short in the system, and downs everything attached to it? (Rare, but I have experienced it.)

And, I have seen new hardware fail.  Just because the box is new does not guarantee 100% success.  


Write down (on paper and in red ink) the time lost in hours and days, and the cost in real dollars.  Be logical and think it through.  This will help you make your decision.

Much better to plan now than to be in the middle of a panic attack because a mission-critical server is down and there are no spare parts. 
Plan for the best and worst case scenarios, and sleep well.

Jim

Tuesday, January 18, 2011

Save Your Thoughts

Next posts: backups in Linux.  Tar, zip, rsync, and more.

Saturday, January 15, 2011

Open Source -- Why It Works

Wikipedia defines open source as "practices in production and development that promote access to the end product's source materials." 
"http://en.wikipedia.org/wiki/Open_source"

I like that definition.  Notice that it says "practices that promote".  Open source is not an accident, but a decision.  Open source is a concept put into practice. 

It is a great concept AND practice.  When companies lock down their code, and sometimes there are legitimate reasons to do so, they restrict the development and debugging of that code to their own code writers.
Open source takes advantage of good code writers all over the world.
Code and programs can be tweaked, altered, and configured to run better under certain situations, or overall.  Literally, thousands of people contribute to open source code.  The internet, email, web pages, and blogs made this possible.  It is a natural development in the sharing of information.

So, open source is here to stay.  Take advantage of it, contribute to it; either in good code, good testing, buying products, or by donating money to those legitimate web sites and companies that you feel are doing a good job.  We all benefit.

Saturday, January 8, 2011

Linux Layers

This post is a back to the basics for understanding how Linux works.  For those of you just now migrating from Windows to Linux, this should help you understand the difference in concepts in the two operating systems.

Several years ago, Microsoft, under Bill Gates, brought us DOS, the Disk Operating System.  It was command line only, no graphics.  Then Windows came along.  It was a layer of graphic programming that "sat" on top of DOS.
Windows could not function without the bottom layer of DOS.  It was like 2 layers of a cake: DOS on the bottom, Windows on top.  Over the years, the separation between layers has become fuzzy, and the Windows part now includes everything but the basic commands.  You can still run command line programs in Windows, if you know what you are doing.
The top layer is the software, like MS Office, that interacts with Windows and helps the user be productive.

With Linux, the cake has more layers and they are distinctly separate, so far.
The bottom layer is Linux itself.  It will run by itself with no need for other layers.
It is command line only.  Some people build servers this way to cut down on the number of extra programs running in the background.
The next layer is the X window system graphical interface.  It contains the graphical programming that sits between the underlying Linux operating system and a GUI (graphical user interface) desktop view and commands.
The next layer is the desktop/window manager.  It interacts with the X window system and provides the icons, mouse effects, screen savers, colors, etc.  Typically it is KDE, Gnome, Xfce4, WindowMaker, Fluxbox, IceWM, etc.
The top layer is the application software that will run on almost any Linux desktop/window manager.  Typically it is OpenOffice, Firefox, GIMP, etc.
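You can actually watch the layers stack up.  A quick sketch, assuming a distro that boots runlevel 3 to a plain text console (sysvinit-style setups):

# Log in at the text console: that is the bottom layer, kernel plus shell, no graphics.
# Then start the X layer and the configured desktop/window manager on top of it:
startx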

I hope that helps you understand some of the basic differences in the two operating systems.

Next time, what is open source, and why it works.  

Have a good day.
Jim

Thursday, January 6, 2011

Reading Material

This will be a short but important post.  It is almost impossible to know every command, shortcut, and trick in any operating system and Linux is no different.  I recommend keeping a reference book handy on your desk.
The first book I go to is:

The Linux Pocket Guide by Daniel J. Barrett.
Essential Commands
Published by O'Reilly.
ISBN: 978-0-596-00628-0

It is written to cover Fedora Linux, but the information is valuable for all distros.  You should be able to find it online or at your local book store.

Also,
Learn to use "man" pages in Linux. 
At the command prompt, type:  man ls <enter>
You will see a manual for the ls (list) command.
This works with most commands in Linux.
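A related trick when you know what you want to do but not the command name: search the man page summaries by keyword.
At the command prompt, type:  man -k rename <enter>
You will see every command whose one-line description mentions "rename" (man -k is the same as the apropos command).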

Read some each day and enjoy being part of the world-wide open source community.
Jim

Wednesday, January 5, 2011

Morning Coffee

As promised, this post is for you network/sysadmins that get to work early.
I was walking around and checking on machines every morning, and even though I enjoyed the walk sometimes, I decided there should be a better way.
I was checking around 50 devices every morning this way.
So, with the idea of keeping it simple stupid (KISS), here is what you can do.

This is a simple program to run while you drink your morning coffee.  It will "reach out" to all of your networked servers, workstations, routers, printers, copiers, and anything else on the network.


Here is the premise.  Use the ping command to your advantage. 
[ping -c 2 mycomputer.mydomain.com]   will send out 2 pings to mycomputer.mydomain.com and wait up to the standard timeout for 2 replies.
(Remove the brackets in the above line.)
You can change the count to 1, if desired.  I always do 2 just to be sure.
If there is no reply before the timeout, the summary line from ping will show "0 received" and "100% packet loss".
Redirect the output to a text file and you have a morning report.
Overwrite the report each morning and there is no report cleanup at the end of the week. 


Use VI or a similar text editor and make a file called:
start_ping_all.sh

Include the following code:
#!/bin/sh
# Run the ping sweep and overwrite yesterday's report
/home/<your files>/ping_all.sh > /home/<your files>/ping_report.txt
exit
#end


(The file path must be explicit)
This creates/overwrites a text file: ping_report.txt
--------------------------------------------------
Make another file named:
ping_all.sh

Include the following code:
#!/bin/sh
ping -c 2 yourcomputer.yourdomain.com
#
ping -c 2 yournetworkprinter.yourdomain.com
#
ping -c 2 123.123.123.123
#END

If you are using DHCP, you will not have a static IP address, so you must use the full machine name.
If you are using static IP addresses, you can use the IP address.
Build the ping_all.sh file one time and edit as machines are deleted or added.
Run chmod 755 on both files to make them executable.
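If the list grows past a few dozen machines, a loop over a plain text file keeps ping_all.sh short.  A sketch of an alternative ping_all.sh, assuming a file named hosts.txt (hypothetical) with one machine name or IP address per line:

#!/bin/sh
# Send 2 pings to every host listed in hosts.txt, labeling each block of output
while read host; do
    echo "=== $host ==="
    ping -c 2 "$host"
done < /home/<your files>/hosts.txt
#END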


Run the program when you sit down with your first cup of coffee and you can read the report in a few minutes.  I check 50 pieces of hardware in about 5 minutes. 
Get ahead of the game in the mornings and enjoy the day.
Jim

Tuesday, January 4, 2011

A Good Place to Start

Linux can be easy and Linux can be very difficult.  It depends on which "flavor" or "distro" you choose.  There are Fedora, Mint, Debian, Ubuntu, SuSE, Redhat, CentOS, Slax, Slackware, and more; those are just some of the major distros.
There are also the architectures: AMD64, i386, i586, i686, x86_64, and so on.  And then there are the desktops: KDE, Gnome, Xfce4, and several more.  I am just demonstrating that there are several choices, and that can be confusing to a novice or mid-level Linux user getting ready to load their new box with a Linux distro.

So, you bought the new hardware, have it all assembled, and you are ready to load something.  Read the following sections and download the net-install or live ISO file.  Burn it to a CD and boot your new box with it.

Everything I say here can be checked out on wikipedia, google, etc.  The internet is a highly developed research tool.  Use it.

EASY
If this is your first time with Linux and you want everything to load automatically and run out of the box, then this section is for you.
Ubuntu and Linux Mint are easy to load, configure themselves, and have 95% of the bells and whistles for most users.  Linux Mint is reported to load and run well on laptops.  I use it on mine.

MID LEVEL
The next level of difficulty will probably be Fedora, SuSE, Debian, Redhat, and CentOS.  More user interaction and a little more computer experience are required.

CHALLENGE
Some of the most difficult are Slax, Slackware, etc.  You should know computer hardware well and be able to configure networking parameters, etc.  Many operating systems at this level require user interaction to compile software.

EXTRA STUFF
Load a 32-bit system unless you are running more than 4 GB of RAM.
Load a 64-bit system if you are building a server.  (A quick way to check what an existing machine is running follows below.)
KDE and Gnome are fully developed desktop managers.
Xfce4 is the new kid; it runs fast and is easier to support across a several-machine environment.  (In my opinion and experience.)
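Not sure whether an existing box is running 32 or 64 bit?  One quick check (typical outputs shown; exact strings vary by distro):

uname -m    # prints i686 (or i586/i386) on 32-bit systems, x86_64 on 64-bit systems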

That is a very brief guideline for choosing a Linux distro.  Do your research, make a decision, and try it.  Don't be afraid to try something different.

Next time, something for the network support person.  Make your computer work for you while you drink your morning coffee.
Have fun.
Jim

Monday, January 3, 2011

Practical-Linux

This is my first post, so bear with me for a few lines.  As an experienced network/computer guy, I have seen the good and bad of computing in both the private and business sectors.  Computers are here to stay.  I like computers, but sometimes they get in the way of getting the real work completed, or they get in the way of human relationships, and so on.  You get the idea.
Practical-Linux is my way of helping you past the eye candy and innumerable add-ons, to get the job done in a "tell it like it is" style. 

I started out on Bill Gates' DOS back in 1990-1991.  I graduated to Windows 2.x, 3.x, 95, 98, and so on.  I taught NT Server 3.x at one of the local colleges for a while.  I attended a few Novell classes in the early 1990's, when Novell 2.x came on 5 1/4" floppy disks.  I progressed to Novell 3.x and later 4.x.
In the mid 1990's, I became aware of MS Access 2.0.  It was a great new idea back then.  I actually could call Redmond and talk to an Access design team member.  Paradox, dBase, Access, SQL, etc. were the database jargon of the day, and still are in some instances.

In 1999, a customer handed me a floppy disk and said I needed to take a look at it.  It was a copy of Linux.  I loaded it and had to manually configure everything.  "Back to the basics" was my first thought.  I liked the raw feel of Linux.  I stayed with it and eventually converted 100% to Linux. 

Presently, I work at a university and support about 40 users and 4-5 servers.  Most of it is Linux, with a handful of XP and Win7 machines thrown in for grins.  We run TCP/IP, FTP, NFS, SSH, and a few other things.  All basic stuff that works very well when configured correctly.  All of the users run GUI desktops, productivity suite software, and so on.

Bill Gates, Steve Jobs, Linus Torvalds, and others changed the way our world thinks, works, and plays.  So, that being said, I am here to have fun and help you make Linux decisions a little easier.
Let's have fun.
Jim