Sunday, January 30, 2011
A little history of TAR and Zip. TAR, or tar, is actually a Tape ARchive program. For those of you younger than 45 or so: we actually backed up our data on tape drives; serial and slow. If you go back far enough, the tapes were reel-to-reel, and we had to specify the tape length, starting point, compression (if any), and so on. Combined with gzip compression (tar's -z flag), tar becomes a serial backup program with compression. On today's computers and media, this process is many times faster than the old tape drives and is still very reliable. Zip, and its Unix cousin gzip, are compression programs that work well; most of us have used one or the other in Windows or Unix/Linux based backups.
There are many websites with information on tar/gzip backups. Used with ssh,
you can safely back up across networks, the internet, etc. Bandwidth becomes the bottleneck. Personally, I use tar, gzip, and ssh to make daily backups of small databases and other items. I use rsync for the really big stuff.
More on rsync next week.
OK. For the actual command line using ssh, I do the following:
tar -cvzpf - "/folder to backup/" | ssh jim@BackupStorageServer1 "cat > '/destination folder/DestinationFileName.$1.tar.gz'"
Read the man page on tar and you will understand the flags: -c create, -v verbose, -z gzip compression, -p preserve permissions, and -f the archive file (here, - means write to standard output).
I usually put the date in the $1 place, by passing it as the first argument to a wrapper script.
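Here is a minimal sketch of such a wrapper; the host and folder names are the same placeholders as above, not real systems:
#!/bin/sh
# backup.sh (sketch) -- run as: ./backup.sh "$(date +%F)"
# $1 becomes part of the archive name, e.g. DestinationFileName.2011-01-30.tar.gz
tar -cvzpf - "/folder to backup/" | ssh jim@BackupStorageServer1 "cat > '/destination folder/DestinationFileName.$1.tar.gz'"
#end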
Practice extracting a file in a location on your computer that is not vital.
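For example, to unpack a test copy somewhere harmless (the archive name and /tmp path are just placeholders):
mkdir /tmp/restore-test
tar -xvzpf DestinationFileName.2011-01-30.tar.gz -C /tmp/restore-test
The -x flag extracts, and -C tells tar where to put the files.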
That is all for now. Next time: rsync.
Have fun computing.
Jim
A no-nonsense, straightforward advice blog on Linux, geared toward the support person and mid-level user, with useful guidelines and tips for all Linux users.
Wednesday, January 19, 2011
Panic, Paranoia, and Planning
I said I was going to write about tar, zip, and rsync, but I decided this needed to come first. I used to ask my clients, "How long can you afford to be down?"
They always said never. Wrong answer. The truth is, everyone can afford to be down for a given period of time, if you know that time and plan ahead for it.
Whether it is 1 minute, 1 hour, or 1 day, all network/sysadmins plan for downtime, sooner or later, for a specific server, group of servers, or application.
Backups are for those unplanned times when hardware fails, or a worm attacks the system, or data gets accidentally overwritten.
Backups are an insurance policy: you only need them when things go wrong.
Very large corporations use multiple server farms (cloud computing) to store and back up data. Most of us use multiple servers, RAID systems, off-site data mirroring, and/or other measures. Each one of these is a form of backup.
So, let us start with a common sense approach.
How much gross income did the company make in 2010?
There are approximately 250 work days in a year.
If the gross was $250,000, then the average is $1,000 per day in gross income.
If the gross was $1 million, then the average is $4,000 per day in gross income.
If it costs you about $4,000 per day for your business server to be down, then you can easily justify a $4,000 backup plan.
So, let's start at the basic hardware level.
Does the company lose time and money if a certain hard drive fails?
If the answer is yes, then mirror, stripe, or otherwise protect that drive.
If the drive is just a convenient temporary bucket for non-critical data, the answer might be no. Just keep a spare drive on hand.
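There are several ways to mirror a drive; on Linux, one common route is software RAID with mdadm. A minimal sketch, assuming two spare partitions (/dev/sdb1 and /dev/sdc1 are placeholders):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat
The second command shows the mirror building in the background.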
What about the next level: whole servers?
If your server mainboard fails unexpectedly, how much would it cost you in downtime? (Real dollars!)
Maybe it is time to mirror that server with another complete server, on site or off site. Or will a copy of all critical data, updated every 24 hours to a secondary server, keep you going?
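If the nightly copy is enough, a single cron entry can drive it. A sketch only; the paths and server name are placeholders, and it assumes ssh key authentication is already set up:
# crontab entry: copy critical data to the secondary server at 2:00 AM
0 2 * * * tar -czf - /srv/critical | ssh backup@server2 "cat > /backups/critical-nightly.tar.gz"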
What about a router failure, web page server, or email system?
What if your UPS battery fails, causes a short in the system, and downs everything attached to it? (Rare, but I have experienced it.)
And, I have seen new hardware fail. Just because the box is new does not guarantee 100% success.
Write down (on paper and in red ink) the time lost in hours and days, and the cost in real dollars. Be logical and think it through. This will help you make your decision.
Much better to plan now than to be in the middle of a panic attack because a mission-critical server is down and there are no spare parts.
Plan for the best and worst case scenarios, and sleep well.
Jim
Saturday, January 15, 2011
Open Source -- Why It Works
Wikipedia defines open source as "practices in production and development that promote access to the end product's source materials."
http://en.wikipedia.org/wiki/Open_source
I like that definition. Notice that it says "practices that promote". Open source is not an accident, but a decision. Open source is a concept put into practice.
It is a great concept AND practice. When companies lock down their code (and sometimes there are legitimate reasons to do so), they are restricting the development and debugging of that code to their own code writers.
Open source takes advantage of good code writers all over the world.
Code and programs can be tweaked, altered, and configured to run better under certain situations, or overall. Literally thousands of people around the world contribute to open source code. The internet, email, web pages, and blogs made this possible. It is a natural development in the sharing of information.
So, open source is here to stay. Take advantage of it and contribute to it: with good code, good testing, buying products, or donating money to legitimate web sites and companies that you feel are doing a good job. We all benefit.
"http://en.wikipedia.org/wiki/Open_source"
I like that definition. Notice that it says "practices that promote". Open source is not an accident, but a decision. Open source is a concept put into practice.
It is a great concept AND practice. When companies lock down their code, and sometimes there are legitimate reasons to do so, they are restricting the development and debugging of that code, to their code writers.
Open source takes advantage of good code writers all over the world.
Code and programs can be tweaked, altered, and configured to run better under certain situations, or overall. Literally, hundreds and thousands of people contribute to open source code. The internet, email, web pages, and blogs, made this possible. It is a natural development in the sharing of information.
So, open source is here to stay. Take advantage of it, contribute to it; either in good code, good testing, buying products, or by donating money to those legitimate web sites and companies that you feel are doing a good job. We all benefit.
Saturday, January 8, 2011
Linux Layers
This post is a back to the basics for understanding how Linux works. For those of you just now migrating from Windows to Linux, this should help you understand the difference in concepts in the two operating systems.
Back in the early 1980s, Microsoft, under Bill Gates, gave us DOS, the Disk Operating System (adapted from an earlier product Microsoft purchased). It was command line only, no graphics. Then Windows came along. It was a layer of graphic programming that "sat" on top of DOS.
Early Windows could not function without the bottom layer of DOS. It was like 2 layers of a cake: DOS on bottom, Windows on top. Over the years, the separation between layers became fuzzy, and in the modern NT-based versions (2000, XP, Windows 7) DOS is gone altogether; the command prompt is just another Windows program. You can still run command line programs in Windows, if you know what you are doing.
The top layer is the software, like MS Office, that interacts with the Windows program, and helps the user to be productive.
With Linux, the cake has more layers and they are distinctly separate, so far.
The bottom layer is Linux. It will run by itself with no need for other layers.
It is command line only. Some people build servers this way to cut down on the number of extra programs running in the background.
The next layer is the X Window System (often just called X or X11). It provides the graphical plumbing between a GUI (graphical user interface) desktop, with its views and commands, and the underlying Linux operating system.
The next layer is the desktop environment or window manager. It talks to the X Window System and provides the icons, mouse effects, screen savers, colors, etc. Typically it is KDE, Gnome, Xfce4, WindowMaker, Fluxbox, IceWM, etc.
The top layer is the application software, which will run on almost any Linux desktop. Typically it is OpenOffice, Firefox, GIMP, etc.
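If you want to see these layers on a running machine, a couple of stock commands give a rough picture (output varies by distro; this is just a quick look, not a diagnostic):
pstree | less
ps -e | grep Xorg
The first shows the whole stack as a tree: init at the root, then the X server, the desktop, and your applications. The second is a quick check that the X layer is running at all.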
I hope that helps you understand some of the basic differences in the two operating systems.
Next time, what is open source, and why it works.
Have a good day.
Jim
Thursday, January 6, 2011
Reading Material
This will be a short but important post. It is almost impossible to know every command, shortcut, and trick in any operating system and Linux is no different. I recommend keeping a reference book handy on your desk.
The first book I go to is:
Linux Pocket Guide: Essential Commands, by Daniel J. Barrett.
Published by O'Reilly.
ISBN: 978-0-596-00628-0
It is written to cover Fedora Linux, but the information is valuable for all distros. You should be able to find it online or at your local book store.
Also, learn to use "man" pages in Linux.
At the command prompt, type: man ls <enter>
You will see the manual for the ls (list) command.
This works with most commands in Linux.
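If you do not know the command name in the first place, the man system can search its own one-line descriptions. Both of these are standard commands:
man -k copy
apropos copy
Either one lists every man page whose description mentions "copy".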
Read some each day and enjoy being part of the world-wide open source community.
Jim
Wednesday, January 5, 2011
Morning Coffee
As promised, this post is for you network/sysadmins that get to work early.
I was walking around and checking on machines every morning, and even though I enjoyed the walk sometimes, I decided there should be a better way.
I have around 50 devices to check every morning.
So, in the spirit of keep it simple, stupid (KISS), here is what you can do.
This is a simple program to run while you drink your morning coffee. It will "reach out" to all of your networked servers, workstations, routers, printers, copiers, and anything else on the network.
Here is the premise. Use the ping command to your advantage.
[ping -c 2 mycomputer.mydomain.com] will send out 2 pings to mycomputer.mydomain.com and wait up to the standard timeout for 2 replies.
(Remove the brackets in the above line)
You can change the count to 1, if desired. I always do 2 just to be sure.
If there is no reply before the timeout, the ping summary will report "0 received" and "100% packet loss".
Redirect the output to a text file and you have a morning report.
Overwrite the report each morning and there is no report cleanup at the end of the week.
Use vi or a similar text editor and make a file called:
start_ping_all.sh
Include the following code:
#!/bin/sh
/home/<your files>/ping_all.sh > /home/<your files>/ping_report.txt
exit
#end
(The file path must be explicit)
This creates/overwrites a text file: ping_report.txt
--------------------------------------------------
Make another file named:
ping_all.sh
Include the following code:
#!/bin/sh
ping -c 2 yourcomputer.yourdomain.com
#
ping -c 2 yournetworkprinter.yourdomain.com
#
ping -c 2 123.123.123.123
#END
If you are using DHCP, you will not have a static IP address, so you must use the full machine name (for example, yourcomputer.yourdomain.com).
If you are using static IP addresses, you can use the IP address.
Build the ping_all.sh file one time and edit as machines are deleted or added.
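If the list grows past a screenful, a loop that reads host names from a separate file can be easier to maintain. This is only a sketch; hosts.txt is a file you would create yourself, one host name or IP address per line:
#!/bin/sh
# ping_all.sh, loop version (sketch)
while read host; do
    ping -c 2 "$host"
done < /home/<your files>/hosts.txt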
Run chmod 755 on both files to make them executable: chmod 755 start_ping_all.sh ping_all.sh
Run the program when you sit down with your first cup of coffee and you can read the report in a few minutes. I check 50 pieces of hardware in about 5 minutes.
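To skim even faster, you can pull just the failures out of the report. This one-liner assumes the report format produced by the scripts above:
grep -B 3 ", 0 received" /home/<your files>/ping_report.txt
The -B 3 shows a few lines of context before each match, including the "--- ping statistics ---" header that names the machine that failed.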
Get ahead of the game in the mornings and enjoy the day.
Jim
Tuesday, January 4, 2011
A Good Place to Start
Linux can be easy and Linux can be very difficult. It depends on which "flavor" or "distro" you choose. There are Fedora, Mint, Debian, Ubuntu, SuSE, Red Hat, CentOS, Slax, Slackware, and more; those are just some of the major distros.
There are also architectures: AMD64, i386, i586, i686, x86_64, and so on. And then there are the desktops: KDE, Gnome, Xfce4, and several more. I am just demonstrating that there are many choices, and that can be confusing to a novice or mid-level Linux user getting ready to load a new box with a Linux distro.
So, you bought the new hardware, have it all assembled, and are ready to load something. Read the following sections, then download the net install or live ISO file. Burn it to a CD and boot your new box with it.
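Before burning, it is worth verifying the download. A minimal check, assuming the distro publishes MD5 checksums (the file names here are placeholders):
md5sum your-distro-live.iso
md5sum -c your-distro-live.iso.md5
The first prints a hash you compare by eye with the one on the download page; the second checks automatically if the site provides a .md5 file.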
Everything I say here can be checked out on Wikipedia, Google, etc. The internet is a highly developed research tool. Use it.
EASY
If this is your first time with Linux and you want everything to load automatically and run out of the box, then this section is for you.
Ubuntu and Linux Mint are easy to load, configure themselves, and have 95% of the bells and whistles most users want. Linux Mint is reported to load and run well on laptops; I use it on mine.
MID LEVEL
The next level of difficulty will probably be Fedora, SuSE, Debian, Red Hat, and CentOS. More user interaction and a little more computer experience are required.
CHALLENGE
Some of the most difficult are Slax, Slackware, etc. You should know computer hardware well and be able to configure networking parameters by hand. Many systems at this level require user interaction to compile.
EXTRA STUFF
Load a 32-bit system unless you are running more than 4 GB of RAM.
Load a 64-bit system if you are building a server.
KDE and Gnome are fully developed desktop environments.
Xfce4 is the lighter newcomer: it runs fast and is easier to support across a multi-machine environment. (In my opinion and experience.)
That is a very brief guideline for choosing a Linux distro. Do your research, make a decision, and try it. Don't be afraid to try something different.
Next time, something for the network support person. Make your computer work for you while you drink your morning coffee.
Have fun.
Jim
Monday, January 3, 2011
Practical-Linux
This is my first post, so bear with me for a few lines. As an experienced network/computer guy, I have seen the good and bad of computing in both the private and business sectors. Computers are here to stay. I like computers, but sometimes they get in the way of getting the real work completed, or they get in the way of human relationships, and so on. You get the idea.
Practical-Linux is my way of helping you past the eye candy and innumerable add-ons, to get the job done in a "tell it like it is" style.
I started out on Bill Gates' DOS back in 1990-1991. I graduated to Windows 2.x, 3.x, 95, 98, and so on. I taught NT Server 3.x at one of the local colleges for a while. I attended a few Novell classes in the early 1990s, when NetWare 2.x came on 5 1/4" floppy disks. I progressed to NetWare 3.x and later 4.x.
In the mid 1990s, I became aware of MS Access 2.0. It was a great new idea back then. I could actually call Redmond and talk to an Access design team member. Paradox, dBase, Access, SQL, etc., were the database jargon of the day, and still are in some instances.
In 1999, a customer handed me a floppy disk and said I needed to take a look at it. It was a copy of Linux. I loaded it and had to manually configure everything. "Back to the basics" was my first thought. I liked the raw feel of Linux. I stayed with it and eventually converted 100% to Linux.
Presently, I work at a university and support about 40 users and 4-5 servers, most of it Linux, with a handful of XP and Win7 machines thrown in for grins. We run TCP/IP, FTP, NFS, SSH, and a few other things: all basic stuff that works very well when configured correctly. All of the users run GUI desktops, productivity suite software, and so on.
Bill Gates, Steve Jobs, Linus Torvalds, and others changed the way our world thinks, works, and plays. So, that being said, I am here to have fun and help you make Linux decisions a little easier.
Let's have fun.
Jim