A Linux Tree from ForLinux

Here is a script that, when executed, will draw a Christmas tree on the Linux terminal using characters chosen by the user.

The script first prompts the user for the character the tree should be drawn with, then for two characters that will be used to decorate the tree. The last input it asks for is the character the base of the tree should be drawn with.

#!/bin/bash

# Number of rows in each section of the tree
rows=5

# Draw a hollow triangle (the top of the tree).
# $1 = tree character, $2 = decoration character
triangle() {
    cols=$(tput cols)
    tput clear
    for ((i = 0; i < rows; i++)); do
        deco=$((RANDOM % rows))
        tput cup $i $((cols / 2 - i))
        for ((j = 0; j <= i; j++)); do
            if [ $i -eq $((rows - 1)) ]; then
                printf "$1 "                  # bottom row is solid
            elif [ $j -eq 0 ] || [ $j -eq $i ]; then
                printf "$1 "                  # left and right edges
            elif [ $j -eq $deco ]; then
                printf "$2 "                  # randomly placed decoration
            else
                printf "  "                   # hollow interior
            fi
        done
    done
}

# Draw a hollow trapezium section of the tree.
# $1 = starting screen row, $2 = extra width, $3 = tree character,
# $4 and $5 = decoration characters
trapi() {
    cols=$(tput cols)
    for ((i = 0; i < rows; i++)); do
        deco1=$((RANDOM % rows))
        deco2=$((RANDOM % rows))
        tput cup $(($1 + i)) $((cols / 2 - i - $2))
        for ((j = 0; j <= i + $2; j++)); do
            if [ $i -eq $((rows - 1)) ]; then
                printf "$3 "
            elif [ $j -eq 0 ] || [ $j -eq $((i + $2)) ]; then
                printf "$3 "
            elif [ $j -eq $deco1 ]; then
                printf "$4 "
            elif [ $j -eq $deco2 ]; then
                printf "$5 "
            else
                printf "  "
            fi
        done
    done
}

# Draw the trunk under the tree.
# $1 = starting screen row, $2 = base character
base() {
    cols=$(tput cols)
    for ((i = 0; i < rows; i++)); do
        tput cup $(($1 + i)) $((cols / 2 - 2))
        for ((j = 0; j < 3; j++)); do
            printf "$2 "
        done
    done
}

echo "Enter character for the tree"
read char
echo "Enter first decoration character"
read ch_deco1
echo "Enter second decoration character"
read ch_deco2
echo "Enter character for base"
read base_char

triangle "$char" "$ch_deco1"
trapi 5 1 "$char" "$ch_deco1" "$ch_deco2"
trapi 10 2 "$char" "$ch_deco1" "$ch_deco2"
trapi 15 3 "$char" "$ch_deco1" "$ch_deco2"
base 20 "$base_char"
tput cup 26 0    # leave the cursor below the finished tree

Save the script as christmas_tree.sh and make it executable:

$ chmod +x christmas_tree.sh

Execute the script:

$ ./christmas_tree.sh
Enter character for the tree
Enter first decoration character
Enter second decoration character
Enter character for base

Have a very merry Christmas and a happy new year from all of us here at ForLinux.

Posted in Managed Hosting | Leave a comment

YUM – Automating updates

It’s important to make sure your operating system and applications are regularly patched, to ensure the system is kept up-to-date and reduce the chances of it being compromised.

If you run Red Hat Enterprise, Fedora or CentOS (any rpm/yum based distro) you can use yum-updatesd to automate the updates.

It’s probably not running by default, so will need starting:

service yum-updatesd start

Then make sure it starts again after reboots by running:

chkconfig yum-updatesd on

The main configuration file can be edited by running:

vi /etc/yum/yum-updatesd.conf

First, set how often the daemon will check for updates. You can set this to run as frequently as you need, but let's set it to check every 24 hours. Values are in seconds, so change the value of the run_interval directive to:

run_interval = 86400

Next, set up email notifications. It can be set to send notifications to the logs or via the message bus, but email is easier and more convenient. Edit the following directives:

# set how to send notifications:
emit_via = email

# set email address to send to:
email_to =

# set email address to send from:
email_from =

Note: There must be a mail service running on the server to be able to send mail from it. If you need one, installing the mailx program is usually just a matter of running 'yum install mailx'.

Next, set it to automatically update any packages, including downloading any dependencies, by setting the next three directives to 'yes':

# automatically install updates
do_update = yes

# automatically download updates
do_download = yes

# automatically download deps of updates
do_download_deps = yes

Save the file and restart the yum-updatesd service to load changes:

/etc/init.d/yum-updatesd restart

One important caveat of this approach is that there is no conflict or fault resolution mechanism in this system. And, as it's automated, you don't get any approval over what is or isn't installed on the system (beyond any 'excludes' inherited from yum.conf), which might cause problems on some systems.

An alternative (and perhaps safer) approach is to leave the checking and email notification options enabled, but set the automatic download and install options to 'no'.
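Under that notify-only scheme, the relevant part of /etc/yum/yum-updatesd.conf would look something like this (a sketch only; the email addresses are placeholders, the directive names are the ones discussed above):

```ini
run_interval = 86400
emit_via = email
email_to = admin@example.com
email_from = yum-updatesd@example.com
do_update = no
do_download = no
do_download_deps = no
```

yum-updatesd will then only mail you when updates appear, leaving the installing to you.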

Whenever yum-updatesd runs and finds updates available, it will send you an email alert, and you can then decide whether or not to update.

A typical email (here, notifying of 3 updates) looks something like this:

This is the automatic update system on myserver.co.uk.
There are 3 package updates available. Please run the system updater.
Packages available for update:
Thank You,
Your Computer

Which option you use depends on your system and particular requirements. If you use generic settings and applications, then there shouldn't be any issues with automating the updates. But if your code base relies on specific versions and is heavily modified, the 'notify only' approach may be your best option.


Server security and integrity

If you have a presence on the internet, whether it is a server you fully manage, a hosted server, or even one shared with somebody else, you need to think about how you can secure yourself from potential malicious attack.

The truth is simple: you will be a target at some point. It may be an attack directed at you, or you may only be collateral damage, but it will happen. The only sure way to prevent your assets from being attacked from the internet is to not connect them to it.

One of the tools that can certainly help is OSSEC. It is a host-based intrusion detection system, HIDS for short. It monitors log files, system binaries, generic files and the kernel for changes that may potentially be an indication of intrusion.

It is intended to run in a server-agent architecture, where the server controls several agents and gathers all the events occurring on them. It can then notify the administrator about them, as well as instruct an agent what to do based on rules, for example to add a suspicious IP to a firewall.

OSSEC can also be useful on stand-alone hosts, for example a single server, or even for a post-mortem investigation on a compromised server.

You can install OSSEC in local mode and then use it via the command line, for example:

/var/ossec/bin/syscheck_control -i 000

Integrity checking changes for local system 'localhost -':

Changes for 2012 Dec 19:
2012 Dec 19 11:40:01,0 - /etc/blkid/blkid.tab
2012 Dec 19 11:40:01,0 - /etc/blkid/blkid.tab.old
2012 Dec 19 11:40:21,0 - /etc/apf/internals/.apf.restore
2012 Dec 19 11:40:21,0 - /etc/apf/internals/.last.full
2012 Dec 19 11:41:41,0 - /etc/passwd.nouids.cache
2012 Dec 19 11:42:03,0 - /etc/sysconfig/hwconf
2012 Dec 19 11:46:22,0 - /usr/sbin/r1soft/log/cdp.log
2012 Dec 19 12:24:48,0 - /etc/blkid/blkid.tab
2012 Dec 19 12:24:48,0 - /etc/blkid/blkid.tab.old
2012 Dec 19 12:25:08,0 - /etc/apf/internals/.apf.restore
2012 Dec 19 12:25:08,0 - /etc/apf/internals/.last.full
2012 Dec 19 12:26:32,0 - /etc/passwd.nouids.cache
2012 Dec 19 12:26:52,0 - /etc/sysconfig/hwconf
2012 Dec 19 12:31:22,0 - /usr/sbin/r1soft/log/cdp.log

/var/ossec/bin/rootcheck_control -i 000

Policy and auditing events for local system 'localhost -':

Resolved events:

** No entries found.

Outstanding events:

** No entries found.

This gives only a limited insight into the IDS events; it is much better to have mail notifications about them sent to an admin address, along with the possibility of executing a command via the agent, for example an iptables block.


Using ngrep to identify potentially malicious traffic

There are times when a webserver you're in charge of administering has a traffic spike, and you need to try to identify whether that is malicious or standard traffic.

There is always the possibility of checking the log files, but this isn't always easy if there are hundreds of sites on a server, each with an individual log file per domain name.

This is when the tool ngrep can come in handy! Ngrep allows you to watch traffic in real time as it is being served by the Apache web server, in a similar vein to how tcpdump works for network packets.

The best way to see how this works is to see it in action, so below is an example of me accessing Steve’s most recent blog post:

ngrep -l -q -d eth0 "^GET" tcp and port 80
interface: eth0 (xxx.xx.xxx.xx/xxx.xxx.xxx.xxx)
filter: (ip or ip6) and ( tcp and port 80 )
match: ^GET

T xxx.xxx.xxx.xxx:38110 -> xxx.xxx.xxx.xxx:80 [AP]
GET /expertise/blog/2012/12/12/retro-blog-christmas-1982-zx-spectrum/ HTTP/1.0..Host: forlinux.co.uk..User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.91 Safari/537.11..Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.3..Via: 1.1 (squid/2.6.STABLE21)..X-Forwarded-For: max-age=259200..Connection: keep-alive....

As you can see, this shows me accessing the page from our office. That's not particularly useful in isolation, but if you saw this scrolling up your terminal very quickly, and all from one IP address, it might indicate that a specific IP address is accessing a lot of pages.
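To turn that eyeballing into numbers, you can pipe ngrep's output through the usual text tools. A sketch (interface and port as above; the awk filter simply pulls the source address out of ngrep's "T a.b.c.d:port -> ..." header lines):

```shell
# Count GET requests per source IP, busiest first
ngrep -l -q -d eth0 "^GET" tcp and port 80 \
  | awk '$1 == "T" { split($2, a, ":"); print a[1] }' \
  | sort | uniq -c | sort -rn | head
```

Any address that dominates the resulting count is a good candidate for a closer look.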

You can search through this based on any parameters you can see in the above output. Another example is me downloading a file using wget:

ngrep -l -q -d eth0 "User-Agent: Wget" tcp and port 80
interface: eth0 (xxx.xx.xxx.xx/xxx.xxx.xxx.xxx)
filter: (ip or ip6) and ( tcp and port 80 )
match: User-Agent: Wget

T xxx.xxx.xxx.xxx:38110 -> xxx.xxx.xxx.xxx:80 [AP]
GET /downloads/script HTTP/1.0..User-Agent: Wget/1.12 (linux-gnu)..Accept: */*..Host: forlinux.co.uk..Via: 1.0 (squid/2.6.STABLE21)..X-Forwarded-For:
.Cache-Control: max-age=259200..Connection: keep-alive....

More information on this application can be viewed at the site http://ngrep.sourceforge.net/.


Retro Blog : Christmas 1982 - ZX Spectrum

Christmas 1982. I ran downstairs like thousands of kids (mainly boys) and got my hands on my new Sinclair ZX Spectrum 48K. Wow. With its colour graphics, sound and rubber keyboard I was in heaven. It plugged directly into a TV via the aerial socket, and games loaded via a cassette player. The first game I loaded was Manic Miner by Matthew Smith. Games used to take about 5 minutes to load, which heightened the anticipation of playing something new. I reached Eugene’s Lair (screen 5) before I had to stop for Christmas dinner. Needless to say, I spent thousands of hours with my Speccy, playing games and poking unlimited lives.

Fast forward to 2012.

The Speccy is 30 years old this year so I thought I would recapture my youth and buy some retro computers.

My first purchase was one of the original Pong machines, a Binatone TV Master MK IV. Back in its day this was cutting edge technology: 3 inbuilt games – Tennis, Squash and Football – all based around hitting a ball with a bat. Back in the 70s it was fantastic to play a game on your TV. With its unique “Blip Blip” noise it entertained the whole family for hours. This model is from 1977 and cost about £30 back then.

Binatone is still going strong today. It was originally started in 1958 by three brothers who named the company after their little sister, Bina. Circa 1974 they imported the TV Game console, and in 1978 it became a best seller in the UK. Following the deregulation of phone lines in the early 1980s, Binatone moved into telecoms and was renamed the Binatone Communications Group. They recently launched an Android-based tablet for kids called Kidzstar, which after only 2 months became one of the best selling toys in the Argos catalogue.

Next Retro Blog – Sinclair ZX81.


Android – Raspberry Pi

No, it’s not a new Android update. If you own a Raspberry Pi, you’re probably either thinking of some practical uses for it, or you just like to play with it and see what you can do. Either way, installing Android 2.3 should appeal to you. Whilst it is not the most up-to-date version, work is going on to port 4.0 to the Raspberry Pi.

To do this, you will need:

Raspberry Pi
SD card – with at least 4 GB free and formatted as FAT32
The custom Android ROM CyanogenMod7.2 – which can be downloaded from here:

This can be done on the 3 major operating systems (Windows, Mac OS X and Linux based distributions), but I will just be covering the Linux install here.

Once you have downloaded the .7z file you will need to extract it. For this you can use most archive managers, or the command line tool p7zip. To install p7zip, use:

sudo apt-get install p7zip-full
sudo yum install p7zip

depending on your distro.

You can then use p7zip to extract the file using:

7za e /path/to/file.7z

Now you need to know what the OS has called your SD card. You can view this information by typing this command into your terminal session:

df -h

This will likely be called something like /dev/sdb1. You will have to unmount this drive using:

umount /dev/sdb1

replacing /dev/sdb1 with the actual location of your SD card.

Now you need to get the image file you created when extracting the .7z file earlier onto the SD card. You can do this using the “dd” command.

dd bs=4M if=/path/to/extracted/file.img of=/dev/sdb

You’ll see that the “1” has been dropped from the end, as you want the whole device, not just the partition (the 1 denotes the partition).
After waiting for the image to be copied across, you can start doing fun Android things without having to have a phone, such as browsing the web and checking email.

So now you have a nice lightweight operating system designed for use on lower-spec platforms, or a fun new toy to play with, whichever your intention was.


SSH and everyday tunneling

If you have ever used SSH then you must have heard about its tunneling feature.

“By using tunneling one can (for example) carry a payload over an incompatible delivery-network, or provide a secure path through an untrusted network.” – wikipedia.org

I’m going to look at some examples of usage rather than at definition and specification.

So let's assume that we have two computers with openssh-server installed on them:

LOCAL – a computer connected to a network with partially restricted access to the internet, on which we’ll create a tunnel.
REMOTE – a computer connected to the internet without restrictions and visible from the internet (i.e. a home computer with a public IP).

Example 1:

So let's say that the administrator of your network has limited mail to business use only, and you would like to check your own too. To achieve this, run:

ssh user@REMOTE-IP -L 10110:POP_EXTERNAL_MAIL:110 -L 10025:SMTP_EXTERNAL_MAIL:25

where for POP_EXTERNAL_MAIL and SMTP_EXTERNAL_MAIL you insert your mail server addresses for POP and SMTP respectively. REMOTE-IP is of course the IP of the REMOTE machine. Now configure a new account in your local mail client using “localhost” as the server, with port 10110 for POP and 10025 for SMTP.

Example 2:

The administrator has become more restrictive and has now decided to limit access to some services, without realizing how vital they are for you. For example, to access facebook.com (which is now blocked), run:

ssh user@REMOTE-IP -L 10080:www.facebook.com:80

and now go to your browser and, in the proxy configuration, enter localhost as the server and 10080 as the port. Next, enter facebook.com in the address bar and you're ready!

However, your joy will evaporate quickly when you try to go anywhere else: again you'll see facebook.com, and whatever else you try, there will be facebook only. This is just because port 10080 is statically redirected to facebook. But don't worry, there is a fix for this too.


Run this command now

ssh user@REMOTE-IP -D 10080

and then go back to the proxy settings in your browser and remove them. Now find the line that says something like “SOCKS Host”, enter localhost as the server and 10080 as the port, and set the protocol to SOCKS v5. Now you can enjoy the freedom of the internet again.
For details about the -D option, please check the man pages.

Users of Opera may be disappointed at this point, as that browser doesn’t allow SOCKS server configuration, but there is a solution for them too.
To get around the Opera and SOCKS issue you may want to do some reading on the tsocks package. This clever program will let you run Opera, catch all its requests and pass them on to ssh!
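As a rough illustration, a minimal /etc/tsocks.conf pointing at the tunnel from the previous example might look like this (the server address and port are assumptions matching the -D command above):

```ini
# Send everything through the local SSH dynamic forward
server = 127.0.0.1
server_type = 5
server_port = 10080
```

You would then launch the browser as 'tsocks opera'.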

If there is a proxy server on your way to the internet then this can be jumped over too; in this case you'd want to do some reading on the corkscrew package.

Of course all the rest of the options that you would use with ssh still apply. You could configure your REMOTE ssh server to listen on port 443 and then run ssh on your LOCAL machine with -p 443; this could be helpful if other ports are locked down.


Limit CPU usage of a Process

cpulimit is a small program written in C that allows you to limit the CPU usage of a Linux process. The limit is specified as a percentage, so it’s possible to prevent high CPU load generated by scripts, programs or processes.

cpulimit is pretty useful for scripts running from cron; for example, you can do overnight backups and be sure that the compression of a 50GB file via gzip won’t eat all the CPU resources, and that all other system processes will have enough CPU time.
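For instance, a crontab entry along these lines (the backup path is made up for illustration) would keep a nightly compression job capped at 10% CPU:

```
# m h dom mon dow  command
0 2 * * * /usr/sbin/cpulimit --limit=10 /bin/gzip /backup/nightly-dump.tar
```

The job still finishes, it just takes longer and leaves CPU time for everything else.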

In most Linux distributions cpulimit is available from the binary repositories, so you can install it using one of the following commands, depending on your distro:

sudo apt-get install cpulimit

or

sudo yum install cpulimit

If it’s not packaged for your distro, then it’s extremely easy to compile:

cd /usr/src/
wget --no-check-certificate https://github.com/opsengine/cpulimit/tarball/master -O cpulimit.tar
tar -xvf cpulimit.tar
cd opsengine-cpulimit-9df7758
make
ln -s /usr/src/opsengine-cpulimit-9df7758/cpulimit /usr/sbin/cpulimit

From that moment you can run commands limited by CPU percentage; e.g. the below command executes gzip compression so that the gzip process will never step over a 10% CPU limit:

/usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar

You can check the actual CPU usage by gzip using:

ps axu | grep [g]zip

Btw, the command uses ‘grep [g]zip’ rather than ‘grep gzip’ so that the grep process itself doesn’t show up as the last line of the output, as it does here:

root 896448 10.0 3.1 159524 3528 ? S 13:12 0:00 /usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar
root 26490 0.0 0.0 6364 708 pts/0 S+ 15:24 0:00 grep gzip

Using cpulimit you can also apply a CPU limit to an already running process; e.g. the below command will apply a 20% CPU limit to the process with PID 2342:

/usr/sbin/cpulimit -p 2342 -l 20

It’s also possible to specify a process by its executable file instead of its PID:

/usr/sbin/cpulimit -P /usr/sbin/nginx -l 30

PHP 5.3.19 & 5.4.9 Released

The PHP team has announced that the latest versions of PHP were released today: 5.3.19 and 5.4.9.

These releases fix about 15 bugs. All users of PHP are encouraged to upgrade to PHP 5.4.9, or at least 5.3.19.

The full list of changes is recorded in the ChangeLog at http://www.php.net/ChangeLog-5.php

If you are running an older version of PHP then we would always recommend updating to the latest stable version of the branch you’re on.


Memory usage in Linux

Memory usage in Linux always seems high, regardless of what applications you
are running and whether you restart them or not. And if you leave the system
as it is, usage rises over time without any apparent reason.

There is a simple explanation: this memory is being used by the system.

This high memory usage is caused by the system itself. When there are free
resources, Linux uses them for temporary data, in this case disk caches.
After system start only applications use memory; then, over time, disk cache
usage is added on top as the system puts cached files into memory for
speedier access.

Applications take priority for that memory, so if there is a need for it to be
used, the caches are dropped quickly and the memory is allocated to the
applications. Therefore, this is not an issue.

To check this on your system, use the following commands :

free -m
echo 3 | sudo tee /proc/sys/vm/drop_caches
free -m

The second run of free should reveal a lot less memory in use if your system
has been doing reads/writes.
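If you'd rather not drop the caches at all, a rough sketch of the same idea is to compute what applications are really using from /proc/meminfo directly (field names per the Linux procfs; the arithmetic simply subtracts free memory and the reclaimable caches from the total):

```shell
# Memory genuinely used by applications, net of buffers and page cache
# (/proc/meminfo values are in kB)
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END { printf "apps: %d MB of %d MB\n", (t-f-b-c)/1024, t/1024 }' /proc/meminfo
```

The "apps" figure should stay close to what your applications actually need, even while the total used memory climbs.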
