Archive for the ‘linux’ Category

Facebook cover and profile template for Gimp

April 19, 2013

It is not that I like to play around with my Facebook profile or cover pictures much; in fact, in five years I think I have only had three or four different profile pictures. But lately I had some free time and I wanted to make something fun.

Thus, after doing some research I found a really nice Gimp template on Jason Klein’s blog, which I used to create something I had in mind. But a couple of days later I realized that Facebook had changed something and it no longer worked properly, so I decided to adapt that template to the new format.

With the template you can easily create pictures that integrate both your cover and profile picture; here is the example I am using. I still have to add some text saying something like “Recursion KillS!”.

FB cover and profile

You can download the template, with instructions included, in .xcf format here. (Remove the .jpg extension, as WordPress does not allow me to upload the .xcf file directly.)

Go create! 😀

Automatically checking your server status from an Android device

March 6, 2013

After a while without writing anything related to computers or scripting, I am back with a script for Android devices that will notify you when your personal server is down.

I have a personal server running in a virtual machine that was a bit buggy for a few weeks. Since my personal webpage is hosted there, I need it to be up 24/7, but when the system failed the host was no longer available, and it took me some days to realize because I was not checking it every day. As I wanted to apply some Python knowledge on Android, I came up with an idea: I have a phone that is always connected, so let’s create a script that periodically (using already-available tools) checks the status of the server and notifies me in case of a problem. That way I don’t have to worry about checking the availability of my server myself.

The concept is quite simple and will work as follows:

  • Small script in Python that will run under SL4A.
  • A tool (TaskBomb) that will run the script periodically.
  • A server to be checked, http://hoyhabloyo.com in this case.

So, following those steps, the process for creating the automatic check was:
First, I installed TaskBomb (the scheduler), SL4A (the scripting layer), the Python interpreter and SL4A Script Launcher, which allows TaskBomb to run scripts since it cannot handle them directly. All of these tools can be downloaded for free from the Android Market.

Second, I created the following script in Python:

import android
import urllib2

# Web page to check
web_page = 'http://hoyhabloyo.com'

try:
	urllib2.urlopen(web_page)
	# print 'Page was found!'
	# Ok, nothing to do
except (urllib2.HTTPError,urllib2.URLError) as e:
	# Error? Notify!
	droid = android.Android()
	droid.notify("Server seems down!", web_page)

Third, and finally, I configured TaskBomb to run the script (through Script Launcher) every day at 6:15:

And that is it! Now, every morning your phone will try to connect to the server; if everything is OK nothing will happen, but if anything fails it will display a small notification in the status bar (to test it, I added an iptables rule to make the connection fail).

Notification when failure
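
By the way, to test the failure path without actually taking the server down, something like the following can be run on the server to temporarily block incoming HTTP. This is just a sketch; it assumes the site is served on port 80, so adapt the chain and port to your setup:

# temporarily drop incoming HTTP so the check fails (assumes the site is served on port 80)
iptables -A INPUT -p tcp --dport 80 -j DROP

# delete the rule again once the notification has shown up
iptables -D INPUT -p tcp --dport 80 -j DROP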

On the SL4A page you can find examples and a book to get started developing some powerful scripts. Enjoy!

Goodbye MLDonkey, hello Transmission!

March 27, 2012

OK, it has been a while since I last updated, and although I have some posts almost written, I wanted to share with you my latest piece of code.

As you may remember, four years ago I wrote about a script in charge of automatically setting your MLDonkey rates so as not to overload your bandwidth, according to the number of clients in your local area network. That script has been running for almost four years and has been really helpful in achieving the following stats:

more than 3 years sharing; 1.6 TB shared!!

Well, over these past years I have been using MLDonkey as my BitTorrent client, but as it doesn’t support magnet links I have decided to move on and give Transmission a try. Installing it has been a piece of cake, and as I was really happy with its web UI I have decided to make it my default client and, therefore, remake that script to work with this new client!
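
For reference, on a Debian-based system the daemon can usually be installed straight from the repositories (the package name below is an assumption for other distributions), and its web UI then listens on port 9091 by default:

# Debian/Ubuntu example; the package name may differ on other distros
sudo apt-get install transmission-daemon

# the web UI should then be reachable at http://localhost:9091/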

The code is a port of the previous one, and you can see it here:

#!/bin/bash
####################################################################################
####################################################################################
# A Transmission (http://www.transmissionbt.com/) script that 
# will vary the download and upload speed limits based on the
# number of hosts currently active on the LAN.
#
# Very useful when installed on a media box: when it is the only
# active host it sets no up/down limits, but when the connection is
# shared it limits the rates so as not to overload the network
# 
# Author:
# Jaime Bosque jaboto(at)gmail(dot)com
#
# This script is based on previous work by the author plus
# - Miguel Mtz (aka) Xarmaz
# - aRDi
# - tazok de esdebian.org
#
# Requirements:
# transmission-remote, transmission, grep, nmap, cron 
#
####################################################################################
####################################################################################

#-----------------------------------------------------------------------------------
# Transmission and network vars.
# -hosts should be 2 if you are using a typical network config (router + mediabox)
#  but may vary if it runs on the same box or you have an always-active host
#-----------------------------------------------------------------------------------
transmission=/usr/bin/transmission-daemon
config_file=/home/kets/Transmission-script/settings.json
t_remote=/usr/bin/transmission-remote
user=transmission
pass=transmission
lan=192.168.1
server=localhost
port=9091
log=/home/kets/Transmission-script/transmission_limits.log
hosts=2
#-----------------------------------------------------------------------------------
# Specific rate settings according to the LAN usage
# -solo_(up|down): settings for when just this machine is on the LAN
# -shared_(up|down): settings for when more hosts than this one are on the LAN
#  (values are in KB/s, as interpreted by transmission-remote -d and -u)
#-----------------------------------------------------------------------------------
solo_down=4000
solo_up=4000
shared_down=5
shared_up=5

# Detect if transmission is running
running=`pidof transmission-daemon | wc -l`
pid=`pidof transmission-daemon`

if [ "$running" == "1" ]; then
    # Use nmap to retrieve the number of hosts in lan 
    hosts_up=`nmap -sP $lan.* | grep $lan | wc -l`
    last_read=`tail -n1 $log`
    hosts_up_before=`tail -n1 $log | grep -o -E "H[0-9]+" | grep -o -E [0-9]+`
    if [ -z "$hosts_up_before" ]; then hosts_up_before=0; fi


    # If something has changed in the lan update limits
    # echo "Hosts up $hosts_up  vs $hosts_up_before"
    if [ "$hosts_up" -ne "$hosts_up_before" ]; then
        if [ "$hosts_up" -gt "$hosts" ]; then
            down_limit=$shared_down
            up_limit=$shared_up
        else
            down_limit=$solo_down
            up_limit=$solo_up
        fi
        #echo "Setting limits $down_limit and $up_limit "
        $t_remote $server:$port -n $user:$pass -d $down_limit
        $t_remote $server:$port -n $user:$pass -u $up_limit

        #Log that changes were done!
        echo `date +"%d/%m/%y -- %H:%M"` "S$running H$hosts_up U$up_limit D$down_limit P$pid" >> $log
    fi
else
    # Log that daemon is not running :_(
    echo `date +"%d/%m/%y -- %H:%M"` "Transmission-daemon is not running!" >> $log	

    # Start transmission daemon with the specified config file
    $transmission -g $config_file
    echo `date +"%d/%m/%y -- %H:%M"` "Transmission-daemon was launched!" >> $log
fi	
exit 0
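
The script is meant to be run periodically via cron (listed in the requirements above). A minimal crontab entry might look like the one below; the path and file name are only an assumption, so adjust them to wherever you save the script:

# run the limit-adjusting script every 10 minutes (hypothetical path and file name)
*/10 * * * * /home/kets/Transmission-script/transmission_limits.sh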

I have also created a GitHub repository that you might like to follow for further changes, or to learn a bit more about the script and its settings.

Hope it helps!

Basic WordPress crawler in Python

January 29, 2012

I’m developing a new personal website where I will probably include some links to this blog, so I thought it would be a good idea to have an automatic way to extract them from this site. As I was a bit bored last Monday and wanted to brush up my Python skills, I thought, “Let’s write a really simple WP crawler to do that work for me!”.

The following Python crawler is a really simple script. I know it could have been coded with a nice menu of options, methods with default params and all that, but I just wanted to code it quickly. Every tricky part is commented, but I will explain it in a few lines:

  1. WordPress blogs show only the latest entries and, at the bottom, there’s a link which allows you to go to older entries (linking to something like …/page/number)
  2. The crawler starts by looking for every link in the webpage indicated as the source
  3. For every link found, if it comes from an ‘h2’ tag it is considered a blog entry and stored in the list wp_entries
  4. If it’s an internal link, it is crawled as long as it contains the word ‘page’
  5. There’s a depth limit so it doesn’t crawl through every page if you don’t want it to
  6. When it has finished crawling, it shows every entry found

Here is the code:

import urllib2
import urlparse
import re

class Crawler:
    _source = ''
    _depth = 0
    _links = []
    _wp_entries = []
    _debug = False

    def __init__(self, source, depth):
        self._source = source
        self._depth = depth
        self._links = [] 
        self._wp_entries = []
    
    def get_childs(self, url, level):
        if (level <= self._depth):
            if self._debug : print 'Crawling ', url
            try:
                page = urllib2.urlopen(url)

                # Discard non html files
                page_info = page.info()['content-type']
                if not(re.search('text\/html', page_info)):
                    if self._debug: print 'Found a non-html file ', url, ' :', page.info()['content-type']

                else:
                    for line in page:
                        # Find every link in source code
                        if ((re.search('a href=', line) != None)):
                            # If several links in a line -->  Iterate through all
                            links_in_line = re.findall('a href="(.*?)"', line)
                            for link in links_in_line:
                                # For each link found:
                                #   0 - Check the link is internal (matches source or a page in the same dir)
                                #   1 - Build the full URL (only needed if the link lacks the source prefix)
                                #   2 - Check it has not been crawled already (check self._links)
                                #   3 - Add it to the list (self._links)
                                #   4 - Crawl the new one according to the rules (must match 'page' in this case)
                                new_link = ''
                                crawl_link = False
                                if (re.match(self._source, link)):
                                    # Subpage sharing source (source/sthing) -> use that link
                                    if self._debug: print 'Found internal link ', link
                                    new_link = link
                                    crawl_link = True

                                elif (re.match('http:\/\/', link)):
                                    # External link -> do nothing
                                    if self._debug: print 'Found external link ', link

                                elif (re.match('\w|\_|\.', link)):
                                    # Internal link -> construct full url
                                    if self._debug: print 'Found internal link', link
                                    new_link = urlparse.urljoin(url, link)
                                    crawl_link = True

                                else:
                                    # Weird link
                                    if self._debug: print  'Found weird link ', link

                                if crawl_link and (self._links.count(new_link) == 0):
                                    # Found a new link, store & crawl it
                                    self._links.append(new_link)

                                    # WP specific actions
                                    # Store only h2 links (meaning entries)
                                    # Just crawl whenever link contains 'page' 
                                    if(re.search('h2', line)): 
                                        if self._debug: print 'WP entry ', new_link
                                        self._wp_entries.append(new_link)

                                    level += 1
                                    if(re.search('page', new_link)): self.get_childs(new_link, level)
                                    level -= 1

            except urllib2.HTTPError as e:
                if self._debug: print 'Found error while crawling ', url, e

            except urllib2.URLError as e:
                if self._debug: print 'Found error while crawling ', url, e
        else:
            if self._debug: print 'Maximum depth level reached, skipping'
                
    def show_childs(self):
        print len(self._links),' links were found, listing:'
        for link in self._links:
            print link

    def show_wp_entries(self):
        print len(self._wp_entries),' WP entries were found, listing:'
        for head in self._wp_entries:
            print head


# Create a new crawler with the page and maximum depth level
crawler = Crawler('https://hoyhabloyo.wordpress.com/', 5)

# Get childs for that page (could have used default params in the method)
crawler.get_childs('https://hoyhabloyo.wordpress.com/', 0)

# Show me what you found
crawler.show_wp_entries()

And now, let’s check the output, where I’m asking the crawler to list 6 pages of entries from this blog:

kets@ExoduS:~/programacion/sources/python$ python wordpres_crawl.py 
30  WP entries were found, listing:
https://hoyhabloyo.wordpress.com/2012/01/24/mitos-y-verdades-sobre-las-becas-icex-en-informatica/
https://hoyhabloyo.wordpress.com/2012/01/18/xfce-display-switching-dual-single-monitor/
https://hoyhabloyo.wordpress.com/2012/01/08/ano-nuevo-vida-nueva/
https://hoyhabloyo.wordpress.com/2011/12/31/adios-2011/
https://hoyhabloyo.wordpress.com/2011/12/23/100-000-visitas/
https://hoyhabloyo.wordpress.com/2011/12/19/de-dibujos-animados/
https://hoyhabloyo.wordpress.com/2011/12/01/hablemos-de-los-rumanos-vorbim-despre-romanii/
https://hoyhabloyo.wordpress.com/2011/12/01/reencuentro-de-becarios-ic3x-en-navaluenga/
https://hoyhabloyo.wordpress.com/2011/11/29/sigo-vivo/
https://hoyhabloyo.wordpress.com/2011/10/07/va-de-despedidas/
https://hoyhabloyo.wordpress.com/2011/09/27/los-rincones-de-bucarest-piata-matache/
https://hoyhabloyo.wordpress.com/2011/09/22/los-rincones-de-bucarest-parcul-carol-i/
https://hoyhabloyo.wordpress.com/2011/09/13/los-rincones-de-bucarest-piata-universitatii/
https://hoyhabloyo.wordpress.com/2011/09/12/receta-hummus/
https://hoyhabloyo.wordpress.com/2011/09/08/viaje-por-los-balcanes/
https://hoyhabloyo.wordpress.com/2011/09/01/receta-gazpacho/
https://hoyhabloyo.wordpress.com/2011/08/31/los-rincones-de-bucarest-parcul-herestrau/
https://hoyhabloyo.wordpress.com/2011/08/16/viaje-expres-a-espana/
https://hoyhabloyo.wordpress.com/2011/08/02/ruta-por-la-rumania-profunda-valaquia/
https://hoyhabloyo.wordpress.com/2011/07/17/de-paseo-por-los-carpatos/
https://hoyhabloyo.wordpress.com/2011/07/10/guia-para-vivir-en-bucarest/
https://hoyhabloyo.wordpress.com/2011/06/30/viaje-por-asia/
https://hoyhabloyo.wordpress.com/2011/06/28/receta-pescado-blanco-al-microondas/
https://hoyhabloyo.wordpress.com/2011/05/20/la-revolucion-espanola-spanishrevolution/
https://hoyhabloyo.wordpress.com/2011/05/17/viaje-a-belgrado/
https://hoyhabloyo.wordpress.com/2011/04/29/semana-santa-por-la-rumania-profunda/
https://hoyhabloyo.wordpress.com/2011/04/14/bruselas-y-amsterdam/
https://hoyhabloyo.wordpress.com/2011/04/04/roma/
https://hoyhabloyo.wordpress.com/2011/03/17/las-1000-grullas/
https://hoyhabloyo.wordpress.com/2011/03/02/receta-torretas-de-berenejena-queso-y-tomate-vegetarianas/

It works! As you can see, just a hundred lines of Python were needed to achieve that, and much more could be done by just modifying some parameters. Hope it helps!

XFCE display switching (dual & single monitor)

January 18, 2012

It’s been a long time since I last wrote either in English or about computer-related stuff. It’s not that I stopped doing things, but I was a bit lazy about posting them here.

Anyway, a few days ago I bought a new monitor, tired of working on a 12” screen. I connected it and nothing happened; I mean, there was no signal output. So, as usual, I opened a terminal and used that wonderful tool called xrandr. Perfect! It worked like a charm until I disconnected the monitor and realized I had no output through my laptop screen. Dammit! And as my key combination wasn’t working, I had to restart the machine.

I found that xfce4-settings-manager -> keyboard -> application shortcuts was supposed to launch xfce4-display-settings --minimal, which did nothing. So, instead of that, based on a script I found on the web (see below), I coded a small script, gave it execution permissions and assigned it to my preferred shortcut (XF86Display, Fn+F7 on my ThinkPad X61s).

Keyboard shortcuts

Here is the script, working perfectly:

#!/bin/sh
# Based on the script from
# http://quepagina.es/ubuntarium/vamos-a-personalizar-la-tecla-switch-display-de-portatil.html

# Will do a cycling output from state 0 to 2 and use a sound as feedback
# State VGA LVDS Beeps
#   0    0   1      1
#   1    1   0      2   
#   2    1   1      3
#   (back to state 0)   

#For identifying our monitors use xrandr tool and view output
LVDS=LVDS1      # could be another one like: LVDS, LVDS-1, etc
VGA=VGA1        # could be another one like: VGA, VGA-1, etc
EXTRA="--right-of $LVDS" # additional options for dual-display mode

# Let's check both LVDS and VGA state from the string "$display connected ("
xrandr | grep -q "$LVDS connected (" && LVDS_IS_ON=0 || LVDS_IS_ON=1
xrandr | grep -q "$VGA connected (" && VGA_IS_ON=0 || VGA_IS_ON=1

# Output switch cycle
if [ $LVDS_IS_ON -eq 1 ] && [ $VGA_IS_ON -eq 1 ]; then
    #Go to state 0 -> just LVDS
    xrandr --output $LVDS --auto 
    xrandr --output $VGA --off
    beep 
elif [ $LVDS_IS_ON -eq 1 ]; then
    #Go to state 1 -> just VGA
    xrandr --output $LVDS --off
    xrandr --output $VGA --auto 
    beep && beep
elif [ $VGA_IS_ON -eq 1 ]; then
    #Go to state 2 -> both outputs
    xrandr --output $LVDS --auto
    xrandr --output $VGA --auto $EXTRA
    beep && beep && beep
else
    #This should never be reached but just in case..
    xrandr --output $LVDS --auto 
    beep && beep && beep && beep
fi

As you might have already guessed, that script will only work as long as you are logged into a desktop environment (XFCE, GNOME, KDE…) with the daemon that listens for your shortcuts running. But that is no problem: you could also trigger it outside the desktop by assigning it to the corresponding ACPI event (see this), provided you know beforehand which event is fired by your key combination (use acpi_listen).
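
For the ACPI route, the handler is just a small event file for acpid. Here is a rough sketch in which the file name, script path and event pattern are assumptions; copy the exact event string that acpi_listen prints when you press Fn+F7:

# /etc/acpi/events/display-switch   (hypothetical file name)
# match the event reported by acpi_listen for the display hotkey; the exact string varies per machine
event=video.*
action=/home/kets/bin/switch-display.sh

After adding the file, acpid usually needs to be restarted to pick it up, and depending on your setup the script may also need DISPLAY exported when launched this way.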

Hope it helps!