A Temporary Form Of Democratic Surplus
I do not usually blog about these matters, but this is something I care about. Two weeks ago (March 1st) there was an important vote in the Legal Affairs committee (JURI) of the EU Parliament on the weakening of a copyright law.
Preamble
When a work has gone orphan, it means that it is effectively lost until the copyright monopoly expires, 70 years after the creator's death. You can only hope that somebody has kept a copy illegally and copied it across new forms of storage media as they go in and out of fashion over the decades, or it will be lost forever.
What they claim happened
What went on in the JURI committee is, quoting MEP Engström's assistant Henrik Alexandersson, "a temporary form of democratic surplus". You have to understand that the JURI committee has the responsibility of safeguarding the integrity and trustworthiness of the legal framework as a whole in Europe. Are you still with me? OK. That committee counts 24 individual votes. This particular reform was rejected with 12 votes for and 14 against. Yes, that adds up to 26. If you also note that one member did not vote, you see that if there really were 12 votes for the reform there can have been at most 11 against it, and the reform should have passed. The other fantastic thing claimed to have happened two weeks ago is that, unfortunately, when this was pointed out, along with a formal request for a re-vote, that re-vote was denied.
Conclusion
What can I say? I have checked the minutes and the documents about that meeting on the EU website and honestly did not find anything, BUT according to Rick Falkvinge there are eyewitnesses: put enough pressure on him and they might come out in the open.
IMHO: I honestly can't believe that something like that can happen in a democracy (I refer to the denial of a re-vote). Someone might argue that we may not live in a democracy but only have the illusion of one. The point is that YOU have to commit: follow what's going on. Ask your representatives to share documents, minutes and everything that can possibly help you make up your own opinion; put pressure on your members of parliament, let your voice be heard. It doesn't matter if we don't agree, just don't stay silent.
My own 2c.
Linux backup
Today I would like to share an old backup script that has been serving me for three years. It is not entirely the product of my brain: I modified a script I found on the internet that was created back in the year 2000.
The script is pretty well commented and self-explanatory: it creates a full backup on Sundays and an incremental backup on the other days. At the beginning of every month it creates a permanent full backup.
#!/bin/bash
# full and incremental backup script
# created 07 February 2000
# Based on a script by Daniel O'Callaghan <danny@freebsd.org>
# and modified by Gerhard Mourani <gmourani@videotron.ca>
# modified by Marco Ferretti <marco.ferretti@gmail.com> on 16 Aug 2008

# Change the variables below to fit your computer/backup
#COMPUTER=server                                     # name of this computer
PROJECTS="/home/marco/Projects/Workspaces /home/marco/Projects/Nitro /home/marco/Projects/Readytec" # projects directory
CVS="/home/marco/dev /home/cvs"                      # CVS directory
DOCUMENTS="/etc /home/marco/Documents"               # documents directory
MAIL="/home/marco/.mozilla-thunderbird"              # email
USERS="/home/bruno "                                 # users to fully backup
DIRECTORIES="$DOCUMENTS $USERS $PROJECTS $MAIL $CVS" # directories that will be passed to tar
BACKUPDIR=/media/backup-disk/backup/archives         # where to store the backups
TIMEDIR=/media/backup-disk/backup/log                # where to store time of full backup
TMPFILE=/tmp/backup.local.tmp
TAR=/bin/tar                                         # name and location of tar
LOGFILE=/media/backup-disk/backup/log/backup.log     # log file

# You should not have to change anything below here
PATH=/usr/local/bin:/usr/bin:/bin
DOW=`date +%a`   # Day of the week e.g. Mon
DOM=`date +%d`   # Date of the month e.g. 27
DM=`date +%d%b`  # Date and month e.g. 27Sep

# On the 1st of the month a permanent full backup is made.
# Every Sunday a full backup is made, overwriting last Sunday's backup.
# The rest of the time an incremental backup is made. Each incremental
# backup overwrites last week's incremental backup of the same name.
#
# If NEWER = "", then tar backs up all files in the directories;
# otherwise it backs up files newer than the NEWER date. NEWER
# gets its date from the file written every Sunday.

echo "`date` Starting backup script" > $LOGFILE

# Monthly full backup
if [ $DOM = "01" ]; then
  echo "`date +%d-%b-%y` Starting monthly full backup" >> $LOGFILE
  NEWER=""
  # $TAR $NEWER -cjf $BACKUPDIR/$COMPUTER-$DM.tar.bz2 $DIRECTORIES
  # $TAR $NEWER -cjf $TMPFILE $DIRECTORIES  # version that creates one big file
  # $TAR $NEWER -cjf $BACKUPDIR/monthly/$DM-full.tar.bz2 $DIRECTORIES
  # Version that splits the archive in 4GB parts (DVD burn ready)
  $TAR $NEWER -cjf - $DIRECTORIES | split --bytes=4G -d - $BACKUPDIR/monthly/$DM-full.tar.bz2_
  # Remove the big file (only needed if the one-big-file version above is used)
  # rm $BACKUPDIR/monthly/$DM-full.tar.bz2
  # cp $TMPFILE $BACKUPDIR/monthly/$DM-full.tar.bz2
fi

# Weekly full backup
if [ $DOW = "Sun" ]; then
  echo "`date +%d-%b-%y` Starting weekly full backup" >> $LOGFILE
  NEWER=""
  NOW=`date +%d-%b`
  # Update the date of the last full backup
  # echo $NOW > $TIMEDIR/$COMPUTER-full-date
  # $TAR $NEWER -cjf $BACKUPDIR/$COMPUTER-$DOW.tar $DIRECTORIES
  echo $NOW > $TIMEDIR/full-date
  $TAR $NEWER -cjf $BACKUPDIR/weekly/$DOW-full.tar.bz2 $DIRECTORIES
  # $TAR $NEWER -cjf $TMPFILE $DIRECTORIES
  # cp $TMPFILE $BACKUPDIR/weekly/$DOW-full.tar.bz2
# Make incremental backup - overwrite last week's
else
  # Get the date of the last full backup
  # NEWER="--newer `cat $TIMEDIR/$COMPUTER-full-date`"
  # $TAR $NEWER -cf $BACKUPDIR/$COMPUTER-$DOW.tar $DIRECTORIES
  echo "`date +%d-%b-%y` Starting incremental backup" >> $LOGFILE
  NEWER="--newer `cat $TIMEDIR/full-date`"
  $TAR $NEWER -cjf $BACKUPDIR/weekly/$DOW.tar.bz2 $DIRECTORIES
  # $TAR $NEWER -cjf $TMPFILE $DIRECTORIES
  # cp $TMPFILE $BACKUPDIR/weekly/$DOW.tar.bz2
fi

echo "`date` End of backup script" >> $LOGFILE
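Since the script decides by itself which kind of backup to make on any given day, the natural way to run it is a nightly cron job. A minimal sketch, assuming the script is saved as /home/marco/bin/backup.sh (both the path and the time are assumptions, not what I actually use):

# root's crontab (edit with: crontab -e)
# run the backup script every night at 01:30 -- path is an assumption
30 1 * * * /home/marco/bin/backup.sh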
Ubuntu's upgrade of Tomcat breaks your configuration!!!
Ubuntu has recently released an upgrade for Tomcat 6. If you have installed webapps that need a particular heap or PermGen configuration, or any other parameter you want to pass to the Catalina engine... well, you'll have to rewrite your config, since catalina.sh and your setenv.sh in /usr/share/tomcat6/bin are going to be overwritten.
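For reference, here is a minimal sketch of the kind of setenv.sh that gets wiped; catalina.sh sources this file at startup if it exists. The heap and PermGen values below are purely illustrative, not the ones from my server:

#!/bin/sh
# /usr/share/tomcat6/bin/setenv.sh -- sourced by catalina.sh at startup
# re-create (or restore) this file after every Tomcat package upgrade
# heap and PermGen sizes below are illustrative values
JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m -XX:MaxPermSize=256m"
export JAVA_OPTS

Keeping a copy of it outside /usr/share/tomcat6 (or under version control) turns the next surprise into a one-line cp.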
If you have, for example, installed Liferay and upgraded it to version 6.0.6, you are likely to have problems, my friends; it took me a whole day to understand what was (literally) going tits-up on my server, since I noticed an exception in Liferay that I thought I had already fixed with the use of portal-legacy-5.2.properties.
That said, I would like to thank the folks at Debian and Ubuntu for their great job... and I can't help but ask them to pay a little more attention when they release upgrades that impact server releases of their stuff.
Hope this helps someone out there.
Yet another Liferay Upgrade X-File
Yesterday I was testing the upgrade of one of the Liferay instances I (sort of) manage. Everything went smoothly (meaning I had no major failures) during the upgrades, until I actually started testing the content of some pages. One of the standard tests I do is open a web content for editing, check that everything runs fine, make a small change, change the version, publish, check the published content and then revert. Imagine my face when I opened a content for editing and all the images were... gone! Absolutely gone! I closed the content and checked the website, and the images were there. I thought it must have been just fantastic luck, so I restarted the server and tried another content. Here we go again: in editing mode I could not see the images any longer.
It took me some time, but then I found out that, for some strange reason, the image URLs were not resolved due to an error in variable substitution: the group id attribute was set (in the pages where the problem arose) to the literal placeholder @group_id@; this was fine when viewing the content, not so fine when editing it.
I solved this thanks to the support of the #liferay channel on Freenode, and in particular thanks to the patience of adaro.
The solution consists of fixing the content in the database. Here's the statement I used on a PostgreSQL database:
update journalarticle set content = REPLACE(content, '@group_id@', trim(to_char(groupId, '99999999999999999')));
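If you want to check beforehand how many articles actually contain the literal placeholder, a quick count only needs the table and columns already referenced above (the connection parameters are placeholders):

psql -h <server> -U <user> -d <liferay database> -c "SELECT groupId, count(*) FROM journalarticle WHERE content LIKE '%@group_id@%' GROUP BY groupId;"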
Now the question, at least for me, is: how come this worked on 5.2.3 and does not on 6.0.6? Looks like the folks @Liferay have some issues with regression tests...
Cheers!
Verification of a Liferay Document library
I was in the unhappy situation of having an inconsistent document library in Liferay; unfortunately, I only realized I had the problem when I tried to upgrade to Liferay 6.0.6.
The whole process consists of accessing the document library through WebDAV (via the Apache Commons VFS WebDAV provider, as the code below shows) and testing each file for readability.
Here's the code:
/**
 * Checks a Liferay document library for consistency
 * (files listed are actually available).
 */
package com.fermasoft.liferay.dl;

import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.commons.vfs.FileObject;
import org.apache.commons.vfs.FileSystemException;
import org.apache.commons.vfs.FileSystemManager;
import org.apache.commons.vfs.FileType;
import org.apache.commons.vfs.VFS;
import org.apache.commons.vfs.provider.webdav.WebdavFileObject;
import org.apache.log4j.Logger;

/**
 * @author <a href="mailto:marco.ferretti@gmail.com">Marco Ferretti</a>
 */
public class DAVFsChecker {

    /** the protocol */
    private static final String protocol = "webdav://";

    /** the logger */
    private static transient Logger logger = Logger.getLogger(DAVFsChecker.class);

    /** the username for connecting */
    String username = "XXXX";

    /** the password */
    String password = "XXXXX";

    /** webdav server */
    private String server = "your.server.whatever";

    /** port */
    private String port = "port";

    /** root of the document library to check (within webdav) */
    private String root = "/tunnel-web/secure/webdav/liferay.com/<your community>/document_library";

    /** used internally */
    String initial_path = null;

    /** the writer for the list of files that cause problems */
    PrintWriter writer = null;

    public static void main(String[] args) throws Exception {
        DAVFsChecker checker = new DAVFsChecker();
        checker.init();
        checker.test();
        checker.finish();
    }

    /**
     * initializes the variables and the printwriter
     */
    public void init() throws Exception {
        initial_path = server;
        if (port != null)
            initial_path += ":" + port;
        if (root != null)
            initial_path += root;
        else
            initial_path += "/";
        writer = new PrintWriter(new FileWriter("/tmp/corrupted.log"));
        writer.print("Starting test on ");
        writer.println(new SimpleDateFormat("dd/MM/yyyy HH:mm").format(new Date(System.currentTimeMillis())));
    }

    /**
     * flushes the printwriter and closes nicely
     */
    public void finish() throws Exception {
        writer.print("Ending test on ");
        writer.println(new SimpleDateFormat("dd/MM/yyyy HH:mm").format(new Date(System.currentTimeMillis())));
        writer.flush();
        writer.close();
    }

    /**
     * opens the connection and tests the document library
     */
    public void test() throws IOException {
        String s = protocol + username + ":" + password + "@" + initial_path;
        FileSystemManager fsManager = VFS.getManager();
        WebdavFileObject resource = (WebdavFileObject) fsManager.resolveFile(s);
        testLibrary(resource);
    }

    /**
     * Tests an object. If the object is a directory, it traverses it and
     * tests every child; if it is a file, it tries to read it.
     *
     * @param file the object to test
     */
    void testLibrary(WebdavFileObject file) throws IOException {
        if (isFolder(file)) {
            FileObject[] children = file.getChildren();
            for (FileObject o : children) {
                testLibrary((WebdavFileObject) o);
            }
        } else {
            try {
                logger.info(canRead(file));
            } catch (IOException e) {
                log(file);
            }
        }
    }

    /**
     * Tests a {@link FileObject} by calling getType and comparing
     * the result with {@link FileType#FOLDER}.
     *
     * @param file the {@link FileObject} to test
     * @return true if the file is a directory
     */
    boolean isFolder(FileObject file) throws FileSystemException {
        return file.getType().equals(FileType.FOLDER);
    }

    /**
     * Tries to read the {@link FileObject}. If the {@link FileObject} can
     * be read it returns true, else it propagates an {@link IOException}.
     */
    boolean canRead(FileObject file) throws IOException {
        WebdavFileObject resource = (WebdavFileObject) file;
        InputStream in = resource.getInputStream();
        in.read(new byte[1024]);
        in.close();
        return true;
    }

    /**
     * Writes in the application's result log.
     */
    void log(WebdavFileObject file) {
        writer.println(file.getName().getPath());
        writer.flush();
    }
}
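To run the checker, the Commons VFS jar, its WebDAV dependencies and log4j must be on the classpath. A sketch of the invocation follows; the jar names are assumptions and depend on the versions you download:

# compile and run the checker -- jar names are assumptions
javac -cp commons-vfs.jar:log4j.jar com/fermasoft/liferay/dl/DAVFsChecker.java
java -cp commons-vfs.jar:commons-logging.jar:log4j.jar:. com.fermasoft.liferay.dl.DAVFsChecker

Every file that fails the read test ends up listed in /tmp/corrupted.log.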
Happy checking!
Install VirtualBox 4 on Ubuntu 10.04 (and get USB support)
The default VirtualBox packaged in the standard Ubuntu 10.04 (Lucid) repos is the 3.1.something OSE. The main problem with that package is that it does not have USB support. The only way to get that is either to install a non-OSE version or to upgrade to version 4.
If you want to upgrade to version 4, you have to remove your VirtualBox, add Oracle's apt repository, get the extension pack for USB support, install the new VirtualBox and install the extensions. Don't worry: your virtual machines will keep working, and the whole process can be scripted / copy-pasted:
sudo apt-get remove --purge virtualbox-ose
sudo -v
echo "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -sc) contrib" | sudo tee -a /etc/apt/sources.list
wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get install virtualbox-4.1
wget http://download.virtualbox.org/virtualbox/4.1.4/Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
Start VirtualBox (now under Applications / System Tools), go to File / Preferences, click Extensions, click Add package (the little diamond on the right) and locate the file you downloaded with the last wget above.
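If you prefer the command line over the GUI, the same extension pack can also be installed with VBoxManage, which ships with VirtualBox 4:

# install the extension pack downloaded with the last wget above
sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack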
Happy virtualization!
Delete all tables in a PostgreSQL schema without dropping the schema itself
Have you ever had to reload a Postgres database without having the privileges needed to create a database/schema, nor the ones needed to drop it? Well... I had to today. The schema had 220 tables, so dropping each table one by one was not an option. With a little bit of patience and some googling I came up with this:
psql -h <server name/ip address> -U <user> -t -d <database> -c "SELECT 'DROP TABLE ' || n.nspname || '.' || c.relname || ' CASCADE;' FROM pg_catalog.pg_class AS c LEFT JOIN pg_catalog.pg_namespace AS n ON n.oid = c.relnamespace WHERE relkind = 'r' AND n.nspname NOT IN ('pg_catalog', 'pg_toast') AND pg_catalog.pg_table_is_visible(c.oid)" >/tmp/droptables
then
psql -h <server> -U <user> -d <database> -f /tmp/droptables
Let's explain it a little.
The first command generates, for every ordinary table visible in your search path (typically the schema 'public'), a string of the form 'DROP TABLE public.<relation name> CASCADE;' and writes it to the file /tmp/droptables; the -t flag strips psql's headers and footers, so the result is a file that contains exactly the set of SQL DROP statements that fits your schema.
The second command simply feeds all the SQL you just generated back to the database.
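To give an idea, the generated file is nothing more than a list of statements of this shape (the table names here are made up):

DROP TABLE public.customers CASCADE;
DROP TABLE public.orders CASCADE;
DROP TABLE public.order_items CASCADE;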
The funny thing is that once I came up with it, I even scripted it:
#!/bin/bash

die () {
    echo >&2 "$@"
    exit 1
}

# the script expects exactly three arguments
if [ $# -ne 3 ] ; then
    die "usage : droptables server user database"
fi

SERVER=$1
USER=$2
DATABASE=$3

echo "s:$SERVER u:$USER d:$DATABASE"
echo "collecting the information needed to drop the tables in /tmp/droptables; please provide a password if required: "
psql -h $SERVER -U $USER -d $DATABASE -t -c "SELECT 'DROP TABLE ' || n.nspname || '.' || c.relname || ' CASCADE;' FROM pg_catalog.pg_class AS c LEFT JOIN pg_catalog.pg_namespace AS n ON n.oid = c.relnamespace WHERE relkind = 'r' AND n.nspname NOT IN ('pg_catalog', 'pg_toast') AND pg_catalog.pg_table_is_visible(c.oid)" > /tmp/droptables
echo "dropping all tables; please provide a password if required: "
psql -h $SERVER -d $DATABASE -U $USER -f /tmp/droptables
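Save it as droptables, make it executable and call it with the three arguments it checks for:

chmod +x droptables
./droptables <server name/ip address> <user> <database>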
Happy dropping!!!
Upgrade existing Liferay 5.2.3 to Liferay 6.0.6
After a week of tearing my hair out over problems... I'm almost there.
Windows 7 Starter Edition: fine tuning
At the end of last year (about three weeks ago) I decided that my netbook (a Samsung N230) needed to be tweaked for performance reasons... and I did what any normal lazy person would do: I doubled the RAM (too bad I couldn't install more than 2 GB).
The performance boost was evident under Linux (Ubuntu 11.10), but under Windows 7 (Starter Edition)... not so much.
Today I decided I needed to do something for my netbook pal... and the result was (pardon me) fantastic.
Here's what I did:
- Removed the following visual effects (in the Performance Options dialog):
- Animate controls and elements inside windows
- Animate windows when minimizing and maximizing
- Enable transparent glass
- Fade or slide menus into view
- Fade or slide ToolTips into view
- Fade out menu items after clicking
- Show shadows under windows
- Show thumbnails instead of icons
- Show translucent selection rectangle
- Show window contents while dragging
- Slide open combo boxes
- Disabled (set to manual start, via services.msc) the following services:
- Block Level Backup Engine Service
- Bonjour Service (from iTunes)
- Certificate Propagation
- Group Policy Client (if not on domain)
- HomeGroup Listener
- HomeGroup Provider
- Offline Files
- Portable Device Enumerator Service
- Security Center*
- Software Protection (make sure you activate Windows first)
- SSDP Discovery
- Windows Defender*
- Windows Media Player Network Sharing Service
- Windows Search
Also, since I do my backups on my own, I zeroed the System Restore space (I'll re-enable it only if I have to do some major upgrade like a service pack).
Believe me: the 30 minutes I spent tweaking my machine were totally worth it.
GNOME 3 - A Lot Under the Hood: a talk in Florence by Cosimo Garcia Lopez
GNOME 3 is certainly one of the most discussed topics of this 2011, and it will continue to be, given the thousand novelties the team churns out daily. On December 19th, in Aula 108 of the Centro Didattico Morgagni, Cosimo Garcia Lòpez will give a talk titled "GNOME 3 versus the World". During his talk, the Red Hat developer will analyze the GNOME team's new product, compare it with the "old" GNOME 2.x, examine its relationship with users and distributions, and discuss the future of one of the best-loved desktops ever. This interesting talk will be followed by a GNOME hacking session.