Using GNOME Nautilus as an SSH and SMB GUI

Every day I use SSH on the command line for administrative tasks, and the need to copy files is constant. I'm comfortable with the command line and, at any given time, I have 4 or 5 consoles open, but sometimes it's nice to have a GUI for some tasks (for instance when the character sets of the two systems don't match, accented characters don't display correctly, and scp has a hard time dealing with them).

I wondered whether Linux had a tool like WinSCP. I sometimes (only when absolutely needed and work related) use Windows, and WinSCP comes in handy.

If you use GNOME, you don't need a separate GUI, because Nautilus supports several protocols. I use Samba when I need to connect to Windows computers, and Nautilus is my tool.

When in Nautilus, if you press Ctrl+L (the L is lowercase; I write it in caps just for readability), a location text bar opens so you can type where you want to go.
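
For example, these are the kinds of locations you can type there (hedged examples; replace the host and share names with your own):

ssh://user@server.example.com/home/user
sftp://user@server.example.com/var/www
smb://fileserver/share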

How to connect Android (ICS and JB) to Linux for file access

Since Android switched from USB mass storage to MTP, it has been hard to connect an Android phone to Linux and browse its files. I had that problem every single day, and had to resort to other options.

I’ve tried this on Gentoo Linux and Ubuntu 12.04, but I guess it should work on any distribution.

Since with mass storage two partitions cannot be mounted simultaneously, the Android developers switched to MTP. Here's a better explanation.

Some parts of this tutorial have been borrowed from here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access

Most of the credit should go to Bilal Akhtar (from www.omgubuntu.com).

What I did was take it a step further: since udev can execute scripts automatically, why not use udev to handle most of the operations?

Note: unless specified otherwise, all commands are to be run as the root user (or, if you use sudo, just prepend sudo to the commands).

Install packages

For this to work, we'll rely on FUSE and libmtp.

Ubuntu:

apt-get install mtp-tools mtpfs

Gentoo:

emerge -av sys-fs/fuse sys-fs/mtpfs media-libs/libmtp

Go to /media (or /mnt) and create a folder with a relevant name for the device

cd /media

mkdir AsusFT201

We're doing this because udev only learns about a removed device when you actually remove it, and it's not a good idea to remove the device without unmounting it first. Here we follow Bilal Akhtar's solution (references at the bottom of the document) and use an alias to unmount the device.

Now, on the computer, run the following command:

tail -f /var/log/messages (or tail -f /var/log/syslog on Ubuntu)

Now connect your device to the computer. In your device, make sure to select “Media device (MTP)”

The kernel messages after connecting the device:

Oct 8 16:43:27 nightraider kernel: [ 5263.446971] usb 1-3: default language 0x0409
Oct 8 16:43:27 nightraider kernel: [ 5263.447477] usb 1-3: udev 11, busnum 1, minor = 10
Oct 8 16:43:27 nightraider kernel: [ 5263.447479] usb 1-3: New USB device found, idVendor=04e8, idProduct=6860
Oct 8 16:43:27 nightraider kernel: [ 5263.447481] usb 1-3: New USB device strings: Mfr=2, Product=3, SerialNumber=4
Oct 8 16:43:27 nightraider kernel: [ 5263.447483] usb 1-3: Product: SAMSUNG_Android
Oct 8 16:43:27 nightraider kernel: [ 5263.447484] usb 1-3: Manufacturer: SAMSUNG
Oct 8 16:43:27 nightraider kernel: [ 5263.447486] usb 1-3: SerialNumber: 111X1111XY11111Z
Oct 8 16:43:27 nightraider kernel: [ 5263.447552] usb 1-3: usb_probe_device
Oct 8 16:43:27 nightraider kernel: [ 5263.447555] usb 1-3: configuration #1 chosen from 2 choices

SerialNumber is the line we’re looking for.
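
If you prefer not to watch the log, the serial can usually also be read with lsusb (a quick sketch; it lists the serial numbers of all connected USB devices):

lsusb -v 2>/dev/null | grep -i iserial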

With this value, let’s create the udev rules.

create the following file:

(I use vi; you can use emacs or gedit. Still, you don't know what you're missing by not trying vi.)

vi /etc/udev/rules.d/81-android.rules

Read here why the file is called 81-android.rules

and add the following line (replace 111X1111XY11111Z with your device's serial number):

SUBSYSTEM=="usb", ATTRS{serial}=="111X1111XY11111Z", MODE="0666", GROUP="plugdev", SYMLINK+="AndroidPhone", RUN+="/usr/bin/mtpfs -o allow_other /media/AsusFT201"

Notes:

The full path to the commands is required. Read the udev manual for more information.

I went with the serial number because it is unique, and I've seen the device ID change on the same computer - can someone confirm whether this is possible?

Now save and quit the editor.

Reload the udev rules:

udevadm control --reload-rules

Now, edit /etc/fuse.conf and uncomment the last line (just delete the '#'):

From:

#user_allow_other

to:

user_allow_other

Add yourself to the fuse group (if not already)

To find out, type:

id (as your user)

uid=1000(username) gid=1000(username) groups=1000(username),4(adm),24(cdrom),27(sudo),29(audio),30(dip),44(video),46(plugdev),60(games),105(fuse),109(scanner),111(lpadmin),115(netdev),124(sambashare),1012(sharing),1013(bumblebee)

If you don’t have the group in your list, just run the following command:

gpasswd -a <your_username> fuse
Adding user <your_username> to group fuse

On Gentoo you need to add your user to the disk group.

The command is the same, just replace fuse with disk

You need to log out and log back in for it to take effect (if you were not already in the group).

After that, just plug in your device and wait a bit. I say wait a bit because, for me, my phone (with ICS) and my tablet (with JB) take a while to mount. I don't know why; it happens on both distributions.

Unmounting

To unmount the device, just edit your .bashrc file (in your home directory) and add the following line:

vi ~/.bashrc

Give the alias whatever name you want - here I use android-disconnect:

alias android-disconnect="fusermount -u /media/AsusFT201"

Save and quit

Now, execute the command:

source ~/.bashrc

Now you can remove the device by issuing the command android-disconnect.

References:

http://hackaday.com/2009/09/18/how-to-write-udev-rules/

http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access

DHCP failover / load balancing (and synchronization) with CentOS 6

DHCP is a wonderful piece of software; it keeps our networks running smoothly. For small networks, of probably 100 machines or so, one server is enough, but for larger networks it's not a bad idea to have a second one, in case the first one fails or the load is just too much.

DHCP has a configuration for load balancing and failover, the failover declaration, that allows us to set this up.

The Servers:

Primary IP address: 10.1.2.1

Secondary IP address: 10.1.2.2

To keep things simple, you can create a new file and then just include it in the global dhcpd.conf (configuration below).

Primary DHCP server

Open a new file and put the following lines in it:

vi /etc/dhcp/dhcpd.failover

# Failover specific configurations

failover peer "dhcp" {
    primary;
    address 10.1.2.1;
    port 647;
    peer address 10.1.2.2;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 600;
    split 128;
    load balance max seconds 3;
}
Now, on the secondary DHCP server, do the same:
Secondary server
failover peer "dhcp" {
    secondary;
    address 10.1.2.2;
    port 647;
    peer address 10.1.2.1;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    load balance max seconds 3;
}
After creating the files, just add the include to the global config file (on both servers), somewhere before the (groups | shared-networks | hosts) definitions:
include "/etc/dhcp/dhcpd.failover";
Now, on to the definitions.
To take advantage of the failover configuration, we need to create a pool. You can put it inside any (subnet | shared-network | etc.) declaration:
 
 
subnet 10.1.0.0 netmask 255.255.0.0 {
    option routers 10.1.2.254;
    option subnet-mask 255.255.0.0;
    option broadcast-address 10.1.255.255;
    pool {
        failover peer "dhcp";
        range 10.1.2.1 10.1.254.254;
    }
}
 
Note: If you have static declarations (I have all my clients in static declarations), to avoid warnings in the log about dynamic and static mappings, reduce the range to a single address. The range declaration is mandatory.
IMPORTANT: It is of the utmost importance that both servers have the same date and time. If they don't, DHCP will complain and the secondary server (the whole server) will not work properly… You can accomplish this with ntpdate.
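A minimal sketch of syncing the clock with ntpdate, run on both servers (pool.ntp.org here is just an example NTP server; use whatever your network provides):
ntpdate pool.ntp.org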
If you are getting messages like this in syslog:
Failover CONNECT to dhcp rejected: Connection rejected, time mismatch too great.

Then the time is not the same on both servers.

If you want to know more about the configuration parameters in the failover declaration, go to the ipamworldwide web site.
Synchronization
DHCP doesn't have a synchronization mechanism (as far as I know - please correct me if I'm wrong), so changes you make on the primary server will not be reflected on the secondary server. This could be handled with scripting, or by manually copying the changed file over to the secondary server, but sometimes, in the heat of the moment, because something important happened or someone is waiting for your attention, you forget…
There's a small program that can handle the synchronization without you even having to remember to copy the files…
iwatch is a small program that monitors whatever you want (files, directories) and, upon changes, can perform several actions.
A few months ago I wrote about iwatch and how it's installed and configured (in Portuguese), but I'll replicate the steps here, using the DHCP files as an example.
The CentOS minimal installation doesn't come with rsync and wget, so we need to install them:
yum install rsync wget
 
Note: These steps are only required on the primary server. The changes are made there and then replicated to the secondary server.
Install the rpmforge repositories. You can get the rpm and instructions here
Install the required perl packages
yum install perl-Event perl-Mail-Sendmail perl-XML-SimpleObject perl-XML-Simple perl-Linux-Inotify2
 
After installing, we can finally install iwatch.
Download it from sourceforge
After download, untar it:
tar -zxvf iwatch-0.2.2.tgz
A new directory is created
cd iwatch
 
In there, you’ll find a few files.
Let’s copy the files to the proper places
cp iwatch /usr/local/bin/
cp iwatch.xml /etc/
cp iwatch.dtd /etc/
A few considerations before continuing:
We want to synchronize changes to the DHCP configuration, so we'll monitor the /etc/dhcp directory for:
  • creation of files
  • changes to files
  • deletion of files
  • with an exception for dhcpd.failover (those files differ between the servers, depending on whether it's the primary or the secondary)
Now that we know what we want, let’s proceed:
Before we edit the configuration file so we can run iwatch as a daemon, let's run it on the command line and edit a file to see what happens. Open two terminals: one will be used to run iwatch with some arguments, the other to edit a file.
First terminal:
Execute iwatch and see some output:
/usr/local/bin/iwatch -e modify,create,close_write -c "touch /tmp/someaction" -r -v /etc/dhcp/
 

output:
Watch /etc/dhcp
Watch /etc/dhcp/Configs
Watch /etc/dhcp/dhclient.d

 
Second terminal
Let's edit a file in the watched directory and see what happens in terminal 1. You don't even need to change anything; just open the file, save it, and watch the output in terminal 1.
vi /etc/dhcp/dhcpd.conf
In terminal 1, you'll see:
[14/Mar/2012 16:18:34] IN_CREATE /etc/dhcp/Configs/.dhcp.vlan.swp
[14/Mar/2012 16:18:34] * Command: touch /tmp/someaction
[14/Mar/2012 16:18:34] IN_CREATE /etc/dhcp/Configs/.dhcp.vlan.swx
[14/Mar/2012 16:18:34] * Command: touch /tmp/someaction
[14/Mar/2012 16:18:34] IN_CLOSE_WRITE /etc/dhcp/Configs/.dhcp.vlan.swx
[14/Mar/2012 16:18:34] * Command: touch /tmp/someaction

Now that we've seen it working, let's configure the daemon part.

Edit the file /etc/iwatch.xml. The file syntax is XML. Here’s an example of my configuration.
You can read more on the iwatch SourceForge page.
<?xml version="1.0" ?>
<!DOCTYPE config SYSTEM "/etc/iwatch.dtd">
<config charset="utf-8">

<guard email="informatica@ulscb.min-saude.pt" name="IWatch"/>

<watchlist>

<title>DHCP Sync</title>
<contactpoint email="sysadmin@domain.com" name="Administrator"/>
<path type="recursive" syslog="on" alert="off" events="create,delete,close_write" exec="/root/scripts/syncFiles">/etc/dhcp</path>
<path type="regexception">\b4913\b</path>
<path type="exception">/etc/dhcp/dhcpd.failover</path>
<path type="exception">/etc/dhcp/dhclient.d</path>

<path type="regexception">.*.swp*</path>

<path type="regexception">.*~</path>

</watchlist>

</config>

Now, edit that file and make the changes you want

I've added a few exceptions, because there are files I don't need to sync.

Also, vi creates a few temporary files while you're editing (the 4913 test file and backups with ~ or .swp extensions), and we don't care about those.

We're also not using the modify event, because if a file is closed after writing, it was modified, right?

The exec parameter tells iwatch what to do when any of the events occurs. I have a script (syncFiles) that synchronizes with the secondary server and sends an email:

#!/bin/bash
# Script to synchronize dhcp changes
# This script will be called by iwatch
# DO NOT EXECUTE THIS SCRIPT - IT WILL BE EXECUTED AUTOMATICALLY
# 15/12/2011
log="/tmp/synclog.log"
echo "Syncing dhcp from primary server to secondary server" >> $log
# Using rsync so it only copies changed files - low on bandwidth
/usr/bin/rsync -avz --delete --exclude dhcpd.failover -e ssh /etc/dhcp/ root@secondary:/etc/dhcp >> $log
# Restart the service with the new configurations
ssh root@secondary -C "service dhcpd restart" >> $log
service dhcpd restart >> $log
# Email
if [ -a $log ]; then
    mail -s "Sync dhcp" sysadmin@domain.com < $log
    rm -f $log
fi

I use rsync to perform the copy. I exclude dhcpd.failover because those files are not the same; they depend on the server (primary or secondary).

Notes: A few security considerations. iwatch runs with root privileges - it's started from /etc/rc.local.

If you do nothing, every time the script runs you'll have to type the root password of the secondary server. You can prevent this (if you want) by adding the SSH key to the authorized keys and having a password-less SSH configuration between the two servers (using only keys).
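
A quick sketch of setting up key-based SSH from the primary to the secondary server (assuming a passphrase-less key is acceptable to you; "secondary" is the hostname used in the script above):

ssh-keygen -t rsa
ssh-copy-id root@secondary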

Now, just make iwatch start when the machine boots:

vi /etc/rc.local
# Exec iwatch
/usr/local/bin/iwatch -d

Execute iwatch as daemon

/usr/local/bin/iwatch -d

Now you have a DHCP failover installation with synchronization.

I hope this helps someone.

References

http://www.lithodyne.net/docs/dhcp/dhcp-4.html

http://www.madboa.com/geek/dhcp-failover/

http://www.ipamworldwide.com/dhcp-failover-a-load-balancing/declarations.html

Install logwatch on CentOS 6

Logwatch is a wonderful Linux tool that informs us (by email, if you like) about what happened on a server during the previous day (configurable).

EDIT: I've just tried a CentOS 6.3 minimal install, with the default mirrors configured, and logwatch (yum install logwatch) installed just fine.

In CentOS 6 there is a problem installing it (at least I had it on all my CentOS 6 servers), because of perl-Date-Manip, with the error:

http://mirrors.nfsi.pt/CentOS/6.2/os/i386/Packages/perl-Date-Manip-6.24-1.el6.noarch.rpm: [Errno -1] Package does not match intended download.

I guess it’s because of the version…

The solution: get perl-Date-Manip-5.54-4.el6.noarch.rpm from the internet (e.g. rpm.pbone.net) and, before installing it, install all its dependencies.

NOTE: This version is i686. For x86_64 just replace the arch.

Go to http://rpm.pbone.net/index.php3/stat/4/idpl/17455805/dir/centos_6/com/perl-Date-Manip-5.54-4.el6.noarch.rpm.html and download it.

Edit (new package): http://rpm.pbone.net/index.php3/stat/4/idpl/17468903/dir/centos_6/com/perl-Date-Manip-6.24-1.el6.noarch.rpm.html

Before installing, install all its dependencies:

yum install mailx perl perl-Module-Pluggable perl-Pod-Escapes perl-Pod-Simple perl-YAML-Syck perl-libs perl-version

And then install perl-Date-Manip, which we downloaded before:

rpm -ivh perl-Date-Manip-5.54-4.el6.noarch.rpm

and next, we can install logwatch:

yum install logwatch

This way, logwatch is installed

2012/07/03

If you want the logwatch report mailed to you, you need to install sendmail (or postfix). According to logwatch.conf, only sendmail (and mailers that support an output stream) can be used.

From logwatch.conf:

# By default we assume that all Unix systems have sendmail or a sendmail-like system.
# The mailer code Prints a header with To: From: and Subject:.
# At this point you can change the mailer to any thing else that can handle that output
# stream. TODO test variables in the mailer string to see if the To/From/Subject can be set
# From here with out breaking anything. This would allow mail/mailx/nail etc..... -mgt

Installing sendmail

yum install sendmail

Configuring the email address

Now we have two choices: either we put our email address in the logwatch configuration file, or we put it in /etc/aliases and so receive all of the system's email to root. The latter is better, since we catch all email from our system.

Editing aliases

vi /etc/aliases

Go to line 96, uncomment the line (remove the ‘#’) and change the name to your email address:

# Person who should get root’s mail
root: youremail@yourdomain.com

Save and run:

newaliases
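
If you'd rather go with the first option instead (just the logwatch config), a minimal sketch, assuming the stock override location used by the CentOS package (MailTo is a standard logwatch setting):

# /etc/logwatch/conf/logwatch.conf
MailTo = youremail@yourdomain.com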

Start sendmail

/etc/init.d/sendmail start

and you can see how things are going by watching /var/log/maillog

tail -f /var/log/maillog

Note: If you get an error from perl-Date-Manip about the timezone, that's a known problem and I couldn't find a solution.

Upgrade Linux Mint 11 (Katya) to version 12 (Lisa)

Well, Linux Mint 12 is out and it's time to upgrade and take advantage of the new features…

There are many ways of upgrading, as you can read here: http://community.linuxmint.com/tutorial/view/2. Please read it carefully before proceeding with the upgrade; you can render your current installation unusable.

Both are valid, but neither is without possible problems.

I have Linux Mint 11 turned into a media center, running XBMC and connected to my video projector! And now I'm ready to upgrade, using the "C2 - Packages Upgrades" method.

NOTE: After altering sources.list and starting the upgrade, there's no turning back.

Warning: I tried it and it didn't work for me… I had errors about apparmor and udev and had to reinstall from scratch. But this method is supported, and I had used it twice before (from Mint 9 to 10 and from 10 to 11) and it had always worked.

 

Warning: It didn’t work for me. It didn’t work for me.It didn’t work for me. It didn’t work for me. It didn’t work for me. It didn’t work for me. It didn’t work for me.

 

This means I'm going to edit apt's sources.list file and change the sources.

First, let's update Linux Mint 11:

apt-get update

apt-get dist-upgrade

After Linux Mint 11 updates itself, let's update sources.list. First, copy the file so we have a backup (just in case; but remember, once you start the upgrade there's no turning back…).
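
For example (the backup file name is just a suggestion):

cp /etc/apt/sources.list /etc/apt/sources.list.mint11.bak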

So, I'll transform the following lines:

 

deb http://packages.linuxmint.com/ katya main upstream import backport

deb http://archive.ubuntu.com/ubuntu/ natty main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ natty-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ natty-security main restricted universe multiverse

deb http://archive.canonical.com/ubuntu/ natty partner

deb http://extras.ubuntu.com/ubuntu natty main

deb http://packages.medibuntu.org/ natty free non-free

into:

 

deb http://packages.linuxmint.com/ lisa main upstream import backport

deb http://archive.ubuntu.com/ubuntu/ oneiric main restricted universe multiverse

deb http://archive.ubuntu.com/ubuntu/ oneiric-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ oneiric-security main restricted universe multiverse

deb http://archive.canonical.com/ubuntu/ oneiric partner

deb http://extras.ubuntu.com/ubuntu oneiric main

deb http://packages.medibuntu.org/ oneiric free non-free

Basically, just change katya to lisa and natty to oneiric.
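
If you prefer, sed can do the substitution in one go (a sketch; double-check the resulting file before upgrading):

sed -i -e 's/katya/lisa/g' -e 's/natty/oneiric/g' /etc/apt/sources.list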

After you change the sources, let’s update the packages:

apt-get update

apt-get dist-upgrade

(....)

 

1133 upgraded, 373 newly installed, 23 to remove and 0 not upgraded.

Need to get 786 MB of archives.

After this operation, 987 MB of additional disk space will be used.

Do you want to continue [Y/n]?

Just press <enter> and let the “games” begin…

Good Luck

EDIT: Even if you get some errors while apt is upgrading and the errors do not seem to be critical, ignore them and reboot the machine. After rebooting, just run apt-get upgrade and apt-get dist-upgrade so apt can update any remaining packages and reconfigure the "broken" configurations… I'm not saying the upgrade will work, but I'm done and everything seems to be working.

Install core fonts on CentOS 6

CentOS 6 does not come with the core fonts from M$. Here's how we can add them:

yum install ttmkfdir cabextract rpm-build

For chkfontpath, we need the ATrpms repository, or we can download the file directly from http://pkgs.org/centos-6-rhel-6/atrpms-i386/chkfontpath-1.10.1-2.el6.i686.rpm.html

Note: chkfontpath has dependencies, so it's best to add the ATrpms repository…

If you want to add the ATRPMS repository, just download the rpm to add the repository from here: http://dl.atrpms.net/el6-i386/atrpms/stable/atrpms-repo-6-4.el6.i686.rpm

rpm -ivh atrpms-repo-6-4.el6.i686.rpm

yum install chkfontpath

wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec

rpmbuild -bb msttcorefonts-2.0-1.spec

cd rpmbuild/RPMS/noarch

ls

msttcorefonts-2.0-1.noarch.rpm

rpm -ivh msttcorefonts-2.0-1.noarch.rpm

cd /usr/share/fonts/msttcorefonts

mkfontscale

mkfontdir
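
To confirm the fonts are now visible to fontconfig, a quick check (the grep pattern is just an example):

fc-list | grep -i arial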


Voilà! We now have the M$ core fonts installed and available to us.

References: http://corefonts.sourceforge.net/

Gnome 3 open with custom applications

GNOME 3 is an excellent piece of software! It's just amazing, but it lacks some things. In GNOME 2, when we wanted to open a file with an application that was not in the menu, we could use "Open With" and choose an application - all in graphical mode.

Well, in GNOME 3 that's not possible (at least, after searching a lot, I could not find a way).

 

If we choose "Open With Other Application", we get the "Recommended Applications" list, and we can choose "Show other Applications", but the application we want does not appear.

So, how can we add an application that is not in the list?

I want to open AVI files with mplayer and not with Totem, so we turn to the command line. Using mimeopen, we can set the default application for a file type:

 

Using the -d parameter, we set the default application. mimeopen asks which application to use, and we can choose "3) Other", type the command, and we're done.
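
A minimal sketch of the interaction (mimeopen ships with the perl File::MimeInfo package; the file name is just an example):

mimeopen -d movie.avi
# mimeopen lists the known applications plus an "Other" option,
# where you can type the command to use, e.g. mplayer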

Hard disk (HDD) not detected before installation

It has happened to me a few times (well, too many times): when installing Linux, at the partitioning step the disk is not detected.

This has to do with the new disk modes in the BIOS, AHCI or IDE. Quite often this causes problems, even with the most recent Linux releases.

How to solve it?

Well, there are several ways:

 - Sometimes switching the drive's operation mode from AHCI to IDE solves it, but you lose performance.

 - Other times the fix is to add pci=nomsi to the kernel boot line. Just boot and hope for the best.

 - One of the last ones I discovered (when none of the above worked) is the following:

Boot the Live CD (or DVD) and get to the desktop. If the installer does not detect your disk, try the following:

Open a console (use sudo before the commands if you cannot switch to root - the commands below are run as root):

fdisk -l

It should list the available disks (yes, even if the installer doesn't recognize your disk, it shows up in the listing).

Then run:

dmraid -E -r <your drive - full path>

(and confirm that yes, you want to erase the RAID information).
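
For example, assuming the disk showed up in fdisk as /dev/sda (a hypothetical device name; use your own):

dmraid -E -r /dev/sda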

Try the installer again and the disk should now show up.

What the parameters mean:

-r : lists all the RAID devices found, with their format, RAID level, sectors used, etc.

-E : erases the metadata. Together with -r, all the RAID metadata is conditionally erased.

 

References

http://www.pendrivelinux.com/ubuntu-installer-cant-find-my-sata-drive/

http://ubuntuforums.org/showthread.php?t=370386

Setting up an HTTP proxy with content filtering (DansGuardian)

Enable the required repositories.

rpmforge

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.x86_64.rpm

rpm -ivh rpmforge-release-0.5.2-2.el5.rf.x86_64.rpm

Install DAG's GPG key

rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

EPEL

The EPEL repository (Extra Packages for Enterprise Linux) is a repository built by volunteers from the Fedora project to provide high-quality additional packages that complement Red Hat Enterprise Linux and compatible distributions (such as CentOS). It provides many packages for CentOS/RHEL that are not part of the official repositories but are designed to work with these distributions.

wget http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

rpm -ivh epel-release-5-4.noarch.rpm

Check the repositories

yum repolist

After the previous command, the available repositories will be listed, along with the number of packages in each.

Install Squid and DansGuardian

yum install squid dansguardian

Squid

Edit squid.conf

vi /etc/squid/squid.conf

Squid's configuration file may look long, but it is very well documented. In any case, the only lines needed for a minimal Squid configuration are:

 

http_port 3128

# ACL definitions

# Allowed networks
acl AllowedNetworks src 192.168.100.0/24

# If you prefer, instead of listing the networks separated by spaces, they can be put in a file and referenced here:
# acl <acl_name> src "fileWithNetworks"

# Machines without access - if you want some machines to have no internet access
acl Nonet src "fileWithNoAccessIPs"

# Unauthorized sites
acl sitesDeny dstdom_regex "fileWithBannedSites"

# Allow / deny the ACLs defined above

# Networks
http_access allow AllowedNetworks

# Localhost
http_access allow localhost

# Denied sites
http_access deny sitesDeny

# Deny everything else
http_access deny all

icp_access allow AllowedNetworks
icp_access allow all

hierarchy_stoplist cgi-bin ?

#Cache stuff
cache_replacement_policy lru

# Cache definition - size (up to 80% of the available space) NumberOfFirstLevelDirs NumberOfSecondLevelDirs
cache_dir ufs /cache/cache1 16384 16 256
cache_dir ufs /cache/cache2 16384 16 256

store_dir_select_algorithm round-robin

# Maximum object size to keep in the cache
maximum_object_size 31457280 KB

access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin ?

cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

visible_hostname <visible hostname of the machine>
cache_mgr <machine administrator email>
cache_effective_user squid
cache_effective_group squid

coredump_dir /var/spool/squid

Save the file.
To initialize the cache directories (before starting Squid):
squid -z
Start Squid:
/etc/init.d/squid start
To make sure it starts when the server reboots:
chkconfig --levels 35 squid on

Dansguardian

DansGuardian is a very good content filter. You can also filter content with Squid, but DansGuardian was created just for that, and it does it very well, keeping with the Unix philosophy: write programs that do one thing and do it well.
The configuration files live in /etc/dansguardian.
DansGuardian itself is configured in dansguardian.conf.
The most relevant options are:

filterip =  
filterport = 8080
proxyip = 127.0.0.1
proxyport = 3128
The options:
  • filterip is the IP DansGuardian should listen on for requests. If left blank, it listens on all of the machine's IPs. Always fill it in.
  • filterport is the port to listen on. This is the port you should configure on the clients.
  • proxyip is the IP where Squid is listening. By default it is the local machine. If Squid is on a different machine, put its IP here.
  • proxyport is the port Squid listens on, configured in Squid as http_port. It should not be the same as DansGuardian's port.

The file dansguardianf1.conf contains some options related to content filtering.

Pay attention to the naughtynesslimit option.

We will take the content lists to filter from URLBlacklist.com. Go to the download section and grab the file.

Once you have the file, extract it to /etc/dansguardian/. The blacklists directory is created automatically. Choosing which lists you want DansGuardian to filter is done in several files:

  • bannedsitelist
  • bannedurllist

These files contain commented lines that can be uncommented. Note that one of the files refers to sites and the other to URLs; pay attention to that. The following example listing is from the bannedurllist file.

#.Include</etc/dansguardian/blacklists/ads/urls>

#.Include</etc/dansguardian/blacklists/adult/urls>

#.Include</etc/dansguardian/blacklists/aggressive/urls>

#.Include</etc/dansguardian/blacklists/audio-video/urls>

#.Include</etc/dansguardian/blacklists/chat/urls>

#.Include</etc/dansguardian/blacklists/drugs/urls>

#.Include</etc/dansguardian/blacklists/entertainment/urls>

#.Include</etc/dansguardian/blacklists/frencheducation/urls>

#.Include</etc/dansguardian/blacklists/gambling/urls>

#.Include</etc/dansguardian/blacklists/government/urls>

#.Include</etc/dansguardian/blacklists/hacking/urls>

Uncomment the ones you want. If you wish, you can add more, following the same pattern; there may be categories in the blacklists that are not listed here.
Inside the blacklists directory you will find a file called CATEGORIES, which lists all the categories and what they contain.
When you make a change while DansGuardian is running, to make it re-read its configuration:
dansguardian -r
To start DansGuardian:
/etc/init.d/dansguardian start
To make sure it starts when the server reboots:
chkconfig --levels 35 dansguardian on

Connecting DansGuardian and Squid

For DansGuardian and Squid to work together, a few options need to be enabled:

In Squid, there are two configuration lines we have to enable:

# To allow dansguardian

acl_uses_indirect_client on

# If it doesn't work, try changing localhost on the following line to the name of the ACL for your allowed networks

follow_x_forwarded_for allow localhost

In DansGuardian, the lines are the following:

# if on it adds an X-Forwarded-For: <clientip> to the HTTP request

# header.  This may help solve some problem sites that need to know the

# source ip. on | off

forwardedfor = on

# if on it uses the X-Forwarded-For: <clientip> to determine the client

# IP. This is for when you have squid between the clients and DansGuardian.

# Warning - headers are easily spoofed. on | off

usexforwardedfor = on

This way, your clients connect to DansGuardian, where the content filtering is performed, and the client information - namely the IP and the required headers - is passed on to Squid.
At this point you should have Squid and DansGuardian working in perfect harmony.
Note: Instead of configuring iptables to deny direct connections to Squid's port (3128 by default), it is simpler to write, in Squid's http_port configuration tag:
http_port 127.0.0.1:3128
This way, only the local machine (in this case DansGuardian; 127.0.0.1 is the loopback device) can make direct connections. Simple and effective.
Configuration using iptables
So far, nothing prevents your clients (other than domain policies or other restrictions) from going into the browser and changing the proxy port, connecting directly to Squid instead of DansGuardian and thus bypassing the content filtering. To avoid that, use the following iptables rules, which deny connections to port 3128 except from localhost (this works if Squid and DansGuardian are on the same machine - if not, replace 127.0.0.1 with your DansGuardian's IP).
First, we allow connections to Squid's port (3128) from the machine itself - if DansGuardian is on another machine, change the IP:
iptables -A INPUT -i eth0 -s 127.0.0.1 -p tcp --destination-port 3128 -j ACCEPT
Next, we deny all other connections:
iptables -A INPUT -i eth0 -p tcp --destination-port 3128 -j DROP
This way, if someone tries to connect directly to Squid's port, the connection will not go through.
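
To double-check the setup from a client machine, you can push a request through each port with curl (a sketch; proxy.example.com stands for your proxy's address):

# through DansGuardian - should work (and be filtered)
curl -x http://proxy.example.com:8080 http://www.google.com -o /dev/null -s -w "%{http_code}\n"
# straight to Squid - should be refused or time out (blocked by the iptables rule)
curl -x http://proxy.example.com:3128 http://www.google.com -o /dev/null -s -w "%{http_code}\n"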

References

http://wiki.centos.org/AdditionalResources/Repositories/RPMForge

http://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F

http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html

Install Koha 3.00.06 on CentOS 5.5


Koha is a free and open-source ILS (Integrated Library System). It is used all over the world, which shows its robustness.

Being entirely based on Perl, it is not hard to install, but it has many dependencies and the process takes a while.

Enable the required repositories

rpmforge

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm
rpm -ivh rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm

Optional repositories - not required for the Koha installation

The EPEL repository (Extra Packages for Enterprise Linux) is a repository built by volunteers from the Fedora project to provide high-quality additional packages that complement Red Hat Enterprise Linux and compatible distributions (such as CentOS). It provides many packages for CentOS/RHEL that are not part of the official repositories but are designed to work with these distributions.

wget http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
rpm -ivh epel-release-5-4.noarch.rpm

Check the repositories

yum repolist

Install the yum-priorities plugin

yum install yum-priorities

This plugin lets us set priorities on the repositories, to avoid problems with updates and to control preferences when installing certain packages.

Edit the file /etc/yum/pluginconf.d/priorities.conf and check that it is enabled:
[main]
enabled = 1

For each repository, we add (or adjust) the priority, from 1 to 99:
priority=N (the lower the number, the higher the priority)

The recommended settings are:
CentOS mirrors [base], [addons], [updates], [extras] with priority=1
[contrib] .. priority=2

Third-party repositories: greater than 10
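
For example, a sketch of what the [base] stanza in /etc/yum.repos.d/CentOS-Base.repo could look like with the priority added (abbreviated; only the priority line is the addition):

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
priority=1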

Before starting to install Koha itself, a few prerequisite applications need to be installed.

HTTP SERVER - web server

yum install httpd

Configure Apache as you wish:

vim /etc/httpd/conf/httpd.conf

Add Apache to the startup services:
chkconfig --levels 345 httpd on

Database - MySQL

yum install mysql-server

Add it to startup:
chkconfig --levels 345 mysqld on

Configure MySQL:
/etc/init.d/mysqld start

/usr/bin/mysql_secure_installation

Create the database for Koha:
mysql -uroot -p
create database koha;
create user 'kohaadmin'@'localhost' identified by '<password>';
grant select, insert, update, delete, create, drop, alter, lock tables on koha.* to  'kohaadmin'@'localhost';
flush privileges;
quit

Install memcached

memcached (which Koha uses, if installed) is a high-performance, distributed memory object caching system.

Due to problems with the perl-Net-SSLeay package, we have to install it by hand. The version available (and installed) is 1.30, and we need at least version 1.33.
Remove the old one:
yum remove perl-Net-SSLeay

wget http://packages.sw.be/perl-Net-SSLeay/perl-Net-SSLeay-1.36-1.el5.rfx.x86_64.rpm
rpm -ivh perl-Net-SSLeay-1.36-1.el5.rfx.x86_64.rpm


Install memcached and add it to the machine's startup:

yum install memcached

/etc/init.d/memcached start
chkconfig --levels 345 memcached on

memcached runs on port 11211
To check its settings:


echo "stats settings" | nc localhost 11211
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter NULL
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700
STAT growth_factor 1.25
STAT chunk_size 48
STAT num_threads 4
STAT stat_key_prefix :
STAT detail_enabled no
STAT reqs_per_event 20
STAT cas_enabled yes
STAT tcp_backlog 1024
STAT binding_protocol auto-negotiate
STAT auth_enabled_sasl no
STAT item_size_max 1048576
END

More information on the memcached web site.

Install the Zebra utilities

http://www.indexdata.com/zebra

gcc and automake are required:
yum install gcc (pulls in the required dependencies)

Install the required packages:
yum install bison libxml2-devel libxslt-devel libicu-devel tcl-devel expat-devel

Download the YAZ application:
http://www.indexdata.com/yaz/

tar -zxvf yaz-4.1.1.tar.gz
cd yaz-4.1.1
./configure
make
make install

Download and extract idzebra-2.0.44 from the same site, then:
cd idzebra-2.0.44
./configure
make
make install

Add the koha user and group

groupadd koha
useradd -d /usr/share/koha -g koha -s /bin/false koha

Koha dependencies

Algorithm::CheckDigits
Biblio::EndnoteStyle
CGI::Session
CGI::Session::Serialize::yaml
Class::Accessor
Class::Factory::Util
DBD::mysql
DBI 1.53
Data::ICal
Date::Calc
Date::ICal
Date::Manip
Digest::SHA
Email::Date
GD
GD::Barcode::UPCE
HTML::Scrubber
HTML::Template::Pro
HTTP::OAI
IPC::Cmd
Lingua::Stem
List::MoreUtils
MARC::Charset
MARC::Crosswalk::DublinCore
MARC::File::XML
MARC::Record
MIME::Lite
Mail::Sendmail
Net::LDAP
Net::LDAP::Filter
Net::Z3950::ZOOM
PDF::API2
PDF::API2::Page
PDF::API2::Util
PDF::Reuse
PDF::Reuse::Barcode
POE
SMS::Send
Schedule::At
Text::CSV
Text::CSV::Encoded
Text::CSV_XS
Text::Iconv
XML::Dumper
XML::LibXML
XML::LibXSLT
XML::RSS
XML::SAX::ParserFactory
XML::SAX::Writer
XML::Simple
YAML::Syck

Now we have two options - according to Koha's INSTALL file, we can run the install-CPAN.pl script and it will install all of the dependencies via CPAN, or we can install them manually:

The ones we can install via yum:

yum install -y perl-Algorithm-CheckDigits perl-CGI-Session perl-Class-Accessor perl-Class-Factory-Util perl-DBD-MySQL perl-Data-ICal perl-Date-Calc perl-Date-Manip perl-Date-ICal perl-Digest-SHA perl-Email-Date perl-GD perl-GD-Barcode perl-List-MoreUtils perl-Lingua-Stem perl-IPC-Cmd perl-HTML-Template perl-HTML-Template-Pro perl-HTML-Scrubber perl-Mail-Sendmail perl-MARC-Record perl-MIME-Lite perl-PDF-API2 perl-Schedule-At perl-POE perl-Text-CSV perl-Text-CSV_XS perl-Text-Iconv perl-XML-Dumper perl-XML-LibXML perl-XML-LibXSLT perl-XML-RSS perl-XML-SAX-Writer perl-YAML-Syck

NOTE: Koha later complains about some of the installed versions. Here are a few updates:

perl-DBI

wget http://packages.sw.be/perl-DBI/
rpm -Uvh perl-DBI-1.616-1.el5.rfx.i386.rpm

perl-DBD-Mysql

http://packages.sw.be/perl-DBD-MySQL/
rpm -Uvh perl-DBD-MySQL-4.014-1.el5.rfx.x86_64.rpm

The rest we install via CPAN:
perl -MCPAN -e shell

install Biblio::EndnoteStyle
install CGI::Session::Serialize::yaml
install HTTP::OAI
install DBI (even though it is available via yum, Koha complained)
install MARC::Charset MARC::Crosswalk::DublinCore
install MARC::File::XML
install Net::LDAP::Filter
install PDF::API2::Page PDF::API2::Util
install PDF::Reuse PDF::Reuse::Barcode
install SMS::Send
install Text::CSV::Encoded
install XML::Simple

ZOOM builds fine (the compilation succeeds), but it fails its tests and does not install. We can force the installation with the following command:
force install Net::Z3950::ZOOM

KOHA
tar -zxvf koha-3.00.06-all-translations.tar.gz (or whatever your file is called)
cd koha-3.00.06

perl Makefile.PL
Answer the questions asked by the installer.
I did it like this:

Installation mode - Standard
Base installation directory - /usr/share/koha
User Account - koha
Group - koha
DBMS - Mysql
Database server - localhost
DBMS port - 3306
Name of Database (created above) - koha
Username - kohaadmin
password - <password>
Install zebra configuration files - yes
MARC Format for zebra indexing - <depends on your library>
Primary language - en
Authorities indexing mode - dom (newer, faster)
Zebra database user - kohauser
zebra database password - zebrastripes
SRU configuration files - yes
SRU database host - localhost
SRU port for bibliographic data - 9998
SRU port for authority data - 9999
PazPar2 - yes
Zebra bibliographic server host - localhost
PazPar2 port - 11001
PazPar2 host - localhost
PazPar2 port - 11002
Database test suite - no

Koha then shows any Perl dependencies that are still not satisfied.
In my case, it complains that it cannot find DBI… but it is installed.

make
make install

Next, the installation asks us to define two environment variables. The best place to put them is /etc/profile.

vi /etc/profile and add:

export KOHA_CONF=/etc/koha/koha-conf.xml
export PERL5LIB=/usr/share/koha/lib

To set them immediately, run:
source /etc/profile

To set up the Apache configuration for the Koha server:

ln -s /etc/koha/koha-httpd.conf /etc/httpd/conf.d/

For Koha to work properly, Apache needs mod_rewrite enabled. To check:

grep mod_rewrite /etc/httpd/conf/httpd.conf

and the following line must appear:
LoadModule rewrite_module modules/mod_rewrite.so
Edit the file /etc/httpd/conf/httpd.conf and add:

(keep the Listen 80 line)
Listen 8080

Restart Apache:
/etc/init.d/httpd restart

Zebra services

Now we have to start zebrasrv, but we'll put it in the background (daemon mode):
zebrasrv -D -f /etc/koha/koha-conf.xml
and make sure it starts whenever the server is rebooted, by adding the following line to /etc/rc.local:

echo "/usr/local/bin/zebrasrv -D -f /etc/koha/koha-conf.xml" >> /etc/rc.local

To finish, browse to:
http://<your_koha_server>:8080/ and complete the installation.

Problems

Koha keeps its logs in /var/log/koha. If you run into any problem, this is where you should start looking for its cause.

Perl dependencies
If Koha complains about a dependency problem, check whether the package it complains about was installed via RPM:
rpm -qa | grep -i <package | part of the name>
If you get results, it means it was installed via RPM.
In that case, install the same package via CPAN.
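
To see which version of a module Perl actually picks up (and compare it with what Koha asks for), a quick check - XML::Simple here is just an example module:

perl -MXML::Simple -e 'print "$XML::Simple::VERSION\n"'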
 

References

http://memcached.org/

http://wiki.centos.org/PackageManagement/Yum/Priorities

http://www.question-defense.com/2010/01/25/install-ncftp-ncftpget-ncftpput-using-yum-on-centos-linux-server

http://www.rpmrepo.org/RPMforge/Using