
Category Archives: Linux

Occasionally, when you provide managed services for clients and issues arise, fingers get pointed and accusations are made about the integrity of the network, particularly if the medium in question is fibre or another less common network medium (like wireless).

We host a client's server on our premises for backup purposes, at the end of 2 km of multimode fibre connected to media converters at both ends.  When the client was having issues with the speed and integrity of the network (packet loss and timeouts), it was necessary to do a little research to test, and then prove, that the issue was not a result of the fibre link.  Of course, with the aid of an OTDR it's easy to demonstrate that the fibre does not show losses, but an OTDR is a very expensive piece of equipment to buy or rent, and it provides no throughput data showing whether the endpoints are doing as they should.  As the client's IT support were only testing from Windows to Windows, and only using ping to illustrate the issue, it was necessary to do a little more digging.

I put together a short test plan:

1) A standard ping test
2) A short packet capture while running a standard ping test
3) An isolated packet capture to ascertain whether there are any obvious network issues (excessive ARP, retransmissions, etc.)
4) A flood ping test
5) A bi-directional iperf test to measure the bandwidth and throughput of the fibre link (through one of the client's network switches)
6) A bi-directional iperf test to measure the bandwidth and throughput of the fibre link directly from the media converter

As the ping test yielded no unusual results and the packet capture of the ping test (tcpdump -i eth0 -s0 -w pingtest.pcap) didn't show anything unusual, I ran the flood ping back to my box. Note that it's fairly important to only flood ping hosts that are capable of handling more traffic than you can generate (see Wikipedia).

#ping 10.202.4.130 -f

--- 10.202.4.130 ping statistics ---
11011 packets transmitted, 11010 received, 0% packet loss, time 1725ms
rtt min/avg/max/mdev = 0.129/0.133/5.692/0.055 ms, ipg/ewma 0.156/0.132 ms

Again, this illustrated that even with a massive burst of data in a short space of time there were no errors in transmission.
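To push the link a little harder it can also be worth flood pinging with near-MTU-sized payloads, so the media converters are handling full-size frames. A rough sketch (1472 bytes of ICMP payload plus headers fills a standard 1500-byte MTU, and -M do forbids fragmentation):

# flood ping with a payload that fills a standard 1500-byte MTU
ping -f -s 1472 -M do 10.202.4.130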

Next it was necessary to run test #5.  Iperf was installed on the remote side and on my laptop, so I started the server on the remote side using:

#iperf -s

and started the client side on my laptop using:

#iperf -c 10.202.4.130 -r

The results showed a slow throughput on the client and server side:

[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  18.8 MBytes  15.8 Mbits/sec

This looked likely to be the cause of the problem, but the fibre link should have been running at 100 Mbps.  The next step was to connect directly into the media converter rather than through the client's switch.  I ran the test directly through the media converter:

#iperf -c 10.202.4.130 -r
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   112 MBytes  94.2 Mbits/sec

A much improved result!  I ran the test again to verify the findings, then plugged into an alternative switch port at the client's side and ran the test again. This time I got the 94 Mbps I was hoping to see, proving that the issue was with the original switch port, most likely caused by rate-limiting on that port.
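When a port is suspected of being rate-limited, a longer iperf run with per-second reporting and a few parallel streams makes the shaping behaviour much more obvious. A sketch using standard iperf2 options:

# 60-second run, reporting every second, with four parallel TCP streams
iperf -c 10.202.4.130 -t 60 -i 1 -P 4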

Sometimes a simple ping is not enough to thoroughly test a network, and other tools need to be used to verify findings. iperf is excellent for providing a tangible measurement of throughput, while tcpdump and Wireshark are useful for spotting packet retransmissions, excessive ARP and other clues to performance issues.
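As a rough illustration of that last point, a saved capture can be checked for retransmissions and ARP chatter with something like the following (the tshark of that era takes -R for read filters; newer builds use -Y):

# capture traffic on the interface facing the fibre link (stop with Ctrl-C)
tcpdump -i eth0 -s0 -w linktest.pcap

# count the TCP retransmissions seen in the capture
tshark -r linktest.pcap -R "tcp.analysis.retransmission" | wc -l

# get a feel for the volume of ARP traffic
tshark -r linktest.pcap -R "arp" | wc -l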

I've seen so many people attempt to restore Exchange using Microsoft's built-in tools and fail, or come unstuck because they want to restore a single mailbox, that I thought I'd document the free method of backing up Exchange that we use, in the hope that it helps others.

One of the tools available free from Microsoft is Exmerge.  It allows individual mailboxes to be exported to PST files, which can then either be re-imported back into Exchange or simply opened in Outlook.  Exmerge is available from http://www.microsoft.com/downloads/details.aspx?familyid=429163ec-dcdf-47dc-96da-1c12d67327d5&displaylang=en

Extract it and save it to the exchsrvr\bin directory; run it, and once the appropriate mailboxes have been selected and the destinations set, save the configuration.  This will create an exmerge.ini file.

This can then be scripted in a batch file and run as a scheduled task.  I create a folder on the local disk of the Exchange server (although this can be done to a mapped drive) for each day I want the backup to run.

My exmon.bat file reads:

D:\exchsrvr\bin\exmerge.exe -F C:\scripts\exmon\exmerge.ini -B

This runs exmerge.exe with the options specified in C:\scripts\exmon\exmerge.ini, running it as a batch job using the -B switch.

To clean the folder prior to the export, I have a separate batch file, scheduled earlier on the same day, that runs:

del /F /Q /S z:\Exchange\exmon\*.*

To then back up the PST files to a separate server, I use the excellent BackupPC running on a Debian box.  Installation instructions for Debian are here: http://www.debianhelp.co.uk/backuppc.htm

The BackupPC box is configured to access the SMB share that the PSTs are stored in, as well as additional file shares on the server.  BackupPC supports incremental backups and backups via a variety of methods (including SSH and rsync, as well as SMB).
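Before scheduling the backups it's worth confirming the BackupPC box can actually see the share holding the PST exports. A quick sanity check from the Debian side (the server, share and account names here are placeholders):

# list the contents of the share from the BackupPC server
smbclient //exchsrv/exmon -U backupuser -c 'ls'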

It's also possible to archive off historic backups for off-site storage using the archive functions within BackupPC.  As a free solution for backing up mailboxes and being able to recover them easily (with version control), this is very effective.

On trying to connect via SSH to a device that has no DNS or public visibility, the connection seemed to hang for an almost indefinite period of time.

This can be avoided by stopping the server from performing a reverse DNS lookup against the connecting IP address. To do so, add the following line to /etc/ssh/sshd_config:

UseDNS    no
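For the change to take effect the SSH daemon needs to be restarted; a minimal sketch (the init script is called ssh on Debian-style systems and sshd on Red Hat-style ones):

# check the configuration parses cleanly, then restart the daemon
/usr/sbin/sshd -t && /etc/init.d/ssh restart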

I stumbled across this as a result of a thread on Experts Exchange (http://www.experts-exchange.com/OS/Linux/Distributions/Red_Hat/Q_24539260.html?cid=359) and it made for fantastic reading, highlighting what can be possible if the need ever arises to do a remote secure wipe of a server.  This can be achieved by installing an OS using the swap partition as /.

Many thanks to Emma Jane Hogbin for this.  I've copied the notes here purely in case the original (found here) is ever deleted.

A very long time ago I leased some server space that had RedHat and I wanted Debian. So I did a remote install using the /swap partition as a / partition. I thought the notes were lost, but I found them. I include them here for historical (hysterical?) purposes only.

# One hundred thank yous to Azhrarn and Karsten.
# Their HOWTOs and personal support were infinitely useful
# http://twiki.iwethey.org/Main/DebianChrootInstall by Karsten
# http://trilldev.sourceforge.net/files/remotedeb.html by Azhrarn (Erik Jacobson)
# ~ emma jane hogbin

# First grab the base system that you’re going to be using
# wget -q http://archive.debian.org/dists/Debian-2.2/main/disks-i386/current/base2…

# Make sure you have the full archive
# md5sum base2_2.tgz
# should give: 8010d9f0467ebbb54d89ac84261cb696

# Install debootstrap
rpm -ivh http://azhrarn.underhanded.org/debootstrap-0.2.23-1.i386.rpm

# output of /sbin/lsmod
ipt_state               1080   0 (autoclean)
ipt_REJECT              3992   0 (autoclean)
ipt_LOG                 4184   0 (autoclean)
ipt_limit               1560   0 (autoclean)
iptable_filter          2412   0 (autoclean)
ip_tables              15096   5 [ipt_state ipt_REJECT ipt_LOG ipt_limit iptable_filter]
ip_conntrack_ftp        5296   0 (autoclean) (unused)
ip_conntrack           27272   2 (autoclean) [ipt_state ip_conntrack_ftp]
autofs                 13268   0 (autoclean) (unused)
8139too                18120   1
mii                     3976   0 [8139too]
keybdev                 2976   0 (unused)
mousedev                5556   0 (unused)
hid                    22244   0 (unused)
input                   5856   0 [keybdev mousedev hid]
ehci-hcd               20072   0 (unused)
usb-uhci               26412   0 (unused)
usbcore                79040   1 [hid ehci-hcd usb-uhci]
ext3                   70784   2
jbd                    51924   2 [ext3]

# figure out some information about your current setup
# ssh in to your machine and check the network information with
/sbin/ifconfig

# You’ll need the following information from the output
# eth0 will have a line that starts with “inet…”
inet addr:66.98.212.88  Bcast:66.98.213.255  Mask:255.255.254.0

# Partition the harddrive to match the above configuration
su
mkdir /mnt/debinstall

# try working out of swap instead
/sbin/swapoff -a
/sbin/fdisk /dev/hda
p # look at the list of partitions
t # change the type
2 # of swap
83 # to regular linux
w # write and quit
/sbin/mke2fs /dev/hda2 # convert the partition to ext2 — do not use ext3
/sbin/tune2fs -O ^dir_index /dev/hda2 # from remotedb.html on sf

# edit the /etc/fstab file to change the /swap partition to
# /mnt/debinstall
/etc/fstab
/dev/hda3    /                        ext3            defaults 1 1
/dev/hda1    /boot                    ext3            defaults 1 2
none            /dev/pts             devpts        gid=5,mode=620 0 0
none            /proc                    proc            defaults 0 0
none            /dev/shm             tmpfs            defaults 0 0
/dev/hda2    /mnt/debinstall    ext2             defaults 1 1

# NB this is how big they have their partitions
[root@plain root]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda5              71G  1.4G   66G   3% /
/dev/hda1              99M   15M   80M  16% /boot
/dev/hda3            1012M   33M  928M   4% /tmp
none                  247M     0  247M   0% /dev/shm

# reboot the system
reboot

# on the reboot the /swap partition should now be mounted as the new
# partition. Double check to see that it’s actually working though
df # confirm that it’s actually mounted
#cd /mnt/debinstall
# su
#cp /home/admin/base2_2.tgz .

# unpack the base system
#gunzip base2_2.tgz
#tar -xvf base2_2.tar

# install the base system
/usr/sbin/debootstrap --arch i386 woody /mnt/debinstall http://http.us.debian.org/debian

# copy over the important config files
# according to remotedeb.html
cp /etc/resolv.conf etc/resolv.conf
cp /etc/hosts etc/hosts
cp /etc/fstab etc/fstab

# the default EV1 server does not have an /etc/hostname
hostname xtrinsic.net # sets the host name
hostname --fqdn # tests to see if it's set

# configure network stuff
# this can be done either from the original system or the new one
route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
66.98.212.0     0.0.0.0         255.255.254.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         66.98.212.1     0.0.0.0         UG    0      0        0 eth0

# enter into chroot.
/usr/sbin/chroot . bin/bash

# the prompt has now changed to:
xtrinsic:/#

# vi is installed but won’t run because it doesn’t know the term type
# set that now with:
export TERM=vt100
export PATH=/usr/local/sbin:/usr/sbin/:/sbin:/usr/bin:/bin

xtrinsic:/# cat > /etc/fstab << "EOF"
> # filesystem   mount-point fs-type    options     dump    fsck-order
> /dev/sda5      /           auto       defaults    0       1
> proc           /proc       proc       defaults    0       0
> EOF

# mount proc
mount -t proc proc /proc

# edit the following files
etc/resolv.conf # should be ok because it was cped from RedHat
etc/network/interfaces # this will be a new file and should have the following

——— /etc/network/interfaces ———
# the loopback interface
auto lo
iface lo inet loopback

# the first (and only) network card
auto eth0
iface eth0 inet static
# 1st from ifconfig
address 66.98.212.88
# 3rd from ifconfig
netmask 255.255.254.0
# 1st from route -n
network 66.98.212.0
# 2nd from ifconfig
broadcast 66.98.213.255
# last line, 2nd column of route -n
gateway 66.98.212.1
——————————————–

# run the base configuration
/usr/sbin/base-config

yes # gmt
Canada EST # time zone

# next configure the base system
# this worked the first time (i.e. Server V. 1), but refused to work the
# second time (citing nmap running out of space, or something). I tried
# increasing the Cache in /etc/apt/apt.conf but it didn’t work
# dpkg-reconfigure --install base-config
# the rest of the questions
No # md5 passwords
Yes # shadow passwords
root password
Yes # new user

Yes # Remove pcmcia
no # PPP
simple # for how to install software
# then wait for it to chug a bit
http # method for installing
yes # non-free
yes # non-us
yes # contrib
[pick a mirror]
<blank> # no proxy to get out
[get ready to install some stuff, yes to security updates]
no # taskel to install new software

dialog # for installing
medium # for questions
no # readable home directories
ask # about PCMCIA card when installing new things
yes # start support after install
american # spelling stuff
no locales # for now
leave alone # default locale
auto save once # type of automatic serial port configuration
yes # upgrade glibc now

apt-get install netselect wget
cd /etc/apt; netselect-apt woody
echo "deb http://security.debian.org stable/updates main contrib non-free" >> /etc/apt/sources.list

# install a few more packages
apt-get install aptitude screen ssh vim gpw

# config options
Allow only SSH2? Yes

Do you want /usr/lib/ssh-keysign to be installed SUID root? Yes # default
Run the sshd server? Yes # default

default # all exim stuff (to be replaced by postfix)

# utility to see what modules you need loaded
apt-get install discover
discover --enable-all --format="%m on %d - %V %Mn" bridge ide scsi usb ethernet
xtrinsic:/# discover --format="%m on %d - %V %Mn" bridge ide scsi usb ethernet
discover: Bus not found.

# edit the modutils file and add the ethernet stuff
vi  /etc/modutils/aliases
alias eth0 8139too
update-modules

# remove the /sbin/unconfigured.sh file
# rm /sbin/unconfigured.sh -- didn't exist

# run the base-config again, there are other options you don’t have yet
base-config
# edit the apt.sources list by hand and don’t run any other software stuff
# don’t run taskel, and don’t run dselect

# configure the discover bit
vi /etc/discover.conf
———————
# Enable the PCMCIA scan
# according to remotedeb.html
skip="pcmcia rtl8139"
# Scan for the following types of hardware at boot time:
types="boot bridge ethernet ide scsi usb"
——————–

# install a new kernel with patches for various security things
# apt-get install kernel-image-2.4.18-1-686
apt-get install kernel-image-2.4.27-2-686
Ignore error messages about initrd (answer “no”)
Create the link, when it asks
Do NOT do anything that lilo asks you about

# make sure the right devices are in place for the kernel/system
cd /dev
./MAKEDEV generic # wait patiently, this may take a minute

# exit the chroot environment
exit

# copy over the new kernel (you should still be root)
cp /mnt/debinstall/boot/vmlinuz-2.4.18-1-686 /boot/.
cp /mnt/debinstall/boot/initrd.img-2.4.18-1-686 /boot/.

# edit /etc/lilo.conf and add the following information
——————-
default=redhat
image=/boot/vmlinuz-2.4.18-1-686
label=Debian
initrd=/boot/initrd.img-2.4.18-1-686
read-only
append="panic=30"
——————–

# copy the new lilo over to the /mnt/debinstall
cp /etc/lilo.conf /mnt/debinstall/etc/lilo.conf
# make sure all kernels which are listed in /etc/lilo.conf are in the new /boot
cp $(grep "image.*=" /etc/lilo.conf | cut -f 2 -d "=") /mnt/debinstall/boot

# -R means use the specified image only for the next boot
# therefore if the system panics it will reboot into redhat
/sbin/lilo -v
/sbin/lilo -v -R Debian

touch /mnt/debinstall/fastboot

# and finally — reboot
# wait at least 5-10 minutes before trying to log back in again
# remember to try the new accounts first and the old accounts second
# and remember to delete your old SSH authentication key from the old username
reboot

# After getting the remote install working, I moved onto post install
# configuration. I started out by adding the following packages:
apt-get install mysql-server php4 php4-mysql apache postfix lynx
(postfix replaces exim)

# to reset the hostname I edited /etc/hostname and added my domain name
# I then reset the hostname with hostname <domainname> and checked it with
# hostname --fqdn "fully qualified domain name"

# A very weird thing has happened. I appear to be running an OS off of a
# partition that isn’t mounted.
emmajane@(none):/$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3             1.9G  226M  1.6G  13% /
/dev/hda1              99M   13M   81M  14% /boot

emmajane@(none):/$ more /etc/fstab
/dev/hda3 /       ext3    defaults 1 1
/dev/hda1 /boot   ext3    defaults 1 2
none      /dev/pts devpts gid=5,mode=620 0 0
none      /proc   proc    defaults 0 0
none      /dev/shm tmpfs  defaults 0 0
/dev/hda2 /mnt/debinstall    ext2    defaults 1 1

emmajane@(none):/$ more /etc/lilo.conf
image=/boot/vmlinuz-2.4.18-1-686
label=debian-2418
initrd=/boot/initrd.img-2.4.18-1-686
read-only
append="panic=30"
root=/dev/hda2

My debian is calling itself /dev/hda3 for some reason, when really it’s hda2

xtrinsic:/# more /etc/lilo.conf
prompt
timeout=50
default=debian
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
message=/boot/message
linear

image=/boot/vmlinuz-2.4.18-1-686
label=debian
initrd=/boot/initrd.img-2.4.18-1-686
read-only
root=/dev/hda2
append="panic=30"

image=/boot/vmlinuz-2.4.20-24.9
label=redhat
initrd=/boot/initrd-2.4.20-24.9.img
read-only
append="root=/dev/hda3"

xtrinsic:/# lilo -v
LILO version 22.2, Copyright (C) 1992-1998 Werner Almesberger
Development beyond version 21 Copyright (C) 1999-2001 John Coffman
Released 05-Feb-2002 and compiled at 20:57:26 on Apr 13 2002.
MAX_IMAGES = 27

Reading boot sector from /dev/hda
Merging with /boot/boot.b
Fatal: First boot sector is version 21.4. Expecting version 22.2.

http://software.cfht.hawaii.edu/linuxpc/sidious/6_Kernel_Options.html
Change the line which refers to /boot/boot.b to /boot/boot-menu.b

Now the boot-menu.b file is "missing" though, because it's in /mnt/debinstall/boot, not in /boot. This will need fixing. There appear to be instructions in remotedeb.html.

apt-get install man less
export TERM=vt100

0. backup /boot to a very safe place
1. comment out the /boot partition from /etc/fstab
change your debian install directory to /mnt/tmp (instead of /mnt/deb..)
comment out the old data partition (/dev/hda3)
2. mount /mnt/tmp
3. the new boot information should now be in /mnt/tmp/boot/
pack it up with tar and copy it to the / directory
4. umount /mnt/tmp
5. unpack the contents into /boot-deb
6. edit lilo.conf and change boot to boot-deb
7. run lilo (expect errors) this seems to have cleared out /boot
8. copy boot-deb to /boot and add back any files from your backup of
/boot (for me it was message and the red hat images)
9. edit lilo.conf again and change /boot-deb back to /boot. To be safe, leave
redhat as the default for now and leave the panic set on debian and set lilo
to lilo -v -R debian (reboot into debian only this once)
10. Now you should be able to run lilo
11. Quadruple check your /etc/fstab to make sure it has the right values.
Values should be updated according to the instructions above (but not the
sample /etc/fstab which is way above)
12. as long as there are no errors, reboot

# Re-partition the old data drive
# in the end I decided not to use parted and stuck with good ol’ cfdisk
apt-get install cfdisk # it was already installed

# do the actual partitioning
cfdisk /dev/hda
# cfdisk is just a nicer interface for fdisk
# replace /dev/hda3 with smaller logical partitions
select /dev/hda3
d # delete it

# Now create all of your new partitions
n # create a new partition
L # for logical
<size in megs> # used the sizes below for each of the partitions
B # add the new partition to the beginning of the free space

2000     /usr/local   /dev/hda5
10000    /var         /dev/hda6    # users shouldn't be storing email on the server
500      /swap        /dev/hda7
         t            # change the type
         82           # linux swap
300      /tmp         /dev/hda8    # bigger than required
5000     /home        /dev/hda9    # most data will be in /web
500      /config      /dev/hda10   # a safe place for config files
5000     /cvsroot     /dev/hda11   # cvs repository
40000    /var/www     /dev/hda12   # all web sites
[~14 GB left open to assign as necessary]

# write this new partition table
# note: I got this error message:
Wrote partition table, but re-read table failed.  Reboot to update
table.

# quit and reboot the system–remember to give the system a minute or two
# to reboot

# format the partitions and add labels for each of the partitions
# while you’re at it, add a label for the / partition
e2label /dev/hda2 /
mkfs.ext2 /dev/hda5
e2label /dev/hda5 /usr/local
mkfs.ext2 /dev/hda6
e2label /dev/hda6 /var
# don’t touch swap
mkfs.ext2 /dev/hda8
e2label /dev/hda8 /tmp
mkfs.ext2 /dev/hda9
e2label /dev/hda9 /home
mkfs.ext2 /dev/hda10
e2label /dev/hda10 /config
mkfs.ext2 /dev/hda11
e2label /dev/hda11 /cvsroot
mkfs.ext2 /dev/hda12
e2label /dev/hda12 /var/www

# confirm all of the labels have been added with cfdisk
# "q" without doing anything to any of the partitions

# Now add all of the new partitions to the /etc/fstab file
—————- /etc/fstab ——————————
# Partition table
# make sure there are no trailing slashes on any of the directories
/dev/hda1       /boot           ext3    defaults        1 2
/dev/hda2       /               ext2    defaults        1 1
/dev/hda5       /usr/local      ext2    defaults        0 2
/dev/hda6       /var            ext2    defaults        0 2
/dev/hda8       /tmp            ext2    defaults        0 2
/dev/hda9       /home           ext2    defaults        0 2
/dev/hda10      /config         ext2    defaults        0 2
/dev/hda11      /cvsroot        ext2    defaults        0 2
/dev/hda12      /var/www        ext2    defaults        0 2

# swap partition
/dev/hda7       none            swap    sw              0 0

# and then some other stuff that EV1 set up
none      /dev/pts devpts gid=5,mode=620 0 0
none      /proc   proc    defaults 0 0
none      /dev/shm tmpfs  defaults 0 0
———————————————————-

# after adding the new partitions, labelling and adding them to the
# /etc/fstab, copy the information to the new partitions
1. archive the information currently in the directory you're going to replace
2. delete the contents of the directory
3. mount the directory
4. copy the files back in (a sketch for one partition follows after these steps)
5. Activate and mount the /swap partition
mkswap /dev/hda7
swapon -a
sync;sync;sync

6. Check the /etc/fstab against what's currently mounted
7. reboot
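A minimal sketch of steps 1-4 for a single partition (/usr/local shown; paths and archive location are assumptions):

cd / && tar czf /root/usr-local.tar.gz usr/local   # 1. archive the current contents
rm -rf /usr/local/*                                # 2. delete the contents of the directory
mount /usr/local                                   # 3. mount the new partition (per the new /etc/fstab)
tar xzf /root/usr-local.tar.gz -C /                # 4. copy the files back onto the new partition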

Many thanks again to Emma for this fantastic guide.

We've had a recent issue with IAX2 trunks whereby any DTMF tones generated locally were not audible at the remote side of the connection.

Interestingly, tones were audible on inbound and internal calls; however, this meant that IVRs were completely non-navigable.

The problem turned out to be that DTMF traffic was being sent out over a separate UDP port from the rest of the IAX traffic. Calls sounded fine, but the DTMF was being blocked because it was running on port 4571.  We've opened the range 4569-4571 and now all is working fine.
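For reference, a minimal sketch of the firewall change in iptables terms (chain and interface will vary with your setup):

# allow the IAX2 signalling/media port plus the extra ports the DTMF traffic was using
iptables -A INPUT -p udp --dport 4569:4571 -j ACCEPT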

I've just finished installing a pfSense firewall as a second gateway for a network that required a dedicated internet connection for some services. Some of the hosts on the network use the main office internet connection as their default gateway. As a result I was unable to connect to these hosts remotely via the VPN, as the return path for the packets attempted to go via the primary internet connection rather than via the VPN.

I had a quick glance at the pfSense/OpenVPN docs to see whether there was anything I could specify in pfSense, and they stated that the machines needed to use the pfSense box as their default gateway. This was unacceptable for our purposes here (one of the devices in question is the Asterisk VoIP server on the network, which needs to use the other internet connection for its external traffic). There is an easy solution, however: simply add a static route back to the IP range issued to DHCP clients, via the pfSense's internal IP.

This looks something like this:

[diagram: openvpn]

Effectively, any internal machines that need to be visible over the VPN need to have an appropriate return path configured. The DHCP scope I have used for VPN clients is 10.0.200.0/24.
For Linux machines on the network, the route can be added on a temporary basis (i.e. until reboot) by entering the following command on the host:

route add -net 10.0.200.0/24 gw 10.204.6.1

or permanently by adding an entry to /etc/sysconfig/static-routes (on CentOS, as per http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s1-networkscripts-static-routes.html).
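A rough sketch of the per-interface alternative on CentOS (assuming the LAN-facing interface is eth0):

# /etc/sysconfig/network-scripts/route-eth0
10.0.200.0/24 via 10.204.6.1 dev eth0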

On Windows hosts this can be achieved by adding a persistent route:

route add -p 10.0.200.0 mask 255.255.255.0 10.204.6.1
:)

I keep finding and losing bookmarks of good base64 encoding and decoding sites, so I thought I'd link to one here:

http://makcoder.sourceforge.net/demo/base64.php

Useful when trying to test SMTP AUTH on a mail server and needing to encode usernames and passwords!
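The same thing can be done from the command line; a minimal sketch (the username and password below are placeholders):

# AUTH LOGIN expects the username and password base64-encoded separately
printf 'user@example.com' | openssl base64
printf 'secretpassword' | openssl base64

# AUTH PLAIN expects a single base64-encoded string of the form \0username\0password
printf '\0user@example.com\0secretpassword' | openssl base64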
:)

Bit of an awkward fix, unfortunately, as it involves having access to a Windows/Outlook setup, but to add a public folder that exists on Exchange, it needs to be bookmarked as a favourite in Outlook for Evolution to pick it up.

For example, we use a public folder for shared (company-wide) contacts here.  To add the folder I just logged onto my account on a Windows machine and added that public folder as a favourite in Outlook.

After logging out of Evolution and back in, I could then see these "public" contacts under the contacts folder (Ctrl+2).

OK, this site made me laugh... and gave me some inspiration for something fun to do while shopping around for LCD TVs over the next few weeks :)

http://www.manucornet.net/pcjacking/
:D

Occasionally you find a piece of software that makes life infinitely easier. This has been a very good week: I've found two!

I've just installed GLPI as a trouble-ticketing system to assist with workflow management and to track recurring faults.  It's an open-source, web-based tool that uses Apache, PHP and MySQL to track issues and produce good-quality reports.

It seems extremely stable in trials so far, but I will keep using it for the next few weeks before I roll it out to users in our organisation for fault reporting.

Available from here: http://www.glpi-project.org/spip.php?lang=en
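For anyone trying it on a Debian box, the prerequisites can be pulled in with something along these lines (package names are those current for Debian at the time of writing; check your release):

# install the web server, PHP and MySQL that GLPI runs on top of
apt-get install apache2 php5 php5-mysql mysql-server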

The next find was a tool called OCS Inventory.  It's another web/MySQL app, used for asset management.  The useful thing about this tool is that it uses an agent, installed as a service on workstations, that can be deployed using a login script.  On workstation boot the agent updates the server with an abundance of information about the machine, such as hardware details, serial number, installed software, installed printers, the logged-on user, and so on.

This has turned into a real time-saver for me! It’s available for download from http://www.ocsinventory-ng.org/