
I’ve been looking for a way to find out how far the duplication of an image onto an SD card for my new Raspberry Pi has got… unfortunately dd doesn’t natively give you any idea of progress, so there’s no obvious way of checking whether the process has hung.

Fortunately it is possible to see how things are progressing by using the following:

# watch -n 10 killall -USR1 dd

This will display a status update every 10 seconds: killall sends SIGUSR1 to every running dd, and GNU dd responds by printing its progress statistics to stderr. Note that on the BSDs you should be able to replace USR1 with INFO (but I’ve not tested this, so feedback welcome :))
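To see the mechanism in isolation, here’s a minimal sketch of the signal behaviour on Linux (the source, sink and sizes are illustrative; the point is that GNU dd reports its progress when it receives SIGUSR1):

```shell
# Start a deliberately long copy in the background, sending dd's
# stderr (where the progress report appears) to a log file
dd if=/dev/zero of=/dev/null bs=1M count=100000 2> progress.log &
DD_PID=$!
sleep 1                             # give dd time to start up
kill -USR1 "$DD_PID"                # ask dd for a progress report
sleep 1                             # let the report be written
kill "$DD_PID" 2>/dev/null || true  # tidy up the background copy
cat progress.log                    # shows records in/out and bytes copied
```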

I usually reserve my blog for notes on problems solved so that I have something to refer back to in future when encountering the same technical issues. I try to avoid opinions on here as they are not relevant to the context of the blog; however, the recent resurrection of the plans put forward by Labour under the last Government really deserves some commentary.

At present UK-based ISPs are required by law to keep a record of connections – i.e. if I browse to a website, my ISP keeps a record that I have been there. The police can request information on the sites I have visited if they approach my ISP with a warrant. The proposed extension will broaden this to social networking sites and the vaguely scoped “new media”, and this is where things start to get a little scary. It is almost functionally impossible to gather any kind of useful data from proxy logs other than the fact that I have visited Facebook and perhaps looked at some friends’ pages. Beyond that you need to analyse and log the traffic much more deeply.

These proposals have either not passed any kind of technical review or the plans involve some VERY deep packet capturing and logging. This would be both expensive (a cost that would be pushed to the ISP and then ultimately the consumer) and hugely impractical (the content would need to be stored by the ISP – and would require a lot of physical storage space).

In addition to this, the proposals suggest that the authorities should have real-time access to data in transit – WITHOUT A WARRANT. The Government acknowledges that strict policy and control would be required to make this work – but who would monitor this? Who would decide whether the surveillance was lawful without a court to sanction it? With measures in place to permit easy access to this data, it could be subject to abuse by staff at the ISP or, worse yet, by unauthorised users due to improperly secured systems. I find this all deeply disturbing and prone to abuse on an Orwellian scale.

So what is the motivation for this? Allegedly it is necessary in the fight against terrorism in the UK – something I find questionable given the number of terror attacks, or even attempts, in the last 5 years – just ask Wikipedia. Is it worth spending billions of pounds and infringing the basic right to privacy of millions of law-abiding British citizens to prevent a threat that, it might be suggested, simply does not exist? I personally think not.

And would these measures actively prevent communication between terrorists – would they actually intercept valuable intelligence about these mysterious terror cells based on their Facebook activity? Do Al-Qaeda plan operations by writing on each other’s “Walls”? Are training exercises covertly carried out via Farmville? If a terrorist threat to the British way of life actually exists and terrorists are currently planning operations, I would imagine they are well resourced and have taken steps to avoid detection – they are probably encrypting their email using PGP or GPG and anonymising their web traffic using Tor or I2P. Maybe I’m drawing too much from spy novels, but surely their laptops and mobile devices are using full disk encryption such as CryptFS or TrueCrypt. I think it’s deeply naive to assume that terrorists are as incompetent as the UK Government when handling sensitive information.

So – in light of this, what can people do if they do not want every internet conversation and transaction scrutinised, logged and monitored?

1) Look into the Tor project. Onion routing has helped people around the globe living under oppressive governments to get their message out. The beauty of Tor is the more people who use it in their day-to-day activities, the more anonymous it becomes.

2) Encrypt your email – look into GnuPG as a free way of doing this. If you deal with sensitive information you really should be doing this anyway. Encrypt everything you send. The source and destination of mail (headers) will still be visible to prying eyes, but the content will not.
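As a quick taste of what this looks like, here’s a minimal round trip using GnuPG’s symmetric mode (the file names and passphrase are illustrative, and real email encryption would use --encrypt with the recipient’s public key rather than -c, but the idea is the same):

```shell
# Write a message and encrypt it with a passphrase (symmetric mode, -c)
echo "meet at noon" > message.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo -c message.txt
# message.txt.gpg is now unreadable without the passphrase; decrypt it back
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    -o decrypted.txt -d message.txt.gpg
cat decrypted.txt
```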

3) Sign a Petition and write to your MP.

4) Find out how they are representing your views by signing up here:

5) Support ORG – the Open Rights Group. The work they do is important and there are many ways people can contribute:

If anyone asks I’ll happily put together detailed tutorials on how to keep your private information private (see 1 and 2 above) and how these measures work.

With the native security of Windows improving, it is now a little more difficult to push applications out to the desktop – something that should be welcomed, but which at the same time makes products like Sophos Antivirus a little more difficult to deploy via the Enterprise Console.

There are some prerequisite steps that now need to be taken prior to deployment:

1) Allow traffic to the SBS Server from the LAN.

netsh firewall add portopening TCP 8192 "Sophos"
netsh firewall add portopening TCP 8193 "Sophos"
netsh firewall add portopening TCP 8194 "Sophos"
netsh firewall add portopening TCP 8081 "Sophos quarantine digest"

2) Open up Group Policy and edit the Domain Group Policy -> Computer Config -> Windows Settings -> Security Settings -> System Services. Ensure that:

Remote Registry: Automatic
Computer Browser: Automatic

3) Allow traffic on the workstations: Computer Config -> Administrative Templates -> Network -> Network Connections -> Windows Firewall -> Domain Profile


You should then be able to assign the machines to groups within the Enterprise Console.

On a recent search for Hutchison 3G’s RIPE allocation of IP addresses (to limit inbound connections to our firewall), I found the following useful site detailing the allocations assigned to organisations, and thought I’d document it here:

It’s a nice amalgamation of the information listed on the registrars’ FTP servers here:

In case you should need the MessageLabs IPs to permit inbound traffic in firewall rules, an up-to-date list is below:


Subnet / net mask / IP range table: twelve ranges, with prefix lengths /24, /19, /19, /19, /20, /20, /21, /21, /21, /23, /23, /23

Hope this is useful to someone 🙂

Occasionally, when you provide managed services for clients and there are issues, fingers get pointed and accusations get made about the integrity of the network – particularly if the medium in question is fibre, or another less common network medium (like wireless).

We host a client’s server on our premises for backup purposes, on the end of 2 km of multimode fibre connected to media converters at both ends.  When the client was having issues with the speed and integrity of the network (packet loss and timeouts), it was necessary to do a little research, initially to test and then to prove that the issue was not the result of the fibre link.  Of course, with the aid of an OTDR it’s easy to demonstrate that the fibre does not show losses – but an OTDR is a very expensive piece of equipment to buy or rent, and does not provide any throughput data illustrating whether the endpoints are doing as they should.  As the client’s IT support were only testing from one Windows box to another, and were only using ping to illustrate the issue, it was necessary to do a little more digging.

I put together a short test plan:

1) A standard ping test.
2) A short packet capture while running a standard ping test.
3) An isolated packet capture to ascertain whether there are any obvious network issues (excessive ARP, retransmissions, etc.).
4) A flood ping test.
5) A bi-directional iperf test to measure the bandwidth and throughput of the fibre link (through one of the client’s network switches).
6) A bi-directional iperf test to measure the bandwidth and throughput of the fibre link directly from the media converter.

As the ping test yielded no unusual results and a packet capture of the ping test (tcpdump -i eth0 -s0 -w pingtest.pcap) didn’t show anything unusual, I ran the flood ping back to my box. Note that it’s fairly important to only flood ping boxes that are capable of handling more traffic than you can generate (Wikipedia).

# ping -f

— ping statistics —
11011 packets transmitted, 11010 received, 0% packet loss, time 1725ms
rtt min/avg/max/mdev = 0.129/0.133/5.692/0.055 ms, ipg/ewma 0.156/0.132 ms

Again, this illustrated that even with a massive burst of data in a short space of time, there were no errors in transmission.

Next it was necessary to run test #5.  Iperf was installed on the remote side and on my laptop, so I started the server on the remote side using:

# iperf -s

and started the client side on my laptop using:

# iperf -c -r

The results showed a slow throughput on the client and server side:

[ ID] Interval       Transfer     Bandwidth
[  5] 0.0-10.0 sec   18.8 MBytes  15.8 Mbits/sec

This looked likely to be the cause of the problem, but the fibre link should have been running at 100 Mbps.  The next step was to connect directly into the media converter rather than through the client’s switch.  I ran the test directly through the media converter:

# iperf -c -r
[ ID] Interval       Transfer     Bandwidth
[  5] 0.0-10.0 sec   112 MBytes   94.2 Mbits/sec

A much improved result!  I ran the test again to verify the findings, then plugged into an alternative switch port at the client’s side and ran the test once more – this time getting the 94 Mbps I was hoping to see, proving that the issue was with the switch and most likely caused by rate-limiting on the original switch port.

Sometimes a simple ping is not enough to thoroughly test a network, and other tools need to be used to verify findings. iperf is excellent for providing a tangible measurement of throughput, and tcpdump and Wireshark are useful for spotting packet retransmissions, excessive ARP and other clues to performance issues.

I’ve seen so many people attempt to restore Exchange and fail using Microsoft’s built-in tools, or come unstuck because they want to restore a single mailbox, that I thought I’d document the free method of backing up Exchange that we use, in the hope that it will help others.

One of the tools available free from Microsoft is Exmerge.  It allows mailboxes to be individually exported to PST files, which can then either be re-imported back into Exchange or simply opened in Outlook.  Exmerge is available from

Extract and save it to the exchsrvr\bin directory and, when the appropriate mailboxes have been selected and destinations set, save the configuration.  This will create an exmerge.ini file.

This can then be scripted in a batch file and run as a scheduled task.  I create a folder on the local disk of the Exchange server (although this can be done to a mapped drive) for each day I want the backup to run.

My exmon.bat file reads:

D:\exchsrvr\bin\exmerge.exe -F C:\scripts\exmon\exmerge.ini -B

This runs exmerge.exe with the options specified in C:\scripts\exmon\exmerge.ini, and runs it as a batch job using the -B switch.

To clean the folder prior to running, I have a separate batch file that runs earlier on the same day:

del /F /Q /S z:\Exchange\exmon\*.*

Subsequently, to back up the PST files to a separate server, I use the excellent BackupPC running on a Debian server.  Installation instructions for Debian are here:

The BackupPC box is configured to access the SMB share that the PSTs are stored in, as well as additional file shares on the server.  BackupPC supports incremental backups and backups via a variety of methods (including SSH and rsync, as well as SMB).

It’s also possible to archive off historic backups for off-site storage using the archive functions within BackupPC.  As a free solution for backing up mailboxes and being able to recover easily (with version control), this is very effective.

On trying to connect to a device that has no DNS or public visibility, the SSH connection seemed to hang for an almost indefinite period of time.

This can be avoided by stopping the server from performing a reverse DNS lookup against the connecting IP address, by adding the following line to /etc/ssh/sshd_config:

UseDNS no

The Linksys SRW2024 initially appears to be a little strangled in functionality – the browser-based configuration doesn’t work in Linux/Firefox, for example, and the command-line menu doesn’t allow for extended configuration.  I was actually on the cusp of sending the device back (I don’t really want to have to use a Windows VM to configure a switch), but it turns out there is a way into a lightweight IOS-style command-line interface…

First, connect to the device using the supplied serial cable and Minicom.  The settings for the device need to be 38400 8N1, and flow control needs to be Off (contrary to the documentation on the Linksys website!).

When logged in, configure the IP address and turn on SSH management for ease of configuration, and change the password from the default (admin/blank).

Next, log in using SSH and, once logged in, press CTRL+Z, then type lcli.

To create a VLAN:

# configure
(config)# vlan database
(config-vlan)# vlan 993 (enter your VLAN ID of choice here)
(config-vlan)# end

You should now have a VLAN with ID 993.

This can be verified using:

# show vlan

To assign ports to VLANs:

# configure
(config)# interface range ethernet g21-24
(config-if)# switchport access vlan 993
(config-if)# end

To check,

# show interfaces switchport ethernet g1
# show interfaces switchport ethernet g21

Hope this helps someone….

It is now possible to connect to a Windows machine running LogMeIn from Linux using a Java browser plugin… unfortunately, if you are using a 64-bit kernel on Ubuntu Karmic, the Java version from the Ubuntu repos is incompatible with the plugin.

To work around this, download, and extract to ~/.mozilla/plugins/, then download and install nspluginwrapper from the repos (sudo apt-get install nspluginwrapper).  nspluginwrapper is a tool that creates a compatibility layer for non-native browser plugins.

You can then register the plugin with nspluginwrapper:

sudo nspluginwrapper -i ~/.mozilla/plugins/

Restart Firefox and navigate to the LogMeIn website again, and it should work…