Strotmann.de

Wednesday Jun 15, 2011

Batch rotate and image processing in GIMP

The GIMP Script-Fu Scheme snippet below takes a file wildcard pattern ("*.JPG") and a rotate-type value (0 = 90° / 1 = 180° / 2 = 270°), rotates each matching image, levels out the colors in the picture and saves the changed image as a PNG file.

(define (batch-rotate pattern rotate-type)
  (let* ((filelist (cadr (file-glob pattern 1))))
    (while (not (null? filelist))
           (let* ((filename (car filelist))
                  (image (car (gimp-file-load RUN-NONINTERACTIVE
                                              filename filename)))
                  (drawable (car (gimp-image-get-active-layer image))))
             ; build the output name: strip the old extension, append ".png"
             (set! filename (strbreakup filename "."))
             (set! filename (butlast filename))
             (set! filename (string-append (unbreakupstr filename ".") ".png"))
             ; rotate (0 = 90°, 1 = 180°, 2 = 270°), stretch the levels, save
             (gimp-image-rotate image rotate-type)
             (gimp-levels-stretch drawable)
             (gimp-file-save RUN-NONINTERACTIVE
                             image drawable filename filename)
             (gimp-image-delete image))
           (set! filelist (cdr filelist)))))

The Scheme function above can be stored in the user's GIMP scripting directory (~/.gimp-2.6/scripts/) as "rotate-and-level.scm".

The shell script below can be used to start the batch rotate process on a directory of JPEG pictures.

#!/bin/sh
# file rotate.sh
gimp -i -b "(batch-rotate \"${1}\" ${2})" -b '(gimp-quit 0)'

On a Unix shell, the glob file pattern needs to be quoted so that the shell will not expand it:

# ./rotate.sh "*.JPG" 0

Tuesday May 24, 2011

Changing the encoding in Emacs

From the 'note-to-myself' department: To change the text encoding of a file in Emacs, load the file and then press

CTRL+X RET f <encoding>

where <encoding> can be a value like utf-8. Saving the file afterwards (CTRL+X CTRL+S) writes it out in the new encoding.

Thursday May 12, 2011

Talk on Forth Benchmarks from Vintage Computer Festival Europe 2011

My talk about Forth and the Forth Benchmark Project from the Vintage Computer Festival Europe (VCFe 2011).[Read More]

'strotmann.de' Blog on IPv6

The 'strotmann.de' blog is now available over IPv6. The AAAA record for 'strotmann.de' will arrive shortly. I'm currently using an HE IPv6 tunnel; native IPv6 will be here after the summer.

Thursday Apr 21, 2011

Video: SWIG Extension for Forth (Gerald Wodni)

Video from the Forth-Tagung 2011 (German Forth conference) in Goslar

Slides

Monday Apr 18, 2011

Spring cleaning MacPorts

The MacPorts project offers a fine, easy way to install Unix tools and applications on MacOS X. Over time, however, MacPorts can accumulate a fair amount of dead data: when applications get updated, the old versions stay around until they are removed manually. In addition, after compiling a port from source, both the source and the intermediate object code remain on the hard disk.

Two commands can spring clean the MacPorts installation.

sudo port clean --all installed
will run "make clean" on all installed ports, removing the temporary object code generated during compilation.

sudo port -f uninstall inactive
will remove 'inactive' ports, mainly older versions of applications that have been replaced by a more recent version.

Running these two commands can free up some gigabytes of space on a hard disk (depending on the number of MacPorts applications installed).
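
For convenience, both cleanup steps can be combined into a tiny shell script. This is just a sketch; the file name cleanup-macports.sh is made up, and the two port commands are the ones shown above:

#!/bin/sh
# cleanup-macports.sh -- run both MacPorts cleanup steps in one go
sudo port clean --all installed
sudo port -f uninstall inactive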

Thursday Apr 07, 2011

Managing the MacOS X IPv6 firewall

MacOS X (10.3 and up) contains an IPv6 firewall (ip6fw), which has been inherited from FreeBSD and the KAME project. However, there are no configuration or startup scripts, nor any other support available in a stock MacOS X system to manage this firewall.

The script presented here will read a firewall configuration from '/etc/ip6fw.conf' and will apply the IPv6 firewall rules to the MacOS X firewall.
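
The script itself is behind the link below; purely as an illustration, a minimal wrapper in that spirit could look like the following sketch. The rule-per-line format of '/etc/ip6fw.conf' and the exact ip6fw flags are assumptions here, not taken from the real script:

#!/bin/sh
# Sketch only, not the script from the article below:
# apply IPv6 firewall rules read from /etc/ip6fw.conf,
# assuming one ip6fw rule body per line and '#' for comments.
CONF=/etc/ip6fw.conf

# remove any existing rules first (-f suppresses the confirmation prompt)
/sbin/ip6fw -f flush

# add every non-empty, non-comment line as a rule
grep -v '^#' "$CONF" | while read rule; do
  [ -n "$rule" ] && /sbin/ip6fw add $rule
done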

[Read More]

Sunday Apr 03, 2011

Fixing the IPv6 Firewall on MacOS X 10.6

On MacOS X 10.6 (Snow Leopard), the IPv6 firewall command line utility 'ip6fw' is broken. It does not store filter rules for ICMPv6 types above type 127:

# sudo ip6fw add 20020 allow ipv6-icmp from any to any in icmptype 1,2,3,4,128,129
20020 allow ipv6-icmp from any to any in icmptype 1,2,3,4

Here is the fix...[Read More]

Saturday Mar 26, 2011

DNS information in IPv6 Router Advertisement on MacOS X

Pierre Ynard, the developer of 'rdnssd', was able to fix the issue with the 'ppoll' interface on MacOS X (see 'Debugging a short-lived MacOS application'). The fix is now in the SVN code and will be available in the next release of 'rdnssd'.

It is now possible to distribute DNS server information to IPv6 clients via router advertisement messages.[Read More]

Thursday Mar 10, 2011

Debugging a short-lived MacOS application

Yesterday I had to debug a MacOS X command line program that segfaulted immediately after starting. This program is 'rdnssd' (Recursive DNS Servers discovery Daemon, http://rdnssd.linkfanel.net/). 'rdnssd' implements the client part of RFC 5006 - IPv6 Router Advertisement Option for DNS Configuration. This function lets an IPv6 router send out DNS server IP address information as part of the Router Advertisement messages, helping clients find a DNS server without the need for DHCP or local configuration.[Read More]

Monday Mar 07, 2011

JASSPA MicroEmacs for TinyCore Linux

JASSPA MicroEmacs is a lean but very powerful text editor for many different operating systems. It is not driven by Lisp like GNU Emacs, but by its own powerful macro programming language.

JASSPA MicroEmacs Desktop Screenshot

Below are packages for the TinyCore Linux system, compiled from the 11 October 2009 release source code from the JASSPA website.

There are three different packages:

Monday May 11, 2009

Once upon a time at -- Hobbytronic

Yesterday, I was looking for a 3 ½" floppy disk for an Atari ST. I found one disk without a label, and before formatting it, I took a look at what was stored on it. To my surprise, I found some of the first digitized pictures (for me), made at Hobbytronic 1992 (a computer fair in Dortmund), where ABBUC had a booth opposite a company doing scanner and digitizer cameras. So below we have the ABBUC Hobbytronic booth crew of 1992.[Read More]

Saturday May 09, 2009

Forth on the Vintage Computer Festival Europe 2009

At Booth 23 (me with entry ticket 42, for all numerologists) I had two RTX2000 single board computers on display.

Image of Booth Setup 1

The theme of the exhibition was "In space, no one can hear you scream", and for the RTX 2000 machines there were materials collected about NASA, AMSAT and ESA space missions that make use of the RTX 2000 CPU. Both RTX2000 boards (a "kleiner Muck" and a "Wiesel 2000") were shown working, connected via RS232 to an Amstrad NC100 that was used as a terminal and Forth editor.

Especially the VME bus on the "kleiner Muck" got attention, and there will be a follow-up project to connect the Muck with an Atari TT (which also has a VME bus internally).

Image of Booth Setup 2

To be able to judge the processing speed of the RTX2000 compared with a regular CPU, Stefan "Beetle" Niestegge from the Atari TT exhibition helped run a simple benchmark (Primes) on both boards and on an Atari TT. The Atari TT was running BigForth ST on MiNT on a Motorola 68030 at 30 MHz.

As a result we found that the "Muck" with a 6 MHz RTX2000 has about the same processing speed (when it comes to Forth) as the 30 MHz Atari TT. Interestingly, the 10 MHz "Wiesel" was about two times slower than the "Muck" and the "TT", which is probably due to some errors in our benchmarking (if someone has worked with the "Wiesel" board and has an idea what could cause this slowdown, please leave a comment).

The VCFe was again a very enjoyable weekend. Unfortunately, the weather in Munich was too good, so not too many visitors found their way into the VCFe exhibition grounds.

The topic for next year's VCFe will be "Online Communication". If you have nostalgia for UUCP, mailbox systems (BBSes) and acoustic couplers, please join next year!

Sunday Nov 09, 2008

Learning Clojure

Learning a LISP-like language and EMACS (again, next try...) [Read More]

Tuesday Aug 05, 2008

'Timemachine'ish backup with ZFS and rsync

Apple MacOS X Timemachine is a nice piece of software. However, it does not compress the data, and it only works on MacOS X. I was looking for a similar solution that also works on other Unix systems (as well as MacOS X) and does transparent compression of the data. The idea is to have multiple backups on one disk, each backup showing the state of the source hard disk or directory at the time of the backup, browsable with the normal file system tools, without storing any data twice. I found a solution using the ZFS file system and 'rsync' (rsync is pre-installed on most Unixish operating systems).

Requirements

My tutorial is for MacOS X, but it can be adapted to any of the systems that support the ZFS file system.

Step 1: preparing a ZFS file system

I used the steps from the ZFS on MacOS-Forge site to create a ZFS pool on an external USB drive.

Finding the disk:

# diskutil list
. . .
/dev/disk2
   #:                     type name   size       identifier
   0:   Apple_partition_scheme        *9.4 GB    disk2
   1:      Apple_partition_map        31.5 KB    disk2s1
   2:                Apple_HFS FW     9.2 GB     disk2s3

writing a GPT label on the external disk (be sure to not format your 'main' disk here!):

# diskutil partitiondisk /dev/disk2 GPTFormat ZFS %noformat% 100%
Started partitioning on disk disk2
Creating partition map
[ + 0%..10%..20%..30%..40%..50%..60%..70%..80%..90%..100% ]
Finished partitioning on disk disk2
/dev/disk2
   #:                     type name   size       identifier
   0:    GUID_partition_scheme        *9.4 GB    disk2
   1:                      EFI        200.0 MB   disk2s1
   2:                      ZFS        9.0 GB     disk2s2

creating a ZFS pool called 'macbook-backup' on the disk:

# zpool create macbook-backup /dev/disk2s2

enable compression on the new pool and disable ATIME:

# zfs set compression=on macbook-backup
# zfs set atime=off macbook-backup

The hard drive is now prepared.
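
As a quick check (not part of the original instructions), the standard ZFS tools can be used to verify that the new pool exists and is healthy:

# zpool status macbook-backup
# zfs list macbook-backup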

Step 2: creating the first (the 'base') backup

Now I create the first full backup, which I call the 'base' backup. For this I create a new file system called 'base' in the ZFS pool 'macbook-backup':

# zfs create macbook-backup/base

Next I copy all files from my backup source directory (or the whole source disk) to the backup:

# rsync -avh --progress --delete /Users/myuser /Volumes/macbook-backup/base/

Depending on the size of the data to back up, this will take a while.

Once the backup is finished, we can access all files under '/Volumes/macbook-backup/base'. With the 'zfs' command I can check the compression ratio of the backup:

# zfs get all macbook-backup/base
NAME                 PROPERTY       VALUE                         SOURCE
macbook-backup/base  type           filesystem                    -
macbook-backup/base  creation       Wed Jan 31  9:08 2007         -
macbook-backup/base  used           2.21G                         -
macbook-backup/base  available      1.76G                         -
macbook-backup/base  referenced     2.21G                         -
macbook-backup/base  compressratio  1.38x                         -
macbook-backup/base  mounted        yes                           -
macbook-backup/base  quota          none                          default
macbook-backup/base  reservation    none                          default
macbook-backup/base  recordsize     128K                          default
macbook-backup/base  mountpoint     /Volumes/macbook-backup/base  default
macbook-backup/base  sharenfs       off                           default
macbook-backup/base  shareiscsi     off                           default
macbook-backup/base  checksum       on                            default
macbook-backup/base  compression    on                            local
macbook-backup/base  atime          off                           local
macbook-backup/base  devices        on                            default
macbook-backup/base  exec           on                            default
macbook-backup/base  setuid         on                            default
macbook-backup/base  readonly       off                           default
macbook-backup/base  zoned          off                           default
macbook-backup/base  snapdir        hidden                        default
macbook-backup/base  aclmode        groupmask                     default
macbook-backup/base  aclinherit     secure                        default
macbook-backup/base  canmount       on                            default
macbook-backup/base  xattr          on                            default

Step 3: creating an incremental backup

Now, a few weeks later, I want to make a new, incremental backup. So I create a new snapshot of the base file system and then clone that snapshot into a new file system:

# zfs snapshot macbook-backup/base@20080804
# zfs clone macbook-backup/base@20080804 macbook-backup/20080804

The directory for my new, incremental backup will be '/Volumes/macbook-backup/20080804'. So far the new file system does not use any space on the hard drive. Now I do the new backup with 'rsync':

# rsync -avh --progress --delete /Users/myuser /Volumes/macbook-backup/20080804/

The new backup will only take as much new space on the backup hard drive as there were changes compared to the base backup. But still I am able to browse through the file system at '/Volumes/macbook-backup/20080804' and see all files that were available at the date of the 2nd backup.

Any subsequent snapshots for more backups will be done from the 20080804 file system.
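
To avoid repeating these steps by hand for every backup, the snapshot/clone/rsync sequence can be wrapped in a small shell script. The sketch below only restates the commands from this article; the script name, the argument convention and the source path '/Users/myuser' are assumptions for illustration:

#!/bin/sh
# zfs-incremental-backup.sh -- hypothetical wrapper around the steps above
# Usage: ./zfs-incremental-backup.sh <name-of-latest-backup>
# e.g.:  ./zfs-incremental-backup.sh 20080804
POOL=macbook-backup
SOURCE=/Users/myuser
PREV=$1
TODAY=`date +%Y%m%d`

# snapshot the latest backup file system and clone it into a new one
zfs snapshot ${POOL}/${PREV}@${TODAY}
zfs clone ${POOL}/${PREV}@${TODAY} ${POOL}/${TODAY}

# bring the clone up to date with the current state of the source
rsync -avh --progress --delete ${SOURCE} /Volumes/${POOL}/${TODAY}/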
