Ubuntu Stopwatch Applet


It has occurred to me several times that I would like to have a small stopwatch utility with quick access (particularly for time tracking on various projects). I figured the Ubuntu timer applet would have this function, but alas, it did not. To my surprise, there wasn't any applet in the Ubuntu repositories that does this, so I decided it would be sufficiently useful to look into writing one myself. As usual, documentation was pretty sparse, but I managed to find a good starting point and cobbled together a simple proof of concept. I'd tell you how much time it took, but I didn't have a handy stopwatch applet to time myself with.

You can find the code on my GitHub here. Compile with gcc or just run make.sh (it's only one line). The .server file needs to go in /usr/lib/bonobo/servers, and it needs to be edited to point to wherever you put the binary. After moving it, log out and then back in. Right click on the panel you want to add it to and select "Add to Panel". You should see "Stopwatch Applet" in the list.

Add To Panel

It'll drop the simple applet on your panel. The applet steals the "timer-applet.png" image from the timer applet package, so you might need that file in /usr/share/pixmaps in order for the entry to show up in your list of available applets. The applet looks like this:

Panel

There are basically no features. It starts counting when it's loaded. You can click on it to reset it. I do have some planned improvements for when I get around to it.

  • Ability to control multiple timers from one instance
  • Ability to choose which timer is displayed in the panel
  • Ability to pause/lap/resume by clicking on it (configurable)

Also, just in case you were wondering, I spent a total of 23 minutes and 49 seconds on putting this code in version control, adding the project to redmine, and writing this post.

Generating HTML pages from Latex


While latex is pretty much "not designed" for web content, it is very useful to generate a web version of a latex document. The purpose of latex is clearly typesetting layouts on a pre-defined page, but when you want to share the information with others, it's generally a lot easier for them to go to a webpage than it is to download and open a PDF. In addition, a webpage is generally easier to read than a PDF because the content is continuous, and one can scroll around and click hyperlinks in a way that is far more fluid than in a PDF.

Now that MathML and SVG are becoming more supported by web browsers, there is a strong case for sharing mathy documents on the web in addition to paper documents (or PDFs, which are only slightly more readable than paper).

To this end, I've been evaluating various different Latex to HTML converters. I've tried the following on Linux (Ubuntu):

  1. TTH
  2. LaTeX2HTML
  3. TeX4ht
  4. LaTeXML

By far my favorite is LaTeXML. It generates crisp, simple pages using MathML and CSS, making it easy to customize the style. It doesn't support a whole lot of packages that I generally would like to use (like algorithm2e), but then again none of them do. Also, the arXiv project is working on a branch of LaTeXML, so there is promise that it will grow quickly to support a lot of the best packages.

Document Setup

My current approach to generating both PDFs and HTMLs from latex source is to use separate top-level documents for both. The directory structure looks something like this:

document
 |- document_html.tex
 |- document_pdf.tex
 |- document.tex
 |- preamble_common.tex
 |- preamble_html.tex
 |- preamble_pdf.tex
 \- references.bib

The two versions of document_[output].tex are the top-level files. They look like this:

%document_html.tex

\documentclass[10pt]{article}
\input{preamble_common}
\input{preamble_html} 
\begin{document}
\input{document}
\end{document} 

The pdf version is the same but it uses preamble_pdf as an input. Note that in latex you cannot nest \include directives, but you can nest \input directives. Also, \include inserts a page break, so there is no need to use it here. Rather, document.tex may \include its chapters as tex files or the like.
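To illustrate how the preamble split can be used, here is a hypothetical sketch of the three preamble files (the package choices are examples, not the ones from my project): packages that both pdflatex and latexml handle go in the common file, paper-only layout goes in the pdf file, and the html file usually stays nearly empty.

```latex
% preamble_common.tex -- packages both pdflatex and latexml understand
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}

% preamble_pdf.tex -- paper-only layout, or packages latexml may choke on
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}

% preamble_html.tex -- usually empty; html-only tweaks would go here
```

Anything that breaks one of the two tool chains gets pushed out of preamble_common.tex and into the output-specific file.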

Makefile

To ease the process of generating the different types, I'm using a makefile.

# The following definitions are the specifics of this project
PDF_OUTPUT  :=  document.pdf
HTML_OUTPUT :=  document.html

PDF_MAIN    :=  document_pdf.tex
HTML_MAIN   :=  document_html.tex

COMMON_TEX  :=  document.tex \
                preamble_common.tex

PDF_TEX     :=  $(COMMON_TEX) \
                document_pdf.tex \
                preamble_pdf.tex

HTML_TEX    :=  $(COMMON_TEX) \
                document_html.tex \
                preamble_html.tex 

BIB         :=  references.bib



# these variables are the dependencies for the outputs
PDF_SRC     := $(PDF_TEX) $(BIB)
HTML_SRC    := $(HTML_TEX) $(BIB)

# the 'all' target will make both the pdf and html outputs
all: pdf html

# the 'pdf' target will make the pdf output
pdf: $(PDF_OUTPUT)

# the 'html' target will make the html output
html: $(HTML_OUTPUT)

# the pdf output depends on the pdf tex files
# we use a shell script to optionally run pdflatex multiple times until the
# output does not suggest that we rerun latex
$(PDF_OUTPUT): $(PDF_TEX) 
    @echo "Running pdflatex on $(PDF_MAIN)"
    @pdflatex $(basename $(PDF_MAIN)) > $(basename $(PDF_MAIN))_0.log
    @echo "Running bibtex"
    @-bibtex   $(basename $(PDF_MAIN)) > bibtex_pdf.log 
    @echo "Checking for rerun suggestion"
    @for ITER in 1 2 3 4; do \
        STABILIZED=`cat $(basename $(PDF_MAIN)).log | grep "Rerun"`; \
        if [ -z "$$STABILIZED" ]; then \
            echo "Document stabilized after $$ITER iterations"; \
            break; \
        fi; \
        echo "Document not stabilized, rerunning pdflatex"; \
        pdflatex $(basename $(PDF_MAIN)) > $(basename $(PDF_MAIN))_$$ITER.log; \
    done
    @echo "Copying pdf to target file"
    @cp $(basename $(PDF_MAIN)).pdf $(PDF_OUTPUT)

# the html output depends on the html tex files
# we have to process all of the bibliography files separately into xml files, 
# and then include them all in the call to the postprocessor
$(HTML_OUTPUT): $(HTML_TEX) 
    @echo "Running latexml on $(HTML_MAIN)"
    @latexml $(HTML_MAIN) -dest=$(basename $(HTML_OUTPUT)).xml > $(basename $(HTML_MAIN)).log 2>&1
    @BIBSTRING=""; \
    for BIBFILE in $(BIB); do \
        echo "Running latexml on $$BIBFILE"; \
        XMLFILE=`basename "$$BIBFILE" .bib`.xml; \
        LOGFILE=`basename "$$BIBFILE" .bib`_html.log; \
        latexml $$BIBFILE -dest=$$XMLFILE > $$LOGFILE 2>&1; \
        BIBSTRING="$$BIBSTRING -bibliography=$$XMLFILE"; \
    done; \
    echo $$BIBSTRING > bibstring.txt
    @echo "postprocessing with `cat bibstring.txt`"
    @latexmlpost $(basename $(HTML_OUTPUT)).xml `cat bibstring.txt` -dest=$(HTML_OUTPUT) -css=navbar-left.css

# the 2>/dev/null redirects stderr to the null device so that we don't get error
# messages in the console when rm has nothing to remove
clean:
    @-rm -v *.log 2>/dev/null
    @-rm -v *.out 2>/dev/null
    @-rm -v *.aux 2>/dev/null
    @-rm -v *.xml 2>/dev/null
    @-rm -v *.pdf 2>/dev/null
    @-rm -v *.html 2>/dev/null
    @-rm -v bibstring.txt 2>/dev/null

Some notes on the makefile. I execute bibtex ignoring errors (the dash before 'bibtex') because bibtex will exit with an error if it doesn't find any citations, or if there is no bibliography. Each iteration of pdflatex is output to a logfile named "document_pdf_<i>.log", where "<i>" is the iteration number. The output of pdflatex and bibtex is suppressed by dumping it to the logfile (I find the verbosity useless to have in the console).

The shell script in the PDF recipe iterates up to four times. The first thing it does is grep the output of the most recent run of pdflatex, looking for the line where latex recommends that we "Rerun" latex. If it finds such a line, it sets the shell variable STABILIZED to that string; otherwise it gets the empty string. Then we test to see if the string is empty. If it's empty, we're done, so we break the loop. If it's not, then we rerun pdflatex.

The shell script in the HTML recipe iterates over each of the (potentially multiple, potentially zero) bibliography files, processing each of them with latexml. It then appends the string "-bibliography=<filename>.xml" to the BIBSTRING shell variable. The last thing it does is echo the contents of that shell variable to the file "bibstring.txt". This is so that subsequent commands run by make can find it.

Personal Dynamic DNS in Ubuntu


I finally got around to purchasing a personal server and one of the first things I did was set up a private DNS server for cheshirekow.com. As it turns out, setting it up to be dynamic is quite easy. In this post I'll go through the steps I took to get it up and running.

I won't bother with all the fun stuff about how dynamic DNS works or how to properly configure everything; instead I'll just post my configuration files for posterity.

More detailed information on configuring bind can be found in the Ubuntu Server Guide. A good article on nsupdate and dynamic updates to bind can be found on jeff garzik's linux pages. I found the information I needed on Network Manager hooks from sysadmin's journey.

Why Dynamic DNS?

Mostly because I'm lazy. I have a work laptop, a personal desktop, a netbook, an android tablet, and an android phone. I'm constantly scp'ing files from one to another, and I really hate having to write out the ip address explicitly all the time. Since I own the domain cheshirekow.com, I figured it would be really slick to be able to address all of my machines as subdomains. For instance, I could label them as "laptop.cheshirekow.com", "desktop.cheshirekow.com", "netbook.cheshirekow.com", "tablet.cheshirekow.com", and "phone.cheshirekow.com". If these dns entries are automatically updated when each of these devices connects to a wifi access point using DHCP, then I can get files from one machine to another without even being physically near them.

named.conf.local

Following the ubuntu guide, I edited /etc/bind/named.conf.local to look like the following:

//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";

zone "cheshirekow.com" {
    type master;
    file "/var/lib/bind/db.cheshirekow.com";
    allow-transfer { aaa.bbb.ccc.ddd; };
    allow-update { key "user.cheshirekow.com."; };
};

Note that the file is in /var/lib/bind/db.cheshirekow.com, not in /etc/bind/db.cheshirekow.com like a lot of tutorials will tell you. This is because ubuntu prevents bind from writing to files in /etc/bind. You can either change the apparmor profile for bind, or, as I do, just put the file where it's supposed to go, in /var/lib/bind/ (there's a note in the bind apparmor profile about this). Putting it in /etc/bind is fine if the dns entries are all static, but if there are dynamic entries then bind will try to create a .jnl file in the same directory as the db.xxx file. Since bind can't write to /etc/bind, we need to put the db file somewhere else.

Also, note that aaa.bbb.ccc.ddd is the ip address of my secondary name server for cheshirekow.com. I'm using afraid.org to host my secondary DNS.

The allow-update line allows the user user@cheshirekow.com to update the dns entries (the dynamic part) as verified by a keypair (generating the keypair comes later). Note that I don't use the literal "user".

/var/lib/bind/db.cheshirekow.com

The next thing was to create the db.cheshirekow.com file which looks like this.

$ORIGIN .
$TTL 604800 ; 1 week
cheshirekow.com     IN SOA  ns1.cheshirekow.com. cheshirekow.gmail.com. (
                9          ; serial
                604800     ; refresh (1 week)
                86400      ; retry (1 day)
                2419200    ; expire (4 weeks)
                604800     ; minimum (1 week)
                )
            NS  ns1.cheshirekow.com.
            A   aaa.bbb.ccc.ddd
            AAAA    ::1
$ORIGIN cheshirekow.com.
ns1         A   aaa.bbb.ccc.ddd
www         A   eee.fff.ggg.hhh

Note that aaa.bbb.ccc.ddd is the IP address of the name server itself and eee.fff.ggg.hhh is the IP address of my web server (where you are currently reading this). Also note that my email address is cheshirekow@gmail.com, but it is written in this file as cheshirekow.gmail.com. (the "@" is replaced by a dot, and the name ends with a trailing dot).

You can (should?) also set up reverse dns entries for all these things but I did not as the server is actually sitting in a different physical domain. In other words I don't own a network of ip-addresses so there's no reason to expect my server to be queried for reverse dns lookups.

Create Keys

The next thing we need to do is set up a key that we can use to do dynamic updates. This can be done on a separate machine from the name server… it doesn't matter.

user@ns1:~$ mkdir .bind
user@ns1:~$ cd .bind
user@ns1:~$ dnssec-keygen -a HMAC-MD5 -b 512 -n USER user.cheshirekow.com.

Note that "USER" is a literal string, not a placeholder for something that you create. Also note that "user.cheshirekow.com" is the name of this key, and corresponds to the email address "user@cheshirekow.com".

This command creates a public and private key.

user@ns1:~/.bind$ ls -l
total 8
-rw------- 1 user user 127 2011-06-10 16:51 Kuser.cheshirekow.com.+157+56713.key
-rw------- 1 user user 229 2011-06-10 16:51 Kuser.cheshirekow.com.+157+56713.private

Install Keys

Now we create a file to store these keys. I put them in /etc/bind/keys.local

key "user.cheshirekow.com." {
    algorithm HMAC-MD5;
    secret "2345A/bkd7GDcu9orjzblkj2r37ajglk489DLHD/m987addzjDCadsh8 bbIUOY809glkashDEmPj5alIUoiEeA==";
};

Note that this is not a real key, but random gibberish I pounded out on the keyboard. In reality, this key is copied directly from Kuser.cheshirekow.com.+157+56713.key.

I then added an include for this file to /etc/bind/named.conf, so that it looks like this:

// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the 
// structure of BIND configuration files in Debian, *BEFORE* you customize 
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
include "/etc/bind/keys.local";

Restart bind

That's it for the bind setup, so restart it:

user@ns1:~$ sudo /etc/init.d/bind9 restart

Client Update Script

I then created the following update script in /etc/NetworkManager/dispatcher.d/99updatedns. This script is called as a hook from network manager every time an interface goes up or down. It receives two parameters. The first is the name of the interface (i.e. eth0 or wlan0) and the second is the status (i.e. up or down).

#!/bin/bash

INTERFACE=$1
STATUS=$2
DIRECTORY="/home/user/Codes/shell/dyndns"

if [ "$STATUS" = "up" ]; then
    IPADDRESS=`ifconfig $INTERFACE | grep inet | grep -v inet6 | cut -d ":" -f 2 | cut -d " " -f 1`
    cp $DIRECTORY/nsupdate_src.txt /tmp/nsupdate.txt
    sed -i "s/IPADDRESS/$IPADDRESS/" /tmp/nsupdate.txt 
    nsupdate -k /home/user/.bind/Kuser.cheshirekow.com.+157+56713.private -v /tmp/nsupdate.txt
fi

Note that this script requires the file nsupdate_src.txt, which is here:

server ns1.cheshirekow.com
zone cheshirekow.com
update delete netbook.cheshirekow.com. A
update add netbook.cheshirekow.com. 86400 A IPADDRESS
show
send

The script extracts the ip address from the output of ifconfig for the correct interface, copies the file to /tmp/, replaces IPADDRESS with the actual address of the machine, and then calls nsupdate using the private key and the file. This script is saved as /etc/NetworkManager/dispatcher.d/99updatedns, owned by root and flagged executable. Note that this script accesses the key for my specific user, which is fine in my case because my netbook is a single-user machine. If the machine has multiple users, you may want to store the key and text file in /home/root or something.

Result

The result of this process is that netbook.cheshirekow.com always points to the ip address of my netbook, given that it is connected to a wifi access point. Whenever the netbook (re)connects to an access point, the network manager calls the script, and the dns entry on ns1.cheshirekow.com is updated.

(Update) Better Script

I changed the update script a little bit. Since I use a wired connection on my laptop most of the time, I don't want the IP address for the wireless connection to supersede that of the wired connection if it is active.

#!/bin/bash

INTERFACE=$1
STATUS=$2
DIRECTORY="/home/user/Codes/shell/dyndns"

echo "network interface change hook:"
echo "-------------------";

#first, check to see if eth0 is up and running
ETH0STR=`ifconfig eth0 | grep inet | grep -v inet6`
if [ -z "$ETH0STR" ]
then
    echo "eth0 has no address (probably is down or disconnected)"
    echo "checking interface $INTERFACE whose changed launched this script"
    if [ "$STATUS" = "up" ]
    then
        IPADDRESS=`ifconfig $INTERFACE | grep inet | grep -v inet6 | cut -d ":" -f 2 | cut -d " " -f 1`
        if [ -z "$IPADDRESS" ]
        then
            echo "$INTERFACE has no address, aborting (str = $IPADDRESS)"
        else
            echo "$INTERFACE has address $IPADDRESS"
            cp $DIRECTORY/nsupdate_src.txt /tmp/nsupdate.txt
            sed -i "s/IPADDRESS/$IPADDRESS/" /tmp/nsupdate.txt 
            nsupdate -k /home/user/.bind/Kuser.cheshirekow.com.+157+56713.private -v /tmp/nsupdate.txt            
        fi
    else
        echo "Status is not 'up', aborting"
    fi
else
    IPADDRESS=`echo $ETH0STR | cut -d ":" -f 2 | cut -d " " -f 1`
    echo "eth0 has address $IPADDRESS, ignoring changed interface $INTERFACE"
    cp $DIRECTORY/nsupdate_src.txt /tmp/nsupdate.txt
    sed -i "s/IPADDRESS/$IPADDRESS/" /tmp/nsupdate.txt 
    nsupdate -k /home/user/.bind/Kuser.cheshirekow.com.+157+56713.private -v /tmp/nsupdate.txt
fi

Edit:

For some reason whenever I update db.cheshirekow.com bind refuses to restart correctly. When I do this update, I have to delete the file /var/lib/bind/db.cheshirekow.com.jnl and restart.

Inkbook Introduction


Inkbook is a new project I've started to replace Xournal for my needs. What I really want is a tightly integrated, full-featured inking experience for Ubuntu.

What's wrong with xournal?

Xournal is great. I use it all the time. However, there are a lot of really simple features I would like it to have. I took a look at the code, and it's pretty hard to understand. The lack of good documentation means it's not worth my time. There's no sense in committing a ton of time trying to learn the code base, just to find out that an apparently simple feature is impossible to implement without restructuring the whole thing. So, I'm just restructuring the whole thing :).

I'll start by going through all the things that I don't like about Xournal.

Memory Usage

One of the biggest problems I have with Xournal is its memory usage. A typical 10 page Xournal document consumes around 300MB of RAM, and takes about 60 seconds to open. This is a big nuisance to me. I suspect that Xournal stores the whole document in memory, which is the cause.

Bitmaps

A lot of times I really want to paste some snippet into my notes. There is a Xournal patch for using bitmaps, and it's not terrible, but the images render fuzzy and it's difficult to scale and place them in the document. I usually end up exporting the whole thing to PDF for later reference. I've written a script which can copy parts of the screen to the clipboard (like the Adobe Reader snapshot tool), so I'd really like to be able to drop a bunch of images into a notebook and draw around them, write on them, etc.

Layers

I think that layers are a really useful tool, but it's hard to use them in xournal. First of all, you have to select them from a drop down list at the bottom of the screen, not a list box. You can't reorder them. And if you move to a lower layer, all the layers above it disappear.

Pen Options

Only three line widths and no fast-access colorwheel.

Shapes

You can only draw shapes by having the recognizer interpret them. Why not have shape tools that let you drop a shape and then resize or move it around?

No lasso tool

Rectangular selection just doesn't cut it for me. Especially when I have potato shaped drawings that I want to move around, without moving the text around it.

Inkbook

What I really want is a digital notebook. Inkbook aims to be just that. Inkbook is really a merger of features that I like from both Xournal and Inkscape, and an attempt to fix some of the problems I have with both. Here is a list of the features I'm currently focusing on.

  • very large documents
  • ability to organize notebooks (like folders)
  • ability to link individual pages to multiple notebooks
  • multiple layers per page
  • multiple page sizes
  • continuous range of brush sizes
  • continuous color picking
  • bitmap cut & paste
  • grouping of paths
  • objects (shapes)
  • collaboration (openbook module?)

Very large documents and Organization

I want to be able to have several dozen pages in a document, which basically means that the entire document can't be stored in memory. Therefore, I'm attempting to store the data in an sqlite database. This also addresses the desire to have better organizational facilities. I'm implementing separate database objects for notebooks, pages, layers, objects, and paths.

A notebook is an ordered list of notebooks and an ordered list of pages (i.e. a folder). A page is an ordered list of layers. A layer is an ordered list of objects. An object is an ordered list of objects, images, or paths. A path is an ordered list of drawing primitives (most likely a one-to-one mapping to the cairo API).
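A containment model like this maps naturally onto a handful of tables. The sketch below is hypothetical (inkbook's actual schema may differ): each row records its parent and an ordinal, so every container behaves as an ordered list of its children, and only the rows currently being viewed ever need to be pulled into memory.

```python
import sqlite3

# Hypothetical sketch of the containment model, not inkbook's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notebook (id INTEGER PRIMARY KEY, parent INTEGER, ordinal INTEGER, name TEXT);
CREATE TABLE page     (id INTEGER PRIMARY KEY, notebook INTEGER, ordinal INTEGER);
CREATE TABLE layer    (id INTEGER PRIMARY KEY, page INTEGER, ordinal INTEGER);
-- an object belongs to a layer, or is nested inside another object
CREATE TABLE object   (id INTEGER PRIMARY KEY, layer INTEGER, parent INTEGER, ordinal INTEGER);
CREATE TABLE path     (id INTEGER PRIMARY KEY, object INTEGER, ordinal INTEGER, primitives BLOB);
""")

# one notebook holding two pages; listing a notebook touches only page rows
conn.execute("INSERT INTO notebook VALUES (1, NULL, 0, 'research')")
conn.executemany("INSERT INTO page VALUES (?, 1, ?)", [(1, 0), (2, 1)])
page_ids = [row[0] for row in conn.execute(
    "SELECT id FROM page WHERE notebook = 1 ORDER BY ordinal")]
print(page_ids)  # [1, 2]
```

The ordinal column is what preserves "ordered list" semantics; reordering a page is a couple of UPDATE statements rather than rewriting the whole document.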

Organization and View

For organizing notebooks, I plan to have a tree-view (i.e. directory tree). I'll have a thumbnail page view which shows the current page and those near it, and allows for scrolling through the whole notebook. This will be a custom widget which renders each of the pages via their thumbnail image. I'll have a list-view to organize layers on the page. The list view will also list complex objects so they can be easily selected and edited (but it won't display any information about hand-drawn paths, as there will be a large number of these). The main view will display a viewport of the page.

Current Progress

I've got a proof-of-concept running with the sqlite database file backend and working views for the notebook organization and layers. I've got a proof-of-concept for the thumbnail view, but it needs more work. It's written in C++ and meant to be very easy to understand and extend. I'm using Gtkmm3 (unstable) because it's GTK, but it's C++, and it has cairo as the native API. Here's a screenshot:

Inkbook Screenshot

Emulating Adobe Reader's Snapshot tool in Ubuntu


In this post I presented a simple python script for copying image files to the gnome clipboard. In this post, I'll show how I use that script with ImageMagick to emulate the snapshot tool in Adobe Reader. I originally wanted to use this just for Evince (the default Ubuntu document reader) but have found all kinds of situations where it's handy outside of Evince.

The end goal here is to have a key binding which, when pressed, starts a "selection mode" where any part of the screen can be selected. That selection is then copied to the gnome clipboard so that it can be pasted into a document (i.e. a libreoffice or openoffice document, though I usually use it to paste into Xournal).

The script

Here is the script I use to do just that.

#!/bin/bash

#prefix for filename
prefix='screenshot';
timestamp=`date '+%d-%m-%y-%N'`;
extension='.png';

file="/tmp/$prefix-$timestamp$extension";

import $file
python $HOME/Codes/python/imgclip/imgclip.py $file

The import tool is part of the ImageMagick package. It does the screenshot-taking part by changing the mouse cursor to a "selection" tool. It saves the screenshot in /tmp/screenshot-TIMESTAMP.png, where the timestamp is generated by the date command. The script then runs imgclip to copy the screenshot to the clipboard. I have this script bound to a command in compiz. The command is

bash $HOME/Codes/shell/screenshot/screenshot.sh

Here is a screencast of it in action:

ImgClip (xclip for images)


Here is a little python script I wrote to emulate xclip for image files. xclip, if you don't know, is a simple command line tool for setting/retrieving text from the clipboard. For instance the following command

ls -l | xclip -i -selection clipboard

copies the current directory listing to the gnome clipboard, where it can then be ctrl + v pasted into a forum post, email, etc.

I really wanted something that does the same for image files. Unfortunately the following does not work:

cat image.png | xclip -i -selection clipboard

I'm not sure of the details of how the gnome clipboard works… but this doesn't do it. I discovered a way to do it easily using pygtk. Here is a python script that does exactly what I want:

#! /usr/bin/python

import pygtk
pygtk.require('2.0')
import gtk
import os
import sys

def copy_image(f):
    assert os.path.exists(f), "file does not exist"
    image = gtk.gdk.pixbuf_new_from_file(f)

    clipboard = gtk.clipboard_get()
    clipboard.set_image(image)
    clipboard.store()


copy_image(sys.argv[1]);

P.S. I pasted this code into this post using the following command

cat imgclip.py | xclip -i -selection clipboard

Make sure to set the script to executable

chmod +x imgclip.py

And then use it like this

./imgclip.py /path/to/some/image.png

My cmake Bootstrap Process


I've been using eclipse as an IDE for a long time. For Java, eclipse is ridiculously beautiful. It almost writes your programs for you. Eclipse CDT is lacking some features I would really like to see, and it's impossible to understand the code base and write plugins (and so I have not implemented anything myself). However, as a traditionalist, I like my IDE to work on top of the command line tools, not in place of them. Also, I like that there are PHP, Javascript, XML, Perl, and Latex plugins, which means that my life is much easier given that I work on all my projects with the same familiar interface.

Lately, I've also started using CMake (instead of my ridiculously complicated but super-stream-lined personal makefile system). The CMake project generator for Eclipse is really awesome, and I'm completely sold on using cmake now. However, creating a new project from within eclipse doesn't work as well as I'd like. Also, I have a couple of other files that I generally include in every project, so I've started bootstrapping my projects by using a template directory, and a pair of scripts.

Without them, I found myself repeating several tasks every time I started a new project… just like when I was starting off with programming.

  1. copy an old project
  2. rip out all the unnecessary classes
  3. chop up the CMakeLists.txt files to get them down to bare bones
  4. initialize a git repository
  5. make an initial commit
  6. create a build directory
  7. export my development root directory so cmake can find it
  8. call cmake (which, for eclipse projects, is a chore)

All this gets a little tedious. I'd really like to have an eclipse plugin that does this… but I'm not that skilled in the advanced features of Eclipse, so I started automating this with a script. Most of the things that I've been working on lately have been Gtkmm programs. So I've created a barebones project directory with stubs for all of the common parts I need:

  • all the gtkmm cmake find modules
  • a stub CMakeLists.txt
  • a stub glade file
  • a stub UI xml file
  • a simple main.cpp
  • one class called Application to load the glade file and the UI
  • some git helper scripts
  • a .gitignore file
  • a doxyfile
  • a doxygen mainpage file
  • a bootstrap script

I also wrote a script which does all of the redundant tasks mentioned above. I'll go through all the files in the template directory and the new project script, describing what they're for. The template files can be downloaded as a gzipped tarball here. The new project script can be retrieved here.

Scripts I Always Use

countlines.pl

This is a simple script that I use to count the number of lines of code in a project. It reports the total number of lines of code as a raw number, excluding whitespace, and excluding comments.

#!/usr/bin/perl

my @extensions  = ("cpp","h","hpp");
my @directories = ("src");
my @filelist;

foreach $directory( @directories )
{
    print "searching $directory\n";
    foreach $extension( @extensions )
    {
        print "   for *.$extension\n";
        open INLIST, "find $directory -name '*.$extension' |";
        while(<INLIST>)
        {
            chomp;
            push(@filelist, $_);
        }
        close INLIST;
    }
}

$totallines     = 0;
$written        = 0;
$noncomment     = 0;

my $blockcomment    = 0;
my $comment         = 0;

print "parsing files\n";

foreach $file( @filelist )
{
    print "   $file\n";
    open INFILE, "./" . $file or die "failed to open " . $file . "\n";

    while(<INFILE>)
    {
        chomp;
        s/\s//g;

        $totallines++;

        $comment        = 0;
        if( /^(\/\/)/ )   {$comment        = 1;  }
        if( /(\/\*)/ )    {$blockcomment   = 1;  }
        if( /(\*\/)/ )    {$blockcomment   = 0;  }

        $noncomment++   unless( length($_) == 0 || $comment || $blockcomment );
        $written++      unless( length($_) == 0 );
    }

    close INFILE;
}

print "\n";
print "total:       " . $totallines     . "\n";
print "written:     " . $written        . "\n";
print "noncomment:  " . $noncomment     . "\n";

$dummy = <>;

logChanges.sh

This is a helper script I use to format my git commits the way I like. I like to have a list of all the files changed at each commit stored right in the log. This script strips the comment hashes from the git status report.

#!/bin/bash

echo ""
echo "Files Changed:"
git status | sed -e "s/#\t//" -e "/^#/d"

commitAll.sh

This is a script that I call to commit all changes to the git repository. It calls logChanges.sh to generate a changelog changes.txt, and then opens nano to add a nice comment to the log entry.

#!/bin/bash

git add -A
bash logChanges.sh > changes.txt
nano changes.txt
git commit -F changes.txt

bootstrap.sh

This is a script that calls cmake from the build directory. It prepends $HOME/Codes/devroot to the CMake prefix path so that it can find all of my development libraries. It adds the corresponding lib/pkgconfig directory to the PKG_CONFIG_PATH variable so that pkg-config looks for my development library versions before checking the system versions. It also sets the prefix-path directory to be the install prefix. Then it calls cmake to generate Eclipse CDT4 project files, along with a source project and the option to build with debug flags.

#!/bin/bash

export PREFIX=$HOME/Codes/devroot

export SCRIPT_DIR=`dirname $0`;
export CMAKE_PREFIX_PATH=$PREFIX:$CMAKE_PREFIX_PATH
export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig/:$PKG_CONFIG_PATH
cmake -G "Eclipse CDT4 - Unix Makefiles" -DECLIPSE_CDT4_GENERATE_SOURCE_PROJECT=TRUE -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$PREFIX $SCRIPT_DIR

Configuration Files

doxy.config.in

Doxygen configuration file. The project, version, paths, and some other things are set via cmake variables.

#--------------------------------------------------
# Project related configuration options
#--------------------------------------------------
DOXYFILE_ENCODING      = UTF-8
PROJECT_NAME           = "${PROJECT_NAME}"
PROJECT_NUMBER         = v${${PROJECT_NAME}_VERSION_MAJOR}.${${PROJECT_NAME}_VERSION_MINOR}.${${PROJECT_NAME}_VERSION_BUGFIX}
OUTPUT_DIRECTORY       = ./doc
CREATE_SUBDIRS         = NO
OUTPUT_LANGUAGE        = English
BRIEF_MEMBER_DESC      = YES
REPEAT_BRIEF           = YES
ABBREVIATE_BRIEF       =
ALWAYS_DETAILED_SEC    = YES
INLINE_INHERITED_MEMB  = NO
FULL_PATH_NAMES        = YES
STRIP_FROM_PATH        = ${CMAKE_CURRENT_SOURCE_DIR}/src
STRIP_FROM_INC_PATH    = ${CMAKE_CURRENT_SOURCE_DIR}/src/
SHORT_NAMES            = NO
JAVADOC_AUTOBRIEF      = NO
QT_AUTOBRIEF           = NO
MULTILINE_CPP_IS_BRIEF = YES
INHERIT_DOCS           = YES
SEPARATE_MEMBER_PAGES  = NO
TAB_SIZE               = 4
ALIASES                =
OPTIMIZE_OUTPUT_FOR_C  = NO
OPTIMIZE_OUTPUT_JAVA   = NO
OPTIMIZE_FOR_FORTRAN   = NO
OPTIMIZE_OUTPUT_VHDL   = NO
EXTENSION_MAPPING      =
BUILTIN_STL_SUPPORT    = YES
CPP_CLI_SUPPORT        = NO
SIP_SUPPORT            = NO
IDL_PROPERTY_SUPPORT   = YES
DISTRIBUTE_GROUP_DOC   = NO
SUBGROUPING            = YES
TYPEDEF_HIDES_STRUCT   = NO
SYMBOL_CACHE_SIZE      = 0

#--------------------------------------------------
# Build related configuration options
#--------------------------------------------------
EXTRACT_ALL            = YES
EXTRACT_PRIVATE        = YES
EXTRACT_STATIC         = NO
EXTRACT_LOCAL_CLASSES  = YES
EXTRACT_LOCAL_METHODS  = NO
EXTRACT_ANON_NSPACES   = NO
HIDE_UNDOC_MEMBERS     = NO
HIDE_UNDOC_CLASSES     = NO
HIDE_FRIEND_COMPOUNDS  = NO
HIDE_IN_BODY_DOCS      = NO
INTERNAL_DOCS          = NO
CASE_SENSE_NAMES       = NO
HIDE_SCOPE_NAMES       = NO
SHOW_INCLUDE_FILES     = YES
FORCE_LOCAL_INCLUDES   = NO
INLINE_INFO            = YES
SORT_MEMBER_DOCS       = YES
SORT_BRIEF_DOCS        = YES
SORT_MEMBERS_CTORS_1ST = NO
SORT_GROUP_NAMES       = YES
SORT_BY_SCOPE_NAME     = YES
GENERATE_TODOLIST      = YES
GENERATE_TESTLIST      = YES
GENERATE_BUGLIST       = YES
GENERATE_DEPRECATEDLIST= YES
ENABLED_SECTIONS       =
MAX_INITIALIZER_LINES  = 30
SHOW_USED_FILES        = YES
SHOW_DIRECTORIES       = YES
SHOW_FILES             = YES
SHOW_NAMESPACES        = YES
FILE_VERSION_FILTER    =
LAYOUT_FILE            =

#--------------------------------------------------
# configuration options related to warning and progress messages
#--------------------------------------------------
QUIET                  = NO
WARNINGS               = YES
WARN_IF_UNDOCUMENTED   = YES
WARN_IF_DOC_ERROR      = YES
WARN_NO_PARAMDOC       = NO
WARN_FORMAT            = "$file:$line: $text"
WARN_LOGFILE           =

#--------------------------------------------------
# configuration options related to the input files
#--------------------------------------------------
INPUT = ${CMAKE_CURRENT_SOURCE_DIR}/src
        ${CMAKE_CURRENT_SOURCE_DIR}/include
        ${CMAKE_CURRENT_SOURCE_DIR}/docs/pages
        ${CMAKE_CURRENT_BINARY_DIR}/src
        ${CMAKE_CURRENT_BINARY_DIR}/include
        ${CMAKE_CURRENT_BINARY_DIR}/docs/pages

INPUT_ENCODING         = UTF-8
FILE_PATTERNS          = *.cpp
                         *.h
                         *.hpp
RECURSIVE              = YES
EXCLUDE                =
EXCLUDE_SYMLINKS       = NO
EXCLUDE_PATTERNS       =
EXCLUDE_SYMBOLS        =
EXAMPLE_PATH           =
EXAMPLE_PATTERNS       =
EXAMPLE_RECURSIVE      = NO
IMAGE_PATH             =
INPUT_FILTER           =
FILTER_PATTERNS        =
FILTER_SOURCE_FILES    = NO

#--------------------------------------------------
# configuration options related to source browsing
#--------------------------------------------------
SOURCE_BROWSER         = YES
INLINE_SOURCES         = NO
STRIP_CODE_COMMENTS    = YES
REFERENCED_BY_RELATION = NO
REFERENCES_RELATION    = NO
REFERENCES_LINK_SOURCE = YES
USE_HTAGS              = NO
VERBATIM_HEADERS       = YES

#--------------------------------------------------
# configuration options related to the alphabetical class index
#--------------------------------------------------
ALPHABETICAL_INDEX     = NO
COLS_IN_ALPHA_INDEX    = 5
IGNORE_PREFIX          =

#--------------------------------------------------
# configuration options related to the HTML output
#--------------------------------------------------
GENERATE_HTML          = YES
HTML_OUTPUT            = html
HTML_FILE_EXTENSION    = .html
HTML_HEADER            =
HTML_FOOTER            =
HTML_STYLESHEET        =
HTML_TIMESTAMP         = YES
HTML_ALIGN_MEMBERS     = YES
HTML_DYNAMIC_SECTIONS  = NO
GENERATE_DOCSET        = NO
DOCSET_FEEDNAME        = "Doxygen generated docs"
DOCSET_BUNDLE_ID       = org.doxygen.Project
GENERATE_HTMLHELP      = NO
CHM_FILE               =
HHC_LOCATION           =
GENERATE_CHI           = NO
CHM_INDEX_ENCODING     =
BINARY_TOC             = NO
TOC_EXPAND             = NO
GENERATE_QHP           = NO
QCH_FILE               =
QHP_NAMESPACE          =
QHP_VIRTUAL_FOLDER     = doc
QHP_CUST_FILTER_NAME   =
QHP_CUST_FILTER_ATTRS  =
QHP_SECT_FILTER_ATTRS  =
QHG_LOCATION           =
GENERATE_ECLIPSEHELP   = NO
ECLIPSE_DOC_ID         = org.doxygen.Project
DISABLE_INDEX          = NO
ENUM_VALUES_PER_LINE   = 4
GENERATE_TREEVIEW      = YES
USE_INLINE_TREES       = NO
TREEVIEW_WIDTH         = 400
FORMULA_FONTSIZE       = 12
SEARCHENGINE           = NO
SERVER_BASED_SEARCH    = NO

#--------------------------------------------------
# configuration options related to the LaTeX output
#--------------------------------------------------
GENERATE_LATEX         = NO
LATEX_OUTPUT           = latex
LATEX_CMD_NAME         = latex
MAKEINDEX_CMD_NAME     = makeindex
COMPACT_LATEX          = YES
PAPER_TYPE             = letter
EXTRA_PACKAGES         = amsmath
                         amsfonts
                         hyperref
LATEX_HEADER           =
PDF_HYPERLINKS         = YES
USE_PDFLATEX           = YES
LATEX_BATCHMODE        = YES
LATEX_HIDE_INDICES     = NO
LATEX_SOURCE_CODE      = NO

#--------------------------------------------------
# configuration options related to the RTF output
#--------------------------------------------------
GENERATE_RTF           = NO
RTF_OUTPUT             = rtf
COMPACT_RTF            = NO
RTF_HYPERLINKS         = NO
RTF_STYLESHEET_FILE    =
RTF_EXTENSIONS_FILE    =

#--------------------------------------------------
# configuration options related to the man page output
#--------------------------------------------------
GENERATE_MAN           = NO
MAN_OUTPUT             = man
MAN_EXTENSION          = .3
MAN_LINKS              = NO

#--------------------------------------------------
# configuration options related to the XML output
#--------------------------------------------------
GENERATE_XML           = YES
XML_OUTPUT             = xml
XML_SCHEMA             =
XML_DTD                =
XML_PROGRAMLISTING     = NO

#--------------------------------------------------
# configuration options for the AutoGen Definitions output
#--------------------------------------------------
GENERATE_AUTOGEN_DEF   = NO

#--------------------------------------------------
# configuration options related to the Perl module output
#--------------------------------------------------
GENERATE_PERLMOD       = NO
PERLMOD_LATEX          = NO
PERLMOD_PRETTY         = YES
PERLMOD_MAKEVAR_PREFIX =

#--------------------------------------------------
# Configuration options related to the preprocessor
#--------------------------------------------------
ENABLE_PREPROCESSING   = YES
MACRO_EXPANSION        = NO
EXPAND_ONLY_PREDEF     = NO
SEARCH_INCLUDES        = YES
INCLUDE_PATH           =
INCLUDE_FILE_PATTERNS  =
PREDEFINED             =
EXPAND_AS_DEFINED      =
SKIP_FUNCTION_MACROS   = YES

#--------------------------------------------------
# Configuration::additions related to external references
#--------------------------------------------------
TAGFILES               =
GENERATE_TAGFILE       = ${PROJECT_NAME}.tag
ALLEXTERNALS           = NO
EXTERNAL_GROUPS        = YES
PERL_PATH              = /usr/bin/perl

#--------------------------------------------------
# Configuration options related to the dot tool
#--------------------------------------------------
CLASS_DIAGRAMS         = YES
MSCGEN_PATH            = ${CMAKE_MSCGEN_PATH}
HIDE_UNDOC_RELATIONS   = YES
HAVE_DOT               = NO
DOT_FONTNAME           = FreeSans
DOT_FONTSIZE           = 10
DOT_FONTPATH           =
CLASS_GRAPH            = YES
COLLABORATION_GRAPH    = YES
GROUP_GRAPHS           = YES
UML_LOOK               = NO
TEMPLATE_RELATIONS     = NO
INCLUDE_GRAPH          = YES
INCLUDED_BY_GRAPH      = YES
CALL_GRAPH             = NO
CALLER_GRAPH           = NO
GRAPHICAL_HIERARCHY    = YES
DIRECTORY_GRAPH        = YES
DOT_IMAGE_FORMAT       = png
DOT_PATH               = ${CMAKE_DOT_PATH}
DOTFILE_DIRS           =
DOT_GRAPH_MAX_NODES    = 50
MAX_DOT_GRAPH_DEPTH    = 0
DOT_TRANSPARENT        = NO
DOT_MULTI_TARGETS      = NO
GENERATE_LEGEND        = YES
DOT_CLEANUP            = YES

CMakeLists.txt

The root CMakeLists file. The placeholder project name is replaced by the new-project script.

cmake_minimum_required(VERSION 2.8)

# defines the project name
project (projectName)
set( ${CMAKE_PROJECT_NAME}_VERSION_MAJOR 0 )
set( ${CMAKE_PROJECT_NAME}_VERSION_MINOR 1 )
set( ${CMAKE_PROJECT_NAME}_VERSION_BUGFIX 0 )

# adds the project-specific cmake module directory cmake/Modules to the cmake
# search path
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/Modules/")

# finds pkg-config
find_package(PkgConfig REQUIRED)

# add the src/ subdirectory to the list of directories cmake processes
add_subdirectory(src)
add_subdirectory(include)

# configure the doxygen configuration
configure_file(
    "${PROJECT_SOURCE_DIR}/doxy.config.in"
    "${PROJECT_BINARY_DIR}/doxy.config"
    )

# use Jan Woetzel's doxygen doc target
include("${CMAKE_MODULE_PATH}/TargetDoc.cmake" OPTIONAL)

src/CMakeLists.txt

The CMakeLists for the application.

find_package(GTKmm REQUIRED)
#find_package(Boost COMPONENTS iostreams REQUIRED)

include_directories(
#   ${Boost_INCLUDE_DIRS}
    ${GTKmm_INCLUDE_DIRS}
    )

set(LIBS ${LIBS}
#   ${Boost_LIBRARIES}
    ${GTKmm_LIBRARIES}
    )

add_executable( ${CMAKE_PROJECT_NAME} main.cpp Application.cpp)

target_link_libraries( ${CMAKE_PROJECT_NAME} ${LIBS})

configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/mainwindow.glade
    ${CMAKE_CURRENT_BINARY_DIR}/mainwindow.glade COPYONLY )

configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/mainwindow.xml
    ${CMAKE_CURRENT_BINARY_DIR}/mainwindow.xml COPYONLY )

New Project Script

start_new_project.sh

This is the script I use to start a new project. It copies the template directory to a new directory in $HOME/Codes/cpp/ named after the new project, and replaces the placeholder project name in the root CMakeLists with the new name. It initializes a git repository in the source directory and makes an initial commit with all the files. It then creates a build directory in $HOME/Codes/cpp/builds, changes to that directory, and calls the bootstrap script to generate the project files and makefiles.

#!/bin/bash

EXPECTED_ARGS=1
if [ $# -ne $EXPECTED_ARGS ]
then
    echo "Usage: `basename $0` {arg}"
    exit 1
fi

PROJECT_DIR="$HOME/Codes/cpp/$1"
BUILD_DIR="$HOME/Codes/cpp/builds/$1"
TEMPLATE_DIR="$HOME/Codes/cpp/template"

if [ -d "$HOME/Codes/cpp/$1" ]
then
    echo "The directory '$PROJECT_DIR' already exists"
    exit 1
fi

echo "Making $PROJECT_DIR"
mkdir $PROJECT_DIR

echo "Copying files from $TEMPLATE_DIR to $PROJECT_DIR"
cp -rv $TEMPLATE_DIR/* $PROJECT_DIR/

echo "Setting project name in CMakeLists.txt"
sed -i "s/projectName/$1/" $PROJECT_DIR/CMakeLists.txt

echo "Changing to project directory"
cd $PROJECT_DIR
echo "   pwd: `pwd`"

echo "Initializing the git repository"
git init
git add -A
git commit -m "Initial commit, template project"

echo "Changing to the build directory"
mkdir $BUILD_DIR
cd $BUILD_DIR
echo "   pwd: `pwd`"

echo "Bootstrapping cmake"
$PROJECT_DIR/bootstrap.sh

Program Source Files

src/main.cpp

/*
 *  \file   main.cpp
 *  \date   May 31, 2011
 */

#include <gtkmm.h>
#include "Application.h"

int main(int argc, char** argv)
{
    Gtk::Main kit(argc, argv);
    Application app;
    kit.run(app.getWindow());
}

src/Application.h

/*
 *  \file   Application.h
 *  \date   Mar 19, 2011
 */

#ifndef APPLICATION_H_
#define APPLICATION_H_

#include <gtkmm.h>

class Application
{
    private:
        Glib::RefPtr<Gtk::Builder>      m_builder;
        Glib::RefPtr<Gtk::UIManager>    m_ui;
        Gtk::Window*                    m_mainWindow;

    public:
        Application();
        virtual ~Application();

        Gtk::Window& getWindow();
};

#endif /* APPLICATION_H_ */

src/Application.cpp

/*
 *  \file   Application.cpp
 *  \date   Apr 29, 2011
 */

#include "Application.h"

Application::Application():
    m_mainWindow(0)
{
    using namespace Glib;
    using namespace Gtk;

    m_builder = Builder::create_from_file("mainwindow.glade");
    m_mainWindow = 0;
    m_builder->get_widget("main_window",m_mainWindow);

    VBox* vbox = 0;
    m_builder->get_widget("main_vbox", vbox);

    m_ui = UIManager::create();
    m_ui->add_ui_from_file("mainwindow.xml");
    Gtk::Widget* menubar = m_ui->get_widget("/mb_main");
    Gtk::Widget* toolbar = m_ui->get_widget("/tb_main");
    vbox->pack_start(*menubar,false,false);
    vbox->pack_start(*toolbar,false,false);
    vbox->reorder_child(*menubar,0);
    vbox->reorder_child(*toolbar,1);

    m_mainWindow->show_all();
}

Application::~Application()
{

}

Gtk::Window& Application::getWindow(){ return *m_mainWindow; }

src/mainwindow.glade

(failed import from wordpress)

src/mainwindow.xml

(failed import from wordpress)

Gtkmm 3.0 in Ubuntu 11.04


Lately I've been writing a lot of code using gtkmm 3.0. The latest stable branch is 2.4, but that is based on GTK+ 2… and I really want to use GTK+ 3. Why? Because GTK+ 3 uses cairo for its native drawing API, and cairo is sweet. gtkmm uses cairomm, which is even sweeter. So here are my notes on getting gtkmm-3 built and installed on Ubuntu 11.04. I've written a shell script to do all the work, but I'll go through it section by section to explain what's going on.

Setup

Traditionally, /usr/ is used for normal system stuff and /usr/local is for experimental stuff (it's essentially the same as /usr, but it makes things easy to find and remove after they break your system). However, since I originally started developing with the unstable gtkmm branch on Ubuntu 10.04, which didn't even have a package for GTK+ 3 in the repositories, I got in the habit of installing things into my home directory… since it's just for testing anyway. So I create a directory $HOME/Codes/devroot (a development root filesystem) where I install all the "unstable" packages I'm using, and where I practice-install the programs/libraries I'm writing.

I also make a directory $HOME/Codes/cpp/gnome where I download all the source tarballs and do the building.

#!/bin/bash
cd $HOME

export BASE=$HOME/Codes/devroot
export PATH=$BASE/bin:$PATH
export LD_LIBRARY_PATH=$BASE/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$BASE/lib/pkgconfig:$PKG_CONFIG_PATH
export XDG_DATA_DIRS=$BASE/share:$XDG_DATA_DIRS
export ACLOCAL_FLAGS="-I $BASE/share/aclocal $ACLOCAL_FLAGS"

mkdir -p $BASE
mkdir -p Codes/cpp/gnome
cd Codes/cpp/gnome

Next I export some variables containing the versions of the unstable packages I'm using. This is so that I can quickly update the script for future installations and things.

export MM_COMMON_VER=0.9.5
export GLIBMM_VER=2.28.1
export ATKMM_VER=2.22.5
export PANGOMM_VER=2.28.2
export CAIROMM_VER=1.1.10
export GTKMM_VER=3.0.1

Then I download all the source tarballs

wget http://ftp.acc.umu.se/pub/GNOME/sources/mm-common/0.9/mm-common-$MM_COMMON_VER.tar.gz
wget http://ftp.acc.umu.se/pub/GNOME/sources/glibmm/2.28/glibmm-$GLIBMM_VER.tar.gz
wget http://ftp.acc.umu.se/pub/GNOME/sources/atkmm/2.22/atkmm-$ATKMM_VER.tar.gz
wget http://ftp.acc.umu.se/pub/GNOME/sources/pangomm/2.28/pangomm-$PANGOMM_VER.tar.gz
wget http://ftp.acc.umu.se/pub/GNOME/sources/gtkmm/3.0/gtkmm-$GTKMM_VER.tar.gz
wget http://cairographics.org/snapshots/cairomm-$CAIROMM_VER.tar.gz

And extract them all into $HOME/Codes/cpp/gnome (currently PWD).

tar xvzf mm-common-$MM_COMMON_VER.tar.gz
tar xvzf glibmm-$GLIBMM_VER.tar.gz
tar xvzf atkmm-$ATKMM_VER.tar.gz
tar xvzf pangomm-$PANGOMM_VER.tar.gz
tar xvzf gtkmm-$GTKMM_VER.tar.gz
tar xvzf cairomm-$CAIROMM_VER.tar.gz

Then we start installing the packages in the appropriate order. There are a few ugly things that we have to do in the process though.

cd mm-common-$MM_COMMON_VER
./configure --prefix=$BASE
make -j6
make install
cd ..

cd glibmm-$GLIBMM_VER
./configure --prefix=$BASE
make -j6
make install

The first ugly thing is that the glibmm package doesn't install the doctool perl script like it should… so we have to do that manually:

mkdir -p $BASE/share/glibmm-2.4/doctool/
cp docs/doc-install.pl $BASE/share/glibmm-2.4/doctool/

Then we continue installing the libraries

cd ..

cd atkmm-$ATKMM_VER
./configure --prefix=$BASE
make -j6
make install
cd ..

cd pangomm-$PANGOMM_VER
./configure --prefix=$BASE

The second ugly thing we have to do is move libfreetype.la to where libtool can find it. I'm not sure why it can't find this specific library, but for whatever reason, even after setting LD_LIBRARY_PATH, it always looks in /usr/lib. So I just pretend like no one's watching and create a symlink.

sudo ln -s /usr/lib/x86_64-linux-gnu/libfreetype.la /usr/lib/

And everything after that goes pretty normally.

make -j6
make install
cd ..

cd cairomm-$CAIROMM_VER
./configure --prefix=$BASE
make -j6
make install
cd ..

cd gtkmm-$GTKMM_VER
./configure --prefix=$BASE
make -j6
make install
cd ..

Getting Inkscape to Use Latex Fonts (in Windows)


Introduction

Creating graphics for LaTeX can be a real pain. There are a number of different options for doing this, though none of them is completely ideal. If you're comfortable using regular latex and generating DVIs, then the pstricks package is a very powerful tool. If you prefer generating PDF files (now an open standard), as I do, then the PGF/TikZ LaTeX packages are very powerful and can do just about everything… except that you have to code your graphics… which is a very slow iterative process. The GNU diagramming tool Dia can create block diagrams and flow charts and can export either pstricks or tikz code. Inkscape is a much nicer user-oriented graphical vector drawing tool, but it doesn't have native support for LaTeX, and creating graphics that include LaTeX math mode is a real pain. In any case, I've found a number of situations where a figure like the following was pretty easy to create in Inkscape.

[Figure: Inkscape Figure for LaTeX]

Getting the Fonts

In order to get the math fonts to look like they do in LaTeX, though, you need to have the fonts installed where Inkscape can find them. Unfortunately, LaTeX uses Type 1 PostScript fonts, while Inkscape can only find fonts that Windows has installed in the system, which means TrueType or OpenType fonts. Fortunately, you can get the "Computer Modern" fonts (Knuth's font, used as the default in LaTeX) from the TeX archives in these formats. Simply download these fonts and install them in Windows (drag them to C:/Windows/Fonts). The next time you run Inkscape, it will have these fonts available and you can use them in your pretty graphics.

Other Fonts

There are some other fonts used by LaTeX that aren't available in OTF or TTF format, though. The only (open-source) way I've found to convert Type 1 fonts to OTF is through an ancient tool called FontForge. It's an X Windows program, so you'll have to install the Cygwin X server, or, luckily, someone has ported it to MinGW (native Win32).

Tex Text plugin

Lately, I've been using the Tex Text plugin instead of using LaTeX fonts with regular Inkscape text. The interface is a little tedious, but it works quite well (and it's a lot less tedious than laying out the text by hand).

HTML Documentation with Equation Numbers (Referencing an External PDF Document with Doxygen's HTML Documentation)


So, anyone who uses Doxygen to document their code knows that it's pretty much the most amazing thing ever. Seriously, it's awesome. One cool thing about it is the ability to reference external documentation: for instance, if you use a library in your code, you can include the library's documentation with your own. However, let's say that (hypothetically of course) you're an academic… and the code you write implements some theoretical design or model. In that case, you may actually want your documentation to reference a paper, or a report that you've written. Perhaps even many such papers or reports.

The Problem

In particular, let's say that you're a grad student in the process of writing a paper (and of course you used LaTeX… because, well, why wouldn't you?) and you go and write some code to simulate or demonstrate some elements of that paper. In that case, some of your functions may implement certain equations. Some of your classes (if the code is object oriented) may implement certain models. For an example, let's say this is your paper:

Let's also assume that you've been good and have been documenting your code with Doxygen. Let's say you have some C++ class that implements your model, and its definition looks something like this:

/**
 *  \file       CTheoreticalModel.h
 *  \author     Joe Author (jauthor@institute.edu)
 *  \date       Apr 17, 2010
 *  \brief      Definition file for CTheoreticalModel class
 */

#ifndef CTHEORETICALMODEL_H_
#define CTHEORETICALMODEL_H_


/**
 *  \brief  Theoretical Model derived in section 2, on page 1
 *
 *  This is a detailed description of the model
 */
class CTheoreticalModel
{
    private:
        double    m_alpha;    ///< [parameter] defined in equation 2.1
        double    m_beta;     ///< [parameter] defined in equation 2.2

    public:
        /**
         *  \brief      Construct a new model using the given parameters
         *  \param[in]  alpha   [parameter] defined in equation 2.1
         *  \param[in]  beta    [parameter] defined in equation 2.2
         */
        CTheoreticalModel( double alpha, double beta );


        /**
         *  \brief      calculates [some property] by implementing algorithm 2.1
         *              on page 1
         *  \return     [some property]
         */
        double algorithmA();


        /**
         *  \brief      updates the model by [some parameter] according to the
         *              dynamics of equation 2.4
         *  \param[in]  gamma   [parameter] defined in equation 2.3
         */
        void equationB( double gamma );


        /**
         *  \brief      tests [some parameter] against the model; implements
         *              equation 2.6
         *  \param[in]  theta   [some parameter] defined by equation 2.5
         */
        bool testC( double theta );
};

#endif /* CTHEORETICALMODEL_H_ */

Then the html documentation that doxygen will generate will look like this:

Now let's say that you talk to your advisor and he suggests that maybe section 2 should come after section 3. Moreover, you add a bunch of content to section 1 so now all of the code for this model is on page five. So then you end up with this:

So now you have to go back and change all of the equation numbers and page references in your code. But wait: when we wrote our document, we \label{}'ed all of our equations, algorithms, and sections. Wouldn't it be cool if we could just reference those labels in the comments? Doxygen exposes LaTeX's math mode for documenting inline equations: it uses latex to render the equations, and then uses dvipng to turn them into png images. Moreover, LaTeX has the xr package, which allows us to reference labels from other documents. Lastly, the \ref{} command is valid inside math mode. So we have all the tools we need, but there is one slight problem: in order to use the xr package, we need to include the \externaldocument command in the preamble of the document.

The solution

Now here's the fun part. When Doxygen renders all of the equations, it does so by generating a single LaTeX source file called "_formulas.tex". We don't have explicit access to modify the preamble of that source file, but we are allowed to add optional packages to the list of what is included. We do that by modifying the "EXTRA_PACKAGES" line of the doxyfile. For instance, if we edit the doxyfile like this:

…
# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX 
# packages that should be included in the LaTeX output.

EXTRA_PACKAGES = amsmath xr amsfonts
…

then when doxygen generates _formulas.tex it will include in the preamble a list of includes like this

    \usepackage{amsmath}
    \usepackage{xr}
    \usepackage{amsfonts}

Note that Doxygen tokenizes the list of packages at whitespace, then takes each token, wraps it with \usepackage{}, and inserts it into the preamble. We can hijack this method of input by making the EXTRA_PACKAGES variable look like this:

…
EXTRA_PACKAGES = amsmath xr}\externaldocument[paper-]{dummy}% amsfonts
…

Then the preamble of _formulas.tex will look like this

    \usepackage{amsmath}
    \usepackage{amsfonts}
    \usepackage{xr}\externaldocument[paper-]{dummy}%}
    \usepackage{hyperref}

Note how we use a comment character (percent) to comment out the closing bracket that Doxygen puts around our 'package'. Now we have an extra command in our preamble. If you haven't looked up the xr documentation yet: this command tells xr to look for a file called "dummy.aux", generated by latex. The package extracts all the labels from that file and prepends "paper-" to the label names. Now we can change our code documentation to look like this:
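To make the mechanism concrete, here is a sketch of what xr is working with. The label names and numbers below are hypothetical, but the \newlabel form is what latex actually writes to the .aux file:

```latex
% dummy.aux, written by `latex dummy.tex`, contains one \newlabel entry per
% \label{} in the paper, holding the reference number and the page number:
%
%   \newlabel{eqn:alphaDef}{{2.1}{1}}
%   \newlabel{sec:Model}{{2}{1}}
%
% With \externaldocument[paper-]{dummy} in the preamble, xr re-registers each
% label under the "paper-" prefix, so inside _formulas.tex:
%
%   \ref{paper-eqn:alphaDef}   resolves to the equation number (2.1)
%   \pageref{paper-sec:Model}  resolves to the page number (1)
```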

/**
 *  \file       CTheoreticalModel.h
 *  \author     Joe Author (jauthor@institute.edu)
 *  \date       Apr 17, 2010
 *  \brief      Definition file for CTheoreticalModel class
 */

#ifndef CTHEORETICALMODEL_H_
#define CTHEORETICALMODEL_H_


/**
 *  \brief  Theoretical Model derived in section \f$\ref{paper-sec:Model}\f$,
 *          page \f$\pageref{paper-sec:Model}\f$
 *
 *  This is a detailed description of the model
 */
class CTheoreticalModel
{
    private:
        double    m_alpha;    ///< [parameter] defined in equation \f$\ref{paper-eqn:alphaDef}\f$
        double    m_beta;     ///< [parameter] defined in equation \f$\ref{paper-eqn:betaDef}\f$

    public:
        /**
         *  \brief      Construct a new model using the given parameters
         *  \param[in]  alpha   [parameter] defined in equation
         *                      \f$\ref{paper-eqn:alphaDef}\f$
         *  \param[in]  beta    [parameter] defined in equation
         *                      \f$\ref{paper-eqn:betaDef}\f$
         */
        CTheoreticalModel( double alpha, double beta );


        /**
         *  \brief      calculates [some property] by implementing algorithm
         *              \f$\ref{paper-alg:SomeAlgorithm}\f$ on page
         *              \f$\pageref{paper-alg:SomeAlgorithm}\f$
         *  \return     [some property]
         */
        double algorithmA();


        /**
         *  \brief      updates the model by [some parameter] according to the
         *              dynamics of equation \f$\ref{paper-eqn:SomeEquation}\f$
         *              on page \f$\pageref{paper-eqn:SomeEquation}\f$
         *  \param[in]  gamma   [parameter] defined in equation
         *                      \f$\ref{paper-eqn:gammaDef}\f$
         */
        void equationB( double gamma );


        /**
         *  \brief      tests [some parameter] against the model; implements
         *              condition \f$\ref{paper-eqn:SomeCondition}\f$
         *  \param[in]  theta   [some parameter] defined by equation
         *                      \f$\ref{paper-eqn:thetaDef}\f$
         */
        bool testC( double theta );
};

#endif /* CTHEORETICALMODEL_H_ */

Now all we have to do is dump dummy.aux (generated when we build the paper with latex) into the html directory where Doxygen is going to build _formulas.tex, and when we make the documentation it looks like this:

Sure, all the references are images… which isn't particularly great, but it's a lot better than having to go in and change the labels every time we make a change to the referenced document. Whenever code and a referenced document are written in parallel, this can be quite a handy trick. If you want the html documentation to look a little more professional, add a package that sets the equation font to match the font set by your doxygen CSS stylesheet.
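For example, one way to do that (an untested sketch, using the same percent-comment hijack as above; helvet gives a Helvetica-like sans-serif that is close to a typical doxygen stylesheet font) would be:

```
EXTRA_PACKAGES = amsmath xr}\externaldocument[paper-]{dummy}\usepackage{helvet}\renewcommand{\familydefault}{\sfdefault}%
```

After Doxygen wraps the second token in \usepackage{}, the preamble loads helvet and switches the document default to sans-serif before the formulas are rendered.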

If you want to play around with the files used in this post, pick them up here: dummy.7z. Create the latex document with the following command.

latex dummy.tex

Then copy dummy.aux into the html directory.

cp dummy.aux html/

Then run doxygen

doxygen doxyfile