Enable WOL through a Linksys Router

This is a blatant copy-paste job from here, put here for my own posterity.

This all works very well if the computer sending the Magic Packet is on the same LAN as the sleeping computer, but it requires some additional effort to get working over the Internet, especially if the very common Linksys WRT54G is your wireless router. The WRT54G setup page uses JavaScript to prevent the user from entering a broadcast address, but there is a workaround. Here's what to do to set this up:

  • Enable WOL on your computer. This is usually a setting in the BIOS. (This may not be possible if you are using a wireless card; only the very latest cards support Wireless Wake-on-LAN.)
  • If you don’t already have Firefox, download and install it.
  • Download and install the DOM Inspector Firefox Add-on.
  • Using Firefox, open your Linksys WRT54G admin page (usually at the router's default address)
  • Click on Applications & Gaming
  • Add a new entry: Application="WOL", Start="9", End="9", Protocol="UDP", IP Address="200"
  • In Firefox, click on Tools, then DOM Inspector.
  • Use DOM Inspector to find the "WOL" entry and change the IP Address from 200 to 255. (Firefox highlights in red the element you have selected in DOM Inspector, which makes it easier to narrow down to the correct one.)
  • Click "Save Changes" on the Applications & Gaming page.
  • Download and install a Magic Packet program that can send a packet over the Internet. I like this one: http://magicpacket.free.fr/
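
For reference, the Magic Packet these programs send is just 102 bytes: 6 bytes of 0xFF followed by the target machine's MAC address repeated 16 times, delivered over UDP (port 9 here, matching the forwarding rule above). A minimal sketch of a sender; the address, port, and function names are illustrative placeholders, not any particular tool's API:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>
#include <cstring>
#include <vector>

// build the 102-byte magic packet: 6 x 0xFF, then the MAC repeated 16 times
std::vector<uint8_t> build_magic_packet(const uint8_t mac[6])
{
    std::vector<uint8_t> pkt(6, 0xFF);
    for (int i = 0; i < 16; i++)
        pkt.insert(pkt.end(), mac, mac + 6);
    return pkt;
}

// send the packet as a UDP datagram; addr would be your router's public IP
// when waking a machine over the Internet
bool send_magic_packet(const uint8_t mac[6], const char* addr, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return false;

    // allow sending to a broadcast address on the local network
    int broadcast = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof(broadcast));

    sockaddr_in dest;
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(port);
    inet_pton(AF_INET, addr, &dest.sin_addr);

    std::vector<uint8_t> pkt = build_magic_packet(mac);
    ssize_t sent = sendto(fd, pkt.data(), pkt.size(), 0,
                          (sockaddr*)&dest, sizeof(dest));
    close(fd);
    return sent == (ssize_t)pkt.size();
}
```

The router trick above matters because the sleeping machine has no IP lease; forwarding the packet to the broadcast address makes every NIC on the LAN see it, and the sleeping card recognizes its own MAC in the payload.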

You should now be able to wake your computer up from wherever you are. (Also, I should say that my router has firmware version 8.00.5. I don't know if this matters, since I don't have any other router to test it on.)

Note to self: this is probably unsafe and bad. Also, I used Firebug.

cppfontconfig - Simple C++ interface into fontconfig

Lately I've been trying to learn how to use (and I mean really use) the open source font stack. Since I tend to greatly prefer C++ over C, I figured that as part of the process of learning these libraries I'd work out API wrappers. Since the font stack libraries are generally object oriented, I figured a C++ wrapper around them would be easy to accomplish.

As a start, I wrote cppfontconfig, a C++ wrapper around fontconfig.


I'm not sure if the design is good or if it's even useful, I guess we'll see.

Exception Streams in C++

Here's a short but handy C++ snippet. I was looking for a way to quickly generate runtime exceptions with useful information about the current program state. My process was:

  1. Create a string stream
  2. Build the message
  3. Throw the exception using the string from the stream

I felt like this was particularly cumbersome and quite annoying, especially for things like string parsing or SQL, because there are a lot of places in the code where error checking is required.

In any case, I came up with this simple little class template, which sits in a single header file and is included in the .cpp file where it is to be used.



#include <sstream>
#include <stdexcept>

namespace utility   {

/// used to simplify the process of generating an exception message
/** Derives from stringstream so provides an ostream interface, but throws
 *  an exception with the contents of the string when the object is destroyed
 *  \tparam Exception_t must be an exception type which accepts a
 *                      const char* in its constructor
 */
template <typename Exception_t>
class ExceptionStream :
    public std::stringstream
{
    public:
        // a throwing destructor must be marked noexcept(false) in C++11
        // and later
        ~ExceptionStream() noexcept(false)
        {
            throw Exception_t( str().c_str() );
        }

        std::ostream& operator()()
        {
            return *this;
        }
};

typedef ExceptionStream<std::runtime_error> ex;

} /* namespace utility */

Usage is like this:

ex fExp;

fExp = evalf( m_expression["radius"].subs(m_x == iRadius) );
if( is_a<numeric>(fExp) )
    radius = ex_to<numeric>(fExp).to_double();
else
    utility::ex()() << "GiNaC failed to parse radius expression: "
                    << fExp;

Here I'm using GiNaC to parse a string into a mathematical expression. If the process fails I want to throw a runtime exception (typedef'ed as utility::ex).

The class derives from string stream, so it works just like a string stream: building a string by catting together the RHS of all the stream operators. The magic is that the destructor for the class throws an exception. The message for the exception is the string that was built.

It's a very handy time saver… though I'm not sure if it's actually safe to use. If the destructor throws an exception, is the object memory still freed?

Sqlitemm: C++ wrapper for sqlite3

As part of writing inkbook, I decided to use sqlite3 for data storage. The C/C++ API is actually a C API, and while it is object oriented and rather intuitive, it's just not C++. Considering that the API is very simple and, in particular, the subset of the API I wanted to use was very simple, I went ahead and wrote a quick wrapper.

Sqlitemm provides a C++ style interface to creation of database connections and statements (as in Connection and Statement are classes). Objects are reference counted using Glib::RefPtr so memory management is a bit easier.
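
The reason no explicit close call is ever needed is RAII plus reference counting: the Connection closes the database in its destructor, and the smart pointer destroys the Connection when the last reference is dropped. A minimal sketch of that pattern, using std::shared_ptr in place of Glib::RefPtr and a stubbed-out handle (this is an illustration of the idiom, not sqlitemm's actual code):

```cpp
#include <memory>
#include <string>

// stand-in for the underlying C handle (sqlite3* in the real wrapper)
struct FakeHandle { std::string name; };

static int g_open_count = 0;

class Connection
{
  public:
    explicit Connection(const std::string& filename)
        : m_handle(new FakeHandle{filename})
    {
        ++g_open_count;  // sqlite3_open() would go here
    }

    ~Connection()
    {
        --g_open_count;  // sqlite3_close() would go here
        delete m_handle;
    }

    static std::shared_ptr<Connection> create(const std::string& filename)
    {
        return std::make_shared<Connection>(filename);
    }

  private:
    FakeHandle* m_handle;
};

// helper so callers can observe how many connections are open
int open_connections() { return g_open_count; }
```

Copying the smart pointer bumps the reference count; the database is closed exactly once, when the last copy goes away.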

You can find the project in my tracker, but here's an example of its usage:


int main(int argc, char** argv)
{
    // open a connection to a sqlite database
    // (the method names in this listing are reconstructed from context and
    // may differ slightly from the actual sqlitemm API)
    Glib::RefPtr<Connection> sqlite = Connection::create("test.db");

    // we'll reuse this variable for different statements
    Glib::RefPtr<Statement> stmt;

    // prepare a read statement
    stmt = sqlite->prepare("SELECT * FROM table_a WHERE field_a=?");

    // bind a value to one of the parameters
    stmt->bind<int>(1, 10);

    // execute the select statement
    stmt->step();

    // read out the result set
    while( stmt->hasResult() )
    {
        // retrieve the first column of the result set as an integer
        int             field_a = stmt->get<int>(0);

        // retrieve the second column of the result set as a string
        std::string     field_b = stmt->get<std::string>(1);

        // retrieve the third column of the result set as a double
        double          field_c = stmt->get<double>(2);

        // do something with the row, then advance to the next one
        stmt->step();
    }

    // if we want to reuse the statement we need to call this
    stmt->reset();

    // prepare a second statement, note that all memory allocated for the
    // first statement is released here, because the smart pointer is
    // reassigned and the object it points to only has one outstanding reference
    stmt = sqlite->prepare("UPDATE table_a SET field_a=? WHERE field_b=?");

    // now actually execute the statement
    stmt->step();

    // note that we do not have to explicitly close the connection; when
    // the sqlite variable goes out of scope, the smart pointer will drop its
    // reference to the underlying Connection object, and the Connection
    // object will be destroyed. The database is closed during the destructor
    // of the Connection object.
    return 0;
}

Texmake: a makefile generator for latex projects

texmake is a makefile generator, much like cmake. In fact, it originally started with me looking for ways to make cmake work with latex documents. After a bit of work, I decided that system was designed too much around the C/C++ build process, so I started working on some tools to simplify my life.

Keeping it simple

texmake tries to eliminate as much work as possible for the document writer. It will try to figure out and resolve dependencies, including graphics files which require conversion from their source format. It also allows for out-of-source builds, so that the source directory doesn't get cluttered with the intermediate files that latex generates.


Consider this scenario: we have a very long, complicated report to write (a software manual, end-of-project report, thesis, or how-to book). We want to generate both a full version and a simplified quick-start version. The quick-start version will contain a subset of the chapters of the full version. The quick start will be published in PDF and HTML (i.e. web-friendly). The long version will probably be a large document and we don't really want it browsed on-line, but it is likely to be printed, so we'll publish it in pdf and dvi formats, as well as epub for people who have e-readers.

example project

In the process of making this document, we've generated many image files. Some of them are hand-drawn SVGs. Some are generated tikz images. Some are diagrams drawn in Dia or plots from gnuplot. Some of these figures are shown in multiple chapters (because the author does not want to just refer the reader back to a previous page, which is unnecessary in an electronic format but may be more meaningful in a print format).

Furthermore, we have some content that should be included only in a particular version. For instance, each version should include some kind of header which tells the reader where to find the other versions online, or where to order them from.

Now, we could maintain a makefile structure that manages all of this quite easily, but we would have to build it by hand. Every time we add a new chapter, image, or output format, we would have to add a line to a makefile somewhere. Wouldn't it be nice if all of that work just happened?


Here is a list of features that I have implemented or am working on:

  1. don't rebuild the document unless it needs it
  2. multiple documents in the same project, potentially sharing various pieces of content
  3. monitor the document's dependencies on package, style, and class files, so that the document is rebuilt if these are updated
  4. monitor dependencies on all included files, so that if any included file is updated, the document is rebuilt
  5. only run bibtex if needed
    1. one of the database files is updated
    2. there are unresolved citations in the document
  6. rerun latex until it stabilizes
  7. discover and automatically convert graphics source files (like svg) to the kind (pdf)latex(ml) understands (pdf, eps, png)
  8. out of source builds so that sources can be in version control without complicated ignore rules
  9. caches values of environment variables and binary locations so that initial environment does not have to be manually set-up each time the document is built
  10. colorize output to make it clear where and when things get messed up


texmake relies on the presence of a texmakefile in the directory of the document you want to build. The texmakefile simply lists source files (.tex documents) and the output files that should be built from them. Multiple output files can be built from the same input file, so that the same latex source can be used to generate pdf, dvi, and xhtml versions (so-called single source builds).

texmake init is called from the build directory and accepts a single parameter: the directory that contains the root texmakefile. texmake resolves the absolute paths to all the tools it needs (latex, pdflatex, bibtex, latexml, kpsewhich, …) and caches those locations. (TODO: also cache environment variables for latex and search paths for kpsewhich.) It generates a makefile in that directory which contains rules for all of the registered output files, and creates any missing directories in which the output files will be located.

texmake relies on GNU make to determine when documents need to be rebuilt. This makefile includes several other makefiles (if they exist) containing detailed dependency lists for each of the output files. If the output files have never been built, these included files do not exist, but that's OK because it is already clear to make that it needs to build the output files.

The output files are built with texmake build. The texmake builder runs (pdf)latex(ml) with all the necessary switches to make sure that the source directory and output directory are searched for required files. The builder also scans the output of latex to build a list of all the files that latex loads (for the dependency graph), as well as all the files that are missing.

If a missing file is a graphics file, the builder generates make rules for building it from a similarly named graphics file in the source directory (it is assumed the source file has the same name, but a different extension). This way, an svg source file can be used to generate pdf, eps, or png images depending on whether the document being built is dvi, pdf, or xhtml (respectively). If the file in the source directory is already compatible with the latex variant being used, then no conversion is necessary.

If the missing file is a bibliography file, then bibtex is run to generate it. The builder also scans the output of bibtex to add bibliography database files (.bib files) to the dependency graph. Bibtex is also run if latex reports missing citations (bibtex will not include bibliography entries that are not cited in the latex source, so if a new citation is added, bibtex needs to be rerun). Finally, the builder scans the output of latex for rerun suggestions, and will rerun latex if it had to run bibtex, or if latex itself suggested that it be rerun (to get cross references straight, for example).
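
The "scan the output" steps above amount to matching a handful of well-known patterns in the latex log. A sketch of the kind of matching involved (the exact patterns texmake uses may differ, and real logs have more message variants than are handled here):

```cpp
#include <regex>
#include <string>
#include <vector>

struct ScanResult
{
    std::vector<std::string> missing_files;  // files latex could not find
    bool needs_bibtex = false;               // unresolved citations
    bool needs_rerun  = false;               // cross-references changed
};

// scan a latex log for missing files, undefined citations, and rerun hints
ScanResult scan_latex_log(const std::string& log)
{
    ScanResult result;

    // e.g. "! LaTeX Error: File `figure.pdf' not found."
    std::regex missing_re("File `([^']+)' not found");
    auto begin = std::sregex_iterator(log.begin(), log.end(), missing_re);
    for (auto it = begin; it != std::sregex_iterator(); ++it)
        result.missing_files.push_back((*it)[1].str());

    // e.g. "LaTeX Warning: Citation `knuth84' on page 1 undefined ..."
    if (log.find("Citation") != std::string::npos &&
        log.find("undefined") != std::string::npos)
        result.needs_bibtex = true;

    // e.g. "LaTeX Warning: Label(s) may have changed. Rerun to get
    // cross-references right."
    if (log.find("Rerun to get cross-references right") != std::string::npos)
        result.needs_rerun = true;

    return result;
}
```

Each missing graphics file found this way becomes a make rule; the two boolean flags decide whether bibtex and another latex pass are queued.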

Major TODO items

  1. cache environment variables in init
  2. use diff to determine whether TOC files actually changed (latex touches them on every run, so make will always think they're new) and thus whether a rerun is necessary for TOCs
  3. add image generation rules for tikz
  4. consider a more modular design for making intermediates (i.e. run a program, like matlab, to generate intermediates)


The texmake code is on github.


As I was working on texmake, I decided that I didn't want to figure out what all the possible auxiliary output files of a latex document would be. I also suspect that it depends on which packages are included. Anyway, I wanted a way to just monitor the build directory and see all the files that were created while running latex. It turns out this is very easy on linux. This is a very simple program which watches a directory and prints out any files that are created, modified, opened, closed, moved, or deleted. It prints the file name, followed by a comma, followed by the notification event.

You can find the code on github

svg2pdf and svg2eps (convert svg to pdf or eps from the command line)

I've been working on a new makefile for my latex projects. The goal is to have single-source builds of dvi, pdf, and xhtml documents. I ran into the problem of generating figures: latex expects eps graphics, pdflatex expects pdf figures, and latexml expects png figures (or will try to generate them). In order to generate documents with make, I need a way to generate eps and pdf figures from svg sources (I usually use inkscape to make my figures). Inkscape can be run from the command line, but I don't want to install inkscape on my server, because that would require installing a ton of libraries that don't make sense on a server with no graphical interface.

As it turns out, writing such a command line tool is very easy with librsvg and cairo. Carl Worth of redhat has produced a nice demo of svg2pdf which can be found at freedesktop.org. I cloned his project, created a cmake project out of it, and made a trivial modification to turn it into svg2eps as well.

You can find my code at https://github.com/cheshirekow/svg2x.

Throw: A Compiz Plugin

One feature that I've really been wanting from compiz is to give windows a bit of momentum when I'm moving them around. In other words, if I flick a window with the pointer and then let go, I want it to continue moving, so that it can be "thrown" across the screen. Thankfully, someone involved in compiz wrote such a plugin, found on his blog here. Unfortunately, the strategy that he uses doesn't work well for me. There are essentially three problems with it:

  1. In order to calculate the velocity of the window after it is released, he compares the window position at the time it is released to the window position at the time it was grabbed, and calculates the delta. This leads to weird behavior when the user is indecisive and moves the window around sporadically before releasing it.
  2. The resulting velocity of the window bears an unnatural relationship to the actual "velocity" with which the user was moving it.
  3. The velocity appears to be zero when using a pen-tablet (wacom) input because my hand generally stops before the pen moves out of range of the tablet.

While the compiz API documentation is in a very sad state, Sam's plugin showed me all the parts of the API that I needed. I rewrote the plugin to essentially low-pass the velocity of the window, sampled at the compiz frame rate. At each "movement" event, I update the delta in the x direction and the y direction. At each "frame" of compiz, I bake the accumulated deltas, along with the time (in ms) since the last sample. The accumulated deltas and the number of milliseconds are stored in a ring buffer. When the window is released, the buffer is averaged to get a window velocity in pixels-per-millisecond.
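
The low-pass amounts to averaging the last few (dx, dy, dt) samples from a small ring buffer. A self-contained sketch of that filter (not the actual plugin code, which works on compiz types; names here are mine):

```cpp
#include <cstddef>

// accumulates pointer deltas per frame in a ring buffer and reports the
// averaged window velocity, in pixels per millisecond, on release
class VelocityFilter
{
  public:
    VelocityFilter() : m_pendingX(0), m_pendingY(0), m_head(0), m_count(0) {}

    // called on each "movement" event: accumulate deltas for this frame
    void addDelta(double dx, double dy)
    {
        m_pendingX += dx;
        m_pendingY += dy;
    }

    // called once per compiz frame: bake the accumulated deltas along with
    // the elapsed time (in ms) since the last sample
    void bake(double ms)
    {
        m_dx[m_head] = m_pendingX;
        m_dy[m_head] = m_pendingY;
        m_ms[m_head] = ms;
        m_head = (m_head + 1) % kSize;
        if (m_count < kSize)
            ++m_count;
        m_pendingX = m_pendingY = 0;
    }

    // called on release: average the buffer to get velocity in px/ms
    void velocity(double& vx, double& vy) const
    {
        double dx = 0, dy = 0, ms = 0;
        for (std::size_t i = 0; i < m_count; ++i)
        {
            dx += m_dx[i];
            dy += m_dy[i];
            ms += m_ms[i];
        }
        vx = (ms > 0) ? dx / ms : 0;
        vy = (ms > 0) ? dy / ms : 0;
    }

  private:
    static const std::size_t kSize = 4;  // only a couple frames of history

    double m_dx[kSize], m_dy[kSize], m_ms[kSize];
    double m_pendingX, m_pendingY;
    std::size_t m_head, m_count;
};
```

Because the buffer only holds a handful of frames, old sporadic movement ages out quickly, which is exactly the "forgetting" behavior described below.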

I also augmented the window structure to store a floating point representation of the window location. Together with the low-pass on the velocity, the outcome seems to be a lot smoother.

The window velocity exactly matches what I "feel" like the velocity of the window is when I let go of it. Also, the filter only has enough history for a couple of frames so weird movement of the window prior to release is "forgotten", and only the "most recent velocity" is used.

The code for the plugin can be found on my github.