My buddy Tyler is writing a novel this month as part of National Novel Writing Month (NaNoWriMo). You can follow his novel writing progress at his page.
Posts
In addition to del.icio.us, there is also a nifty site called nutr.itio.us that automagically manages all of your tag names and gives suggestions for tags that other people have used when visiting a site. Furthermore, links are colored by their relative popularity for that site. Pretty damn cool.
Also there is a very cool Delicious extension for Firefox. You can see it in action below. This makes it so much easier to manage my bookmarks than the crappy hierarchical thing that I had been doing.
Wired has a little article that describes what CASOS does. At the end there are a few quotes from Kathleen about making predictions with the software we write.
As a method of procrastination, I’ve started to play a little bit with forms of social software. Namely, I’ve started accounts on del.icio.us, a social bookmark sharing site, and Bloglines. I’m not entirely sure how much I’ll use Bloglines, since I’m moderately happy with Straw, but I understand how a centralized aggregator like Bloglines can reduce demand on a site, since it fetches each feed once for all of its subscribers, which is helpful.
The neat thing is that both of these sites make it possible to share your information, through RSS no less.
On my quest playing with compression of HTTP stuff, I found out today that it is possible to compress the request to a server in addition to the response. For most cases this wouldn’t matter much, since requests are small, but for WebDAV clients it can be a very big deal. Basically, the client sends an additional Content-Encoding: gzip header to the server (which also means it needs to send a Content-Length header for the compressed body), and the server, if it can handle that, will automatically unpack the body of the request.
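Here’s a minimal sketch of what that looks like from a Python client (the URL and PROPFIND body are placeholders, and it assumes the server actually accepts gzip-compressed request bodies, which many don’t):

    import gzip
    import urllib.request

    # Hypothetical WebDAV PROPFIND payload, compressed before sending.
    body = gzip.compress(b'<?xml version="1.0"?><propfind xmlns="DAV:"/>')

    req = urllib.request.Request('http://example.com/dav/resource',  # placeholder URL
                                 data=body, method='PROPFIND')
    req.add_header('Content-Encoding', 'gzip')
    req.add_header('Content-Type', 'text/xml')
    # urllib fills in Content-Length from len(body) automatically.
    resp = urllib.request.urlopen(req)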
After my rant about bandwidth saving from last night, I decided to do what I could with my install of PyBlosxom. Unfortunately, it’s going to be difficult to implement If-Modified-Since handling with how PyBlosxom is built; however, I knew it couldn’t be that hard to work around the gzip stuff. Basically, all the encodings a client can accept are stored in the environment variable HTTP_ACCEPT_ENCODING, which will contain a string something like gzip,deflate.
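A rough sketch of that workaround in a generic CGI context (this is an illustration, not the actual PyBlosxom plugin; the body is a placeholder, and the 'gzip' check is naive since it ignores q-values):

    import gzip
    import os
    import sys

    body = b'<html>...</html>'  # placeholder for whatever the app rendered

    # CGI exposes the Accept-Encoding request header in this env variable.
    if 'gzip' in os.environ.get('HTTP_ACCEPT_ENCODING', ''):
        body = gzip.compress(body)
        sys.stdout.write('Content-Encoding: gzip\r\n')

    sys.stdout.write('Content-Type: text/html\r\n')
    sys.stdout.write('Content-Length: %d\r\n\r\n' % len(body))
    sys.stdout.flush()
    sys.stdout.buffer.write(body)  # raw bytes after the headers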
As part of my research I’m writing custom RSS feed aggregation software to infer links for social networks from the entries that open source developers make in their weblogs. Right now I’m using Mark Pilgrim’s excellent FeedParser to parse all of the feeds. It handles mangled XML very nicely, which is good because Blosxom loves to kick out bad XML (in particular, it doesn’t handle the & character that well).
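Using it is about as simple as parsing gets; a minimal example (the feed URL is a placeholder):

    import feedparser

    d = feedparser.parse('http://example.com/index.rss')  # placeholder URL
    # feedparser recovers from ill-formed XML where it can; d.bozo is set
    # to 1 when it had to work around a malformed feed.
    for entry in d.entries:
        print(entry.title, entry.link)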
Here is my attempt at escaping unicode hell. Basically this looks for any byte in a string that is above 127 (i.e. non-ASCII) and changes it to the appropriate XML character reference.
    def cleanString(instr):
        """Takes an input string, and encodes the bytes for XML stuff.

        Hopefully this should alleviate a lot of the problems we've been having
        with feeds that turn out to not be readable."""
        # Anything outside the ASCII range becomes an XML numeric character
        # reference, e.g. an e-acute becomes '&#233;'.
        return ''.join('&#%d;' % ord(c) if ord(c) > 127 else c
                       for c in instr)
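For what it’s worth, it behaves like this (the sample string is mine, not from a real feed):

    >>> cleanString('Caf\xe9 society')
    'Caf&#233; society'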
This is a little plea for help so I can escape the unicode hell that I’m in. I’m writing a little program that crawls various web pages out there in the real world, but I’m having some issues with it because of unicode-related stuff. Mainly I’m getting errors like this:
    Traceback (most recent call last):
      File "./loadFeeds.py", line 176, in ?
        parseFeed(cursor, x['rssfeed_id'], fp)
      File "./loadFeeds.py", line 148, in parseFeed
        if loadFeedEntries(st, feed_id, update_id, fp):
      File ".
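If nothing else, the usual escape hatch seems to be decoding the raw bytes explicitly up front instead of letting Python guess an encoding later; a sketch of that idea (the filename is a placeholder, and I don’t know yet whether it fits my case):

    raw = open('feed.xml', 'rb').read()  # placeholder file
    try:
        text = raw.decode('utf-8')
    except UnicodeDecodeError:
        # latin-1 maps every byte to some character, so it never fails,
        # though it may mislabel what the characters actually were.
        text = raw.decode('latin-1')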
I went and created a SourceForge project page for xmlsnipe. I also submitted a link to Freshmeat that hopefully will be posted soon.