Guild Wars 2

I started playing Guild Wars 2, and I'm happy their questing system has broken with WoW's current quest design. As WoW grew, they "simplified" and "streamlined" their questing into something best described as "ramming a railroad down their players' throats."

For example, early in the Mists of Pandaria beta, a bug in any one of a number of quests on the main path through a zone would quite likely leave the next quests locked, breaking your ability to complete the zone.

Guild Wars 2 seems to have three tiers of quest.


It's been a really long time since I tried to write. I keep meaning to roll my own blog software, but there are so many other things I should be doing.

At the moment I'm trying to compile the MeeGo/N9 contact & calendaring code on my Ubuntu box. Because really, I should have a working calendar: my org-mode files got too cluttered, and evolution/thunderbird/korganizer all feel too heavyweight.

I think Apple did a good job with Address Book & iCal on the versions of OS X I've used (10.1-10.6), and I'd like some functionality like that on Ubuntu. Since I've spent the last year or two trying to rebuild one of my work applications around the semantic web stack, GNOME Tracker's use of SPARQL appeals to me. And since some chunk of the N9's PIM stack was integrated with Tracker, it seemed like a fun place to poke.

Building Debian packages for Mozilla's sync server

I'm surprised this produced valid Debian packages with a minimum of fuss, given that I couldn't find a recommended release archive for the package.

Upstream is in Mercurial (server-full, along with server-core, server-storage, and server-reg). I don't know Mercurial very well and git doesn't pull from the other DVCSes, so I wanted to use Bazaar -- unfortunately, for some reason the bzr-hg plugin was having trouble pulling directly from the http server. However, I discovered I could make a bzr branch from a local hg repository.

  1. hg clone upstream/server-full
  2. hg tags
    to find their tagged release version

  3. bzr branch -r hg:<hg spec> upstream/server-full syncserver
  4. python setup.py sdist --dist-dir ..
    to build the "orig" tarball the debian tools want
  5. ln -s SyncServer-1.0.tar.gz syncserver_1.0.orig.tar.gz
  6. copy over my old debian directory & update changelog
  7. debuild -S
  8. Edit a few times to add in whatever components were in the build tree but not in the archive built with sdist
Of course, I haven't tried installing anything yet, so who knows if this'll actually work. The main thing that'll bite me is tracking the changes I made to the setup.py and setup.cfg files to get a tar.gz file built with sdist that matches the build tree. (sdist adds an egg_info section to setup.cfg.)

P2P vs Centralized networks

I've been thinking a lot about SOPA & related efforts by Big Media and the federal government to control the internet.

I'm pretty sure the feds want to disrupt the network's ability to form functioning, distributed decision-making structures because those are a core threat to their reason to exist.

A variety of darknets are being developed now as alternatives to the white-market network. The downside is that the organizations with the most reason to put large amounts of money into them right now are primarily organized crime.

Also, if you have a distributed network that allows pseudonymity, you will have spam, criminals, and 4chan on it. So one key question is: which is worse? A network under the control of a government that aims to block groups it dislikes from organizing, or a network that allows criminals to organize?

Protecting against XSS

I wanted a module to strip out potential XSS injections.

I looked at the set of HTML allowed in an LJ post and came up with this idea:

Use BeautifulSoup to parse the submitted HTML and remove all tags that aren't in a safe HTML whitelist. Then, for img & a tags, process the URL and require that it start with one of an allowed set of protocols. The main downside is that <img src=/foo.png> won't work; you have to spell out the http: prefix.
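A minimal sketch of that whitelist approach, assuming Python with the bs4 package; the tag and attribute whitelists here are illustrative, not LJ's actual allowed set:

```python
from bs4 import BeautifulSoup

# Illustrative whitelists -- not LJ's actual allowed HTML.
ALLOWED_TAGS = {"a", "b", "i", "em", "strong", "p", "br", "img", "blockquote"}
ALLOWED_ATTRS = {"href", "src", "alt", "title"}
ALLOWED_SCHEMES = ("http:", "https:")
URL_ATTRS = {"a": "href", "img": "src"}

def sanitize(html):
    soup = BeautifulSoup(html, "html.parser")
    # Drop script/style entirely -- merely unwrapping them would leave
    # their payload behind as text.
    for tag in soup.find_all(["script", "style"]):
        tag.decompose()
    for tag in soup.find_all(True):
        if tag.name not in ALLOWED_TAGS:
            tag.unwrap()  # strip the tag but keep its text content
        else:
            # Throw away event handlers (onclick, onerror, ...) and
            # anything else outside the attribute whitelist.
            tag.attrs = {k: v for k, v in tag.attrs.items()
                         if k in ALLOWED_ATTRS}
    # Require an explicit allowed scheme on a/img URLs -- this is the
    # check that makes relative URLs like <img src=/foo.png> stop working.
    for tag in soup.find_all(["a", "img"]):
        attr = URL_ATTRS[tag.name]
        if not tag.get(attr, "").lower().startswith(ALLOWED_SCHEMES):
            tag.attrs.pop(attr, None)
    return str(soup)
```

The scheme check is why javascript: links get dropped too: anything that doesn't literally begin with http: or https: loses its href/src attribute.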

This seems like a good method for sanitizing user input while allowing some HTML -- but how can you really know you're protecting against all the possible ways to inject a hostile payload? There are some really funky techniques out there for tricking the browser.


How annoying is duplicated posts

Do you, reader, think it's a good idea to cross-post to multiple social networks and let the reader deal with duplicates?

Or would it be better for the representation of one's social network to be incomplete on any particular site (e.g., some people are only Facebook friends, or Twitter followers, or G+ circle members), so that post deduplication is handled by subscribing only once, on one's preferred service?

Or, alternatively, the poster could just pick one site, post there, and make everyone check multiple sites to keep up with everyone they want to pay attention to.

How annoying is social network post duplication

Really annoying
A little annoying
No opinion/Don't care
It's fine
It's great

(no subject)

Thanks to Crayon Physics, I'm now going to have visions of spinning crayon logs as I try to sleep. I suppose that's better than worrying about X, Y, or Z.

(no subject)

So does anyone have a good desktop client for viewing one's LJ friends page?

One of the things I learned about myself and social networking tools came from Getting Things Done:

Have as few in-boxes as you can manage. The nice thing about Twitter & Facebook is that they provided APIs for downloading their content into client-side applications, so I didn't have to remember to go look.

(OTOH, the Ubuntu default social networking application has some issues).

(no subject)

I still have trouble trusting large entities. On the other hand, I've been feeling lonely and feel like I should at least make an effort to interact with the wider world.

Poll #1760483 Where to blog?

Should I start using LJ again?

X is so much better, use it instead.