No Pugs

they're evil

Externals lets you use an svn:externals-like workflow with any combination of SCMs. What is the svn:externals workflow? I would describe it roughly like this:

You register subprojects with your main project. When you check out the main project, the subprojects are automatically checked out. Running 'status' reports the changes in the main project and in any subprojects below where it's run. You commit changes to the projects separately as needed. If somebody else does an update, they will get the changes to the subprojects as well.
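If you haven't seen svn:externals before, a rough sketch of that workflow looks like this (the repository URLs and paths are made up for illustration):

svn propset svn:externals "vendor/plugins/foo http://svn.example.com/foo/trunk" .
svn commit -m "register the foo subproject as an external"
# later, or from another machine:
svn checkout http://svn.example.com/mainproject/trunk mainproject   # the external is checked out too
svn update   # pulls updates for the external as well
svn status   # also reports status on the external working copy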

Probably like you, I've started using git for some projects/plugins. Git has a feature called git-submodule that is supposed to work similarly to svn:externals. Git-submodule's annoyances are what inspired me to start the externals project, and here are some of the problems I have with git-submodule's workflow:

  • When you clone/pull an existing project, you have to do git-submodule init/update manually to get the subprojects (or to update them.) You may be thinking this isn’t a big deal, but it’s an extra step. The main project is almost never functional without the subprojects so why would you ever want to pull in only the main project? Having to run extra steps every time is annoying.
  • With git-submodule, the subprojects are pulled in at a specific commit, not at a branch tip. It's not that you're on a branch at some older point in its history; the working directory is completely detached from any branch. If you want to make any edits to a subproject, you first have to check out the branch you want to work on. This is extremely annoying. When I start adding a new feature to the main project, I can't predict which subprojects may need to be modified along the way. Should I go to them all manually and do a checkout up front? If not, this usually means that as I'm stepping through the debugger I have to keep track of which subprojects I'm about to change, stop what I'm doing, and go check out a branch in each of them. Not only is this disruptive to my workflow, but on more than one occasion I forgot to check out a branch before making edits, then issued the wrong commands to do the checkout once I realized I was detached. The result was that I wiped out all of my changes in the subproject. This can be extremely irritating.
  • Status doesn't propagate through the subprojects. This means that when it's time to make some commits, you have to go to every single subproject and do a 'git status', because you aren't 100% sure which ones you've made changes to.
  • Because of #2, when you do make changes to a subproject, you have to remember to do a 'git add path/to/subproject' so that the main project points at the new commit. (The session sketched after this list walks through these steps.)
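To make the friction concrete, here is roughly what a git-submodule session looks like (the URL and paths are made up, and the comments describe how git behaves at the time of writing):

git clone git://example.com/mainproject.git
cd mainproject
git submodule init            # extra steps on every fresh clone...
git submodule update          # ...just to get the subprojects, checked out detached at pinned commits
cd path/to/subproject
git checkout master           # required before editing, or your commits land on a detached HEAD
# hack and commit inside the subproject as usual, then:
cd ../..
git status                    # says nothing about uncommitted edits inside the subproject
git add path/to/subproject    # record the subproject's new commit in the main project
git commit -m "point main project at new subproject commit"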

However, even if git-submodule were as useful to me as svn:externals, I would still see a need for ext when a project has subprojects managed by different SCMs. Up to now, when the main project was git, I managed the git subprojects with git-submodule and simply committed whole subversion working directories into my git repository; when the main project was subversion, I used svn:externals for the subversion subprojects and checked whole git repositories into my main project's repository. Now, with externals, I can have a uniform workflow regardless of the SCM combination.

For info on how to use externals, please see: http://nopugs.com/ext-tutorial


Published on 09/04/2008 at 07:09PM.


Gather ‘round my children and let me tell you a tale

So I noticed that 4 of my 2-disk raid1 arrays were degraded after a RAM upgrade on one of my servers. I checked which device was missing and it was /dev/sdd1 (/dev/sdd2, /dev/sdd3, and /dev/sdd6 from other arrays too, but I will only consider sdd1 from /dev/md1 for the sake of this tale). Opening the case I saw that I had knocked off one of the SATA cables. It seems like about 1 in 10 SATA cables I encounter do not clip properly and slide off easily. The one that had been knocked off was on the SATA3 port on the mobo (which starts at SATA0, so it seems reasonable that it is /dev/sdd). Anyway, I reconnected the cable, booted the machine, and mdadm --add'ed the partitions back to the relevant arrays, like so: mdadm /dev/md1 --add /dev/sdd1.

Everything looks great in cat /proc/mdstat

After a kernel upgrade, I noticed that the arrays came up degraded again! I went ahead and mdadm --add'ed the partitions back, and all looked well in /proc/mdstat.

Then I rebooted just to make sure that it would assemble properly on boot. It booted degraded once again.

So after trying lots of stuff, such as clearing the superblock, zeroing the whole device, and --stop'ing and --assemble'ing manually, I finally did what I should have done initially, which is dmesg | less. (Nothing peculiar was in /var/log/messages and friends.)

In dmesg, the kernel tells you what it's trying to do as it assembles the arrays. It finds all the partitions that could possibly be members of raid arrays, then matches them up by UUID and tries to assemble them. So I first see the kernel considering /dev/sdd1. It finds a bunch of partitions that don't match up with it, then it mentions that /dev/sda1 matches and will be considered (that was the working disk still in the array). Then it finds more that don't match, and finally, to my surprise, it mentions that /dev/sdb1 matches and will be considered! That's three devices for a two-device array. How does the kernel handle this? It uses the last two matching devices it finds, so it doesn't bother with /dev/sdd1 at all. It assembles /dev/sda1 and /dev/sdb1 into /dev/md1, but /dev/sdb1 isn't "fresh" because it was actually the device whose cable fell off. That's right: the kernel had now decided that the device on SATA3 on the mobo would be /dev/sdb instead of /dev/sdd.

To fix this, I simply had to

mdadm /dev/md1 --add /dev/sdd1
mdadm /dev/md1 --fail /dev/sdd1
mdadm /dev/md1 --remove /dev/sdd1
mdadm /dev/md1 --add /dev/sdb1

for each of the relevant arrays. Then it assembled properly on each boot.

The moral of this story: always do an mdadm --detail /dev/mdX BEFORE adding partitions to an array, to make sure you have the proper device name of the failed device.
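In this case, a quick look like the following (device names are from my setup; yours will differ) would have made the renumbered disk obvious before I re-added anything:

cat /proc/mdstat              # which arrays are degraded, and which members they still have
mdadm --detail /dev/md1       # the array's view: member devices, their state, the array UUID
mdadm --examine /dev/sdd1     # this partition's superblock: which array UUID it claims to belong to
mdadm --examine /dev/sdb1     # compare the two to spot a device that has changed names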

Published on 07/21/2008 at 07:08PM.


On the voicecoder mailing list, Quintijn has released an installer that works on Vista. The speech wiki that was used before seems to have been spammed into oblivion, and the old installer failed on Vista anyway.

I also hit a few snags, so here are the steps that worked for me:

Head here: http://sourceforge.net/project/showfiles.php?group_id=70807

Click and save the "pythonfornatlink" link. Extract the files from the download and run each one by right-clicking and hitting "run as administrator."

Click and save the "natlink" link (it contains Vocola).

Run this file as administrator.

After everything’s installed, go to start -> all programs -> natlink -> Configure Natlink Gui

Go to the configure tab and enable NatLink.

Uncheck the box that says "Vocola uses simpscrp."

Enable Vocola.

Go ahead and close the GUI and you should be good to go. (I rebooted at this point, but restarting Dragon is probably good enough.)

Published on 07/13/2008 at 07:05PM.


For some reason, BufferedLogger is the default logger in Ruby on Rails. This logger currently has no way to customize its format, and what's worse, its format is practically unusable.

At some point most of us are going to encounter a situation where knowing when something was logged is nearly essential.

So, after fighting with Rails for a while, here is how I got timestamps in my logs using Rails 2.1.0.

It's worth pointing out that I made several errors while trying to do this, and the behavior of Rails with a misconfigured logger was completely unhelpful in 90% of the situations I found myself in. So hopefully this can save somebody some time.

In environment.rb, inside the Initializer.run block:

  config.logger = Logger.new(File.dirname(__FILE__) + "/../log/#{RAILS_ENV}.log") 
  config.logger.formatter = Logger::Formatter.new

Then in development.rb, test.rb and production.rb:

config.logger.level = Logger::DEBUG

You can set this to whatever logging level is appropriate for the given environment. I personally use Logger::WARN for production and Logger::DEBUG for everything else.

Now, Logger::Formatter cannot be customized, but at least it gives you a timestamp. This was good enough for me, and if you're satisfied then stop here.

If you need further customization, you can write your own formatter class. The interface is pretty straightforward; you simply need to implement:

def call(severity, time, progname, msg)
  # your code goes here that builds the entry you want to see in the logfile
end
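For example, a minimal formatter along those lines might look like this (the class name and output format are just an illustration, not something Rails provides):

class TimestampFormatter < Logger::Formatter
  def call(severity, time, progname, msg)
    # produces e.g. "2008-07-10 18:49:23 DEBUG: something happened"
    "#{time.strftime('%Y-%m-%d %H:%M:%S')} #{severity}: #{msg2str(msg)}\n"
  end
end

msg2str is inherited from Logger::Formatter and takes care of turning exceptions and other non-string messages into text. Then point Rails at it the same way as above, in environment.rb:

config.logger.formatter = TimestampFormatter.new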

Published on 07/10/2008 at 06:49PM.

