
Sunday, December 8, 2013

Inverting the git index

I wanted to find a way to invert the git index (swap the staged and unstaged files in the index). There were some answers on stackoverflow but the best I found initially required an interactive rebase to reorder the commits.

Then I found another answer about how to use tags to invert commits. This is ultimately what I'm using now by putting both answers together:
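Putting the two together, here's a hedged sketch of the whole dance, demonstrated in a throwaway repo (it assumes the staged and unstaged change sets don't conflict when cherry-picked):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m base

printf 'one\n' > staged.txt && git add staged.txt   # a staged change
printf 'two\n' > unstaged.txt                       # an unstaged (untracked) change

git commit -q -m was-staged                 # 1. commit the staged set
git add -A && git commit -q -m was-unstaged # 2. commit the unstaged set
git tag tmp                                 # 3. remember the tip
git reset -q --hard HEAD~2                  # 4. drop both commits
git cherry-pick tmp                         # 5. re-apply in swapped order:
git cherry-pick tmp~1                       #    was-unstaged first, then was-staged
git reset -q HEAD~1                         # 6. mixed reset: was-staged -> unstaged
git reset -q --soft HEAD~1                  # 7. soft reset: was-unstaged -> staged
git tag -d tmp
git status --porcelain                      # index is now inverted
```

The mixed reset in step 6 unstages the tip commit's changes, and the soft reset in step 7 leaves the next commit's changes in the index, so the two change sets end up swapped.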

Wednesday, December 4, 2013

Android multi-user sdcard access

I haven't found a documented way to see the sdcard content for different users on a multi-user Android Nexus 7 using adb (suppose I need to see some debug logs). However, I did find this:

cd /storage/sdcard0
ls ..

What is listed is not actually the content of /storage: there are listings for user 0 (the tablet owner) and user 10 (the other user). You can confirm this with ps; you'll see processes prefixed with u0_ and u10_.

ls ../0 lists the tablet owner's files
ls ../10 lists the other user's files

You can also copy files from there.

cp ../10/Download/somefile .

Note: tab autocompletion even works.

Thursday, November 21, 2013

Try/finally to do post-super actions with return

A common pattern for overriding a super method requires keeping the return result of the super method in a temporary variable:

But this temporary variable can be avoided using try/finally:
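The embedded examples are gone, so here's a minimal stand-in (Base, Child, and the cleaned flag are made up) showing both forms:

```java
// Hypothetical example: Child overrides compute() and must run a
// post-super action while returning the super result.
class Base {
    int compute() { return 41; }
}

class Child extends Base {
    boolean cleaned = false;

    // Conventional form: a temporary variable holds the super result.
    int computeWithTemp() {
        int result = super.compute() + 1;
        cleaned = true;              // post-super action
        return result;
    }

    // try/finally form: the finally block runs after the return value
    // has been computed but before the method actually returns.
    @Override
    int compute() {
        try {
            return super.compute() + 1;
        } finally {
            cleaned = true;          // post-super action
        }
    }
}
```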

Tuesday, November 19, 2013

Interesting usage of a Java switch

When implementing some trivial mappings between a POJO and an SQLite cursor, there is the possibility of forgetting to map some columns. In general I prefer enums over static final constants whenever possible (and enums allow me to attach other column metadata that I can use to create the table). But here I've found an interesting use of a switch statement.

Given the enum:

If I were to add a new column, the compiler can thankfully remind me with a warning.

Here's a way to take advantage of that in code that would otherwise be linear without the switch:
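The embedded gist is gone, so here's a hedged reconstruction with a stand-in Column enum (the real enum and mapping code are not shown):

```java
// Switching on the first constant with no breaks makes every case run
// exactly once; an IDE/javac "missing enum case" warning then flags any
// newly added column that has no case here.
enum Column { ID, NAME, EMAIL }

class RowMapper {
    static String mapAll() {
        StringBuilder sb = new StringBuilder();
        switch (Column.values()[0]) {   // any constant works if its case is first
            case ID:    sb.append("[id]");
            case NAME:  sb.append("[name]");
            case EMAIL: sb.append("[email]");
        }
        return sb.toString();
    }
}
```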

You can see that I switched on the enum constant with ordinal 0, but it doesn't really matter as long as the first case matches the switch constant; with no breaks, every case falls through to the next. Since I intend to cover every enum value exactly once, there's no need for a loop over the enum values.

Another advantage of this approach over for (Column e : Column.values()) switch (e) {} is that the values can be processed in any order, rather than strictly by ascending ordinal.

Wednesday, September 25, 2013

Android Remote Sync Content Provider Pattern

If you plan on creating an Android SyncAdapter, you should already have seen the great Google I/O presentation on the subject.

The presentation describes how to design your database to keep track of the sync state for each database row you need to sync. Essentially, the database becomes the mediator between an Activity and the SyncAdapter.

Sync States
Here I've listed the SyncState enum I use to keep track of record state. For example, 'create' is used by the Activity and 'creating' is used by the SyncAdapter.
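The embedded gist isn't shown; based on the states referenced later in this post, the enum presumably looks something like this (the exact names are a guess):

```java
// Hypothetical reconstruction: each Activity-side state has a matching
// in-flight SyncAdapter-side state.
enum SyncState {
    CREATE, CREATING,   // row inserted locally / insert being pushed
    UPDATE, UPDATING,   // row changed locally / change being pushed
    DELETE, DELETING,   // row tombstoned / delete being pushed
    READ, READING       // stub row queried / fetch in progress
}
```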

While the pattern is powerful for all of the reasons described in the video, it does put a burden on the Activity developer to manage the state whenever a record is changed. It also becomes fairly messy in the ContentProvider to determine whether the change was made by the Activity and is outgoing, or was made by the SyncAdapter and is, thus, incoming. So I sought a better way to encapsulate the client-side responsibility of this pattern in some kind of proxy.

  1. Keep the client DRY; all of the state management code is redundant
  2. Align with the Android ContentProvider interface; all of the CRUD methods available on the ContentProvider already describe exactly what the client needs to manage the sync state

Divide and Conquer
Since the Activity only interacts with the database through a ContentProvider, I created my proxy solution using another provider I called RemoteSyncContentProvider. My original ContentProvider is now called LocalContentProvider because it interacts with the database only, and is only used by the RemoteSyncContentProvider and the SyncAdapter. Since I ended up with some fairly common aspects of the two ContentProviders, the common parts are shared in a common base called AbstractContentProvider.

Since I have two ContentProviders, I have two content authorities: Contract.authority and Contract.authorityLocal. The local authority is not tied to any SyncAdapters as it represents the local database.

The first benefit of splitting the responsibilities is that the LocalContentProvider now becomes much simpler; it updates the local database and never needs to syncToNetwork -- it is also agnostic of the fact that the sync states exist at all.

The gist of the RemoteSyncContentProvider is that it implements query, insert, update, and delete using an instance of the LocalContentProvider client that it obtained through getContext ().getContentResolver ().acquireContentProviderClient (Contract.authorityLocal). Since the entire purpose of the RemoteSyncContentProvider is to manage the sync state of each row it touches, it sets the necessary SyncState value on the ContentValues passed to it before proxying to the LocalContentProvider.

For insert, it always sets the 'create' sync state. For update, it sets the 'update' sync state unless the sync state is already 'create'. The RemoteSyncContentProvider can also decide whether any of these changes should force a resync of related tables that should now be fetched because of the new values.

Delete is a little more work since it first has to query the sync state. If the sync state is 'create', it directly delegates to the delete call, otherwise it updates the sync state to 'delete'.
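The insert/update/delete decisions above can be sketched as pure functions (state names follow the post; this is illustrative, not the actual provider code):

```java
// Hypothetical sketch of RemoteSyncContentProvider's state decisions.
class SyncStateRules {
    // insert: a new row is always marked 'create'
    static String onInsert() { return "create"; }

    // update: keep 'create' for rows never pushed, else mark 'update'
    static String onUpdate(String current) {
        return "create".equals(current) ? "create" : "update";
    }

    // delete: rows never pushed can be deleted locally right away;
    // anything else is only marked 'delete' for the SyncAdapter.
    static boolean deleteLocally(String current) {
        return "create".equals(current);
    }
}
```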

To support queries of records where only partial information about them is known (some kind of key identifying columns), the RemoteSyncContentProvider implements query by inserting stub records with the key columns and the read state. To make sure the query results are available in a reasonable amount of time without having to wait for the next sync tickle, the RemoteSyncContentProvider will force the SyncAdapter to resync using ContentResolver.requestSync() with the manual and expedited flags.

In all of the content modifier methods, notifyChange is called with syncToNetwork true since this is the RemoteSyncContentProvider and all of these changes need to initiate the SyncAdapter to process these database changes. Note that the authority tied to the RemoteSyncContentProvider must match the authority of the SyncAdapter so that syncToNetwork triggers the right SyncAdapter.

In the SyncAdapter itself, however, the RemoteSyncContentProvider is the ContentProvider passed in as the ContentProviderClient, so this client must be ignored to prevent sync update loops. Instead, the SyncAdapter uses a separate client for the LocalContentProvider (authorityLocal); make sure to release() it.

As described in the Google I/O presentation, the SyncAdapter is very aware of the sync states it finds through the local authority. It uses these states to determine which records need to be pushed to and pulled from the remote server; it updates these states while it is processing the sync so the sync status is always available in the database and communicated to CursorAdapters automatically. Finally, when the sync is complete and successful, the SyncAdapter clears the sync states to indicate completion and to prevent a resync of the same records.

In the end, a client Activity can use the RemoteSyncContentProvider like any normal content provider, ignorant of the SyncAdapter doing the background work, except for understanding that queries may return stub records that are updated asynchronously when there is network connectivity.

Saturday, August 31, 2013

Android Async Task with Toast Status

Here's an extension of the CatchableAsyncTaskLoader I wrote about previously that adds progress/status updates via a Toast.

Radio Thermostat Auto-Away Script

When I set out to get a wifi-enabled thermostat, here's what I originally wrote about our furnace thermostat before we had cooling installed:

"Looking into wifi enabled HVAC thermostats... nothing strikes me as having the features that I want. They target "easy of programming" and "energy saving" but none of them explicitly aim to monitor more rooms with wireless sensors (though it can be added on) and it is obvious that lowering the temperature setting in winter saves energy, I don't need a green leaf icon for that. Sometimes I want to be able to just circulate the air with the fan with a set timer. And why should I have to program the thermostat at all? If it is on my local network, then it should be able to detect when our phones are here, implying that we are here.

So here's my requirements:
1) A temperature setting when I'm home.
2) A temperature setting when my wife is home (overrides my setting).
3) A temperature setting when we're probably sleeping
4) A temperature setting when nobody is home
5) The ability to set a timer to run the fan for air circulation
6) Extra wireless thermometers in key 'hotspot' rooms
7) A nice to have would be a count of the number of hours the air filter has been in use."

I ended up finding the Radio Thermostat 3M-50 for $99 at Home Depot, which was a great deal. I don't understand why Nest gets so much press, since it doesn't really solve any problem (having a thermostat schedule when our lives don't really have a schedule); it just makes living with the problem, and fiddling with a thermostat all the time, more fun. It is a very expensive piece of entertainment, and distracting the user from the problem doesn't count as an actual solution. The real problem is that a thermostat should be able to sense when nobody is home so it doesn't waste energy, and it should obviously sense when they return.

To actually solve the problem, I use this piece of ruby script that pings our phones and sets away mode when we are not home:
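The embedded script is gone; here's a hedged sketch of the idea (the phone and thermostat IPs, the setback temperature, and the 3M-50's /tstat JSON endpoint are assumptions for illustration):

```ruby
require 'json'
require 'net/http'

PHONES     = ['192.168.1.20', '192.168.1.21'].freeze  # our phones (assumed IPs)
THERMOSTAT = '192.168.1.10'                           # 3M-50 (assumed IP)
AWAY_HEAT  = 60                                       # setback temperature, deg F

# A phone counts as "home" if it answers a single ping.
def home?(ip)
  system("ping -c 1 -W 2 #{ip} > /dev/null 2>&1")
end

def anybody_home?(ips)
  ips.any? { |ip| home?(ip) }
end

# Hold a low heat target while away (assumed /tstat JSON API of the 3M-50).
def set_away(ip)
  Net::HTTP.post(URI("http://#{ip}/tstat"),
                 { 't_heat' => AWAY_HEAT, 'hold' => 1 }.to_json)
end

# One pass, intended to run from cron every few minutes:
# set_away(THERMOSTAT) unless anybody_home?(PHONES)
```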

Sunday, August 25, 2013

Variation on AsyncResult for Android AsyncTaskLoader

I found this example of an AsyncResult for an Android AsyncTaskLoader

However, it suggests a pattern of handling exception types with if/else statements in the "if (exception != null)", likely with instanceof.

That responsibility really belongs to the user, and the user can do this instead:

try {
   if (result.getException () != null) throw result.getException ();
   // non-exception results
} catch ...

However, this also leaves it to the user to catch, and not ignore, the exception. So here's a better alternative that forces the user to catch the (parameterized!) exception to get the value:
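The embedded class is gone; a minimal sketch of the idea (names are made up) looks something like this:

```java
// The value is only reachable through get(), which declares the
// parameterized exception type, so callers cannot silently ignore it.
class AsyncResult<T, E extends Exception> {
    private final T value;
    private final E exception;

    AsyncResult(T value, E exception) {
        this.value = value;
        this.exception = exception;
    }

    // The compiler forces a try/catch (or a throws clause) at the call site.
    T get() throws E {
        if (exception != null) throw exception;
        return value;
    }
}
```

A loader would then produce, say, an AsyncResult<Data, IOException>, and onLoadFinished has no choice but to deal with the IOException.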

Tuesday, June 11, 2013


Often I want to open many files in vim tabs, but opening vim through xargs will not have access to the tty. Here’s a great fix:

I’ve summarized this workaround in this pipevi bash function:
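The embedded function is gone; a hedged reconstruction of the workaround (read the filename list from the pipe first, then hand vim the real terminal back via /dev/tty) might look like:

```shell
pipevi() {
    # Note: the unquoted $(cat) word-splits, so this sketch assumes
    # filenames without whitespace.
    vim -p $(cat) < /dev/tty
}
# usage: grep -rl TODO . | pipevi    # one tab per matching file
```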

Sunday, May 5, 2013


Azilink for Android rocks with Ubuntu. You can charge your phone and easily turn your laptop into a wireless hotspot, no phone rooting required!

I wasn’t able to get a nice NetworkManager config setup but here is the next best thing.

Install the apk on your phone and put your phone in USB debugging mode.

Then sudo apt-get install android-tools-adb openvpn

Download to /etc/openvpn/azilink.conf

Here’s my network interface config:
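The embedded config is gone; a hedged sketch of an /etc/network/interfaces stanza (azilink's default OpenVPN port 41927 forwarded over adb; the interface name is arbitrary):

```
# manual interface that brings the azilink tunnel up and down
iface azilink inet manual
    pre-up adb forward tcp:41927 tcp:41927
    up openvpn --config /etc/openvpn/azilink.conf --daemon
    down killall openvpn
```

Bring it up with ifup azilink after plugging in the phone.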

Thursday, April 4, 2013

ssh to a remote screen session

ssh -t host screen -x will force ssh to create a terminal session (required for screen), which ssh will not normally create when executing a remote command.

Wednesday, March 27, 2013

forever doless

Two very useful functions in bash that I use when having to run tests frequently are ‘forever’ and ‘doless’. Forever is just shorthand for an infinite loop.
Doless pipes the output of a command to less while still showing the output as it progresses to completion.

Run inside the forever loop, doless has the advantage of restarting the command when ‘q’ is pressed, which is great for a screen window dedicated to test runs. And compared to screen itself, the output can be scrolled without needing to enter copy mode.
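The embedded functions are gone; here's one hedged way to get the described behavior (doless captures output with tee so it displays live, then pages it with less):

```shell
forever() {
    while true; do "$@"; done   # rerun the command each time it exits
}

doless() {
    "$@" 2>&1 | tee "/tmp/doless.$$"   # show output live while capturing it
    less "/tmp/doless.$$"              # then page through it; q returns
    rm -f "/tmp/doless.$$"
}
# usage: forever doless make test     # press q in less to rerun the tests
```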

Thursday, March 7, 2013

Taking over the caps lock key real estate

Years ago, I decided that I wanted to remap some keys on my keyboard, specifically because Esc is a common key used in Vim.

Here’s the content of my .Xmodmap file:
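The embed is gone; a reconstruction matching the description below (keycodes 9/23/66 are Esc/Tab/Caps Lock on a typical evdev X layout):

```
! Caps Lock acts as Tab, Tab acts as Esc, and the Esc key gets Caps Lock
remove Lock = Caps_Lock
keycode 66 = Tab
keycode 23 = Escape
keycode 9 = Caps_Lock
add Lock = Caps_Lock
```

Load it with xmodmap ~/.Xmodmap.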

Essentially, the Caps Lock key becomes Tab because it has the same reach as Enter, and Tab becomes Esc because Esc is needed frequently. The Caps Lock feature, which is never used, is mapped to the Esc key.

Tuesday, February 26, 2013

Optimizing disk access with flashcache and ssd

Flashcache is the easiest to set up of the different caching layers available for Linux (bcache, dm-cache) because it doesn’t require patching the kernel and can compile as a module; better yet, it comes with DKMS support and builds smoothly on Ubuntu 12.10.

I’ve replaced my Lenovo cdrom with this Ultrabay filled with a reasonably priced 64 GB solid-state disk.

I also setup laptop-mode-tools to be able to spin down the hard disk when on battery and it isn’t being accessed (like when all of the reads are cached). I’m happy to say that I’m getting 91% read cache hits today.

This cable was not easy to find, but it will let me use the old cdrom externally, and with other systems.

Monday, February 25, 2013

Migrating 2xRAID->LVM+RAID via 2xRAID+LVM+RAID

I started with

/ = md0 (sda1, sdb1) and

/home = md1 (sda2, sdb2).

Then I broke the raid with

mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm --manage /dev/md1 --fail /dev/sdb2

I then validated that it was possible to use mdadm --zero-superblock /dev/sdb1 and then e2fsck and mount /dev/sdb1 as a single device directly (with a new enough version of md metadata).

On to LVM…

I repartitioned /dev/sdb into one large sdb1 at the 1MB boundary.

mdadm --create /dev/md3 -l1 -n2 /dev/sdb1 missing

pvcreate /dev/md3

vgcreate raid1 /dev/md3

Now I wanted to migrate the existing md0 and md1 to lv root and lv home. Using mdadm -D I could get the KB size of each md device and feed that into lvcreate.

lvcreate -nroot -L 10490304K raid1

lvcreate -nhome -L 1938828544K raid1

Note the message rounding to the nearest extent size (a few MB). This was able to work because I had removed /dev/sdb3 which was originally for swap space.

Here’s where the interesting part came in…

I now added the lv volumes to md0 and md1.

mdadm --manage /dev/md0 --add /dev/raid1/root

mdadm --manage /dev/md1 --add /dev/raid1/home

Wait for resync…

Now to remove the sda devices from md0 and md1, repartition sda like sdb, and add sda1 to md3.

mdadm --manage /dev/md0 --fail /dev/sda1

mdadm --manage /dev/md1 --fail /dev/sda2

sfdisk --dump /dev/sdb | sfdisk /dev/sda

mdadm --manage /dev/md3 --add /dev/sda1

Wait for resync…

Update mdadm.conf by removing the md0 and md1 lines and appending the md/3 line output by this command:

mdadm --examine --scan

Also comment out the DEVICE partitions line so that md0 and md1 don’t come up on the next reboot.

Edit /etc/fstab to change the / and /home mount points to point to the new lvs.

update-initramfs -u


Reboot with fingers crossed. It should come up fine with the lvs mounted and only md/3.

Now clear the md0 and md1 superblocks

mdadm --zero-superblock /dev/raid1/root

mdadm --zero-superblock /dev/raid1/home

and put back the DEVICE partitions line in mdadm.conf.

Reboot once more.

Edit: grub was confused by these changes and required some repair with the live CD before rebooting.

Now the two lv filesystems can be grown to fit the new lv size (rounded up) with resize2fs.

Sunday, February 24, 2013

LVM anti-pattern

I’ve just spent my first weekend with LVM, and I’ve been researching different best practices. One practice I’ve seen suggested, and which originally made sense to me, is that in some ideal cases (like non-booting secondary drives) one doesn’t need to use partitioning at all and should use the entire raw device as an LVM physical volume. But in deciding how I want to organize my volumes between various devices, having to shuffle some things around, and spending way too much time in GParted, I’ve learned that spanning an entire raw disk with no partitions, or using only one partition, is an LVM anti-pattern.

The initial draw for using LVM is the ability to logically organize volumes, which is very flexible because it allows online resizing. But focusing only on the logical organization of the data is short-sighted because eventually there may come a time where the physical data needs to be migrated to an external disk, or some of the unused space needs to be repurposed for a different volume group.

It is true that pvresize is available, but that just means one needs to go back down the path of using GParted again to make room for another pv, which will be very difficult to do in an online way.

On the other hand, the flexibility that seems to be missed in most of the documentation and discussions that I’ve come across is that LVM’s flexibility runs both ways. It is directly because LVM is an abstraction over the physical volumes that one should not care how many pv’s make up a volume group, but rather one should try to have as many pv’s as possible because that will provide the most future-proofing in the physical layer, which is the hardest to reconfigure after the fact.

Therefore, I recommend dividing a hard disk into at least 4 primary partitions, possibly of various sizes, so that one of them will be suited to a future need should you have to break off a piece of a volume group and repurpose it. Maybe even better is to use extended partitions, or GPT if possible, to get more pv segments, since today’s disk drives are so large. Having a couple of 50GB pieces that can be broken off of the vg will be very useful.

Case study:

Suppose you created a single partition for your LVM but, without knowing better, didn’t use the right partition alignment to get the best performance out of your disk drive. The ways of correcting this are limited. You could try GParted, but you’d need the bleeding-edge version to get LVM support, and even then, would you want to wait for all of the data on a 500GB+ partition to be moved to the right just to free up a couple of kilobytes at the start of the drive? Even so, you’d have to completely trust GParted to do this right and/or back up the entire drive to external storage. Alternatively (and both options assume you actually have the extra storage available), you could migrate the data off to external storage, wipe the drive, repartition, and copy all of the data back. (BTW, this could be done entirely online by using pvmove.)

Instead, if you had created many smaller pv’s on the drive, it would be trivial to pvmove the data off of the first partition and then pvremove it so it could be deleted and recreated with the right alignment. (This would probably need to be completed in stages for each pv, but I would trust LVM to manage this better than GParted.)

Friday, February 8, 2013

Git private branch pattern

I’ve frequently had to keep local changes that are not committed to our source control (svn, actually), for better or for worse. I’ve been using git-svn and update-index --assume-unchanged to hide these local changes from git status. One downside is that it is possible to accidentally commit those changes by explicitly listing a directory to commit. The bigger downside is that it can make switching between branches a bit painful, because the assumed-unchanged files need to be scrubbed before git will switch branches. Furthermore, the private changes I’ve made are frequently lost, and I often have to re-edit those files after rebasing to master.

The private branch pattern I’m using now is much more convenient. I’ve started creating a ‘private’ branch off of master with all of the local changes that I want to make. Then I can automate rebasing those changes in and out of my working branch as necessary. This also helps make sure that my local changes aren’t lost, and it creates a nice way to catalog them in a changelog.

It works like this:
git checkout master

git checkout -b private

… make private changes and commit them

git checkout work

git rebase private
Now, to merge changes to master, first run this to remove the private changes:
git checkout work

git rebase --onto master private

git checkout master

git merge work

git push origin … or git svn dcommit … etc

I recommend automating this with some aliases in ~/.gitconfig
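For example (the alias names are made up), a ~/.gitconfig fragment:

```
[alias]
    # pull the private commits into the current (work) branch
    private-in = rebase private
    # strip them back out before merging to master
    private-out = rebase --onto master private
```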

Also, it is a good idea to make commit messages on the private branch with the prefix "private:" so when you see these commits in your work branch, you can easily identify them.

Thursday, February 7, 2013

YouTube pair your laptop with your TV

In Chrome, open Developer Tools (found in the Tools menu), click the settings gear in the bottom right corner. Select the Overrides tab and change the User Agent to Android 4.0.2 Galaxy Nexus.

Back in the tab, visit the YouTube TV page and follow the regular pairing process with your TV.

Edit: This post was written before Chromecast and the updates to the YouTube player.

Saturday, January 19, 2013

Zero-width non-breaking space

In LibreOffice on Ubuntu, you can insert the word joiner character by pressing Ctrl+Shift+u, typing 2060, and pressing Enter.

Under Format->AutoCorrect…->AutoCorrect Options you can add an automatic replacement of C++ with C, U+2060, +, U+2060, + (a word joiner between each character), so the term never breaks across lines.