I’ve been working on this app spec for weeks at work. In an effort to improve on what I’d accomplished, I reached out to my friend Peter Becan, one of the smartest people I know, and asked him to teach me how to do it right. Peter’s been doing this sort of thing for a lot longer than I have, and has a particular knack for it. Learning to correctly prepare an application specification is harder than learning to program, IMHO.

As my spec writing progressed, I’d email Peter PDF output from Scrivener whenever I accumulated enough new content worthy of a dispatch. Marked, which I use to preview the MultiMarkdown markup in my Scrivener document, generates the formatted PDFs. I realized that, as time went on, keeping Peter in the loop would require a different method. To keep him informed without emailing him a new PDF every so often, I needed to use the web to post the formatted output.


Scrivener supports outputting HTML generated by the MultiMarkdown processor as a Compile For target. There are instructions on how to use MultiMarkdown with Scrivener here; however, the instructions appear out of date. I’m using Scrivener 2.2, and some of the preferences mentioned in the instructions have either moved or no longer exist. Luckily, by setting options using MultiMarkdown metadata and making a simple change in the compile settings, I obtained the necessary output.

For posterity’s sake: what I did not find was MultiMarkdown Settings… under the File menu.

Two things are necessary to produce the desired output for inclusion in a WordPress page. First, as mentioned in the instructions, I had to enable the exporting of titles for both Documents and Groups within Scrivener. You do this by checking the box under Title in the Formatting options of the Compile settings sheet.

Format Settings

Second, I opted to use a Meta-Data file at the very top of my Scrivener doc to coerce the MultiMarkdown processor to produce the necessary output. The metadata fields that I use are:

Title:  Contractor Spec   
Format: snippet  

The key is the Format field. It instructs the MultiMarkdown processor to create only the HTML for the given markup, not an entire XHTML page. Clearly, if I am including the output in a WordPress page, only the HTML associated with the MultiMarkdown markup is necessary; the cruft associated with a well-formed XHTML page (HEAD, BODY, etc.) would be in the way. With all the correct metadata and settings in place, I use Compile… with a Compile For setting of MultiMarkdown -> HTML and save the result in its own subfolder within my source tree.


Git. Love it or hate it, it’s the linchpin in the operation. My Git repo resides on the same server as my WordPress installation. That arrangement got me thinking about how to get the HTML snippet residing within the repo into a place where I could serve just that content and not the entire source tree. I started Googling “update website with git,” and sure enough I found what I was looking for. After sifting through several top results, I found that this was the best answer.

I have an addendum to those instructions. For the remote path, I used a file:// path pointing to the path of the real repo on the server. Found that here.
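In miniature, the file:// idea looks like this. It’s a sketch with hypothetical paths (`real.git` and `local` stand in for my actual repo locations): because both repos live on the same box, the second clone can use a local file:// URL rather than ssh.

```shell
#!/bin/sh
# Both repos live on the same server, so the "remote" for the web-side
# clone can be a local file:// URL. All paths here are hypothetical.
set -e
tmp=$(mktemp -d)

# The real (bare) repo on the server.
git init -q --bare "$tmp/real.git"

# The server-side clone, pointed at the real repo via file://.
git clone -q "file://$tmp/real.git" "$tmp/local" 2>/dev/null

git -C "$tmp/local" remote get-url origin   # prints the file:// URL it cloned from
```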

The key to copying the HTML to a place I can include it from is using a post-receive hook in Git. Very simply put:

The post-receive hook runs after the entire process is completed and can be used to update other services or notify users. (taken from Pro Git)
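To make that concrete, here is a self-contained sketch of the pattern under stated assumptions: all paths are made up (`server.git`, `author`, `deploy`), and the deploy clone stands in for the directory the web server reads. The hook body itself is the only part that matters; the rest just simulates the push.

```shell
#!/bin/sh
# Sketch of a post-receive deployment (all paths hypothetical): a bare
# "server" repo gets a hook that refreshes a separate deploy clone --
# the directory the web server would read -- on every push.
set -e
tmp=$(mktemp -d)

git init -q --bare "$tmp/server.git"
git -C "$tmp/server.git" symbolic-ref HEAD refs/heads/master

# An author clone, standing in for my local machine.
git clone -q "file://$tmp/server.git" "$tmp/author" 2>/dev/null
cd "$tmp/author"
git symbolic-ref HEAD refs/heads/master   # pin the branch name for the demo
git config user.email author@example.com
git config user.name "Author"
echo "<p>v1</p>" > spec.html
git add spec.html
git commit -qm "first draft"
git push -q origin master

# The deploy clone on the server (outside the web root in my setup).
git clone -q "file://$tmp/server.git" "$tmp/deploy"

# The hook itself: hooks run with GIT_DIR set, so unset it before pulling.
cat > "$tmp/server.git/hooks/post-receive" <<EOF
#!/bin/sh
unset GIT_DIR
cd "$tmp/deploy" && git pull -q origin master
EOF
chmod +x "$tmp/server.git/hooks/post-receive"

# Push an update; the hook refreshes the deploy clone automatically.
echo "<p>v2</p>" > spec.html
git commit -qam "revise spec"
git push -q origin master
```

After the second push, the deploy clone contains the v2 content without anyone touching it by hand.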

However, if you follow the steps set out in the instructions, you’ll wind up with your entire repo in the web directory. While that may work in most cases, it was not ideal for what I was trying to do. My next pass at Google had me looking for ways to check out only a subset of the entire repo. The key to that is something called a sparse checkout. I used the steps outlined here to check out only the folder (and its contents) that contained the HTML snippet. One caveat about the sparse checkout instructions: you will need to include a ‘*’ at the end of the path. Otherwise you will receive this from Git:

error: Sparse checkout leaves no entry on working directory
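Here is the sparse checkout sequence end to end, as a self-contained sketch. The folder names (`docs/html`, `code`) are hypothetical stand-ins for my real source tree; the three lines after the clone are the actual recipe.

```shell
#!/bin/sh
# Sparse checkout sketch (all names hypothetical): the source repo has a
# docs/html folder plus other content; the server clone checks out only
# docs/html.
set -e
tmp=$(mktemp -d)

# Build a throwaway source repo with two top-level folders.
git init -q "$tmp/src"
cd "$tmp/src"
git symbolic-ref HEAD refs/heads/master
git config user.email author@example.com
git config user.name "Author"
mkdir -p docs/html code
echo "<p>spec</p>" > docs/html/spec.html
echo "int x;" > code/main.c
git add .
git commit -qm "initial import"

# Clone, then restrict the working directory to just docs/html.
git clone -q "file://$tmp/src" "$tmp/web"
cd "$tmp/web"
git config core.sparsecheckout true

# The trailing * matters; without it Git reports:
#   error: Sparse checkout leaves no entry on working directory
echo "docs/html/*" > .git/info/sparse-checkout

# Re-read the tree so the working directory honors the sparse rules.
git read-tree -m -u HEAD

ls .   # only docs remains; code/ is gone from the working directory
```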

For my “local” repo on the server, I picked a spot outside the root of the WordPress installation to check the files out to.


Last step in the process: how do I insert a snippet of HTML that resides on disk into a WordPress page? Create your own page template and use a custom field. In the end, this was rather easy, once I learned how to do it. This page at the WordPress Codex explains how to create the custom template and where to upload it on your server. Scroll down a bit until you get to the Using Custom Fields section. For this, I’m using a custom field called docs_path. Here is my custom template, named docs.php:

<?php
/*
Template Name: docs
*/
if ( is_page() ) {
    // Pull the path to the HTML snippet from the page's custom field.
    $docs_path = get_post_meta( $posts[0]->ID, 'docs_path', true );
}
?>
        <div id="primary" class="span8">
            <div id="content" role="main">
                <?php include( $docs_path ); ?>
            </div><!-- #content -->
        </div><!-- #primary -->


This allows me to specify the full path to the HTML snippet on disk that I want included in the WordPress page.

Custom Fields

Make sure to set the template for the Page to your custom template, like this (I forgot to do so the first time I saved my new Page):


Final Output

Because the application I’m writing the spec for is proprietary, I can’t share the real fruits of my labor with you. However, what I did was create another page containing a snippet of sample.html from the MultiMarkdown source at GitHub. My sample page is here. P.S. Yes, I am aware that the sample page has a broken image link.

I had a need to import some CAD drawings into my Visio document. The CAD drawings were provided to me as PDF documents. Visio has no native way to insert a PDF into a drawing. SnagIt to the rescue. Besides being an excellent app for making screenshots, it installs itself as a printer. Well, all I did was print my PDF to the SnagIt printer, saved the image as a TIFF, and then inserted the TIFF into my Visio drawing.

The resolution was quite good and I achieved exactly what I wanted. Gotta love it when shit works out!


I’ve been toying around with SQL Server CE replication. For whatever reason, my code was failing with the following exception when I called Synchronize():

Failure to connect to sql server with provided connection information. sql server does not exist, access is denied because the iis user is not a valid user on the sql server, or the password is incorrect.

As it turns out, if you use the following form of the SqlCeReplication ctor (as observed using Reflector):

public SqlCeReplication(string internetUrl, string internetLogin, string internetPassword, string publisher, string publisherDatabase, string publication, string subscriber, string subscriberConnectionString)

the PublisherSecurityMode is set to SecurityType.NTAuthentication. Otherwise, if you use the parameterless ctor, PublisherSecurityMode is left to its default, which is SecurityType.DBAuthentication. This assignment is NOT documented.

I am working on a SQL Server 2005 Reporting Services (SSRS) report that has differing row colors based on a value in each data row.  The color value is defined in the database.  When I initially created the report, each row had a variable background color but the foreground color was black.  The first time I ran the report, my dark blue background didn’t contrast well with my black foreground.  I quickly realized that I needed a way to vary the foreground color programmatically based on the background color.  After first discussing things with Nate, here is the expression I came up with for the Color property of the table row:

=IIF(
    (
        ((CInt(Fields!Status_Color.Value) And &HFF) * 299) +
        ((CInt(Fields!Status_Color.Value) >> 8 And &HFF) * 587) +
        ((CInt(Fields!Status_Color.Value) >> 16) * 114)
    ) / 1000 < 125,
    "White",
    "Black")

Let me explain where this all comes from.  First off, the color that is stored in the database is used by a VB6 program.  VB6 stores colors as BGR and .NET stores colors as RGB (well, technically aRGB).  The first step is to break the value from the database down into its constituent parts (red, green, and blue) using bitshift operations I learned from Keith Peters, and then apply the contrast formula I found on Colin Lieberman‘s website. If the background is a dark color, we use white; for a light background, black.  This appears to be working like a charm.
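The arithmetic is easy to sanity-check outside the report. Here is the same formula as a shell sketch; the packed color value is made up for the example, but the byte order and weights match the expression above.

```shell
#!/bin/sh
# Sanity check of the contrast formula (the sample color value is made up).
# VB6 packs colors as &H00BBGGRR, so the LOW byte is red.
color=$(( 0x203040 ))          # blue=0x20, green=0x30, red=0x40

r=$(( color & 0xFF ))
g=$(( (color >> 8) & 0xFF ))
b=$(( (color >> 16) & 0xFF ))

# Perceived brightness on a 0-255 scale: (r*299 + g*587 + b*114) / 1000
lum=$(( (r * 299 + g * 587 + b * 114) / 1000 ))

# Dark background -> white text; light background -> black text.
if [ "$lum" -lt 125 ]; then fg=White; else fg=Black; fi
echo "luminance=$lum foreground=$fg"   # prints: luminance=50 foreground=White
```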

Last night we purchased an Apple TV (as well as a Mac Mini, 20″ Cinema Display, wired keyboard, and wireless mouse). Today I had the unfortunate happenstance of making this little gem of a unit find, and talk to, iTunes running on my Mac Pro. I’ll save you the gory details, but my switch is a Linksys SRW224G4P and, in the end, I had to disable IGMP Snooping. Otherwise the multicast traffic wasn’t flowing around correctly. This fix came about as “well, let’s just see what happens if we turn the helper off.” Well, sure as shit, it worked. Yippee for me!

I wanted to report that I succeeded in using iSCSI (on an Openfiler server) with Time Machine via a gigabit link with jumbo frames (MTU of 9000) enabled. The secret to my success? I used the iSCSI initiator from http://www.small-tree.com/. It appears as if the iSCSI initiator from globalSAN is just a plain broke down piece of shit. Well, you do get what you pay for. Hurray for me and my buddy Steve over at Small Tree!

Ok, I first have to say how blown away I am that I was able to get this to work. Here’s the scenario. I was unsuccessful in having VMware Fusion run my Vista x64 Boot Camp partition (probably for this reason). After many fits and starts, the plan was to create a Vista Complete PC Restore image on an external drive, then create a VMware Fusion Vista x64 VM and restore the backup to the VM. The first thing I learned is that Complete PC Restore must restore the image to a drive that is as large as (or larger than) the original drive. This is apparently because the backup is a true disk image and not a file backup. This posed a small waste of time, because I had created a pre-allocated 250GB vmdk that now had to be scrapped for a 500GB dynamic volume. Also, during the restore, I received an error. I attempted the restore a second time without checking the box to format the drive, and this time it took.

After a night-long restore of the image, I came into my office in the morning to find the VM repeatedly rebooting due to a Vista blue screen, with Vista set to automatically reboot after a stop. I left the VM in a suspended state and went to work. Later in the evening, I set out to fix the problem. The problem was a 7B stop, which means that a hardware driver for the mass storage unit was not loading at boot. Well, sure: as far as Vista is concerned, I now have a new IDE controller.

Ah, but this is where it gets slick. Vista is now equipped with a revised recovery console, or WinRE. The long and short of it is that I was able to edit the registry from the recovery console. Yes, phat, I know! The docs on how to do this at MS are missing a step (they fail to mention that you need to edit within the “offline” key), so I found better docs elsewhere. Combined with the information from MS as to which registry keys to edit, I was in business. As for that last page from MS, all I did was change the Start value on the two drivers from 4 to 0.
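For the curious, the offline edit boils down to something like the following from the WinRE command prompt. This is a sketch from memory, not a transcript: the hive path assumes a default install, and `<driver>` is a placeholder for the storage-driver service names, which the MS article identifies.

```
rem Load the offline SYSTEM hive under a temporary key named "offline"
reg load HKLM\offline C:\Windows\System32\config\SYSTEM

rem Change the driver's Start value inside the offline key (0 = boot start)
reg add HKLM\offline\ControlSet001\Services\<driver> /v Start /t REG_DWORD /d 0 /f

rem Unload the hive so the change is written back to disk
reg unload HKLM\offline
```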


I run an XP virtual machine because our ERP app does not run on Vista Business (or any version of Vista). I also use this environment for development work in and around our ERP application. So, today I was installing SQL Server 2005 Express Edition with Advanced Services. Since this includes Reporting Services, I needed IIS installed. This is where I ran into trouble: Add/Remove Windows Components would not launch.

When I attempted to launch Add/Remove Windows Components, I received the following error:

Setup library setupqry.dll could not be loaded, or function IndexSrv could not be found.

Contact your system administrator. The specific error code is 0x7e.

The first step in diagnosing this problem was to run Procmon. I saw nothing unusual, because all access to setupqry.dll (c:\Windows\System32\Setupqry.dll) was successful. The next step was to use Dependency Walker. Using Procmon, I was able to see the process that reads setupqry.dll; its command line is:

"C:\WINDOWS\system32\sysocmgr.exe" /y /i:C:\WINDOWS\system32\sysoc.inf

I used Dependency Walker to profile sysocmgr.exe, and sure enough, as part of the log output was:

LoadLibraryExW("C:\WINDOWS\system32\Setup\setupqry.dll", 0x00000000, LOAD_WITH_ALTERED_SEARCH_PATH) returned NULL. Error: %1 is not a valid Win32 application (193).

It looks as if the file was corrupt. I went to another XP workstation here in the office and copied Setupqry.DLL from that machine. When I went to copy the file to my machine, I immediately noticed that, in Windows Explorer, my existing file lacked a description like the other DLLs. I copied over the existing file, and Add/Remove Windows Components opened up like a charm. Crisis averted; back to DEFCON 2.

As a follow-up to my prior post about my new Teletype Bluetooth GPS receiver, I found that having the two devices was way too difficult to manage. That, coupled with my need for the latest and greatest BlackBerry, means I now have a T-Mobile 8800 with a built-in GPS receiver. The software stays the same; however, there is no longer a need for the brick (granted, a small brick) on the dash.