Aug 04 2008


Tag: warren @ 8:30 pm

Goddess Poetry

my next one true goddess

powers this raw storm,

worship her gift today.

time rusts the sky.

Jun 22 2008

MP3 Odyssey: moving my music library to a new PC

Tag: warren @ 10:58 am

I have been using Windows Media Player (WMP) for a few years. It did the job and I was able to create a cool playlist based on my song ratings and how recently I had heard the songs. My playlist had a few important features:

  • All unrated songs were included until I rated them
  • I heard all 5 star songs at least once every 6 months
  • I heard most 4 star songs at least once a year and definitely every 2 years
  • I could pull up a random selection of music according to my rules with little effort

The table below summarises my rules:

Star Rating   Include after Last Played   Include this many songs
1             > 5 Years                   1
2             > 2 Years                   2
3             > 1 Year                    14
4             > 30 Days                   17
5             > 30 Days                   21
Unrated       Now                         All
4             > 1 Year                    20
5             > 6 Months                  All
4             > 2 Years                   All

So, the takeaway message is that I have put significant effort into my music library.
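For the record, the selection logic behind those rules can be sketched in code. This is a minimal, hypothetical sketch – the Song class and the qualifies method are stand-ins for the player's playlist criteria, and the per-rating song counts and the layered 4/5 star rules are omitted for brevity:

```java
import java.time.LocalDate;
import java.time.Period;
import java.util.List;
import java.util.stream.Collectors;

public class PlaylistRules {

    public static class Song {
        final Integer rating;        // 1-5, or null for unrated
        final LocalDate lastPlayed;  // null if never played

        public Song(Integer rating, LocalDate lastPlayed) {
            this.rating = rating;
            this.lastPlayed = lastPlayed;
        }
    }

    /** A song qualifies if it is unrated, or its rating's waiting period has elapsed. */
    public static boolean qualifies(Song s, LocalDate today) {
        if (s.rating == null) {
            return true;                       // unrated: include now
        }
        Period wait;
        if (s.rating <= 1) {
            wait = Period.ofYears(5);
        } else if (s.rating == 2) {
            wait = Period.ofYears(2);
        } else if (s.rating == 3) {
            wait = Period.ofYears(1);
        } else {
            wait = Period.ofDays(30);          // 4 and 5 stars
        }
        return s.lastPlayed == null || !s.lastPlayed.plus(wait).isAfter(today);
    }

    /** Pull the qualifying songs out of the library. */
    public static List<Song> select(List<Song> library, LocalDate today) {
        return library.stream()
                .filter(s -> qualifies(s, today))
                .collect(Collectors.toList());
    }
}
```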

Just bought a new laptop that I wanted to make my main machine and here is what the WMP help suggested I do with my library:

Windows Media Player library FAQ

How do I move my library from one computer to another computer?
The library is a database that includes links to the digital media files on your computer. Among other reasons, you can’t move the library from one computer to another because the links in the database would no longer be correct. If you want to replicate your current library on another computer, you need to copy your digital media files to that computer and then add those files to the new library. For more information about adding content to your library, see Add items to the library.

Not acceptable.

Google quickly revealed Dale Preston’s Windows Media Player Metadata Backup. After following the instructions I had a relatively quick transfer of the library…unfortunately it did not transfer Date Last Played, which is obviously required for the playlist that I like.

Searching further it seems that the Date Last Played corresponds to a field called UserLastPlayedTime that cannot be updated.

Time for a different approach.

I had taken occasional looks at MediaMonkey over the years, but there was no compelling reason to swap from WMP. Loading it up on the old machine and performing a rescan updated everything from the WMP database. Now all I had to do was move it to a new machine.

It took a little heartache but I ended up with the entire database on my new laptop. What I did was:

  1. On the old Windows XP PC, relocate (via MediaMonkey auto-organise) all mp3 folders/files to c:\music
  2. Copy all mp3 folders/files to c:\music on the new Vista PC.
  3. Install MediaMonkey on the new Vista PC.
  4. Copy the MediaMonkey ini and DB files from the old PC to the new PC as per the locations in this information
  5. Open MediaMonkey on the new PC and notice that all entries are grey and that the path for songs is, for example, “[Appletree]\Music\AC-DC\Who Made Who\01 – AC-DC – Who Made Who.mp3”. Changed the C: drive properties to have a matching “Appletree” label.
  6. Attempt to run “Locate moved/missing tracks” as per these instructions
  7. Kill MediaMonkey via Windows Task Manager because “Locate moved/missing tracks” did nothing other than hang the program (note that a new version was released while I was doing this; perhaps the new version does not behave like this)
  8. Also noted that “Add/Rescan Tracks to the library” has a similar effect to the “Locate moved/missing tracks” menu option – ie it hangs the program and has to be killed via Task Manager.
  9. Attempted to update the drive ID as per these instructions (which pointed to the script here)
  10. This script did not initially work, but it did work after commenting out the lines as specified in a message on this page
  11. Now my collection became active. The MediaMonkey errors that required killing via the Windows Task Manager also stopped occurring.

Take a breath!

Now it should be a simple matter of recreating the playlist in MediaMonkey, right? It was not obvious how to create a playlist that would perform the same function as my original WMP playlist, until I found that auto playlists can be combined using the Advanced Autoplaylist feature in MediaMonkey.

What I did was create an Autoplaylist for each of the criteria that are listed above and then create a ‘master’ Autoplaylist that combined the lot of them by adding a search criteria with: Property=”Playlist”, Condition=”Is”, and then selecting the Autoplaylists to include in the Values area.

Hooray, the new laptop is now set up with my music database.

May 29 2008

JAOO Brisbane: Goldilocks and the Concurrent Processes

Tag: warren @ 8:44 pm

Today I attended the first day of JAOO Brisbane. A pleasant diversion from the everyday, a chance to catch up with ghosts of workplaces past, and an opportunity to see presentations by some IT thought leaders.

Erik Meijer started the day wanting to make a case for “fundamentalist functional programming”. The IT community has reached a crisis point of distributed systems and multi-core computers that is not solved by present day programming languages. A (the?) primary issue to be solved is eliminating hidden side-effects. Making side-effects explicit is an enabler for implicit concurrency, rather than, say, the explicit world of threads in Java. Hopefully that is a reasonable summary – the talk was impressive enough just for the journey that we went on.

Finishing the day were Erik Dörnenburg and Martin Fowler searching for design that is just right. There was some discussion of essential and accidental complexity. Essential complexity is required by the problem you are facing; accidental complexity is required by the approach you take. Solutions with too much simplicity are not tasty and neither are solutions with too much complexity. Getting your design just right is primarily a craft. It may be helped by things like Domain Driven Design, or by removing or delaying irreversible architecture choices.

Apr 22 2008

Choose Your Poison: Does business logic belong in the database?

Tag: warren @ 11:41 pm

There are debates about where to locate the business logic in a software application. Some say that the only place is in some middle language; others argue that the database is the logical home. Once upon a time I guess no-one gave it much thought: you built a piece of software and it ran on a server, and anyone who wanted to use it logged into the server. Or you sent the software off to a user to install on their computer.

Then came two tier applications with part of the software installed on a PC and part residing on a server somewhere. Where did the business logic go? Probably with the piece on the PC, causing heartache whenever it had to be changed. So, the next intelligent move was to centralise the business logic on the server – what was on the server? Probably just the database.

N-tiered applications brought layers separating the storage of data, access to the data, business domain, presentation logic and user interface. Why? Because it means you can change one of the components without having to affect all the other pieces of the software puzzle.

So, there are choices as to where you try to place most of the business logic. In practical terms this equates to a choice between your database and some other language that extracts data from the database.

Some reasons to choose the middle language would be:

  • Your software is meant to work with as many databases as possible.
  • You have no idea about how to use database features like stored procedures.
  • You believe that implementing the business logic in a middle language provides a better separation of concerns or a looser coupling.
  • You believe that the database is only there for data persistence.
  • You believe that the middle language is a better technical choice for manipulation of data. That is, Data Access Objects and possibly the database schema can be generated, and Object-Relational Mapping (ORM) frameworks can be fully used.
  • The database is only one of many data sources for your software.
  • It is important to create a solution as quickly as possible at the possible expense of later maintainability.
  • There is a higher probability that your database of initial choice will change to something else.

Some reasons to choose the database would be:

  • Your software is meant to work with as many middle languages/interfaces as possible.
  • You believe in having the business logic as close to the data as possible.
  • You believe in making use of database features beyond simple data storage (particularly if you paid a lot of money for it).
  • You believe that the database is a better technical choice for manipulation of data. That is, stored procedures are used to abstract the implementation of data storage, complex queries can be used and are tuned by database experts, unit testing of database code is performed.
  • There is a higher probability that your middle language of initial choice will change to something else.

I believe that the object-relational impedance mismatch should be acknowledged and managed at the DAO interface regardless of use of database features – rather than subverting either the object model or the relational model. Except when the solution indicates otherwise.

My opinion is that the object model and the relational model should both be first-class citizens in your software. Having a stored procedure layer in the database supports the loose coupling of the models. Recognising the importance of both aspects of your software supports:

  • better use of the native capabilities of your tool choices
  • a better response to change.
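As a sketch of that idea – hypothetical names throughout – the DAO interface is the seam where the mismatch is managed: callers deal only in domain objects, and the implementation behind the interface is free to use stored procedures, an ORM, or plain SQL. An in-memory map stands in for the relational side here:

```java
import java.util.Map;
import java.util.Optional;

public class DaoSketch {

    /** Domain object: the "object model" side. */
    public static class Customer {
        final long id;
        final String name;

        public Customer(long id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    /** The DAO interface is the seam: neither model leaks through it. */
    public interface CustomerDao {
        Optional<Customer> findById(long id);
    }

    /**
     * One possible implementation. A real version might call a stored
     * procedure via JDBC, or use an ORM; here an in-memory map simulates
     * a CUSTOMER table so the sketch is self-contained.
     */
    public static class InMemoryCustomerDao implements CustomerDao {
        private final Map<Long, String> rows;

        public InMemoryCustomerDao(Map<Long, String> rows) {
            this.rows = rows;
        }

        @Override
        public Optional<Customer> findById(long id) {
            return Optional.ofNullable(rows.get(id))
                    .map(name -> new Customer(id, name));
        }
    }
}
```

Because the caller depends only on CustomerDao, swapping a stored-procedure-backed implementation for the in-memory one (or vice versa) requires no change to the object model.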

There are some passionate views about this topic, a few starters to follow up are:

Object/Relational Mapping is the Vietnam of Computer Science. It represents a quagmire which starts well, gets more complicated as time passes, and before too long entraps its users in a commitment that has no clear demarcation point, no clear win conditions, and no clear exit strategy.

Ted Neward: The Vietnam of Computer Science

For modern databases and real world usage scenarios, I believe a Stored Procedure architecture has serious downsides and little practical benefit. Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.

Coding Horror: Who Needs Stored Procedures, Anyways?

Martin Fowler: Domain Logic and SQL

Mar 24 2008


Tag: warren @ 7:09 pm


Those frantic whispers scream.

I only see their shadow in you,

And I urge you to hear,

The still music in me.

We are apart together.

Oct 10 2007

The Documentation Reflex

Tag: warren @ 3:40 pm

We can find key information. Key information is whatever anyone needs to do their work.

So what? What can we do to achieve this blissful informed state? What are you going to do about it?

People have many opportunities to document. We make the choice to document, or not, all the time…

  • Making a decision, evaluating options
  • Discussing requirements with a user
  • Today’s task list
  • While programming
  • Asking how to use an application’s feature

When do we choose to document? Usually when we are forced to, or when the ‘doing’ of a task is assisted by a document…

  • Recording pros and cons for various options
  • Draw a user interface layout on a whiteboard
  • Write your task list on a post-it note
  • Pseudo code to explore design options
  • Email a question to application support

When do we want to keep a document? There is no easy answer – a document proves useful only if it is ever used. Beyond the base documentation requirements of our workplace, we are empowered to capture information based on our judgement of its importance, balancing cost and value.

What does it take to move documentation, that was functional during a task, to something suitable for keeping? Effort.

  • Recording options in a Word document.
  • Having a whiteboard that will email its user interface drawing to you (so you can attach it to a change request).
  • Change from post-it note to task management software.
  • Write pseudo code in your development tool as code comments.
  • Post a question to an application forum.

Ideally we want to reduce the effort overhead to zero so that we have a ‘documentation reflex’ – documentation that happens through work habits. Ask yourself if what you are doing is worth documenting. If it is, are you capturing it in a way that minimises your effort to document it?

Encouraging updates

The effort to change documents should also be minimal. Tools can help or hinder:

  • If I have the document open for edit, does it lock out other potential updaters?
  • Is a history of changes recorded – can you revert to a version without all those mistakes you just made?

In most cases authorship should be shared so that everyone feels able to make a change. There is a cost to update ease when adding ceremony to a document (eg prescribing a certain format, or requiring a set of signatures). It is often the case that these things are added to stop a document from being updated – what is your change control policy? Only secure things from change with a reason, because change is not inherently bad.

Specific requirements

In addition to simple opportunities, we may have requirements for specific documentation. These requirements are usually more formal, and are made evident through business or process requirements…

  • Production Change Request
  • Application training

These become project deliverables in their own right – you just have to sit down and put effort into creating them.

Created vs evolving documents

When capturing documents, it is important to note that some are created, and others evolve. Some documentation will record a moment in time (like a diary), for example: options analysis, a decision, or meeting minutes. Whereas others will need constant change, for example: application training, comments within the source code, or a process checklist.

There is a grey area for what is a ‘created’ or ‘evolving’ document. For example the user interface design captured via a whiteboard. If the user requests a change, should the already documented user interface be revisited, or just the change request be captured? The choice in this case comes back to what you have identified as important to capture. Do you accept the initial design and change requests as diary-like documents, with the built system reflecting the current user interface? Or do you need to have an up-to-date model of the user interface?

Making the choice to have an evolving document is more expensive. A guideline can be to make a ‘created’ document the default output, then only evolve that document as a considered decision.


  • Capture key information through your work practices.
  • Minimise the effort and ceremony to capture information.
  • Create documents by default, evolve them by choice.

Aug 27 2007

Introvert’s Birthday

Tag: warren @ 10:12 pm

A still gift

Incubates a thinking symphony

Sweet blackness whispering a dream

Introvert’s Birthday

Aug 22 2007

How much is enough documentation?

Tag: warren @ 10:08 pm

What really constitutes a helpful set of documents for software development? That is possibly a poor choice of words as there is not even a need to restrict ourselves to formal ‘paper’ documents for communication.

Have you ever heard:

  1. "You cannot trust the documentation because it is out of date."
  2. "I’ll need to ask a developer to look at the source code and get back to you on the definition for that business rule."

These are statements that probably indicate dysfunctional documentation, but not without cause – there are a number of challenges:

  • As a project iterates to maturity, things change. Keeping an up to date, extensive set of documents directly reduces a team’s ability to deliver software.
  • Putting little or no effort into documentation creates a long term problem for the people who would like to know key information.
  • What documentation is valuable?
  • How should documentation be captured?
  • How do you find a piece of documentation?

Looking at those challenges we can probably state the communication vision:

We can find key information.

So what is key information?

Key information is whatever anyone needs to do their work. Quite a broad scope, and probably the point of most disagreement in various documentation strategy discussions. How do we agree on a balancing point for the documentation content? It is too expensive to capture everything that anyone might ever need. So, how much will be captured so that it does not unduly hinder what we are doing now…or next year?

A helpful question to ask is: Who is the audience and what is their likely need?

OK, what tactics can be employed to capture key information?

  • Construct a team culture that requires a number of people to know it.
  • Video – a demo, someone talking about it, …
  • Record (audio) someone talking about it.
  • Photograph it (works well for those whiteboards that don’t print).
  • Write it down.

Fine, we have our key information and it is valuable, how do I find it?

  • I already know it.
  • I know where it is.
  • I can ask someone who knows the answer.
  • I can ask someone who knows where the answer is.
  • We have conventions on where to store information artifacts and I can browse them manually.
  • We employ technology to search for it (eg a search engine like Google).

An example:

A software system is developed and maintained.

Identified audiences and their needs:

  • Software developers: high level overview, business requirements/priorities, system architecture, domain model/processes, business rules, source code, key design decisions
  • Business owners: high level overview, domain model/processes, business rules
  • Software users: training, system help

Communication Strategy:

To facilitate communication, an intranet site is created which includes automatically produced documents (eg automated test output, automatic code documentation), a wiki is set up to store virtual artifacts, and a forum is set up for the system users to ask questions and provide general feedback. A search engine indexes the whole lot, so the team can find information.

  1. High level overview: video the business owners presenting their hopes for the system, write a vision for the project (possibly already part of a business case document)
  2. Business requirements/priorities: record simple statements of key requirements, obtain regular access to key business people, provide constant feedback to key business people, provide an issue tracking system that captures issues and comments
  3. System architecture: produce architectural diagrams with technical and business viewpoints
  4. Domain Model/Process: produce domain model and process diagram(s)
  5. Business Rules: write up business rules
  6. Source Code: write source code with comments that may be extracted to a documentation system (for example javadocs).
  7. Key Design Decisions: write Technical Memos focusing on the decision
  8. Training/System help: none required because the system is so intuitive ;)

Note the focus is on valuable communication, rather than extensive documentation.

Nov 18 2006

ResultSetMapper Project – Map a Java ResultSet to an Object

Tag: warren @ 7:19 pm

After discussing a JDBC ResultSet mapper recently, I’ve put up an initial solution at SourceForge.

Oct 13 2006

JDBC ResultSet Mapper

Tag: warren @ 2:43 pm

In building a better bean processor I discussed some ideas for reflection mapping of a ResultSet to an Object.

This time around we can look at a solution design.


Here is a UML diagram of the solution (click on the image for a bigger, clearer picture).

Note that the diagram contains a couple of example extensions for the DBUtils BeanProcessor and Spring RowCallbackHandler. One of the requirements for the ResultSetMapper is that it can easily be plugged into these bigger frameworks.

ResultSetMapper UML

Basic Mapping

Data mapping will occur against ResultSet columns and Object fields.

By default an Object field will require the @MapToData annotation, though this requirement may be turned on/off with ResultSetMapper.setAnnotationRequired(true/false). Why have it off? There is then no need to add an empty @MapToData annotation to every field. So, why have it on? The source code then contains a documentary @MapToData annotation to indicate that some magic is being performed on the field.

The mapping will use a NameMatcher to compare the field name with the column name (unless a MapToData.columnAlias is specified). A default NameMatcher – NameConverter – will use simple camelCase to under_score matching.
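A minimal sketch of what such a default NameConverter might do – the class and method names here are illustrative, not the project's actual API:

```java
public class NameConverterSketch {

    /** Convert a camelCase field name to an under_score column name. */
    public static String toUnderScore(String fieldName) {
        StringBuilder sb = new StringBuilder();
        for (char c : fieldName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                sb.append('_').append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    /** Match ignoring case, so JELLY_TYPE matches jellyType. */
    public static boolean matches(String fieldName, String columnName) {
        return toUnderScore(fieldName).equalsIgnoreCase(columnName);
    }
}
```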

Data mapping will occur, in order of priority:

  1. Via a specified setter procedure – MapToData.setter
  2. Via the JavaBean standard default setter
  3. Directly into the object field
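The priority order above could be implemented with reflection along these lines – a sketch with illustrative names, not the actual ResultSetMapper code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class SetterPrioritySketch {

    /** Example target with a JavaBean setter, so priority 2 can be exercised. */
    public static class Target {
        public String name;

        public void setName(String name) {
            this.name = "set:" + name;
        }
    }

    /**
     * Try a named setter first, then the JavaBean default setter,
     * then direct field access.
     */
    public static void map(Object target, String field, Object value, String namedSetter)
            throws Exception {
        Class<?> cls = target.getClass();
        if (namedSetter != null) {                       // 1. specified setter
            Method m = cls.getMethod(namedSetter, value.getClass());
            m.invoke(target, value);
            return;
        }
        String beanSetter = "set"
                + Character.toUpperCase(field.charAt(0)) + field.substring(1);
        try {                                            // 2. JavaBean default setter
            Method m = cls.getMethod(beanSetter, value.getClass());
            m.invoke(target, value);
        } catch (NoSuchMethodException e) {              // 3. direct field access
            Field f = cls.getDeclaredField(field);
            f.setAccessible(true);
            f.set(target, value);
        }
    }
}
```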


Inheritance

Inheritance requires that many target classes may be supplied to the ResultSetMapper. However, this means that there must be some way of choosing which target to create. To allow selection of target objects, an ObjectValidator will be used – every type of target class will be constructed from the ResultSet and passed to the ObjectValidator. (The default validator always returns true.)

The advantage of this approach is that the programmer only deals in their domain object, but it does require an overhead of creating extra objects for validation.
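A sketch of the validator-based selection, with illustrative names only:

```java
import java.util.List;
import java.util.Optional;

public class ValidatorSketch {

    /** One-method interface so it can be supplied as a lambda. */
    public interface ObjectValidator {
        boolean isValid(Object candidate);   // the default validator would always return true
    }

    /**
     * Every candidate (one per target class, already populated from the
     * ResultSet) is offered to the validator; the first accepted one wins.
     */
    public static Optional<Object> choose(List<Object> candidates, ObjectValidator validator) {
        for (Object c : candidates) {
            if (validator.isValid(c)) {
                return Optional.of(c);
            }
        }
        return Optional.empty();
    }
}
```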


Aggregate Objects

Solving this problem requires answers to a couple of technical questions:

  • How to specify that an Object is an aggregate target (eg MyDataObject), rather than ‘Just Another Object’ (eg String, MyTransientObject)? Answer: MapToData.isAggregateTarget = true/false
  • How to have multiple fields of the same class with different business meanings (eg MyDataObject start; MyDataObject end;)? Answer: MapToData.columnPrefix or MapToData.columnSuffix
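The columnPrefix answer amounts to stripping the configured prefix from a column name before matching it against the aggregate object's fields. A minimal sketch with illustrative names:

```java
public class PrefixSketch {

    /**
     * Return the field name a prefixed column maps to,
     * or null if the prefix doesn't match.
     */
    public static String targetField(String columnName, String columnPrefix) {
        if (!columnName.startsWith(columnPrefix)) {
            return null;
        }
        return columnName.substring(columnPrefix.length());
    }
}
```

So with columnPrefix="flavour_", the column flavour_name would map to the name field of the flavour JellyAttribute.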


So the example domain from the previous post would now be:

public abstract class Jelly {
  Long jellyId;
  String jellyType;
}

public class JellyAttribute {
  String name;
  String value;
}

public class JellyCompany {
  String companyName;
  String address;
}

public class JellyBean extends Jelly {
  String targetMarket;
  BigDecimal weight;
  @MapToData (columnPrefix="flavour_", isAggregateTarget = true)
  JellyAttribute flavour;
  @MapToData (columnPrefix="colour_", isAggregateTarget = true)
  JellyAttribute colour;
  @MapToData (isAggregateTarget = true)
  JellyCompany company;
}

public class JellyCup extends Jelly {
  String productName;
  @MapToData ( columnAliases = { "cup_volume" } )
  BigDecimal volume;
  @MapToData (isAggregateTarget = true)
  JellyAttribute shape;
}

public class JellyCupAndSpoon extends JellyCup {
  String spoonMaterial;
  BigDecimal spoonLength;
}
