Mapping Mental Models with Treejack

Jun 24

Labels are so integral to information architecture (IA) that it’s sometimes hard to tell whether a usability issue stems from a poor naming convention or a problematic site structure. Testing users’ mental models to differentiate between the two has long been challenging, since most tools test for either labeling or IA, but rarely for both. With a little creativity, the Design For Use team used Treejack, an IA tool developed by OptimalWorkshop, to solve this challenge.


The Design For Use team recently faced the quandary of fine-tuning the labels of a client’s website while also needing to test the site’s information hierarchy. By using Treejack, Design For Use was able to determine the best label and its optimal hierarchical placement for 94% of the previously uncategorized or outlying website content.

While an earlier card sorting exercise produced compelling results that enabled the renaming of the most problematic labels for our client, several cards fell into the hard-to-analyze 40-60% agreement range. Labels that tested very poorly suggested obvious solutions, but the middle-range results offered no apparent reason for their lack of success, nor any indication as to why they did not fail outright. Additionally, while the taxonomy needed only minor improvements, the site’s more specific IA problems also demanded attention. The trouble came in determining how to resolve the lingering label issues while making progress on the other aspects of the redesign.

Enter Treejack, an OptimalWorkshop tool marketed as an IA application but easily adapted to test users’ naming preferences as well.

How Treejack Works

A typical Treejack survey requires the researcher to upload or enter an existing site structure, or “tree,” into the online application. Any number of levels can be tested: just the global navigation structure, or a more in-depth three- or four-level tree.

The researcher then creates a series of “tasks” that prompt users to indicate where they would expect to find specific information within the tree structure. The user is first presented with the task, followed by the top level of the site structure. As the user clicks further into the tree, additional “branches” are revealed until the user selects the branch where they believe the task’s answer can be found.
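The mechanics described above can be sketched in a few lines of code. This is a minimal, hypothetical model of a tree test, not Treejack’s actual implementation; the tree, labels, and task below are invented for illustration.

```python
# A hypothetical tree test: a nested-dict site structure, a task with a
# "correct" destination path, and a check of a participant's final selection.
# All names here are illustrative, not the client's real labels.

tree = {
    "Benefits": {
        "Health Plans": {},
        "Flexible Spending Accounts": {},
    },
    "Customer Support": {
        "Contact Us": {},
    },
    "About Us": {},
}

task = {
    "prompt": "Where would you expect to find information about "
              "qualified medical expenses?",
    "correct": ("Benefits", "Flexible Spending Accounts"),
}

def is_success(selected_path, task):
    """A task succeeds when the participant's final selection
    matches the task's correct destination."""
    return tuple(selected_path) == task["correct"]

print(is_success(["Benefits", "Flexible Spending Accounts"], task))  # True
print(is_success(["About Us"], task))  # False
```

Note that only labels and hierarchy appear in the model, mirroring the survey itself: there is nothing visual for the participant to lean on.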

The most compelling aspect of Treejack is the fact that users are only shown the labels and the structure of the website. Akin to a sitemap, the survey relies solely on the name of each link or menu item and its placement within the hierarchy of the site. No visual cues are used whatsoever, making the user entirely dependent on the labels and the site structure.



Taxonomy

The client’s remaining label issues had been identified in the earlier card sort, which produced inconclusive results for several items. Some cards had a success rate of only 40-60%, with no indication as to why they performed poorly. In creating the new Treejack survey, Design For Use pulled the top three user-generated category names from the earlier card sorting exercise for each problematic content area.

For example, there was low user agreement on the label for information about flexible spending accounts. The three most commonly used labels were “Flexible Spending Accounts,” “Out of Pocket Spending,” and another company-specific moniker. By allowing all three labels to appear in the survey (in the same hierarchical location), the most popular label was easily detected.
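Tallying which candidate label participants select is a simple frequency count. A minimal sketch, assuming invented selection counts (the real study’s numbers are not given here):

```python
# Hypothetical tally of which candidate label participants chose when
# all three appeared at the same hierarchical location. The counts are
# invented for illustration only.
from collections import Counter

selections = (
    ["Flexible Spending Accounts"] * 9
    + ["Out of Pocket Spending"] * 3
)

winner, votes = Counter(selections).most_common(1)[0]
print(winner, votes)
```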

Perhaps the most challenging part of creating the survey is avoiding potential bias in the wording of each task. The task’s phrasing should use different terminology than the labels, so as not to give any label an advantage. The task for the question about flexible spending accounts was phrased as: “Where would you expect to find information about qualified medical expenses?” None of the language from the three candidate labels appeared in the task, ensuring that the neutral wording accessed users’ mental models without bias. The user could then navigate the tree structure and select the label that most closely matched the task.

Consequently, although Treejack is not specifically intended to aid in determining taxonomy, it can be designed to do so effectively.

Information Architecture

Additionally, on the IA side of the survey, Design For Use was able to determine accurate placements for several outlying content pages whose correct home had not been revealed in previous usability testing. In developing the Treejack survey, each content item in question was placed on several tree branches (or menu categories), and the task asked users to find the information associated with that content. By placing each piece of information on as many plausible branches as possible, Design For Use could conclude where in the tree users were most likely to look for it.

Another helpful feature of Treejack is that the final results include each user’s path through the tree structure, not just the final answer selected. If a user initially clicked one branch and backtracked several times before settling on a final answer, the entire path is revealed.

For example, the company’s contact information was located on only one page: “Customer Support.” Although many users eventually selected the correct path, they did not go straight to “Customer Support”; rather, they often looked under “About Us” first. Given the high traffic to “About Us,” the results indicated that the contact information should be available in both areas, a finding that would not have been evident without Treejack’s user-path results.
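The kind of path analysis described above can be summarized with a first-click tally and a count of direct successes. A minimal sketch, using hypothetical paths rather than the study’s raw data:

```python
# Hypothetical path analysis: each path is the ordered list of nodes a
# participant visited before giving a final answer. First clicks reveal
# where users *expected* the content; direct paths reveal how many went
# straight to it. Paths and labels are invented for illustration.
from collections import Counter

paths = [
    ["About Us", "Customer Support", "Contact Us"],
    ["Customer Support", "Contact Us"],
    ["About Us", "Customer Support", "Contact Us"],
]

correct = ["Customer Support", "Contact Us"]

first_clicks = Counter(path[0] for path in paths)
direct = sum(1 for path in paths if path == correct)

print(first_clicks.most_common())  # "About Us" dominates first clicks
print(f"{direct}/{len(paths)} went straight to the answer")
```

A first-click distribution like this one, where “About Us” dominates even though the content lives under “Customer Support,” is exactly the signal that suggested placing the contact information in both areas.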


Careful phrasing of each task and care in planning the information structure of the tree yielded successful and actionable results. A total of 62 users were tested on 16 tasks: four focused on solving taxonomy issues, while 12 dealt with specific IA problems.

Of the four tasks crafted to solve label issues, all indicated a definitive label that users preferred, often with statistical significance. Similarly, of the 12 tasks devoted to determining the correct IA of the site, nine conclusively revealed users’ most likely path. Of the three tasks that did not show a clear path, two turned out to reflect label issues rather than IA problems, and only one provided no compelling answer as to its correct placement.

Overall, of the 16 items tested for label agreement or hierarchy placement, only one remained unsolved, yielding a 94% success rate for resolving taxonomy and IA problems with Treejack.
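The arithmetic behind the headline figure is straightforward; 15 of the 16 tasks were resolved:

```python
# 15 of 16 tasks resolved: all 4 label tasks, plus 11 of 12 IA tasks
# (nine conclusive paths and two reclassified as label issues).
solved, total = 15, 16
rate = f"{solved / total:.0%}"
print(rate)  # 94%
```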

The successful outcomes resulted from careful up-front planning to ensure the tasks were written in an unbiased manner and that the tree structure represented all plausible configurations of the site’s hierarchy. Providing several candidate labels for each unnamed content item and placing the uncategorized items on many branches helped reveal the correct name and path for most of the outlying content.


Design For Use was able to serve the needs of the client by investigating and resolving troublesome aspects of the website redesign. By conducting remote user testing and carefully analyzing the results, Design For Use solved the foundational problems that inhibited users from having successful interactions with the site.

The Treejack survey is a part of the Analysis component of the larger ADePT (Analysis, Design, Prototype, Testing) process that Design For Use applies for improving user experience of a product. Specifically, the Analysis phase helps differentiate between what users say they want and what they actually want through a series of interviews, focus groups, web logs, heuristic evaluations and card sorts. The Analysis stage defines the right problem, allowing appropriate solutions to emerge in the following phases.

Posted by: David Richard
