Summary: In preparation for upcoming remote usability tests, Design For Use has researched and evaluated the perks and drawbacks of remote testing versus traditional lab testing.
Remote usability testing is gaining popularity and prevalence as more companies and Web developers recognize the benefits of user research and look for more cost-effective ways of reaching users. Although test moderators are separated geographically and sometimes temporally from the users, the potential advantages of remote usability testing may outweigh those of lab-based testing. Specifically, its lower cost and the ability to recruit a more representative and diverse user base make remote testing appealing to usability experts and clients alike. Whether remote testing is universally as effective at identifying usability barriers as face-to-face lab testing remains to be seen.
Why Go Remote?
Remote usability testing offers several benefits over traditional lab testing. For one, it is more cost-effective, since minimal facilities, if any, are needed. Typical lab rental can cost upward of $2,000 a day, not to mention the added pressure of conducting each study within a tight time frame.
Having participants use their own computers gives testers the opportunity to view the website or application on varying systems, browsers, screen resolutions, and connection speeds, whereas in the lab environment the user is restricted to whatever system the test administrator offers.
Disadvantages of remote testing can be slightly less cut-and-dried. Remote usability testing may be perceived as more intrusive than using a lab, as participants are typically asked to share their personal computer screens. Also, participating in a test in a natural environment carries the potential for distractions, such as kids or coworkers interrupting the test. However, some studies suggest that if participants do not mind sharing their screens, the data will be more valid because the participants are comfortable in and familiar with their natural settings, and this will result in a more realistic test of the website or application.
One study also points out that distractions offer a more true-to-life environment. For example, in testing a consumer site such as Amazon, it would be particularly helpful to test in the user’s natural environment. Suppose a participant is looking to purchase a recently published romance novel. She spends a few minutes browsing her options and viewing the details of some of the books, but then she hears her baby crying in the next room. In her haste to attend to her child, she closes the laptop and leaves. When she comes back a few minutes later and navigates back to Amazon, she is pleased to see that the site shows her what she has recently viewed so she doesn’t have to begin her search over.
In a 2004 study, Brush found that 25 percent of participants who were tested both in a lab setting and remotely felt more comfortable in the lab, but the remaining 75 percent felt equally comfortable in both environments. About half preferred remote testing overall, and the other half judged remote and lab testing to be about equal. Judging by these results, a strong case can be made for remote testing over lab testing, even when costs and scheduling are equal. Of those participants who expressed a strong preference between remote and lab testing, an overwhelming majority preferred remote testing, indicating that the natural environment of home or office is preferable to a lab.
Additionally, remote testing places less of a burden on participants. Rather than traveling to a lab facility and testing in a new and possibly intimidating environment, they can complete the study in the comfort of their own home or office.
Other potential drawbacks have to do with the technical side of testing. For one, there will likely be a longer setup time, as participants will have varying levels of experience with the tools used for testing, such as WebEx or Skype. Participants will also be working with different systems, and while this is valuable insofar as it offers a more comprehensive look at usability hurdles, test facilitators need to spend time preparing to work with a variety of browsers and operating systems, and it can be difficult to troubleshoot IT problems remotely and restart the test.
Other technical problems test facilitators should be prepared for include confidentiality issues, corporate firewalls, and interrupted Internet connections. Internet speeds and security have the potential to interrupt the test while it’s in progress, but the other technical issues can typically be resolved before the test begins. A clear and digestible instruction guide sent to participants ahead of time can reduce these issues, but test moderators should be prepared for any contingency. Additionally, if technical issues arise with the application or site being tested (for example, Flash not being supported on all systems), this can provide valuable information to the testers. It is also helpful to have participants complete a sample task representative of the test tasks before the official session begins, so administrators can identify potential issues.
Ultimately, depending on your project’s scope and timeline, remote testing could prove a valuable—and potentially better—alternative to lab testing.
Do you have experience making the switch from lab testing to remote testing? Tell us your thoughts!
Coming Soon: A comprehensive evaluation of synchronous remote testing vs. asynchronous remote testing!
Gough, D., & Phillips, H. (2003, June 9). Remote online usability testing: Why, how, and when to use it. Retrieved from http://www.boxesandarrows.com/view/remote_online_usability_testing_why_how_and_when_to_use_it.
Seffah, A., & Habieb-Mammar, H. (2009). Usability engineering laboratories: Limitations and challenges toward a unifying tools/practices environment. Behaviour & Information Technology, 28(3), 281-291.
Tullis, T., Fleischman, S., McNulty, M., Cianchette, C., & Bergel, M. (2002). An empirical comparison of lab and remote usability testing of Web sites. Proceedings of the Usability Professionals’ Association Conference, Orlando, FL.