Usability Research aims to create a better Radio Free Asia website

Every time a person has a great experience with a website, a web app, a gadget, or a service, it’s because a creative team made crucial decisions about both design and implementation—decisions based on data about how people interact with a computer interface.

During August, September, and October of this year, ODDI and Radio Free Asia collaborated on an in-depth user experience review of the RFA desktop and mobile websites.

Radio Free Asia broadcasts domestic news and information of specific interest to its listeners in China, Tibet, North Korea, Vietnam, Cambodia, Laos, and Burma. All broadcasts are solely in local languages and dialects.

Remote testing sets the stage.

ODDI used the CrazyEgg platform to get an overview of user behavior during the month of August. Fifteen pages were tracked for a total of 150,000 impressions. This allowed us to see where people were clicking, and where they were not. We also got an idea of how many people scrolled down the pages and where most of them stopped. Finally, the tracking showed us where users were coming from in the first place, and which elements drew the most clicks.
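For readers curious how click and scroll heatmaps of this kind are produced, the sketch below illustrates, in TypeScript, the sort of raw browser events such a tool collects. This is a minimal illustration under stated assumptions, not CrazyEgg's actual tracking code: the reportEvent helper and the /track endpoint are hypothetical.

```typescript
// Hypothetical collection endpoint; a real heatmap service provides its own.
function reportEvent(event: Record<string, unknown>): void {
  navigator.sendBeacon("/track", JSON.stringify(event));
}

// Record how far down the page the visitor ever scrolled, as a percentage,
// which is the raw input for a scroll-depth report.
let maxScrollDepth = 0;
window.addEventListener("scroll", () => {
  const scrolled = window.scrollY + window.innerHeight;
  const depth = Math.round((scrolled / document.documentElement.scrollHeight) * 100);
  maxScrollDepth = Math.max(maxScrollDepth, depth);
});
window.addEventListener("beforeunload", () => {
  reportEvent({ type: "scrollDepth", value: maxScrollDepth, page: location.pathname });
});

// Record click positions relative to page size, so clicks from visitors with
// different screen widths can be overlaid on a single click heatmap.
document.addEventListener("click", (e: MouseEvent) => {
  reportEvent({
    type: "click",
    x: e.pageX / document.documentElement.scrollWidth,
    y: e.pageY / document.documentElement.scrollHeight,
    page: location.pathname,
  });
});
```

Aggregated over thousands of page views, events like these are what produce the "where do people stop scrolling" and "who clicks on what" pictures described above.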

Remote testing gave us an overview of users' interactions with the pages, and showed where some follow-up with in-person testing might be useful.

From there, ODDI:

DEVELOPED A TEST PLAN

We sat down with the RFA team and agreed on the test objectives, the questions used in the test, and the characteristics of the people who would be trying out the design.

CHOSE A TESTING ENVIRONMENT

Radio Free Europe provided an excellent partitioned room. Video and audio were delivered from the testing room to the observers' room via network connections.

FOUND AND SELECTED PARTICIPANTS

The best place to perform these kinds of tests would have been in the target countries, but since travel and recruitment would have been prohibitively expensive, we sought out English as a Second Language students at local universities.

Most of our participants did not know anything about the site prior to the test, and we are grateful for their fresh and valuable insights.

We recruited six participants to test the Chinese website and six participants to test the Chinese mobile site. They were screened to be:

  • Native Chinese speakers
  • Very active news seekers when in China – especially those who visited blocked sites
  • Particularly interested in sensitive Chinese domestic news
  • All under 30

We also recruited six participants to test the Vietnamese website and six participants to test the Vietnamese mobile site. They were screened to be:

  • Native Vietnamese speakers
  • Very active news seekers when in Vietnam – especially those who visited blocked sites
  • Particularly interested in sensitive Vietnamese domestic news
  • All under 30

The final group, while adhering to the screening parameters above, was an interesting mix of backgrounds, including students majoring in electrical engineering, environmental science, computer science, applied math, and information technology, and they displayed a range of feelings and reactions to the website.

PREPARED TEST MATERIALS

The test materials included specific background and warm-up questions, prompts for follow-up questions, the tasks themselves, closing and debriefing questions for each participant, and an evaluation survey.

CONDUCTED THE SESSIONS

Each session was videotaped with one camera attached to the phone to record the user’s taps and gestures while a second one was focused on the user’s facial expressions. Observers in a separate room watched the live video feed and took notes.

We used software called Morae for in-house UX testing on tablets and mobile phones. Morae allows us to capture video from more than one camera angle and record scoring as we go. Having video of a participant's hand movements allowed us to do a more accurate and thorough analysis of how they reacted during certain tasks. And since the sessions were in person, we could ask follow-up questions immediately after a task to find out why a participant might have been confused.

A team from Radio Free Asia also observed the tests from a separate room and participated by asking questions through Morae's chat window at the end of each session.

The tests consisted of a detailed hour-long interview in English with a subject using his or her phone. After a short introduction, the user was asked to perform 9 tasks on the RFA mobile site. These questions and tasks were videotaped and timed (through Morae) to assess the ease with which the user could interact with the mobile site.

Participants were told they would be videotaped and asked to sign a photo release.

After an initial introduction and discussion of web news, each participant was read a set of instructions. The tasks were given to each participant one at a time on separate sheets of paper. He or she was asked to read each task out loud before attempting to interact with the website. Mobile users were asked to bring their own phones and to use them in the test.

The tests were administered in English, but each participant engaged with the website in their native language.

A native speaker of Mandarin or Vietnamese was on hand if the participant had trouble putting his or her views into English. About half the participants took advantage of this option. Some particularly taciturn participants were debriefed in their native language to ensure the test team was getting all of the results and not suffering from a language gap.

Participants were not coached by the moderator. When something did not go well, they were asked to assess the website and offer advice on how the user experience could be improved.

Occasionally at the end of a task the moderator revealed what should have happened, and asked the participant how the website could be improved.

DEBRIEFED WITH PARTICIPANTS AND OBSERVERS

At the end of each session, the moderator asked: “How’d that go?” Also, we invited observers from RFA to pass follow-up questions to the moderator or to ask questions themselves. We also prepared an evaluation survey for participants to fill out.

ANALYZED DATA AND WROTE UP FINDINGS

When we looked at those observations after the tests, the weight of evidence helped us examine why particular things happened. From that examination, we developed theories about the causes of frustrations and problems. With these theories in hand, RFA team members could later use their expertise to determine how to fix design problems.

OUR FINDINGS AND SUGGESTIONS

The quality of design is an indicator of credibility. 

Our overall suggestion is to refine and redesign the site. Three users mentioned that RFA’s website looked like a blog or Facebook page, and they doubted its trustworthiness for that reason. Our tests show that elements such as layout, consistency, typography, color and style all affect how users perceive a website.

In addition, the RFA design is three years old and needs to be updated. Among the changes that research suggested:

  • Refine typography and visual hierarchy to be easier to read, designing for mobile first, since it is the most challenging device to design for.
  • Add timestamps to news articles.
  • Create a shorter page; heatmaps show that 50% of users view only 25% of the current page.
  • Enlarge icons and text so they have sufficient touch area on smartphone screens (one way to audit this is sketched after this list).
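On the last point, a quick automated pass can complement in-person testing. The sketch below, in TypeScript, flags tappable elements smaller than a commonly cited minimum touch-target size of roughly 44×44 CSS pixels; the selector list and the threshold are assumptions for illustration, not part of the RFA test plan.

```typescript
// Minimum touch-target size in CSS pixels; ~44px is a commonly cited guideline.
const MIN_TARGET_PX = 44;

// Walk the page and collect visible, tappable elements whose rendered size
// falls below the minimum in either dimension.
function findSmallTouchTargets(): HTMLElement[] {
  const candidates = document.querySelectorAll<HTMLElement>("a, button, input, [onclick]");
  const tooSmall: HTMLElement[] = [];
  candidates.forEach((el) => {
    const rect = el.getBoundingClientRect();
    const isVisible = rect.width > 0 && rect.height > 0;
    if (isVisible && (rect.width < MIN_TARGET_PX || rect.height < MIN_TARGET_PX)) {
      tooSmall.push(el);
    }
  });
  return tooSmall;
}

// Example usage during a design review: log offenders to the console.
findSmallTouchTargets().forEach((el) =>
  console.warn(`Touch target below ${MIN_TARGET_PX}px:`, el.tagName, el.className)
);
```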

Xi Rotmil is a blogger and researcher for the BBG's Office of Digital and Design Innovation.
