Over the past weeks and months, our team has made recommendations and taken decisions, with the DGs (departments), on the basis of tests with around 150 participants. Is 150 enough? Wouldn't we need more in a project of this size?

Between October and January, we ran four rounds of tests to see whether participants could find the right path for around 30 tasks. For each task, they had to work out where the content would sit, choosing from 15 categories or 'classes'.

For each of the test rounds, we invited 1,000 people from the pool of volunteers that we recruited during our major survey in May last year. Between 300 and 350 people completed the test - a response rate of 30-35%, which is fairly good.

With 30 task instructions or scenarios, and with the best-practice rule of asking each participant no more than 15 questions, the 300 participants gave us an average of around 150-160 results per task question (each participant answered 15 of the 30 tasks, so 300 × 15 ÷ 30 ≈ 150). Research shows that tests of this kind can give reliable results with 40-60 participants at a 95% confidence level. And indeed, in the stability analysis of the results, we saw the results stabilise from as few as 25 participants.
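For those who like to check the numbers, here is a minimal sketch in Python (the function names and the worst-case p = 0.5 assumption are ours, for illustration; the margin-of-error formula is the standard normal approximation for a proportion):

```python
import math

def answers_per_task(participants, tasks_total=30, tasks_per_participant=15):
    """Each participant answers 15 of the 30 tasks, so on average
    the answers are spread evenly across all tasks."""
    return participants * tasks_per_participant / tasks_total

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for an observed
    success rate, using the normal approximation (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

print(answers_per_task(300))          # 150.0 answers per task
print(f"{margin_of_error(50):.1%}")   # ~13.9% with 50 participants
print(f"{margin_of_error(150):.1%}")  # ~8.0% with 150 participants
```

In other words, at around 150 answers per task the margin of error on a success rate is roughly ±8 percentage points - precise enough to tell a clearly working path from a failing one.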

The graph shows the number of participants on the bottom axis and the percentage of success (finding the right path to complete a task) or failure (taking a wrong path) on the left axis. Up to around 25 participants, the results were volatile: a single participant going down a particular path could visibly shift the curve. Beyond 25 participants, the results levelled out and stayed consistent all the way up to 150 participants.
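The effect is easy to reproduce with a toy simulation - a minimal sketch, assuming a hypothetical task with a fixed 70% true success rate (the rate, the seed and the checkpoints are invented for illustration):

```python
import random

random.seed(1)

# Hypothetical task: 70% of participants find the right path.
# Watch how the observed (cumulative) success rate settles down
# as more participants complete the task.
TRUE_SUCCESS_RATE = 0.70
successes = 0
for n in range(1, 151):
    successes += random.random() < TRUE_SUCCESS_RATE
    if n in (5, 10, 25, 50, 100, 150):
        print(f"after {n:3d} participants: {successes / n:.0%} success")
```

Early on, a single success or failure swings the running percentage by several points; by participant 150 it shifts it by less than one point, which is exactly the levelling-out visible in the graph.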

Photo by Arne Kuilman / Flickr Creative Commons
