
In planning the initial stages of a user-centered design project, I was recently asked to provide a summary of the relative advantages and disadvantages of various data capture methods. As others might find this list useful, I thought I'd post it here.

Surveys

Advantages:
  • Fast and cheap, cast wide net quickly
  • Easy analysis (except for open-ended questions)
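The "easy analysis" point holds because closed-ended questions reduce to simple tallies. A minimal sketch in Python, using hypothetical responses to a single question:

```python
from collections import Counter

# Hypothetical closed-ended responses to "How easy was the site to use?"
responses = ["Easy", "Easy", "Neutral", "Hard", "Easy", "Neutral"]

# Count each answer and report its share of the total
tally = Counter(responses)
total = len(responses)
for answer, count in tally.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```

Open-ended answers, by contrast, require manual coding before any counting can happen, which is where the analysis effort goes up.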

Disadvantages:
  • Focus on preference rather than on what works best
  • Don't reveal actual performance
  • Rely on memory
  • Usually no opportunity to probe or follow up
  • Motivation and attention span of participants is critical
  • Difficult to achieve representative sample
  • Reliability may be questionable

Interviews

Advantages: Interviews provide in-depth data and help answer questions like:

  • What type of information are you looking for?
  • Are you able to find it? How?
  • What is missing?
  • How do you use this information?
  • How critical is this information?
  • What format do you want it in?
  • What works well on the existing site?
  • What would you change?

Pitfalls of interviews:

  • Subjects may be intimidated by or try to impress interviewer
  • Subjects may assume the interviewer knows things they don't
  • Subjects may have incomplete knowledge or faulty memories
  • Subjects may not be articulate
  • Time-consuming and expensive (especially when interviewees must be compensated)

Focus Groups

Advantages include:

  • Provide rapid feedback from users
  • Offer a window into users' stated priorities
  • Generate product possibilities that stakeholders may have missed

Disadvantages include:

  • Participants may influence each other
  • Tend to be dominated by one or two especially vocal participants
  • Talking about something is not the same as doing it (famously, the Edsel was the most heavily focus-grouped car in history)

Usability Testing

Advantages include:

  • Reveals how users interact with the site, and where problems lie
  • Great for evaluating existing websites
  • Can be done quickly and cheaply if you're willing to sacrifice careful selection and screening of participants, and eliminate “frills” like video editing and detailed reports.

Disadvantages include:
  • Does not always reveal root causes of problems, or how to fix them
  • You can observe what users do, but may not always get the reasons behind their actions
  • Results are limited to the assigned tasks and can't be generalized to the entire site
  • Very dependent upon skill and experience of moderator
  • Can get expensive, especially where participants are highly paid professionals and where video editing and detailed reporting are required.

Card Sorting

Card sorting is best conducted in person: you can ask participants to think out loud, and learn not only how they organize content but why (i.e., gain deeper insight into their mental models and priorities).

It can also be conducted remotely, online, but then all you get is the “what”, not the “why”.

Advantages include:
  • Reveals logical groupings and labels, from users' point of view
  • Easy to implement
  • If done remotely, can involve a large number of users worldwide

Disadvantages include:
  • Results are sometimes not logical, so you need to follow up with questions
  • Users might misinterpret card labels, contaminating results
  • Does not reveal how users would complete a task
  • Does not rank items by importance or frequency
  • Does not reveal page/task flows
  • If performed on existing users, they may simply replicate existing system
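When a remote card sort yields only the “what”, a common first analysis step is an item co-occurrence matrix: counting how often each pair of cards lands in the same group across participants. Pairs grouped together by most participants suggest candidate categories. A minimal sketch in Python, with hypothetical card labels and participant data:

```python
from itertools import combinations
from collections import defaultdict

def co_occurrence(sorts):
    """Count how often each pair of cards is placed in the same group.

    sorts: one entry per participant; each entry is that participant's
    sort, expressed as a list of groups (each group a list of card labels).
    """
    counts = defaultdict(int)
    for groups in sorts:
        for group in groups:
            # Sort labels so each pair has one canonical key
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Two hypothetical participants sorting four cards
sorts = [
    [["Pricing", "Plans"], ["Support", "Docs"]],
    [["Pricing", "Plans", "Support"], ["Docs"]],
]
matrix = co_occurrence(sorts)
# ("Plans", "Pricing") were grouped together by both participants
```

In practice the matrix feeds a clustering step (hierarchical clustering is typical), but even the raw counts make it easier to spot the groupings that the bullet above warns may "not be logical" at first glance.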