PEMCRC Survey Studies

Why are the Reviewers asking for these?

  • Content – Content refers to the authority and evidence behind your survey question items.  Generally speaking, having just one person design the questions is insufficient.  Two experts is good; three is better; a whole panel – if you can manage that – is better still.  A rigorous approach to content development, such as an actual formal Delphi process, is best of all.  Essentially, we are looking for rigor in the specific content of your survey question items as a component of validity.  Later in the application, you’ll have an opportunity to tell us how you maintained rigor in this component.

RAND Corporation. The Delphi Method (1967).

  • Response Process – Response process is another component of survey validity, but it addresses something beyond content.  It refers to whether respondents who are otherwise naive to your questions will respond appropriately and yield valid data.  Factors that can adversely affect response process include – but are not limited to – poor grammar and syntax, bias in the tone of a question, or questions that trap the respondent into a single answer: e.g. “Do you feel caring for children is a good thing?”  Typically, the best way to improve this component of validity is pilot testing, preferably with a larger number of people (20 to 30 is ideal), with feedback collected about the questions themselves.  In this application, please tell us what your pilot testing was like, how it was conducted (online, paper, anonymous, etc.), how many and what kinds of people piloted, and what changes you made based on piloting.  Multiple piloting phases can be good too.
  • Reliability & Internal Consistency – Reliability is another component of validity but is not synonymous with validity.  It reflects the propensity of the items to generate consistent responses – as opposed to random responses that vary with however the respondent feels that day.  Reliability can be demonstrated in many ways, including:
    • temporal reliability (a respondent takes the survey at 2 different time points with an opportunity to ‘forget,’ but still has very similar results)
    • spatial reliability (a respondent takes the survey in 2 different environmental milieux – e.g. in the middle of a clinical shift and at home)
  • Internal consistency is a form of reliability, but at the item-to-item level.  It essentially provides assurance that each question item is measuring the same thematic construct, and it can be numerically calculated.  For example, if all questions reflect political affiliation, then generally speaking, the way you answer a question on taxes should be fairly consistent with the way you answer a question on the environment, and so on.  In other words, answering one item a certain way is reasonably consistent with how you would answer another item, because all of the items measure political affiliation.  Cronbach’s alpha is a popular measure of internal consistency, if it applies to the type of survey questions you are posing.

Tavakol. Making sense of Cronbach’s alpha (2011).
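As a concrete illustration (a hypothetical sketch, not part of the PEM CRC review process), Cronbach’s alpha can be computed directly from a respondents-by-items table of numeric ratings: alpha = k/(k−1) × (1 − sum of per-item variances / variance of the summed scale score), where k is the number of items. The ratings below are made-up example data.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (one numeric rating per item)."""
    k = len(scores[0])                                    # number of items
    item_vars = [variance(col) for col in zip(*scores)]   # sample variance of each item
    total_var = variance([sum(row) for row in scores])    # variance of summed scale scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical pilot data: 5 respondents answering 4 Likert-type items
# intended to measure a single construct
ratings = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.94
```

An alpha this high suggests the items move together and plausibly tap one construct; values around 0.7–0.9 are commonly cited as acceptable, per the Tavakol reference above.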

  • Why are you asking for Demographics at the end?  This one is a PEM CRC SOEM request: although the evidence is not robust, anecdotally our surveys tend to have better completion rates when the ‘easy’ demographic questions end the survey rather than start it.

More resources for Survey Studies:

Survey Study Subcommittee Review Process & Scoring system

Bennett et al.  Reporting Guidelines for Survey Research (2011).
