Traditionally in the absence of

    2018-11-07

Traditionally, in the absence of an external gold standard, research on social measurement has considered measurement approaches yielding higher prevalence figures for the risk behavior to be more valid. However, recent randomized experiments (e.g., Brener et al., 2006) suggest that this approach may be biased and that the direction of the bias likely varies by population and context (Gregson et al., 2002; Pienaar, 2009).
Discussion

The success and credibility of research on youth risk-taking behavior rests largely on the quality of the data used. In this paper, the research team discusses errors and sources of bias that threaten data quality, using a sample of youth in the Dominican Republic. Although most studies comparing the effects of alternative survey interview methods have assessed the prevalence of specific self-reported risk behaviors using only two methods (Bautista-Arredondo et al., 2011; Van de Looij-Jansen and de Wilde, 2008; Wright et al., 1998), this study is the first to compare the effectiveness of various survey interview methods, which vary in their level of privacy and cognitive demands, at measuring youth risk behavior in Latin America and the Caribbean.

To the best of the research team's knowledge, and in contrast to previous studies (such as Brener et al., 2006, or Tourangeau and Smith, 1998), the results suggest that in some contexts lower prevalence rates may be more accurate than higher ones. The research team found that certain risk behaviors that are tolerated by adults and considered a right in LAC (e.g., ) are over-reported in interviewer-assisted methods, while those considered taboo or illegal (e.g., ) are reported less frequently. These findings are bolstered by data from a qualitative study among Dominican youth (Bautista-Arredondo et al., 2011b) suggesting that youth consider sexuality a positive behavior and regard alcohol consumption as common entertainment rather than as harmful to health.

The research team also examined differences in overall non-response and individual question-level errors. No statistically significant difference in non-response rates was found between the home-based methods and the CATI method. Among individual question-level errors, the SAI method generated the lowest RCI.
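The method-by-method prevalence comparison described above can be sketched in a few lines. The records below are invented for illustration; only the method names (FTFI, CATI, ACASI) come from the study.

```python
from collections import defaultdict

# Hypothetical respondent records: (interview method, reported the behavior?).
# The data are made up for this sketch; no real figures from the study are used.
responses = [
    ("FTFI", 1), ("FTFI", 1), ("FTFI", 0),
    ("CATI", 1), ("CATI", 0), ("CATI", 0),
    ("ACASI", 0), ("ACASI", 1), ("ACASI", 0),
]

def prevalence_by_method(records):
    """Return {method: share of respondents reporting the behavior}."""
    counts = defaultdict(lambda: [0, 0])  # method -> [reported, total]
    for method, reported in records:
        counts[method][0] += reported
        counts[method][1] += 1
    return {m: reported / total for m, (reported, total) in counts.items()}

print(prevalence_by_method(responses))
```

Comparing these shares across methods, rather than trusting the highest one, is the core of the design discussed here.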
Some argue (Jenkins and Dillman, 1995) that this may be due to cognitive flaws that arise when youth respond without any assistance or supervision beyond the written instructions on the questionnaire. The FTFI and CATI methods control non-response, blank responses, and skip errors because they rely on skilled interviewers. However, FTFI interviewers produce a larger number of complex inconsistencies (inconsistencies involving two or more questions) than CATI interviewers, because in the CATI method computer-assisted checks help interviewers eliminate complex inconsistencies.

The ACASI method controls non-responses, blank responses, and skip errors as efficiently as the FTFI and CATI methods. It does so by replacing the interviewer with software that controls blank responses and out-of-range values and adapts the flow of the interview to prevent incorrect question skipping. By software design, however, the ACASI method fails to effectively control complex inconsistencies. There is some evidence that the ACASI method yields a prevalence approximately 30 points lower than the FTFI and SAI methods.

To confirm that the ACASI method is biased, the research team used a telephone survey conducted by the World Bank between 2009 and 2010 among the same youth cohort, which recorded data on the participants' number of children. The team compared the number of respondents who reported that they had never had sex but who had children (Table 3). In the ACASI method, the number of inconsistencies is abnormally high, particularly among women. Such a difference cannot be explained by socially desirable response bias.
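The cross-question check behind Table 3 can be sketched as a simple rule over paired answers: flag any respondent who reports never having had sex but also reports having children. The field names and records below are hypothetical, chosen only to illustrate the kind of complex-inconsistency check that computer-assisted methods can run automatically.

```python
# Minimal sketch of a complex (multi-question) consistency check, assuming
# hypothetical field names; the logic mirrors the Table 3 comparison above.
def flag_complex_inconsistencies(records):
    """Return ids of respondents whose answers are jointly impossible."""
    return [
        r["id"]
        for r in records
        if r["ever_had_sex"] is False and r["num_children"] > 0
    ]

sample = [
    {"id": 1, "ever_had_sex": False, "num_children": 0},
    {"id": 2, "ever_had_sex": False, "num_children": 2},  # inconsistent pair
    {"id": 3, "ever_had_sex": True,  "num_children": 1},
]
print(flag_complex_inconsistencies(sample))  # → [2]
```

A check like this can run during the interview (as in CATI) to prompt the interviewer, or afterward for data-quality audits; by design, ACASI as described here does not enforce such cross-question rules.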
The research team believes that, because of the length of the questionnaire and respondents' learning of the software, respondents discovered how to advance faster through the questionnaire, and a skipping pattern developed that was independent of their actual circumstances. The fact that prevalence figures were lower with ACASI for specific indicators, usually those located at the end of the survey, suggests that skipping may have been used as a systematic response pattern.