
The Cognitive Reflection Test (CRT), which measures intuition inhibition and cognitive reflection, has become extremely popular because it reliably predicts reasoning performance, decision-making, and beliefs.

Across studies, the response format of CRT items sometimes differs, based on the assumed construct equivalence of tests with open-ended versus multiple-choice items. Indeed, if such equivalence holds, then using a validated multiple-choice version of the CRT would be more convenient, since such a version would most likely be quicker to administer and code than the open-ended CRT. Furthermore, an automatic coding scheme would eliminate any potential coding ambiguity: for instance, whether "0.05 cents," a formally incorrect answer to the bat-and-ball problem, should count as correct on the assumption that the participant mistook the unit of the answer for dollars rather than cents (i.e., meant "0.05 dollars").

There are, however, several good empirical and theoretical reasons to expect differences between the response formats of the Cognitive Reflection Test. According to several dual-process theories, cognitive conflict triggers deeper cognitive processing (De Neys, ). Thus, a multiple-choice version of the CRT might be easier because the explicit options would trigger cognitive conflict with higher likelihood, which, in turn, would lead to easier engagement in cognitive reflection. As a result, the multiple-choice version might become more strongly associated with the benchmark variables usually linked with the cognitive reflection the CRT is assumed to measure (e.g., belief bias, paranormal beliefs, denominator neglect, and actively open-minded thinking; Pennycook et al., ). This is supported by thinking-aloud evidence, in which performance on the CRT with the open-ended response format was partly explained by the lack of specific knowledge needed to solve the problem (Szaszi et al., ).

Specifically, we set up three main aims to fill the gaps outlined above. First, we tested whether the CRT response format affects performance on the test, both in terms of the reflectiveness score (i.e., correct responses) and the intuitiveness score (i.e., appealing but incorrect responses). Second, we tested whether the CRT response format altered the well-established associations between CRT performance and benchmark variables: belief bias, denominator neglect, paranormal beliefs, actively open-minded thinking, and numeracy. Third, we tested the psychometric quality of the different formats of the tests by comparing their internal consistency.
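To illustrate the kind of automatic coding scheme mentioned above, the following Python sketch is purely hypothetical (the text specifies no implementation): it normalizes a free-text answer to the bat-and-ball problem and codes it as reflective (correct, 5 cents), intuitive (the 10-cent lure), or other, with a lenient option for the ambiguous "0.05 cents" response.

```python
import re

def code_bat_and_ball(raw: str, lenient: bool = False) -> str:
    """Classify a free-text bat-and-ball answer (hypothetical coding scheme).

    Correct (reflective) answer: 5 cents ($0.05); intuitive lure: 10 cents ($0.10).
    With lenient=True, "0.05 cents" is reinterpreted as "$0.05" and coded correct.
    """
    text = raw.strip().lower()
    match = re.search(r"\d*\.?\d+", text)  # first number in the response
    if not match:
        return "other"
    value = float(match.group())
    mentions_cents = "cent" in text
    # Lenient handling of the formally incorrect "0.05 cents" / "0.1 cents":
    # assume the respondent meant dollars despite writing "cents".
    if lenient and mentions_cents and value in (0.05, 0.1):
        value *= 100
    # Rough normalization to cents: bare answers >= 1 (e.g., "5") are taken
    # as cents; fractional dollar answers (e.g., "0.05", "$.05") are converted.
    cents = value if (mentions_cents or value >= 1) else value * 100
    cents = round(cents, 2)  # guard against floating-point noise
    if cents == 5:
        return "reflective"  # correct response
    if cents == 10:
        return "intuitive"   # appealing but incorrect response
    return "other"
```

Counting "reflective" and "intuitive" codes over a batch of responses would then yield the reflectiveness and intuitiveness scores discussed above, with the coding rule for ambiguous answers made explicit rather than left to raters.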
