How much could you learn about a person in two minutes, just by getting them to answer written questions? I suppose you could ask them their favourite colour or song, or quiz them about their other preferences, occupation, and sundry demographic matters. Getting them to reveal marital status, religion, politics, earnings and savings might be good. Letting them write a few comments might reveal vocabulary levels and personal attitudes.
In comparison, imagine trying to measure a person’s intellectual level in two minutes. It seems unlikely that anything of interest could be obtained so quickly. That was exactly my supposition when, a year ago, I came across results from a 2-minute intelligence test that forms part of the UK Biobank study. By way of explanation, in clinical settings it takes 45 to 90 minutes to do a full Wechsler intelligence test. Additional special tests, of long- and short-term visual and verbal memory, can lengthen that to 120 minutes. Two minutes seemed a little short. I decided to dig a little deeper.
The test, which Ian Deary at the University of Edinburgh calls the verbal-numerical reasoning test, has been incorporated into the UK Biobank study. Here is a relevant study:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4844168/
The cognitive test was limited to a 13-item verbal-numerical test to be completed in 2 minutes, which ought to be enough to grade the general population. The mere notion of such a test will discomfort those citizens who regard their own intellects as more wide-ranging and multi-faceted than could ever be measured by mere earthly means, and who rank their brainpower as being of value to Western Civilization in ways that could not possibly be assayed in 120 seconds. Personally, I quail at the thought of having to subject myself to such a harsh evaluation. I mean, 13 into 120 is, let me see, well, not very long at all to solve each item. On reflection, it is a mere 9 seconds per question. Can such people exist?
This test, verbal-numerical reasoning (UK Biobank data field 20016), consisted of thirteen multiple-choice items, six verbal and seven numerical. Participants responded to the items on a touch-screen computer.
One of the verbal items was: “Stop means the same as: Pause/Close/Cease/Break/Rest/Do not know/Prefer not to answer”.
One of the numerical items was: “If sixty is more than half of seventy-five, multiply twenty-three by three. If not subtract 15 from eighty-five. Is the answer: 68/69/70/71/72/Do not know/Prefer not to answer”.
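The logic of that item is a simple conditional: sixty is indeed more than half of seventy-five (which is 37.5), so the required calculation is twenty-three times three, giving 69. Written out as code, purely for illustration (this is nothing the Biobank itself uses):

```python
# The sample numerical item above, expressed as a conditional.
# Purely illustrative; not code used by the Biobank.

def sample_numerical_item() -> int:
    if 60 > 75 / 2:       # is sixty more than half of seventy-five (37.5)? Yes.
        return 23 * 3     # so multiply twenty-three by three
    return 85 - 15        # otherwise, subtract fifteen from eighty-five

print(sample_numerical_item())  # 69, one of the offered options
```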
Participants had a two-minute time limit to answer the thirteen questions. The “prefer not to answer” option was considered as missing data for the purposes of the present analyses. The scores from the test formed a normal distribution.
Astoundingly, even a 13-item short test arranges itself into a crude bell curve, almost as if it were picking up something real about the population, in this case a very respectable 158,845 participants. A real Foxtrot Oscar number. In terms of proper psychometrics, this is excellent, because the researchers know exactly who the participants are, and have tons of data on them, including their genetics. It is extremely rare, to put it mildly, to find samples of that size in most psychological research. You can also get big numbers with web-based tests, but that is without getting independently verifiable data from participants.
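Incidentally, the bell curve itself is no great mystery: any total built by summing a dozen or so pass/fail items will drift towards one. A rough simulation, with invented item difficulties rather than the actual Biobank items or scoring, shows the shape emerging:

```python
# Rough illustration only: sum thirteen pass/fail items with invented
# difficulties and look at the distribution of total scores.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 158_845, 13
p_correct = np.linspace(0.35, 0.90, n_items)          # assumed pass rates, hard to easy

scores = (rng.random((n_people, n_items)) < p_correct).sum(axis=1)

for k, count in enumerate(np.bincount(scores, minlength=n_items + 1)):
    print(f"{k:2d} | {'#' * (count // 2000)}")        # crude text histogram of total scores
```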
Of course, I would not wish to make too much of this having been achieved in the United Kingdom, as part of their longstanding avid interest in epidemiology, but I would not want it ignored either.
The Cronbach’s alpha coefficient for these items has been reported elsewhere as 0.62, which is reasonable internal consistency, given that both verbal and numerical aspects of reasoning are being measured.
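For those who want the machinery, Cronbach’s alpha is simply the number of items scaled against the ratio of the summed item variances to the variance of the total score. A short sketch on simulated responses (the 0.62 comes from the published work, not from anything below):

```python
# Cronbach's alpha from an item-response matrix (rows = people, columns = items).
# The data here are simulated; the 0.62 figure comes from the published work.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
ability = rng.normal(size=(1000, 1))                          # latent trait
responses = (ability + rng.normal(size=(1000, 13)) > 0).astype(float)
print(round(cronbach_alpha(responses), 2))                    # alpha for this toy matrix
```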
Test-retest reliability: two-way random-effects intraclass correlation statistics showed a statistically significant correlation for verbal-numerical reasoning (ICC = 0.65, 95% CI = 0.63 to 0.67). This suggests stability of the measurement over time.
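The two-way random-effects ICC for a single measurement, usually written ICC(2,1), drops out of an ordinary two-way ANOVA decomposition. A from-scratch sketch on invented test-retest scores, not the Biobank data:

```python
# ICC(2,1): two-way random effects, single measurement, absolute agreement.
# Test-retest scores below are invented; the 0.65 is from the published analysis.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """scores: rows = participants, columns = occasions (test, retest)."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between occasions
    residual = (scores - scores.mean(axis=1, keepdims=True)
                - scores.mean(axis=0, keepdims=True) + grand)
    mse = (residual ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(2)
true_score = rng.normal(6, 2, size=500)                              # invented ability scores
occasions = np.column_stack([true_score + rng.normal(0, 1.5, 500) for _ in range(2)])
print(round(icc_2_1(occasions), 2))
```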
However, one should not imagine that a test of this brevity has the power of more extensive testing. For example, if you add in another four short tests (numeric memory, reaction time, visual memory errors and prospective memory), the g factor extracted from the combined short battery accounts for 40% of the variance, rather than the 50% or so obtainable with more extensive testing. The raw correlations with the other, rather different, cognitive tasks are low. Reaction times and memory tasks are not direct measures of fluid ability, which is mostly what the verbal and numerical reasoning task is tapping into. Nonetheless, it is telling that reasoning power can be measured with 13 quick items. It is sobering to find how well it does in such a short time period.
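For a sense of what “the g factor accounts for 40% of the variance” means, one common shorthand is the first eigenvalue of the battery’s correlation matrix divided by the number of tests. A toy example with invented correlations for five short measures (not the Biobank figures, and not necessarily the method the authors used):

```python
# Toy illustration of a general factor's share of variance: the first
# eigenvalue of the tests' correlation matrix divided by the number of tests.
# The correlations below are invented, not the Biobank values.
import numpy as np

tests = ["verbal-numerical reasoning", "numeric memory", "reaction time",
         "visual memory errors", "prospective memory"]

# assumed correlation matrix, signs aligned so that higher = better on every test
R = np.array([
    [1.00, 0.35, 0.25, 0.20, 0.30],
    [0.35, 1.00, 0.20, 0.15, 0.25],
    [0.25, 0.20, 1.00, 0.15, 0.20],
    [0.20, 0.15, 0.15, 1.00, 0.15],
    [0.30, 0.25, 0.20, 0.15, 1.00],
])

first_eigenvalue = np.linalg.eigvalsh(R)[-1]     # eigvalsh returns ascending order
g_share = first_eigenvalue / len(tests)
print(f"first component explains roughly {g_share:.0%} of the variance")
```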
Ian Deary says that measuring intelligence is ridiculously easy. This comes as a severe disappointment to some people, who would like to believe that it would take a very extended period of closely detailed study to even begin to map the outer boundaries of their manifold talents. Exactly so.
We lesser mortals reveal ourselves quickly.