[Reading-hall-of-fame] Confusing Ignorance With Illiteracy

tsticht at znet.com
Sat Feb 19 22:02:21 GMT 2011


2/19/2011

Confusing Ignorance With Illiteracy

Tom Sticht                                                         
International Consultant in Adult Education


One of the major purposes of having people learn to read is so that they may
increase their knowledge about a subject. For instance, if you want to find out
what someone knows about a subject, you might give them a simple multiple
choice test in written format that asks questions about the subject matter of
interest. But this confounds the assessment of the person’s knowledge of the
subject with their ability to read.

Often in what are called reading tests, knowledge and reading skill are
confounded. In a vocabulary test, for instance, it may be unclear whether a
person does not know the meaning of a word or simply lacks the word
recognition skill to decode it.

In the National Assessment of Educational Progress (NAEP), reading skill and
knowledge are confounded in tests of science, mathematics, and other content
areas, because those assessments are given largely in printed language and
require good reading skills that some students may not have. Generally there
is no attempt to determine a student’s knowledge of a content area separately
from the student’s ability to read material in that content area.

In work for the U.S. Navy, colleagues and I developed a 45-hour reading
development program to help sailors improve their reading ability while
increasing the knowledge they needed for upward mobility in their career
progression. In this program, reading instruction was integrated with Navy
career progression knowledge. In assessing learning outcomes for the course,
we considered both improvements in Navy career progression knowledge and
gains in reading skill. We did this by developing two separate assessments.

The Navy Knowledge assessment presented questions about the career
progression information taught in the course and required the personnel to
answer them by drawing upon the knowledge held in their long-term memories.
The Navy Functional Reading assessment presented questions along with
paragraphs of written information that contained the answers. The idea here
was to find out how well the personnel could read written language to add to
whatever knowledge they had stored in long-term memory by extracting
information from the external “long-term memory” formed by the written
passages. By comparing the Navy Knowledge and Navy Functional Reading results
in pre- and post-program assessments, we could determine separately the
extent to which personnel had increased their Navy knowledge and the extent
to which they had improved their skill at using an external knowledge store
to add to the knowledge store in long-term memory.
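
To make the comparison concrete, here is a minimal sketch of that scoring
logic in Python, using purely hypothetical pre- and post-program percentage
scores (the numbers and variable names are illustrative assumptions, not data
from the Navy study):

    # Illustrative only: score the two assessments independently so that
    # knowledge gain and reading-skill gain can be reported separately.
    scores = {
        "navy_knowledge":     {"pre": 55.0, "post": 75.0},  # hypothetical
        "functional_reading": {"pre": 60.0, "post": 70.0},  # hypothetical
    }

    for assessment, s in scores.items():
        gain = s["post"] - s["pre"]
        print(f"{assessment}: pre={s['pre']}  post={s['post']}  gain={gain:+.1f}")

A large gain on the knowledge assessment with little change on the functional
reading assessment would suggest that content learning, not reading skill,
accounted for the improvement, and vice versa.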

In additional work for the U.S. Navy, we developed separate readability
formulas for determining how much general reading ability, as measured by a
standardized, normed reading test, a person needed in order to comprehend
Navy material with 70 percent accuracy. We developed formulas for those with
high and with low prior knowledge about the Navy. We found that with low
background Navy knowledge, a person needed a general reading ability of about
the eleventh grade to comprehend with 70 percent accuracy. But highly
knowledgeable personnel needed only a sixth-grade level of general reading to
comprehend Navy-related material with 70 percent accuracy. In this case,
then, high levels of background knowledge substituted for some five grade
levels of general reading ability.
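
A minimal sketch of that tradeoff, using only the two reading-grade
thresholds reported above (the function names and threshold logic are
illustrative assumptions, not the actual Navy readability formulas):

    # Illustrative only: approximate general reading grade level needed for
    # about 70 percent comprehension of Navy material, per the two cases above.
    def required_reading_grade(high_navy_knowledge: bool) -> int:
        # High prior Navy knowledge: roughly sixth-grade reading suffices.
        # Low prior Navy knowledge: roughly eleventh-grade reading is needed.
        return 6 if high_navy_knowledge else 11

    def likely_to_comprehend(reading_grade: float, high_navy_knowledge: bool) -> bool:
        return reading_grade >= required_reading_grade(high_navy_knowledge)

    print(likely_to_comprehend(8, high_navy_knowledge=True))   # True
    print(likely_to_comprehend(8, high_navy_knowledge=False))  # False

The point of the sketch is simply that the same reader can be “literate
enough” or not, depending on how much prior knowledge the material assumes.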

The Armed Services have long understood the difference between general
reading ability and specialized bodies of knowledge in developing their Armed
Services Vocational Aptitude Battery (ASVAB). This battery assesses not only
general reading vocabulary and paragraph comprehension but also specialized
bodies of knowledge such as Auto and Shop, General Science, Electricity and
Electronics, and others. When selecting people for service, lower general
reading ability scores may be offset by higher scores in the specialized
bodies of knowledge.

The failure to attend to differences in knowledge and literacy is a problem
for the National Assessment of Educational Progress and the National
Assessment of Adult Literacy. It contributes to a serious underestimation of
the intellectual abilities of America’s children, youth, and adults, and it
leads to the egregious error of confusing ignorance with illiteracy.

tsticht at aznet.net





