[Maths-Education] Re: ICT in mathematics
Tanner, Howard
howard.tanner at smu.ac.uk
Mon Mar 7 18:41:59 GMT 2011
This is a very interesting and useful thread, which is raising some important issues for me. I do not claim to be an expert on, or advocate for, meta-analyses, so I offer the following with some caution.
First, with regard to effect size. I do not think I was misrepresenting Hattie. I quote here from his inaugural address:
"An effect-size of .31 would not, according to Cohen (1977), be
perceptible to the naked observational eye, and would be approximately
equivalent to the difference between the height of a 5'11" and a 6'0" person."
Now, I agree with Dylan that an effect size of 0.2 is perceptible, but it is very small, and I have been much happier when innovations I have been involved in have reached the 0.4 benchmark above controls. However, Hattie claims:
"Most innovations that are introduced in schools improve achievement by
about .4 of a standard deviation. This is the benchmark figure and provides a
'standard' from which to judge effects."
"The typical effect does not mean that merely placing a teacher in front of a
class would lead to an improvement of .4 standard deviations. Some
deliberate attempt to change, improve, plan, modify, or innovate is involved."
http://www.education.auckland.ac.nz/webdav/site/education/shared/hattie/docs/influences-on-student-learning.pdf
Hattie is not referring to the baseline rate of progress within a system, but to the average effect of INNOVATIONS in education. He is comparing innovations with each other and suggesting that some innovations are more effective than others. Now there are obviously methodological issues associated with meta-analyses, such as the possible tendency for negative or null results to go unpublished, which would inflate the average effect and so raise the bar; a rough sketch below illustrates this. There are also meta-studies that ignore context, or which lump together a range of disparate innovations under the same theme. However, Hattie's core point is about investing in the innovations that have the biggest effect. (Although I'm not sure that I want to promise that investing in such innovations would clear the National Debt.)
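For anyone who wants a feel for how much this "file drawer" problem could inflate things, here is a minimal simulation sketch in Python. The true effect of 0.2, the group size of 50 per arm, and the rule that only positive, nominally significant results get published are all illustrative assumptions of mine, not figures from Hattie:

    import numpy as np

    rng = np.random.default_rng(0)
    true_d, n_per_group, n_studies = 0.2, 50, 10_000

    # Approximate standard error of Cohen's d for two groups of size n.
    se = np.sqrt(2.0 / n_per_group)

    # Simulate the effect size each study would observe.
    d_hat = rng.normal(true_d, se, n_studies)

    # "File drawer" (an assumed rule): only positive, nominally
    # significant results (d_hat > 1.96 * se) make it into print.
    published = d_hat[d_hat > 1.96 * se]

    print(f"true effect:           {true_d:.2f}")
    print(f"mean published effect: {published.mean():.2f}")

With these made-up numbers the published average comes out well above the true 0.2, which is exactly the sense in which unpublished null results would raise the bar.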
With regard to the second point, I feel on stronger ground. The limited aims of the Cognitive Tutor in teaching procedural knowledge have a degree of validity, in that there are probably many teachers in the UK, as well as the USA, who see mathematics in this limited way. But surely we wish not to accept this status quo but to improve matters?
Surely we don't want to work to take teachers out of the system, but to help them to understand how the affordances of ICT can provide opportunities for creativity, experimentation, discussion, argumentation and justification? Effective education requires effective teachers. We need a highly skilled, highly qualified, creative and reflective teaching profession that chooses to use technology to develop higher-level mathematical thinking. There are no shortcuts.
Howard Tanner
PS I would like to read the paper referenced, but my library won't let me access it until November 2011.
-----Original Message-----
From: maths-education-bounces at lists.nottingham.ac.uk [mailto:maths-education-bounces at lists.nottingham.ac.uk] On Behalf Of dylanwiliam at mac.com
Sent: 06 March 2011 07:37
To: Mathematics Education discussion forum
Subject: [Maths-Education] Re: ICT in mathematics
Jacob Cohen's views about the interpretation of effect sizes are widely accepted but it is important to note that they were based on experiences in psychology, rather than education. In psychology, an effect size of 0.2 might be small, but in education it is huge. This is because the average increase in achievement in mathematics (and most other subjects) is 0.4 standard deviations per year. An effect size of 0.2 therefore represents a 50% increase in the rate of learning: a very dramatic effect that, if it could be replicated across just one year group in England, would have a net present value equivalent to around three times the national debt.
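For readers who like to see the arithmetic spelled out, here is a minimal sketch of that conversion in Python. The test scores are invented illustrative numbers, not data from any study; only the 0.4-SD-per-year figure comes from the discussion above:

    import numpy as np

    def cohens_d(treatment, control):
        # Standardised mean difference: (mean_t - mean_c) / pooled SD.
        nt, nc = len(treatment), len(control)
        pooled_var = ((nt - 1) * np.var(treatment, ddof=1)
                      + (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
        return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

    # Invented scores: the intervention lifts the mean by about 2 points
    # against a within-group SD of about 10, i.e. d of roughly 0.2.
    rng = np.random.default_rng(1)
    control = rng.normal(50, 10, 200)
    treatment = rng.normal(52, 10, 200)

    d = cohens_d(treatment, control)
    annual_progress = 0.4  # typical gain per school year, in SD units
    print(f"d = {d:.2f}, i.e. {d / annual_progress:.0%} of a year's progress")

An effect of 0.2 SD set against the 0.4 SD a student gains in a typical year is the 50% increase in the rate of learning referred to above.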
This 0.4 figure is what Hattie was quoting. He was basically saying that ordinary teaching will give you 0.4 standard deviations per year, so all interventions should be measured against this baseline. The figures quoted in most of the Carnegie studies are against comparison groups over the same period, so the 0.4 has already been subtracted. The other thing to say about Hattie's "Visible Learning" project is that the use of meta-analysis in such situations is problematic. First, measures of educational outcomes differ in their sensitivity to instruction, so apparently similar measures would give different effect sizes for the same intervention. Second, the calculation of effect sizes uses the standard deviation of the control and treatment groups as a divisor. Where the groups are sub-groups of the whole population, this divisor will be smaller, and so the effect size will be larger. This partly explains why interventions for students with special educational needs appear to produce larger effects (see Fuchs & Fuchs, 1986), but more importantly it means that where studies are not conducted on representative samples of the whole population, average effect sizes are difficult, if not impossible, to interpret.
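A quick illustration of the divisor point, with invented numbers: the same raw gain looks twice as big as an "effect size" when the standard deviation comes from a restricted sub-group rather than the whole cohort.

    raw_gain = 4.0        # same improvement in raw score points
    sd_population = 10.0  # SD of the full cohort (invented)
    sd_subgroup = 5.0     # SD within a restricted sub-group (invented)

    print(f"d against population SD: {raw_gain / sd_population:.2f}")
    print(f"d against sub-group SD:  {raw_gain / sd_subgroup:.2f}")

Both lines describe the same intervention; only the choice of divisor differs (0.40 versus 0.80).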
Most readers of this thread will have had enough of this already, but for those who want to read more about effect sizes in education, and sensitivity to instruction, I have written more about this in: Wiliam, D. (2010). Standardized testing and school accountability. Educational Psychologist, 45(2), 107-122.
Regarding Howard's second point, it is important to note that the Cognitive Tutor was designed to do just one thing: teach procedural aspects of algebra to ninth grade students in the United States. In many countries, these particular aspects of algebra are not regarded as particularly important, but in the US, this is a substantial proportion (perhaps 50%) of what students do in ninth grade mathematics. In terms of these very limited goals, Cognitive Tutor does a good job (by my estimate, it is better than 90% of teachers). Of course we should work to increase teachers' pedagogical content knowledge, but while we do this I think we need to worry about the students who are being taught right now. I would love to get to a point where every teacher is better than Cognitive Tutor, but while we are getting there, we need also to think about the short term.
Dylan Wiliam
On 5 Mar 2011, at 22:02, Tanner, Howard wrote:
> I have two points to make in relation to this thread:
>
> 1. The research reported (or selected for reporting) on the Carnegie Learning site lists the effect sizes reported in the trials. In most cases these are small (using Cohen's usual criteria for small, medium and large) and below d = 0.4. Hattie (2009) reports that the average effect size for any intervention is 0.4 and demands effect sizes above this level for "innovations that work". The average effect size for "computer-assisted instruction" is reported as 0.31 - so nice try but no cigar! (Am I allowed to say that?)
>
> 2. I have a problem with classifying interventions as ICT or non-ICT as if that was the most important aspect of learning and teaching.
>
> Clearly there is a world of difference between
> a) computer aided learning in which a student works through a series of tasks as instructed by a computer;
> b) a good teacher using software such as Autograph, Geogebra or Cabri to support dialogic teaching that encourages debate around students' ideas and
> c) a poor teacher using the same software to tell students what the answer is.
>
> I don't think that there are easy technological fixes for educational problems. We need to focus on helping teachers to develop their own pedagogical subject knowledge and appreciate when the affordances of ICT might be useful.
>
>
> Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.
>
>
> Dr Howard Tanner
> Reader in Education / Darllenydd Addysg
>
> Director of Centre for Research in Education / Cyfarwyddwr y Ganolfan Ymchwil mewn Addysg
> School of Education / Ysgol Addysg
> Swansea Metropolitan University / Prifysgol Fetropolitan Abertawe
> Townhill Road / Heol Townhill
> Swansea / Abertawe SA2 0UT
> Wales, UK / Cymru, y DU
>
> Phone / Ffôn: 01792 482019
> Fax/Ffacs: 01792 482126
> e-mail / e-bost: howard.tanner at smu.ac.uk
>
> -----Original Message-----
> From: maths-education-bounces at lists.nottingham.ac.uk [mailto:maths-education-bounces at lists.nottingham.ac.uk] On Behalf Of Alan Rogerson
> Sent: 04 March 2011 16:10
> To: Mathematics Education discussion forum
> Subject: [Maths-Education] Re: ICT in mathematics
>
> Dear Dylan,
>
> What you say below does not in any way alter the fact that what you
> actually recommended to Sarah were reports on a webpage produced by a
> commercial company. That is the problem. You say something is "one of
> the best researched", but we need to know when, and by whom. It is not
> the quantity of research that counts but its quality.
>
> Please note, I am not making any judgement about the actual research
> which you call "original", nor did I say that this research is somehow
> invalidated by being used by a commercial company. Let us say, for the
> sake of argument, that all this research could be validated, and note
> also that some of the reports on the Carnegie.inc webpage were (as we
> know) from Carnegie Mellon University itself and may even have
> pre-dated the formation of Carnegie.inc. I hope you can see that this
> does not change the problem. "Selective quotation" is still a real
> hazard: what company, after all, will quote research critical of its
> own products?
>
> We know only too well the much bigger and much more serious debate
> going on about so-called academic research being funded, supported, or
> of course suppressed by drug companies. Companies are in business to
> make money, so we can hardly use them, or the reports they quote, as
> objective exemplars of "research". The contrast is between reports
> which clearly have no such bias and those which are at risk of being
> biased. Surely we cannot say "third party evaluations... would be
> better"; surely you mean essential? We know from basic statistics that
> biased evidence, when we cannot attach bounds to the bias, is, and must
> be, useless (not second best). We all know the story of the millions of
> telephone calls surveyed that failed to predict the next President of
> the USA...
>
> Please also note that there is absolutely no bias (or specific
> accusation) against Carnegie.inc in particular here; it is a purely
> general point that is being made.
>
> The only remaining problem, and a somewhat insoluble one, is the one
> Douglas Butler has just mentioned.
>
> C'est la vie, c'est la ICT.
>
> Best wishes,
> Alan
>
>
>
>
> On 04/03/2011 15:52, dylanwiliam at mac.com wrote:
>> Alan: Sarah asked specifically for studies that showed the impact of ICT on attainment, and the Cognitive Tutor is one of the best researched pieces of software for mathematics education. While Carnegie Learning is a commercial company that has taken over marketing and distribution of the products generated by the people who developed the Cognitive Tutor, the research itself is very solid (and much of it dates from before Carnegie Learning became involved). I agree that third party evaluations, such as those undertaken by Mathematica, would be better, and of course educationalists should evaluate the merits of the studies, but the fact that the research is now being used to support a commercial enterprise does not invalidate the original findings.
>>
>> Dylan
>
>