How to Design and Report Experiments
Andy Field
Graham Hole
© 2003
First published 2003
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, transmitted or utilized in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without permission in writing from the Publishers.
SAGE Publications Ltd
6 Bonhill Street
London EC2A 4PU
SAGE Publications Inc
2455 Teller Road
Thousand Oaks, California 91320
SAGE Publications India Pvt Ltd
32, M-Block Market
Greater Kailash - I
New Delhi 110 048
British Library Cataloguing in Publication data
A catalogue record for this book is available from the British Library
ISBN 0 7619 7382 6
ISBN (pbk) 0 7619 7383 4
Library of Congress Control Number: 2002108293
Typeset by Keyword Typesetting Services
Printed in Great Britain by The Cromwell Press Ltd, Trowbridge, Wiltshire
Contents
* * *
Preface
Acknowledgements
Part 1: Designing an Experiment
1 Before You Begin
1.1 Variables and Measurement
1.2 Experimental versus Correlational Research
1.3 The Dynamic Nature of Scientific Method
1.4 Summary
1.5 Practical Tasks
1.6 Further Reading
2 Planning an Experiment
2.1 What Should I Research: Finding Out What’s Been Done?
2.2 How Do I Research My Question?
2.3 Summary: Is That It?
2.4 Practical Tasks
2.5 Further Reading
3 Experimental Designs
3.1 The Three Aims of Research: Reliability, Validity and Importance
3.2 Different Methods for Doing Research
3.3 So, Which Experimental Design Should You Use?
3.4 Ethical Considerations in Running a Study
3.5 Summary
3.6 Practical Tasks
3.7 Further Reading
Part 2: Analysing and Interpreting Data
4 Descriptive Statistics
4.1 Populations and Samples
4.2 Summarizing Data
4.3 Confidence Intervals
4.4 Reporting Descriptive Statistics
4.5 Summary
4.6 Practical Tasks
4.7 Further Reading
5 Inferential Statistics
5.1 Testing Hypotheses
5.2 Summary
5.3 Practical Tasks
5.4 Further Reading
6 Parametric Statistics
6.1 How Do I Tell If My Data are Parametric?
6.2 The t-Test
6.3 The Independent t-Test
6.4 The Dependent t-Test
6.5 Analysis of Variance
6.6 One-Way Independent ANOVA
6.7 One-Way Repeated Measures ANOVA
6.8 Two-Way Independent ANOVA
6.9 Two-Way Mixed ANOVA
6.10 Two-Way Repeated Measures ANOVA
6.11 Analysis of Covariance (ANCOVA)
6.12 Summary
6.13 Practical Tasks
6.14 Further Reading
7 Non-parametric Statistics
7.1 Non-Parametric Tests: Rationale
7.2 The Mann-Whitney Test
7.3 The Wilcoxon Signed-Rank Test
7.4 The Kruskal-Wallis Test
7.5 Friedman’s ANOVA
7.6 Summary
7.7 Practical Tasks
7.8 Further Reading
8 Choosing a Statistical Test
8.1 The Need to Think About Statistics at the Outset of Designing a Study
8.2 Five Questions to Ask Yourself
8.3 Specific Sources of Confusion in Deciding Which Test to Use
8.4 Examples of Using These Questions to Arrive at the Correct Test
8.5 Summary
8.6 Practical Tasks
Part 3: Writing Up Your Research
9 A Quick Guide to Writing a Psychology Lab-Report
9.1 An Overview of the Various Sections of a Report
9.2 Title
9.3 Abstract
9.4 Introduction
9.5 Method
9.6 Results
9.7 Discussion
9.8 References
10 General Points When Writing a Report
10.1 The Standardized Format of the Report
10.2 Some Important Considerations When Writing a Report
10.3 Writing Style
10.4 Give Yourself Enough Time
10.5 Summary
10.6 Practical Tasks
10.7 Further Reading
11 Answering the Question ‘Why?’ The Introduction Section
11.1 Providing a Rationale
11.2 How to Describe Previous Research and its Findings
11.3 Outlining Your Own Experiment
11.4 Providing Predictions About the Experiment’s Outcome
11.5 Summary
11.6 Practical Tasks
12 Answering the Question ‘How?’ The Method Section
12.1 Design
12.2 Participants
12.3 Apparatus
12.4 Procedure
12.5 Summary
12.6 Practical Tasks
13 Answering the Question ‘What Did I Find?’ The Results Section
13.1 Tidying Up Your Data
13.2 Descriptive Statistics
13.3 Inferential Statistics
13.4 Make the Reader’s Task Easy
13.5 Be Selective in Reporting Your Results!
13.6 Summary
14 Answering the Question ‘So What?’ The Discussion Section
14.1 Summarize Your Findings
14.2 Relate Your Findings to Previous Research
14.3 Discuss the Limitations of Your Study
14.4 Make Suggestions for Further Research
14.5 Draw Some Conclusions
14.6 Summary
15 Title, Abstract, References and Formatting
15.1 The Title
15.2 The Abstract
15.3 References
15.4 Appendices
15.5 Practical Tasks
16 Example of an Experimental Write-Up
16.1 Abstract
16.2 Introduction
16.3 Method
16.4 Design
16.5 Procedure
16.6 Results
16.7 Discussion
16.8 References for the Example
References
Index
Preface
* * *
A long time ago in a galaxy far, far away there lived a race of aliens who had no difficulty with the finer points of experimental design. For centuries the forces of the ‘dark side’ imposed a rigid regime of learning and practising experimental design. The strain of writing lab-report after lab-report, day in, day out, proved too much for some. A small rebel alliance escaped from the planet in an old x-axis starfighter. They found their way to an attractive small blue-green planet called Earth, where they inter-married with a set of hairy apes that were scratching their heads about how to work out whether their leader had significantly more bananas than them. We are their descendants. The rest is what we now call history, but this aversion to designing and reporting experiments has stayed in our racial memory. So, if you want to know whether your leader has significantly more bananas than you, this is the book for you!
There are many worthy books on experimental design on the market, so why have we written another one? Well, few books take you logically through the process of doing an experiment (from the stage of having the initial idea right through to delivering the finished lab-report). Those that do probably don’t have as many jokes in them as this one (and they certainly don’t have dogs and cats). Over the years that we’ve both lectured on experimental design and statistics, we’ve noticed that a bit of humour (well, we think it’s humour at least) goes a long way in helping to relieve the potential stress of the topic – for the students and for us. So, this book isn’t as big as it looks, and certainly not as scary!
Acknowledgements
* * *
Joint: We are grateful to many people who’ve read bits of this book and provided invaluable comments (in alphabetical order): Tricia Maxwell, Julie Morgan, Brenda Smith, Liz Valentine, Leonora Wilkinson. Thanks also to Jayne Grimes, for letting us mutilate her paper into the example write-up that’s presented in Chapter 16. Two anonymous reviewers made extremely valuable comments that helped us with the final version. We are also grateful to Michael Carmichael for being not only a wonderful editor, but also a very nice bloke!
Andy: Listening to music while I write keeps me sane (arguably). I listened to the following while writing this book: Fugazi, Arvo Pärt, System of a Down, Slipknot, Korn, Radiohead, Tom McRae, more Fugazi, George Harrison, The White Stripes, Frank Black and the Catholics, Slayer, some more Fugazi, Weezer, Air, Favez, and the Foo Fighters.
I am very grateful to all of my friends who persist in being my friends even though I never phone them because I’m too busy writing books! Most of my thanks go to Graham Hole for being such a great teacher, an immensely clever guy, and one of the funniest people I know. He made writing this book a truly entertaining process and so most of all I’m grateful that he was too well-mannered to say ‘piss off’ when I went to his office one day and said ‘hey, Graham, fancy writing a text book. . .’.
Graham: Listening to music also keeps me sane (or so my wardens say), but I’m not sure that would be true if I listened to Andy’s choice of music – with the exception of George Harrison. My thanks go to C.P.E. Bach (a deeply under-rated composer who even Mozart thought was great) and his dad (who everyone thinks is great these days, so no problems there). I can’t top what Andy has written about me, except to say that it is actually all true of him – he is a really great person to write a book with, and he has made it all a genuinely enjoyable experience (partly by writing all the tricky bits and doing all the tedious stuff like headings, corrections, formatting, etc.). He is a very generous person, in every sense of the word (as you can tell from his acknowledgements above). Bloody hell, this is turning into a TV awards ceremony! Oi, reader! You’ve spent your entire student grant on this book to find out about experimental design, so why are you wasting your time reading this page? Just get on with reading the rest of the book, will you? And if you haven’t paid for the book, I hope the bookstore security guard gives you a good kicking as you try to smuggle it out of the shop.
Dedications
* * *
I’d like to dedicate this to the memory of my father, John Ernest Hole, who taught me to read, amongst many other things.
G. H.
For Mum and Dad because, although they are not nearly as cute as my cat, they have done (and still do) so much in return for so little.
A. F.
PART 1 DESIGNING AN EXPERIMENT
* * *
1 Before You Begin
* * *
Scientists spend most of their lives trying to answer questions: Why do some people get nervous when they speak to others? Does smoking cause cancer? Does reading a book on experimental design help you to design experiments? The traditional view is that the fundamental premise of science is that there are absolute truths – facts about the world that are independent of our opinions of them – to be discovered. There are fundamentally two ways in which these sorts of research questions can be answered (and the absolute truths discovered): we can observe what naturally happens in the real world without interfering with it (e.g. correlational or observational methods), or we can manipulate some aspect of the environment and observe its effect (the experimental method). These two approaches have many similarities:
Empirical: both methods attempt to gather evidence through observation and measurement that can be replicated by others. This process is known as empiricism.
Measurement: both methods attempt to measure whatever it is being studied (see Box 1.1).
Replicability: both methods seek to ensure that results can be replicated by other researchers (Box 1.1 illustrates how measurement can affect the replicability of results).
Objectivity: both methods seek to answer the research question in an objective way. Although objectivity is a scientific ideal, arguably researchers’ interpretations of their results are influenced by their expectations of what they hope to discover.
Nevertheless, correlational and experimental methods have one fundamental difference: the manipulation of variables. Observational research centres on unobtrusive observation of naturally occurring phenomena (for example, observing children in the playground to see what factors facilitate aggression). In contrast, experimentation deliberately manipulates the environment to ascertain what effect one variable has on another (for example, giving someone 15 tequilas to see how it affects their walking).
Box 1.1: Why do scientists measure things?
Imagine you were a chemist (heaven forbid!) and you wanted to demonstrate that eating a newly discovered chemical called ‘unovar’ made your brain explode. You force-fed 20 people with unovar and indeed their brains did explode. These results were written up and published for other chemists to read, and you were awarded the Nobel science prize (which you enjoyed in the comfort of the prison cell assigned to you for murdering 20 innocent people). A few years pass and another scientist, Dr. Smug-git, comes along and shows that when he fed unovar to his participants their brains did not explode. Why could this be? There are two measurement-related issues here:
Dr. Smug-git might have fed his participants less unovar than you did (it may be that brain explosion depends on a certain critical mass of the chemical being consumed).
Dr. Smug-git might have measured his outcome differently – did you and Smug-git assess brain explosion in the same way?
The former point explains why chemists and physicists have devoted many hours to developing standard units of measurement. If you had reported that you’d fed your participants 100 grams of unovar, then Dr. Smug-git could have ensured that he used the same amount – and because grams are a standard unit of measurement, we would know that you and Smug-git used exactly the same amount of the chemical. Importantly, direct measurements such as the gram provide an objective standard: an object that weighs 10 g is known to be twice as heavy as an object weighing only 5 g.
It is easy enough to develop scales of measurement for properties that can be directly observed (such as height, mass and volume). However, we rarely have this luxury in psychology and other social sciences because we are interested in measuring constructs that cannot be directly measured; instead we rely on indirect measures. For example, if I were to measure undergraduates’ anxiety at having to do a statistics course on a scale ranging from 0 (not anxious) to 10 (very anxious), could I claim that a student who scores 10 is, in reality, twice as anxious as a student who scores 5? Although I couldn’t claim that a student who scored 10 was twice as anxious as a student who scored 5, I probably could claim that the student scoring 10 was more anxious (to whatever degree) than the student scoring 5. This relationship between what is being measured and the numbers obtained on a scale is known as the level of measurement. In a sense, the level of measurement is the degree to which a scale informs you about the construct being measured – it relates to the accuracy of measurement.
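Although this book makes the point entirely in prose, the ordinal-versus-ratio distinction can be sketched in a few lines of Python. The anxiety scores below are hypothetical, purely for illustration:

```python
# Hypothetical anxiety ratings on the 0-10 self-report scale described above.
student_a = 5
student_b = 10

# Ordering IS licensed by an ordinal scale: B reports more anxiety than A.
print(student_b > student_a)  # True

# A ratio claim ("B is twice as anxious as A") is NOT licensed by the scale.
# The raw numbers happen to stand in a 2:1 ratio, but that is a property of
# the numbers we assigned, not of the underlying construct (anxiety).
print(student_b / student_a)  # 2.0
```

The arithmetic works either way; the point is that only some of it means anything, depending on the level of measurement.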
The second proposed explanation for the difference between your experiment and that of Dr. Smug-git illustrates this point rather nicely. In both cases the observed outcome was an exploding brain, but how was this measured? Clearly a brain can either explode or not, so it should be easy to observe the brain and then classify its response to the chemical as exploding or not exploding. Easy, eh? Well, perhaps not: what constitutes an explosion? Does the brain have to literally pop – propelling small fragments of blood and tissue onto the nearby walls – or will it suffice to have a large internal haemorrhage? Perhaps Dr. Smug-git required a more dramatic response before he would classify a brain as exploding – hence his differing conclusion. This example illustrates what psychologists face all of the time: an inability to directly measure what they want to measure. When we can’t measure something directly there will always be a discrepancy between the numbers we use to represent the thing we’re measuring and the actual value of the thing we’re measuring (i.e. the value we’d get if we could measure it directly). This discrepancy is known as measurement error.
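The idea of measurement error can also be sketched as a tiny simulation. All the numbers here are hypothetical: an unobservable ‘true’ anxiety value, plus random noise standing in for everything a questionnaire gets wrong:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

TRUE_ANXIETY = 6.0  # the value we could never observe directly

def observed_score(true_value, error_sd=1.0):
    """One noisy measurement: the true value plus random measurement error."""
    return true_value + random.gauss(0, error_sd)

# Any single measurement is off by an unknown amount (the measurement error),
# but averaging many repeated measurements lets the random errors cancel out,
# so the mean observed score homes in on the true value.
n = 10_000
mean_obs = sum(observed_score(TRUE_ANXIETY) for _ in range(n)) / n
print(round(mean_obs, 1))  # close to the true value of 6.0
```

This is one reason researchers favour repeated or aggregated measures: they can’t remove measurement error, but they can shrink its influence.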
1.1 Variables and Measurement
* * *
Scientists are interested in how variables change and what causes these changes. If you look at any research question, such as the one above – ‘why do some people get nervous when they speak to others?’ – inherent within it is something that changes in some way (it is not constant). In this case it is nervousness: the question implies that some people will be more nervous than others; therefore, nervousness is not constant – it changes (because it will differ both in different people and across different situations). In much the same way, ‘does watching horror films make children more anxious?’ implies that anxiety will change or be different in different children. Anything that changes in this way is known as a variable – something that varies. As you’ll see later in this section, variables can take many forms, and can be both manipulated and observed.