



  • Primary and secondary data, and meta-analysis. Quantitative and qualitative data
  • Laboratory, field, natural & quasi experiments
    • Aims, operationalising variables, IV’s and DV’s
    • Hypotheses - directional and non-directional
    • Experimental design - independent groups, repeated measures, matched pairs
    • Validity – internal and external; extraneous and confounding variables; types of validity and improving validity
    • Control – random allocation, randomisation, standardisation
    • Demand characteristics and investigator effects
    • Reliability; types of reliability and improving reliability
    • Pilot studies
  • Correlation analysis – covariables and hypotheses, positive/negative correlations
  • Observational techniques – use of behavioural categories
  • Self-report techniques – design of questionnaires and interviews
  • Case studies
  • Content analysis
  • Thematic Analysis


  • Selecting participants and sampling techniques
  • The British Psychological Society (BPS) code of ethics and ways of dealing with ethical issues
  • Forms and instructions
  • Peer review
  • Features of science: objectivity, empirical method, replicability and falsifiability
  • Paradigms and paradigm shifts
  • Reporting psychological investigations
  • References
  • The implications of psychological research for the economy


  • Analysis and interpretation of quantitative data. Measures of central tendency - median, mean, mode. Calculating %’s. Measures of dispersion – range and standard deviation (SD)
  • Presentation and interpretation of quantitative data – graphs, histograms, bar charts, scattergrams and tables
  • Analysis and interpretation of correlational data; positive and negative correlations and the interpretation of correlation coefficients
  • Distributions: normal and skewed


  • Factors affecting choice of statistical test: Spearman’s rho, Pearson’s r, Wilcoxon, Mann-Whitney, related t-test, unrelated t-test, Chi-squared test
  • Levels of measurement – nominal, ordinal, interval
  • Procedures for statistical tests
  • Probability and significance: use of statistical tables and critical values in interpretation of significance; Type I and Type II errors
  • Introduction to statistical testing: the sign test


Research Methods is concerned with how psychologists conduct research in an attempt to find evidence for theories. A theory without research support is really just someone’s reasoned opinion, not a proven fact.

Psychologists generally adopt a scientific approach to studying the mind and behaviour. The scientific method is based on empiricism – the belief that one can gain true knowledge of the world through the unbiased observation and measurement of observable, physical phenomena.

Laboratory experimentation is the method most associated with science as it involves the careful manipulation of variables to establish whether there are cause-effect relationships with other variables: for example, will an increase in testosterone cause an increase in aggression?

Psychologists face difficulties, however, in that they are studying highly complex, reactive creatures (humans) who tend not to behave in the predictable way that the objects of study of physics, chemistry and biology do. Equally, people put in an artificial laboratory situation who are aware they are being observed will tend not to behave in a normal, natural way. For this (and various other) reasons, psychologists have developed a variety of other means of research such as field and natural experiments, correlation studies and observations.

A debate exists within Psychology as to what extent it is desirable and/or appropriate to apply scientific methods to the study of humans. Many psychologists have argued that a strictly scientific approach reduces the complexity of human behaviour to an overly reductionist level and that human psychology can be better understood through more detailed, in-depth methods such as questionnaires, interviews, case studies and content analysis.

Whereas biological approaches, behaviourism and cognitive psychology tend to favour quantitative, scientific, laboratory based approaches, psychodynamic and humanistic approaches argue for a more qualitative, descriptive approach.

The syllabus focuses on scientific approaches and how to design studies which produce valid (accurate/truthful) findings. There is also an emphasis on the statistical analysis of quantitative data.


METHODS, TECHNIQUES & DESIGN (A-level Psychology revision notes)


Psychologists conduct research in an attempt to find evidence for theories. Throughout the history of Psychology there has been an ongoing debate as to which methods of investigation are appropriate for studying the mind and behaviour. Whilst some favour a highly scientific, lab-based, experimental approach, others argue that these methods are inappropriate to the study of humans and support more in-depth, less scientific, qualitative approaches such as interviews, case studies and observations.

  • Laboratory Experiments
  • Field Experiments
  • Natural & Quasi Experiments
  • Correlation Studies
  • Observational techniques
  • Self-report Questionnaires
  • Self-report Interviews
  • Case Studies
  • Content Analysis

Each of these methodologies uses different research techniques and has associated strengths and limitations.

Data (information produced from a research study) may be

  • Quantitative: numerical data that can be statistically analysed. This has the advantage of being more objective, quicker to gather and analyse, and can be presented in ways that are easily and quickly understandable. However, such data can be superficial, lacking the depth and detail of participants’ subjective experience.
  • Qualitative: written, richly detailed, descriptive accounts of what is being studied. This allows participants to express themselves freely. However, these methods are time consuming, can be costly to collect, difficult to analyse and suffer from problems of subjectivity.

Data gathered by psychologists can be

  • Primary – collected directly by the psychologists themselves: e.g. questionnaires, interviews, observations, experiments.
  • Secondary – data collected by others: e.g. official statistics, the work of other psychologists, media products such as film or documentary.
  • Meta-analysis refers to when a psychologist draws together the findings and conclusions of many research studies into a single overall conclusion.


LABORATORY EXPERIMENTS

Lab experiments are the most complex methodology in terms of their logic and design.

Any experiment begins with an aim.

The aim is a loose, general statement of what we intend to investigate: e.g. does alcohol affect driving performance?


Any experiment looks at the cause-effect relationship between 2 variables. A variable is any factor/thing that can be measured and changes. For example, intelligence, aggression, score on authoritarian personality scale, short-term memory capacity, etc. The two variables in the above example are alcohol and driving performance.


In psychological research we often want to find a way of expressing a variable numerically. This is referred to as operationalising a variable. Variables can be operationalised in many ways – for example,

  • Intelligence can be operationalised through an IQ test
  • Authoritarianism can be operationalised through a questionnaire
  • STM capacity can be operationalised through a task such as seeing how many digits a participant can remember at once.


Of the 2 variables we are testing in an experiment, one is referred to as the Independent Variable (IV) and the other is referred to as the Dependent Variable (DV).

In an experiment we test 2 conditions of the IV against the DV to see if there is a significant difference between how the 2 conditions of the IV affect the DV.

For example, we could set up an experiment to examine the cause-effect relationship between alcohol and driving performance. To do this we could recruit 100 volunteer participants, randomly split them into 2 groups of 50, give the 1st group a measure of alcohol and then let them drive on a driving simulator which would produce a score of x/20 for driving performance. The 2nd group would be given no alcohol and allowed to drive on the simulator. Therefore, we would end up with 50 scores of x/20 for those who had driven after consuming alcohol, and 50 scores of x/20 for those who had driven and not consumed alcohol.

We could take the mean average score for each group and compare them. For example, we may find that those who had drunk alcohol scored a mean average of 10/20 whereas those who hadn’t consumed alcohol scored an average of 16/20. What we have done in this experiment is to test 2 conditions of the IV (alcohol and no alcohol) against the DV (driving performance) to see if there is a significant difference between how the 2 conditions of the IV affect the DV. If we find a significant difference between how the 2 conditions of the IV affect the DV we have found evidence that there is a cause-effect relationship between alcohol consumption and poor driving performance. 
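For those who want to see the arithmetic, the comparison of mean scores can be sketched in a few lines of Python (all scores below are invented for illustration):

```python
from statistics import mean

# Hypothetical driving-simulator scores out of 20 (invented data)
alcohol_scores = [10, 9, 11, 12, 8, 10]       # condition 1: alcohol
no_alcohol_scores = [16, 15, 17, 16, 14, 18]  # condition 2: no alcohol

# Compare the mean average score for each condition of the IV
print(f"Alcohol mean: {mean(alcohol_scores):.1f}/20")        # 10.0/20
print(f"No-alcohol mean: {mean(no_alcohol_scores):.1f}/20")  # 16.0/20
```

Whether a difference between the two means is significant is decided with a statistical test (see the later material on statistical testing).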



From the aim of our experiment we formulate our hypotheses.

A hypothesis is an exact, precise, testable prediction of what we expect to find in an experiment.

  • The Experimental/Alternative Hypothesis: a statement predicting that we will find a difference between how the 2 conditions of the IV affect the DV: e.g. ‘There will be a significant difference in driving performance between participants who have and have not consumed alcohol’.

The above hypothesis is non-directional (or 2-tailed). This means that it does not make a prediction about the direction of results: i.e. it doesn’t predict that one of the groups is going to do better or worse than the other, just that some kind of difference will occur.

However, if the experimenter strongly expects that results will go in a certain direction, or previous research indicates this, he may choose to use a directional (or 1-tailed) hypothesis. This does make a prediction about the direction of results.

  • Experimental Hypothesis (1-tailed): ‘Participants who have consumed alcohol will show significantly poorer driving performance than participants who have not consumed alcohol’.


In any experiment participants perform in at least 2 experimental conditions. There are several different ways in which we can allocate (put) participants to the different conditions, each with associated strengths and limitations.

1. Independent Groups Design. Participants are split into 2 groups, each group performing in 1 condition only.

The limitations of this design are

  • Participant Variables – the fact that individual differences between participants may affect the DV without us being aware of it and thus reduce the validity (accuracy) of our results. For example, we may find that participants in the alcohol condition are all excellent drivers with high alcohol tolerance, whilst participants in the no-alcohol condition are all poor drivers. Thus, the alcohol group may drive better and we might (falsely) conclude that alcohol improves driving performance. The problem of participant variables is reduced with a large sample and by randomly allocating participants to the 2 conditions.
  • It requires more participants than a repeated measures design.

The advantage of this design is that we will not encounter Order Effects (see below).

2. Repeated Measures Design. In this design all participants perform in the 1st condition and then perform in the 2nd condition. This allows us to directly compare participants’ performance across the 2 conditions.

The limitations of this design are

  • Order Effects – when participants perform in condition 1 then condition 2, their performance in the 2nd condition may either improve due to practice or get worse due to boredom or tiredness. In an attempt to overcome the problem of order effects we can use counterbalancing. This involves half the participants performing in condition 1 first, then condition 2, while the other half perform in condition 2 first, then condition 1. (This is thought to balance out the problem of order effects.)
  • They may also work out the aim of the study and exhibit demand characteristics (see below).

The advantage of this design is that there is no possibility of participant variables threatening the validity of the study.
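Counterbalancing as described above can be sketched as follows (the participant IDs and labels are invented for illustration):

```python
# Counterbalancing in a repeated measures design: half the participants
# perform condition 1 then condition 2; the other half do the reverse.
participants = list(range(1, 21))  # 20 hypothetical participant IDs
half = len(participants) // 2

orders = {}
for p in participants[:half]:
    orders[p] = ("condition 1", "condition 2")
for p in participants[half:]:
    orders[p] = ("condition 2", "condition 1")

# Both orders are used equally often, so order effects should balance out
ab = sum(1 for o in orders.values() if o == ("condition 1", "condition 2"))
ba = len(orders) - ab
print(ab, ba)  # 10 10
```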

3. Matched Pairs Design. This design overcomes the problem of order effects and participant variables. Before the study begins we need to find participants who can be matched with each other in terms of relevant characteristics such as age, gender, IQ, etc. The study then runs as an independent groups design; however, because each participant is matched with a participant in the other condition, participant variables are less of a problem. The disadvantage of this design is that it may be costly and time-consuming, and it can be difficult to find participants who match precisely.


It is highly important that experiments are well designed and run - otherwise findings may be inaccurate and lead us to draw false conclusions.

Validity generally refers to the truthfulness and accuracy of our findings.

We can distinguish between 2 types of validity.

  1. INTERNAL/EXPERIMENTAL VALIDITY. This relates to whether we are really measuring what we think we are measuring. In any experiment we are trying to isolate the effect of the IV on the DV. Therefore, we need to ensure that no other unwanted, uncontrolled extraneous variables are affecting the DV without our knowledge. If an extraneous variable does affect our final results, we refer to it as a confounding variable.

  2. EXTERNAL VALIDITY. This relates to the extent to which the findings of the study can be generalised beyond the research setting. External validity can take 3 forms.

    • Ecological Validity. This relates to the problem of whether studies conducted under highly controlled, artificial, lab situations can produce findings that can be generalised to everyday life, or whether behaviour shown by participants will be artificial. For example, in the drink-driving study, participants use a driving simulator which is not really similar to driving in a real car on a real road.
    • Population Validity. If we only use small or biased/unrepresentative samples of participants, we may not be able to generalise findings to human behaviour in general.
    • Temporal Validity. If studies were conducted a long time ago, it can be argued that their findings are not relevant to the present day. For example, Asch’s conformity study was conducted in 1950’s America and it has been argued that the climate of America at this time was particularly conformist. Social change since the 50’s has meant that people are now far more non-conformist and independent.


Extraneous variables are variables which the experimenter has failed to eliminate or control which are affecting the DV without us being aware of it. This threatens the validity of the study and the accuracy of our findings.

Extraneous variables must be carefully and systematically controlled. When designing an experiment, researchers should consider the following areas where extraneous variables may arise:

  1. Random allocation/randomisation of participants to experimental conditions. To avoid any bias on the part of the researcher, participants should always be divided into groups randomly.
  2. Standardisation of instructions and procedures. Participants should be given exactly the same instructions as each other and go through exactly the same procedures as each other to avoid differences in these acting as extraneous variables.
  3. Participant variables: participants’ age, intelligence, personality and so on should be controlled across the different groups taking part. For example, in the above experiment: gender, driving experience, alcohol tolerance, body mass, etc. Participants could also be pre-tested and put into a matched-pairs design.
  4. Situational variables: the experimental setting and surrounding environment must be controlled. This may include the time of day, the temperature or noise effects.
  5. Order effects: participants may improve or get bored performing in different conditions. This can be controlled by using independent groups, matched participants or counter-balancing.
  6. Demand Characteristics or Investigator Effects (see below).
  7. A control group is a group of participants who act as a baseline against which differences in the experimental group are measured. For example, we might compare improvements in mood scores for an experimental group who received therapy against a control group who received none.
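Random allocation (point 1 above) can be sketched in Python; the participant labels and group sizes are invented for illustration:

```python
import random

# Random allocation: shuffle the participant list and split it in half,
# so that neither the researcher nor the participants choose the groups.
random.seed(42)  # fixed seed only so that the example is repeatable

participants = [f"P{i}" for i in range(1, 101)]  # 100 hypothetical participants
random.shuffle(participants)

alcohol_group = participants[:50]     # condition 1
no_alcohol_group = participants[50:]  # condition 2
print(len(alcohol_group), len(no_alcohol_group))  # 50 50
```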


If studies are to be regarded as credible, they must be valid.

The following techniques are used to check for/achieve/ensure validity.

  • Face validity is the extent to which a test is subjectively viewed as being able to measure the concept it claims to measure. In other words, a test can be said to have face validity if it "looks like" it is going to measure what it is supposed to measure.
  • Content Validity involves independent experts being asked to assess the validity/accuracy/appropriateness of instruments/tests used to measure a variable: e.g. agreeing that a particular IQ test is a valid measure of intelligence.
  • Concurrent Validity involves comparing the validity of a new test/measure against an established test/measure whose validity is already known and trusted. For example, the results of a new form of IQ test could be tested against an old, established IQ test. If scores correlate between the 2 tests they are said to have concurrent validity.


The fact that an experiment is a social situation means that behaviour may be affected by the presence of others (experimenter and other participants) and the expectations that participants have. Thus, we may not be getting a valid picture of how people behave in the real world.

  • Demand Characteristics refers to the fact that participants realise they are in an experiment and are being observed and tested. They may, therefore, alter their behaviour either to behave in ways they think the experimenter wants them to behave in or according to how they think they should behave. Participants may try to work out the aim of experiment and modify their behaviour accordingly. They may also show ‘social desirability bias’ – giving responses they believe are correct or moral, rather than answering honestly.
  • Investigator Effects refers to the fact that the experimenter may consciously or unconsciously give hints or clues to research participants about how he wants or expects them to behave.


Reliability refers to whether, if we conducted a study again, it would produce similar results. Clearly, if a study produces wildly varying results each time it is carried out, either there is no real cause-effect relationship between the IV and the DV or the design of the study is flawed. Repeating a study therefore allows us to confirm previous findings.


Inter-rater reliability

  • If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability. This means that observers are all defining behaviours and recording observations in the same way as each other. Thus, before the study begins observers should be trained through the use of, for example, a training video where they learn and are then tested on how to define and categorise behaviours in the same way as each other. We can assess inter-rater reliability by analysing the correlation between different observers’ scores for the same behaviour. This will produce a correlation coefficient (see Correlation Studies and Spearman’s rho test): e.g. +0.96 = a strong positive correlation (they are rating things in the same way as each other).

Test-retest reliability

  • The reliability of a test (e.g. an IQ test) or questionnaire can be checked by asking a participant to complete the test/questionnaire, then complete it again, for example, 2 weeks or a month later. If answers are similar over a period of time, then the test/questionnaire can be said to have reliability. We can assess test-retest reliability by analysing the correlation between the different test scores. This will produce a correlation coefficient (see Correlation Studies): e.g. +0.96 = a strong positive correlation (high similarity between the scores).


A pilot study is a small-scale version of the main study that is conducted in advance to ensure

  • The procedures of the study will run smoothly
  • That equipment/tests are functioning accurately
  • That participants understand instructions
  • That all extraneous variables are controlled


The strengths of laboratory experiments are

  • High degree of control: experimenters can control all variables in the experiment. The IV and DV can be precisely defined (operationalised) and measured to assess cause-effect relationships - for example, the amount of caffeine given (IV) and reaction time (DV). This leads to greater accuracy and objectivity.
  • Replication: other researchers can easily repeat/replicate the experiment and check results for reliability. This is much easier in a controlled laboratory situation as opposed to a field experiment conducted in the real world.


The limitations of laboratory experiments are

  • Lack of ecological validity.
  • Demand characteristics.

(Explain both these points in full according to above notes.)


FIELD EXPERIMENTS

A field experiment is carried out in the real world rather than under artificial laboratory conditions. Participants are exposed to a ‘set-up’ social situation to see how they respond. The ‘naïve’ participants are unaware they are taking part in an experiment.


The strengths of field experiments are

  1. As the experiment is conducted in the real world, levels of ecological validity are increased, meaning that we can generalise findings to real-life behaviour.
  2. As participants do not know they are involved in an experiment they will not show demand characteristics.

(Explain both these points in full according to above notes.)


The limitations of field experiments are

  1. As the study is not conducted under tightly controlled laboratory conditions, there is a greater chance that extraneous variables will influence the DV without the researcher being aware of this.
  2. Field experiments often involve breaking ethical guidelines: e.g. failing to get participants’ consent, deceiving participants, failing to inform them of their right to withdraw, failing to debrief them, etc.


NATURAL & QUASI EXPERIMENTS

In a natural experiment the psychologist does not manipulate or ‘set up’ the situation to which participants are exposed; rather, they observe a change in the natural world (the IV) and assess whether it has an effect on another variable (the DV). For example, we might treat the introduction of TV into remote communities as the IV ((i) no TV, (ii) TV) and measure whether this has had an effect on children’s aggressiveness (the DV). A quasi-experiment is run like a normal experiment, but participants cannot be randomly allocated to conditions (e.g. where the IV is a pre-existing characteristic such as age or gender).


The strengths of natural and quasi experiments are

  1. As the experiment is conducted in the real world, levels of ecological validity are increased.
  2. In natural experiments, as participants do not know they are involved in an experiment they will not show demand characteristics.

(Explain both these points in full according to above notes.)


The limitations of natural and quasi experiments are

  1. As the study is not conducted under tightly controlled laboratory conditions, there is a greater chance that extraneous variables will influence the DV without the researcher being aware of this.
  2. Natural experiments may involve breaking ethical guidelines: e.g. failing to get participants’ consent to be observed, failing to inform them of their right to withdraw, or failing to debrief them.


CORRELATION ANALYSIS

 A correlation study involves measuring the relationship between 2 covariables: e.g. height and weight, stress and illness, ‘A’ Level point score and income aged 30, etc. (However, correlation studies only measure whether there is some kind of relationship, not whether there is a cause-effect relationship.)

The relationship may either be

  • Positive: as one co-variable increases, the other also increases.
  • Negative: as one co-variable increases, the other decreases.
To conduct a correlation study we need to operationalise the 2 co-variables and their relationship can then be plotted on a scattergram for each participant. The general pattern revealed should indicate whether the relationship is positive or negative and how weak or strong the relationship is. However, we can conduct statistical analysis of our data to produce a correlation coefficient: a number somewhere between -1 and +1 which will indicate the exact direction and strength of relationship between the 2 co-variables.
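As a sketch of how such a coefficient is produced, Pearson’s r can be computed directly from its definition; the height/weight pairs below are invented for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient: the covariance of x and y
    divided by the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical co-variables: height (cm) and weight (kg) for 6 participants
height = [150, 160, 165, 170, 180, 190]
weight = [52, 58, 62, 66, 75, 82]

r = pearson_r(height, weight)
print(f"r = {r:+.2f}")  # close to +1: a strong positive correlation
```

A value near 0 would indicate no relationship, and a value near -1 a strong negative correlation.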



Whereas hypotheses for experiments predict there will be a ‘difference’ between how the 2 conditions of the IV affect the DV, hypotheses for correlation studies predict there will be a ‘relationship’ between 2 co-variables.

Hypotheses can be directional or non-directional depending on whether or not past research indicates whether we should expect to find a relationship (either positive or negative).


  • 2-Tailed Experimental Hypothesis: ‘There will be a significant correlation between stress and illness’.
  • 1-Tailed Experimental Hypothesis: ‘There will be a significant positive correlation between stress and illness’. (This could also be predicting a negative correlation.)


The strengths of correlation studies are

  1. Correlation studies allow us to assess the precise direction and strength of relationship between 2 co-variables using correlation coefficients (see above).
  2. Correlation studies are a valuable preliminary (initial) research tool. They allow us to identify relationships between variables that we may then decide to investigate in more detail through experimentation.


The limitations of correlation studies are

  1. Correlation studies only tell us that there is some kind of relationship between 2 variables; they do not tell us about cause-effect relationships, and thus they are a weaker methodology than lab experiments.
  2. We may sometimes find a correlation between 2 variables by pure chance, even when no real relationship exists between the variables – thus they may be misleading. For example, there is an almost perfect negative correlation between Nigerian iron exports and the UK birth rate between 1870 and 1920 even though these factors are completely unrelated.


OBSERVATIONAL TECHNIQUES

Observations simply involve observing behaviour in the natural environment.

Observations may be

  1. Overt: the psychologist’s presence is made known to the group being studied. This may lead to demand characteristics and participants behaving in unnatural ways.
  2. Covert: the psychologist’s presence is hidden. Either he appears as a normal member of the public or his presence is concealed in some way (e.g. a CCTV camera). Although this overcomes the problem of demand characteristics, there are ethical issues to do with deception, lack of consent and invasion of privacy.
  3. Participant: the psychologist joins the group being studied. This may be covert or overt.
  4. Non-Participant: the psychologist remains outside the group being studied. This may be covert or overt.

Observational studies can be conducted in real life situations (naturalistic observations) or in laboratories (which provide more control – controlled observations). Behaviours observed can be recorded in a qualitative form or can be counted/quantified.

For example, we may wish to conduct an observational study of gender differences in aggressive behaviours amongst 5-7-year olds. A tally chart can be constructed to record observations and behavioural classifications/categories.


Such a chart allows us to make statistical statements about behaviours: e.g. boys punch 4 times more often than girls do.

One way of recording behavioural categories is event sampling (as in the example above – recording the number of times a particular event occurs); the other is time sampling – recording what is occurring at certain time intervals: e.g. every minute.
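Event sampling with a tally chart can be sketched as follows; the behavioural categories and the observation log are invented for illustration:

```python
from collections import Counter

# Event sampling: record a (gender, behaviour) entry each time a
# behaviour in one of our categories occurs (invented observation log)
observations = [
    ("boy", "punch"), ("girl", "shout"), ("boy", "punch"),
    ("boy", "kick"), ("girl", "punch"), ("boy", "punch"),
    ("girl", "shout"), ("boy", "punch"),
]

tally = Counter(observations)
print(tally[("boy", "punch")], tally[("girl", "punch")])  # 4 1
# i.e. in this log, boys punched 4 times as often as girls
```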

If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability (see the section on Reliability above).


The strengths of observational techniques are

  1. During covert observations there are high levels of ecological validity and no demand characteristics. Participants are unaware that they are being observed and they are in a natural environment – thus we are observing behaviour as it naturally occurs.
  2. With participant observation the psychologist can question participants and gain a much more in-depth insight into the behaviours, beliefs and motivations of the group being studied. Thus, a much deeper, richer, descriptive picture of behaviour is produced.


The limitations of observational techniques are

  1. With covert observations ethical issues arise concerning invasion of privacy, lack of consent, deception and lack of right to withdraw.
  2. With overt observations participants may exhibit demand characteristics and act in socially-appropriate or otherwise unnatural ways.



SELF-REPORT TECHNIQUES

The term self-report simply means that the participant is reporting on their own perception/view of themselves – either using a questionnaire or an interview.

 For either technique:

  • Social desirability bias may be an issue in that if a participant knows their answers will be read/heard by someone else they may say what they think is socially acceptable/desirable rather than the truth. To combat this, questionnaires can be kept anonymous and confidential.
  • Self-report studies are also subjective in that the individual’s perception of themselves may be quite different from how others view them.


Questionnaires can be:

  1. Closed ended.

E.g. I intend to vote for Joe Biden.


Closed ended questions allow us to produce quantitative data: e.g. statistical statements such as 45% of participants agreed.

  2. Open ended.

Open-ended questions produce lengthier answers – richly descriptive, qualitative data.

E.g. Explain why you intend to vote for Joe Biden.




When constructing questionnaires, we must try to ensure that the questions we ask are clear, concise, non-ambiguous, and easily understandable, and will be interpreted by all participants in the same way as each other.

We may also want to check the reliability of the questionnaire through test-retest reliability. Open-ended questionnaires can be thematically analysed (see later section on this).


The strengths of questionnaires are

  1. Closed-ended questionnaires are capable of providing large amounts of information from large numbers of people fairly cheaply and quickly.
  2. Closed-ended questions can be statistically analysed to allow us to make statements about the %’s of people who hold certain beliefs, etc.
  3. Open-ended questions allow us to gain an in-depth insight into participants’ personal opinions and the motives that underlie behaviours and beliefs.


The limitations of questionnaires are

  1. If socially sensitive questions are asked, participants may give socially-appropriate responses. E.g. if a questionnaire asks whether someone holds racist beliefs, it is unlikely they will admit to this to a researcher. This can be overcome by making questionnaires anonymous and confidential.
  2. Open-ended questions can be difficult to interpret and analyse as participants may give lengthy answers. This makes it hard to understand broad patterns and trends in participants’ beliefs and behaviours.


Interviews can be conducted with individuals or groups, either face-to-face or by telephone/internet. The respondent can describe their response in depth and detail (qualitative data) and say what they want to say rather than filling in pre-set answer choices (as with questionnaires). Interviews can be thematically analysed (see later section on this).

Interview questions can be:

  1. Structured: a pre-set list of questions is asked.
  2. Unstructured: the interview progresses as more of an on-going conversation between interviewer and interviewee.


  1. Interviews provide richly detailed qualitative descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations.
  2. With open-ended questions, interviewees may be able to suggest and shed light on further areas of research and interest relating to the topic they are being interviewed about.
  3. Structured interviews allow all participants to be asked the same questions, making general patterns in answers easier to analyse and keeping the interview limited to the subject matter the interviewer wants to cover.


  1. If socially sensitive questions are asked participants may give socially-appropriate responses. E.g. if an interviewer asks whether someone holds racist beliefs it is unlikely they will admit to this.
  2. Open-ended questions can be difficult to interpret and analyse as participants may give lengthy, personal answers. This makes it harder to analyse broad patterns and trends in participants’ beliefs and behaviours.


CASE STUDIES (AQA A-level Psychology resources)

These are longitudinal studies (conducted over a long period of time) which focus in great detail on an individual or a small group. They are often used in the field of psychopathology and child development, and may include a variety of methods such as unstructured interviews and observations.


  1. Case studies provide richly detailed descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations.
  2. Case Studies usually follow the progress and changes an individual goes through over time.


  1. Case studies are associated with problems of subjectivity and personal interpretation on the part of the psychologist, who may be biased in their viewpoint and interpretation of events and behaviour. For example, with the case study of Little Hans, Freud was accused of interpreting Hans’ behaviour to make it support his theory of the Oedipus Complex. Thus, because case studies do not use controlled scientific methods of experimentation, they are thought to lack scientific objectivity.
  2. For the above reason, and because they are only carried out on one individual or a small group, case studies suffer from a lack of reliability and generalisability.


CONTENT ANALYSIS (A-level Psychology notes)

This is a technique where researchers identify themes or behavioural categories and count how many times they occur (see later section on thematic analysis). It is often used with written or visual material such as interviews, open-ended questionnaires, diaries, magazines, films, etc. A coding system of categories is developed and we count the number of times each particular piece of content arises.

For example, we might ask mothers with children who have just started primary school to keep a diary of their child’s response to this and then count how many times categories such as ‘child crying’, ‘child showing clingy behaviour’, ‘child showing anger to mother’ occur.
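The counting step can be sketched as follows (the diary entries and exact category wordings here are invented for illustration):

```python
# Behavioural categories from the coding system
categories = ["child crying", "child showing clingy behaviour",
              "child showing anger to mother"]

# Invented diary entries from mothers
diary_entries = [
    "child crying at the school gate, then child showing clingy behaviour at home",
    "child showing anger to mother before school, child crying at bedtime",
    "a calm day, no problems reported",
]

# Tally how many times each coded category appears across all entries
tallies = {c: sum(entry.count(c) for entry in diary_entries)
           for c in categories}
print(tallies)
```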


  • It allows qualitative data (writing or visual material) to be put into a quantitative form (counting behaviours), so that statistical analysis can take place and data can be represented in tables and graphs.


  • Constructing a coding system involves the risk of an investigator imposing their own meaning on the data. The investigator might choose coding categories they think are important and overlook categories which actually are important. Thus, there may be problems of subjectivity and personal bias.


THEMATIC ANALYSIS (AQA A-level Psychology notes)

Interviews, open-ended questionnaires and content analysis (all qualitative research techniques) can be analysed in terms of themes which occur in the content of responses given by participants. We can count these themes to produce quantitative data. For example, if we interviewed adults who had experienced maternal deprivation as infants we could analyse what major themes occurred in interviews (e.g. feelings of loss, desire for love, etc.) and count how many times these themes occurred.
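A minimal sketch of counting themes once they have been identified (the coded interviews below are invented):

```python
from collections import Counter

# Invented codings: each inner list holds the themes identified in one
# participant's interview.
coded_interviews = [
    ["feelings of loss", "desire for love"],
    ["feelings of loss"],
    ["desire for love", "feelings of loss"],
    ["anger"],
]

# Total occurrences of each theme across all interviews
theme_counts = Counter(theme for interview in coded_interviews
                       for theme in interview)

# Percentage of participants whose interview contained a given theme
n_loss = sum("feelings of loss" in interview for interview in coded_interviews)
percent_loss = n_loss / len(coded_interviews) * 100
print(theme_counts["feelings of loss"], percent_loss)  # 3 75.0
```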


  • We can turn complex qualitative data into quantitative data which can then be statistically analysed. For example, 65% of participants referred to feelings of loss in their interviews.


  • If a number of researchers are conducting thematic analysis on the same data they may interpret and count themes in a different way to each other which would lead to a lack of reliability. (This could be overcome through testing for inter-rater reliability.)
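One simple way of checking inter-rater reliability is percentage agreement between two raters coding the same material (more formal measures, such as correlating raters' tallies, are also used). A sketch with invented codings:

```python
# Two hypothetical raters assign a theme code to the same 10 extracts
rater_a = ["loss", "love", "loss", "anger", "loss",
           "love", "loss", "anger", "love", "loss"]
rater_b = ["loss", "love", "anger", "anger", "loss",
           "love", "loss", "loss", "love", "loss"]

# Percentage agreement: proportion of extracts coded identically
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a) * 100
print(f"Inter-rater agreement: {percent_agreement}%")  # Inter-rater agreement: 80.0%
```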


PARTICIPANTS & SAMPLING (A-level Psychology revision notes)

It is important to select participants carefully when conducting research to ensure the study has population validity (see section on Validity above).

The term population refers to all the people within a certain category whom we wish to study: e.g. all schizophrenics, all 5-11 year olds, all pregnant women, etc. From this population we draw a smaller sample. Ideally, we want our sample to be fairly large and to be representative of the population as a whole (i.e. a good cross-section in terms of age, gender, ethnicity, etc.)

With a large, representative, random sample of participants we should be able to generalise (apply) our findings to the population as a whole (i.e. say that what is true of our sample is true of the population as a whole).

There are a number of different sampling methods we can employ to select participants, each with its own advantages and disadvantages.

  1. Random sampling. The sample is randomly selected from the population: e.g. picking names at random out of a hat. Although this method is truly random it does not guarantee a representative sample.
  2. Volunteer (self-selecting) sampling. Participants respond to an advert placed by the researcher: e.g. Milgram’s obedience study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to volunteer. However, volunteers are likely to make motivated and cooperative participants in research.
  3. Opportunity sampling. Potential participants are approached by the researcher and asked whether they would be willing to take part in a study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to agree to take part. However, those who do are likely to make motivated and cooperative participants in research.
  4. Systematic sampling. Taking every ‘nth’ person on a list: e.g. every 10th person on a school register. Not random or guaranteed to be representative.
  5. Stratified sampling. The population is assessed for what proportion of particular characteristics it contains (e.g. age, gender, ethnicity, social class, etc.) and representative numbers of participants possessing these characteristics are randomly sampled to form the sample.

For example, a school population of 1000 students has 40% boys and 60% girls, and 50% of all students are below the age of 16 and 50% are 16 +.

If we wanted a stratified sample of 100 students we would select:

  • 40 boys (40% of all students) and 60 girls (60% of all students)
  • Then sub-divide by age:
    • 20 boys below the age of 16 (50% of the 40 boys)
    • 20 boys aged 16 or over (50% of the 40 boys)
    • 30 girls below the age of 16 (50% of the 60 girls)
    • 30 girls aged 16 or over (50% of the 60 girls)


Stratified sampling is truly representative and random.
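The quota calculation from the worked example above can be sketched as:

```python
# School population from the example: 40% boys / 60% girls,
# each split 50/50 around age 16; required sample of 100.
sample_size = 100
strata = {
    "boys under 16":  0.40 * 0.50,
    "boys 16+":       0.40 * 0.50,
    "girls under 16": 0.60 * 0.50,
    "girls 16+":      0.60 * 0.50,
}

# Number of participants to sample (at random) from each stratum
quotas = {name: round(sample_size * proportion)
          for name, proportion in strata.items()}
print(quotas)
```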


ETHICAL ISSUES AND WAYS OF DEALING WITH THEM (AQA A-level Psychology revision notes)

The British Psychological Society (BPS) publish ethical guidelines which psychologists are supposed to follow when planning and conducting research.


Participants should not be deceived (lied to) or involved in experiments unless they have agreed to take part. One way of dealing with this is to make sure that the participant is told precisely what will happen in the experiment before requesting that he or she give voluntary informed consent to take part. In reality, many experiments require some level of deception to avoid demand characteristics, hence it is often difficult to receive fully informed consent.

For example, Milgram got consent to take part in an experiment, but not informed consent as participants did not know the true aim of the study.

Dealing with Deception and Lack of Informed Consent

  • At the end of the experiment participants should be informed about the aims, findings and conclusions of the investigation and the researcher should take steps to reduce any distress that may have been caused by the experiment. This may be in the form of counselling. They should also be asked if they have any questions.
  • Presumptive Consent. The general public are surveyed and asked whether they believe that the breaking of ethical guidelines in a particular study is justified or not. This solution is often used in relation to experiments where participants cannot be asked for consent as the study requires them to remain naïve: e.g. field experiments such as Hofling.
  • Prior General Consent. In this proposed solution, people agree in advance to take part in research in which they may be deceived or not fully informed. They thus serve as a pool of participants who may be used at some point in the future.
  • Retrospective consent involves asking the participants for consent after they have participated in the study.
  • In the case of young children or the mentally ill, parents or guardians can provide consent if they judge a procedure is in the client’s best interests: e.g. whether a child with ADD should be prescribed a drug. Approval could also be obtained after consulting professional colleagues: e.g. psychiatrists debating whether a depressed patient would benefit from a drug treatment.


Participants should have the right to withdraw from an experiment at any time.

They should be informed of this right in the standard briefing instructions given to them before the experiment commences. They have the right to insist that any data they have provided during the experiment should be destroyed.


Participants should be exposed to no more risk than they would encounter in their normal lives. They should also be protected from any kind of psychological harm such as stress, embarrassment or damage to their self-esteem.  If participants are showing signs of distress they should be reminded of their right to withdraw.


Information about participants’ identities should not be revealed; data can be kept confidential by ensuring participants remain anonymous. Freud, for example, gave his clients pseudonyms: e.g. Little Hans.


FORMS & INSTRUCTIONS (Psychology A-level revision)


If asked to write a consent form, to get full marks you must provide sufficient information on both ethical and methodological issues for participants to make an informed decision. You must also write the form as it would be read by participants.

The form should contain

  • The purpose of the study
  • The length of time required of the participants
  • Details of any parts of the study that participants might find uncomfortable
  • Details about what will be required of them, and what they will have to do
  • A statement that there is no pressure to take part in the study
  • Right to withdraw (they can leave at any time, without giving a reason, keep any money they have been paid, and any data collected on them will be destroyed)
  • Reassurance about protection from harm
  • Reassurance about confidentiality of the data
  • They should feel free to ask the researcher any questions at any time
  • They will receive a full debrief at the end of the programme


You need to use the details in the description of the study to write an appropriate set of instructions for participants. The instructions should be clear, concise, use formal language and be as straightforward as possible. They must:

  • Explain the procedures of this study relevant to participants.
  • Include a check of understanding of instructions.

(This is not a consent form so references to ethical issues are not necessary.)


PEER REVIEW (A-level Psychology revision)

Peer review is the process by which psychological research papers are subjected to independent scrutiny (close examination) by other psychologists working in a similar field who consider the research in terms of its validity and significance. Such people are generally unpaid. Peer review happens before research is published.

Peer review is an important part of this process because it provides a way of checking the validity of the research, making a judgement about the credibility (believability) of the research, and assessing the quality and appropriateness of the design and methodology. It is a means of preventing incorrect data from entering the public domain. This is important to ensure that any funding is being spent correctly.

Peers are also in a position to judge the importance or significance of the research in a wider context.  They can also assess how original the work is and whether it refers to relevant research by other psychologists.  They can then make a recommendation as to whether the research paper should be published in its original form, rejected or revised in some way.  This peer review process helps to ensure that any research paper published in a well-respected journal can be taken seriously by fellow researchers and the public. 



Science is the unbiased observation and measurement of the natural world. It is the only tool humanity has developed for establishing factual truths about the world. Science allows us to establish the laws of the physical world and, from this knowledge, create technology.

Since the 1700s the scientific method has been developed, scrutinised and refined.

Major features of the scientific method are:

  • Empiricism – Information is gained through direct observation or experiment on physically observable and measurable phenomena rather than by reasoned argument, unfounded beliefs, faith or superstition.
  • Objectivity – Scientists should strive to be unbiased and non-interpretative in their observations and measurements. Prior expectations and preconceptions should be put aside. ‘Subjective’, by contrast, can be thought of as meaning biased, personal and interpretive.
  • Replicability – One way to demonstrate the validity of any observation or experiment is to repeat it. If the outcome is the same, this confirms the truth of the original results, especially if the observations have been made by a different person. In order to achieve such replication it is important for scientists to record their methods carefully so that the same procedures can be followed in the future.
  • Control – Scientists seek to demonstrate causal relationships between variables. The experimental method is the only way to do this – where we vary one factor (the independent variable) and observe its effect on a dependent variable. In order for this to be a ‘fair test’ all other conditions must be kept the same, i.e. controlled. This allows us to establish the cause-effect relationships which underlie the laws of nature.
  • Theory construction – One aim of science is to record facts, but an additional aim is to use these facts to construct theories to help us understand and predict the natural world. A theory is a collection of general principles that explain observations and facts. Theories should be based on a sound body of valid and reliable scientific study.
  • Hypothesis Testing – A good theory must be able to generate testable hypotheses. Popper developed the concept of falsification: a theory can never be conclusively proven true, only disproven. A scientific theory must therefore make predictions which could, in principle, be shown to be false; a theory which survives repeated attempts to falsify it is strengthened, but never finally proven.


PARADIGMS AND PARADIGM SHIFTS (AQA A-level Psychology revision guide)

A paradigm refers to the accepted and approved of ways of thinking, understanding, theorising and researching that exist and are shared within any one particular science. For example, biologists all tend to work within a paradigm where they accept basic concepts (evolution and Darwinian theory) as true and agree on how biology should be studied (scientific experimentation).

Psychology is often described as pre-paradigmatic as there is no complete, shared agreement between psychologists about how they should understand and explain human behaviour or what the best methods to study behaviour are. Psychology encompasses a number of conflicting approaches (e.g. behaviourism, biological, cognitive, psychodynamic, evolutionary, etc.) which disagree over what the major influences are on behaviour and what methods should be employed to study behaviour.

A paradigm shift occurs when there is a fundamental change in how scientists in a particular field understand and research subject matter due to evidence proving that the previous paradigm was inadequate/incorrect in some way. For example, in the field of physics, Newton’s laws were the dominant paradigm from the 18th to early 20th century before the work of Einstein resulted in a paradigm shift in the way in which physicists understood the physical laws of the natural world.



Psychological investigations are written up/reported in a standard format used by all psychologists.

Abstract – A summary of the study covering the aims/hypothesis, method/procedures, results and conclusions. Allows a reader to gain a quick overall understanding of a study.

Introduction/Aim/Hypotheses – What the researchers intend to investigate. This often includes a review of previous research (theories and studies), explaining why the researchers intend to conduct this particular study. The researchers may state their research predictions and/or a hypothesis or hypotheses.

Method – A detailed description of what the researchers did, providing enough information for replication of the study. Included in this section is:

  • Information about the participants (how they were selected, how many were used, and the experimental design)
  • The independent and dependent variables
  • The testing environment
  • Materials used
  • Procedures used to collect data
  • Any instructions given to participants before (the brief) and afterwards (the debrief)

For full marks, the method section should be written clearly, succinctly and in such a way that the study would be replicable. It should be set out in a conventional reporting style, possibly under appropriate headings. The important factor here is whether the study could be replicated.

Results – This section contains statistical data including descriptive statistics (tables, averages and graphs) and inferential statistics (the use of statistical tests to determine how significant the results are).

If you are asked to outline and discuss the results of a study mention the following points

  • Write the results out clearly in words: e.g. ‘the mean number of objects remembered for participants listening to music was seven, but for those not listening to music was nine’.
  • Refer to the standard deviation or range and explain what they mean, e.g. ‘those listening to music had a higher standard deviation than those not listening to music, meaning that their scores varied more around the mean. So there were more individual differences in participants’ memories when listening to music.’
  • Say whether the results were significant and how you know this (refer to the OV, CV and level of significance), and what it means if they were.
  • Discuss issues of validity
  • Discuss issues of reliability


  • The researchers offer explanations of the behaviours they observed and might also consider the implications of the results (how it can be applied to the real world) and make suggestions for future research.
  • The researchers must consider their work critically, and evaluate it in terms of validity, reliability, any short-comings or criticisms, etc.
  • The researchers should also discuss how their research relates to the background research discussed in their introduction.


The full details of any journal articles or books that are mentioned in the Introduction section of a psychological report.  
Always written in the following format: surname, initial(s), year. Title. Where it was published. Publisher.
For example, a book written by Sandra L Bem in 1993 titled ‘The lenses of gender: transforming the debate on sexual equality’, published in Newhaven by Yale University Press would be referenced:
Bem, S. L. 1993. The lenses of gender: transforming the debate on sexual equality. New Haven. Yale University Press.



Although it is difficult to quantify how much psychology contributes to the economy, Psychology university departments receive over £50 million in research grants annually.

Psychological research is used in diverse fields such as medicine, psychiatry, therapy, social work, childcare, advertising, marketing, business, forensic/criminal investigation, the army, education, etc.

Apart from direct benefits, Psychology indirectly contributes to the economy: for example, in the UK, 40% of people claiming incapacity benefits are doing so due to anxiety or depression; therefore, psychotherapy may assist the long-term unemployed in returning to work, which increases tax revenue.

Psychology may also assist in finding solutions to wider social problems relating to crime, aggression, child abuse, etc. This could contribute to the economy by reducing levels of crime (theft and damage to properties), reducing prison population (paid for by the tax-payer) and increased taxation (people working rather than being in prison).


DESCRIPTIVE STATISTICS (A-level Psychology notes)

Once a study has been conducted that produces quantitative data, patterns and trends can be simply analysed using some of the following techniques.


This refers to the 3 forms of average – mean, median and mode – which tell us about the typical value within a set of data.


For example, a set of scores are produced in a memory test:

5, 7, 8, 8, 10, 11, 14, 15, 45

Add all scores and divide by total number of scores:  123 divided by 9 = 13.67

  • An advantage of the mean is that it is the truest form of average because it uses all scores within a set of data.
  • A disadvantage is that the mean may be artificially inflated or deflated by extreme scores (outliers) in a set of data (in such a case we can say that the data is skewed). In the above example the extreme score of 45 artificially inflates the mean to an unrealistically high level.
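Both points can be checked with Python's statistics module, using the scores above:

```python
from statistics import mean

scores = [5, 7, 8, 8, 10, 11, 14, 15, 45]

print(round(mean(scores), 2))       # 13.67 - inflated by the outlier (45)
print(round(mean(scores[:-1]), 2))  # 9.75  - mean with the outlier removed
```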


The median is the middle score in a set of ranked (put in order from low to high) data.

  • An advantage of the median is that it is not affected by extreme scores (outliers).
  • A disadvantage is that the median does not take account of the values of all the scores in the data set, so it is less sensitive than the mean.

E.g. 2, 4, 4, 5, 9, 15, 16 Median = 5 (Take mean average if 2 numbers in middle).

       2, 4, 5, 9, 15, 16, 17 Median = 9 (Take mean average if 2 numbers in middle).



The most frequently occurring score in a set of data.

  • An advantage of the mode is that it is not affected by extreme scores (outliers).
  • A disadvantage is that the mode can be altered a lot by small changes in a set of data. Also, a set of scores may have no mode value, or more than one.

E.g.  2, 2, 4, 5, 9, 15, 16   Mode = 2

         2, 3, 4, 5, 9, 16, 16 Mode = 16
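The examples above can be reproduced with the statistics module (multimode returns every most-frequent value, which handles sets with more than one mode):

```python
from statistics import median, multimode

print(median([2, 4, 4, 5, 9, 15, 16]))     # 5
print(median([2, 4, 5, 9, 15, 16]))        # 7.0 (mean of the middle two: 5 and 9)

print(multimode([2, 2, 4, 5, 9, 15, 16]))  # [2]
print(multimode([2, 3, 4, 5, 9, 16, 16]))  # [16]
```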



To calculate how much 1 number is as a percentage of another number, divide the 1st number by the 2nd and multiply by 100.

For example, if Bob earns £26,060 a year and Nicola earns £137,540 then 

26,060/137,540 x 100 = 18.95

Therefore, Bob earns 18.95% of Nicola’s salary.
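The same calculation in code:

```python
bob = 26_060      # Bob's salary (£)
nicola = 137_540  # Nicola's salary (£)

percent = bob / nicola * 100
print(round(percent, 2))  # 18.95
```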


These tell us about the ‘spread’/‘dispersion’/’variability’ within a set of scores – the range and the standard deviation (SD).


This simply tells us about the range of scores in a set of data. The range is calculated by taking the highest score and subtracting the lowest score.


The standard deviation tells us about the amount of variability from the mean.

For example, 2 classes of students with 2 different psychology teachers gained the following % scores in an end of year test.

GROUP 1: 18, 24, 31, 46, 55, 64, 79, 82, 90, 98.  Mean = 59

GROUP 2: 49, 52, 54, 57, 68, 60, 62, 64, 66, 68.  Mean = 60

Although the 2 groups have very similar mean scores, GROUP 1 have a much larger SD – there is a lot of variability from the mean whereas there is little variation from the mean in GROUP 2.

The SD is a stronger measure of dispersion than the range because

  • The SD is a measure of dispersion that is less easily distorted by a single extreme score.
  • The SD takes account of the distance of all the scores from the mean.
  • The SD does not just measure the distance between the highest score and the lowest score.
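Using the two groups above, the point can be demonstrated with the statistics module (pstdev treats the scores as the whole population; stdev, which divides by n − 1, would be used for a sample):

```python
from statistics import mean, pstdev

group1 = [18, 24, 31, 46, 55, 64, 79, 82, 90, 98]
group2 = [49, 52, 54, 57, 68, 60, 62, 64, 66, 68]

# Similar means, very different spread: GROUP 1 has a much larger
# range and SD than GROUP 2.
for name, scores in [("GROUP 1", group1), ("GROUP 2", group2)]:
    rng = max(scores) - min(scores)
    print(name, "mean:", round(mean(scores)), "range:", rng,
          "SD:", round(pstdev(scores), 1))
```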


DISPLAYS OF DATA (AQA A-level Psychology notes)

Quantitative data can be plotted on a variety of graphs and charts.

GRAPHS are used to display continuous scores (ordinal data: see Inferential Statistics below). For example, to record participants scores in a memory test (x/20).


HISTOGRAMS are used to display continuous scores which have been grouped into intervals (interval data). (See Inferential Statistics below.)


BAR CHARTS are not used to display scores - rather they display categories of information (nominal data: see Inferential Statistics below). For example, number of participants in a particular category such as: favourite colour, borough of London lived in, participants studied at A Level, etc.


Note: whereas histogram bars join because they display continuous sets of scores, bar chart bars are separate as they show separate categories of information. 

SCATTERGRAMS are used to display data from correlation studies (see previous notes on Correlation Studies).




Many characteristics of populations follow a normal distribution: e.g. height, weight, shoe size, etc.

IQ scores show a ‘normal’ distribution: i.e. most scores cluster around the mean average and, as scores decrease or increase in either direction, fewer and fewer people possess these low or high scores. 68% of the population have an IQ between 85 and 115; only 2% of the population have an IQ between 130 and 145.
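These percentages follow from the shape of the normal curve (IQ mean 100, SD 15) and can be checked with the statistics module:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

within_1_sd = iq.cdf(115) - iq.cdf(85)          # IQ between 85 and 115
between_130_145 = iq.cdf(145) - iq.cdf(130)     # IQ between 130 and 145

print(round(within_1_sd * 100))      # 68
print(round(between_130_145 * 100))  # 2
```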


However, distributions of characteristics in populations may be ‘skewed’ (distorted in one direction or another). For example, salary in the UK is positively skewed: i.e. a small % of the population earn a very large salary. The IQs of children at a school for the gifted would be negatively skewed (i.e. few with a low IQ, lots with a high IQ).



INFERENTIAL STATISTICS (AQA A-level Psychology revision notes)

Although quantitative data can be analysed in fairly simple ways using measures of central tendency and dispersion, psychologists and scientists employ more complex statistical techniques to analyse results.

Experiments and correlation studies involve assessing whether

  • there is a significant difference between how the 2 conditions of the IV affect the DV


  • there is a significant correlation between 2 co-variables.

The term ‘significant’ can be thought of as referring to whether there is a real, interesting and important difference or correlation between variables.

For example, in the drink-driving study we may find a mean average score of 16/20 for the sober group and 9/20 for the alcohol group – clearly this is an important ‘significant’ difference. On the other hand if the scores were 14/20 and 11/20 we would be less sure if there was a real ‘significant’ difference between the groups.

At a basic level, statistical analysis is a tool to assess whether we have or have not found a significant difference or correlation in a study.

There are a number of different statistical tests that can be used to analyse data. Which test is appropriate to use is decided by

  1. Whether the study is an experiment or a correlation study
  2. Whether the study’s design is an independent groups design or a repeated measures design
  3. Whether data is at the ordinal, nominal, interval or ratio level (see below)



Quantitative data comes in different forms/types.

  • Ordinal Data – scores which can be ranked from low to high: e.g. scores in an IQ test, memory test or personality questionnaire.
  • Nominal Data – data in the form of categories of information: for example, the number of students studying particular subjects in college.

For the following examples decide whether data is ordinal or nominal.

Height, eye colour, borough of London lived in, stress score, favourite animal, skill at driving, reaction speed.

  • Interval Data – Ordinal data which has been separated into intervals: e.g. 0-5, 6-10, 11-15, 16-20, etc.


In the exam you are only required to know how to conduct inferential statistics using the Sign Test; however, all statistics tests follow the basic principles below.

  • Data from an experiment or correlation study is processed through a number of statistical/mathematical formulae. This will eventually produce one single number which ‘describes’ the data as a whole – this is referred to as the Calculated/Observed Value (OV)
  • The OV is then compared to a Critical Value (CV). This is a number found by cross-referencing certain information on a table of statistical significance.
  • Different statistics tests have different rules
    • In some tests if the OV > CV then the statistics test shows that we have found a significant difference/correlation and can, therefore, accept the experimental hypothesis. If the OV < CV we reject the experimental hypothesis.
    • In other tests the reverse is true: e.g. if OV < CV we accept the experimental and reject the null.
    • In the exam you will be told which of the 2 rules above applies to the statistics test concerned.
  • At a basic level, therefore, statistical analysis of data is a way of establishing whether we have or haven’t found significant results.


In theory, psychologists/scientists never say that their findings are 100% accurate and true – there is always a probability that although results seem to indicate particular findings they are incorrect and findings have occurred by chance.

The concept of level of significance is used to indicate the probability that a particular set of findings is accurate and true, and the extent to which results may simply have occurred due to chance.

For most pieces of psychological research a significance level of P < 0.05 is used. This indicates a 95% probability that results are accurate and true and a <5% probability that results occurred due to chance.

More stringent levels of significance can be set when the accuracy of research findings is more important: e.g. in trials of a new drug. Thus findings which are significant at P < 0.01 mean that researchers are 99% confident results are true and there is only a 1% probability they occurred due to chance.

< means ‘less than’.


Depending on the results of statistical analysis of data we may find that results are significant at any one of the above levels of probability. The smaller the value of P at which results are significant, the stronger our results are.


Type 1 errors – calling something true when it’s false.

When a statistics test indicates that the experimental hypothesis should be accepted but, in fact, the results are due to chance/random factors. If the level of significance is set at 5%, there will always be a 1 in 20 chance of a Type 1 error.

Clearly, the larger the value of P used as the level of significance (e.g. P < 0.1), the greater the chance that a Type 1 error will occur (in this case 10%).
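The 1-in-20 figure can be illustrated with a quick simulation. Under a true null hypothesis a (well-behaved) P value is equally likely to fall anywhere between 0 and 1, so a cut-off of 0.05 produces a Type 1 error roughly 5% of the time:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

alpha = 0.05
trials = 100_000

# Draw a uniform "P value" for each simulated experiment with a true null
# and count how often it falls below the significance level.
false_positives = sum(random.random() < alpha for _ in range(trials))
print(false_positives / trials)  # close to 0.05
```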

Type 2 errors – calling something false when it’s true.

When a statistics test indicates that the experimental hypothesis should be rejected, but in fact the results are significant (a real effect exists).

Clearly, the more stringent the level of significance (e.g. P < 0.005), the greater the chance that a type 2 error will occur.
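The type 1 error rate can be demonstrated with a short simulation. This is a minimal sketch, not part of the AQA specification: it relies on the fact that when there is genuinely no effect, P values are spread evenly between 0 and 1, so the proportion of studies that wrongly come out "significant" matches the significance level.

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

trials = 100_000   # number of simulated experiments where no real effect exists
alpha = 0.05       # significance level (P < 0.05)

# Each random number stands in for the P value of one experiment under
# the null hypothesis; values below alpha are false "significant" results
false_positives = sum(1 for _ in range(trials) if random.random() < alpha)

print(f"Type 1 error rate: {false_positives / trials:.3f}")  # close to 0.05
```

Running the same idea with alpha = 0.1 would give a rate close to 10%, matching the point above about lenient significance levels.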


THE SIGN TEST (Psychology A-level revision)

The Sign Test is the one statistics test you need to know how to conduct in full.

The Sign Test is used in experiments with a repeated measures design and nominal data.

Example and procedures

We could conduct a study into whether there is a difference in people’s memory for a list of 10 words they’ve been read (DV = memory score x/10) depending on whether they heard the words in quiet conditions (1st condition of the IV) or noisy conditions (2nd condition of the IV). We would use a 1-tailed hypothesis for this study as previous research indicates that noise disrupts memory ability.

Once the experiment is conducted, the data (results from participants) are put into a results table.


Steps to calculate Sign Test

  1. Subtract the score for the experimental condition from the score for the control condition. If the result is negative record a – sign; if it’s positive record a + sign; if there’s no difference record a 0
  2. Count the number of times the less frequent sign occurs. In the above example, the + sign is the least frequent. Call this value S. Therefore, S = 2
  3. Count the total number of + and – signs. Call this value N. Therefore, N = 7
  4. Decide whether a 1 or 2-tailed hypothesis was used. In the above example, we used a 1-tailed hypothesis.
  5. Consult the table of statistical significance (below) for the Sign Test to find the critical value (CV).
  6. Look down the left-hand column marked N until you get to the total number of + and – signs. In the case described, N = 7.
  7. Cross-reference N with the columns for either a 1- or 2-tailed test (depending on whether your hypothesis is 1- or 2-tailed) and the Level of Significance value 0.05 (this is your Level of Significance – P < 0.05). In the case above this gives a value of 0. Call this value the critical value (CV). Therefore, CV = 0.
  8. If the critical value ≥ S then we have found a significant difference between how the 2 conditions of the IV affected the DV: i.e. there is a significant difference in how noisy and quiet conditions affect memory ability. In the example above, S (S = 2) is greater than the critical value (CV = 0), so the condition is not met and we have not found a significant difference.
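The steps above can be sketched in code. The memory scores below are hypothetical (invented for illustration, chosen so that S = 2 and N = 7 as in the worked example); the critical value of 0 for N = 7, 1-tailed, P < 0.05 is taken from the table below.

```python
# Hypothetical memory scores (x/10) for 7 participants, tested twice
quiet = [8, 6, 7, 9, 5, 8, 7]   # 1st condition of the IV (control)
noisy = [6, 5, 8, 7, 4, 6, 9]   # 2nd condition of the IV (experimental)

# Step 1: record the sign of the difference for each participant
signs = []
for q, n in zip(quiet, noisy):
    diff = n - q
    if diff > 0:
        signs.append("+")
    elif diff < 0:
        signs.append("-")
    # participants with no difference (0) are dropped from the analysis

# Step 2: S = the number of times the less frequent sign occurs
s = min(signs.count("+"), signs.count("-"))

# Step 3: N = the total number of + and - signs
n_total = len(signs)

# Steps 5-8: compare S with the critical value from the table
critical_value = 0  # from the table: N = 7, 1-tailed, P < 0.05

print(f"S = {s}, N = {n_total}, CV = {critical_value}")
if s <= critical_value:
    print("Significant difference between the 2 conditions")
else:
    print("No significant difference found")
# prints: S = 2, N = 7, CV = 0 / No significant difference found
```

Note that the code applies the same decision rule as step 8: the result is significant only when S is less than or equal to the critical value.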

 Table of Critical Values for the Sign Test