
Adult Psychiatric Morbidity Survey: Survey of Mental Health and Wellbeing, England, 2023/4

Part 2 Release

The following chapters will be published in Autumn 2025:

5. Alcohol: hazardous, harmful and dependent patterns of drinking

6. Drug use and dependence

8. Personality disorder

10. Autism spectrum disorder

11. Bipolar disorder

12. Psychotic disorder

13. Eating disorders

26 June 2025 09:30 AM

Methods

Authors

Katie Ridout, Anna Keyes, Ioana Maxineanu, Sarah Morris, Abigail Timpe, Mari Toomse-Smith, Traolach Brugha, Zoe Morgan, Samuel Tromans, Sally McManus 


1. Introduction

1.1 The Adult Psychiatric Morbidity Survey series

The Adult Psychiatric Morbidity Survey (APMS) 2023/4 is the fifth in a series of national mental health surveys. The APMS series began in 1993, with surveys conducted about every seven years: 2000, 2007 and 2014. Due to the COVID-19 pandemic, fieldwork for the fifth survey was postponed to 2023/4.

The APMS series is designed to:

  • Estimate the prevalence of a range of common and rare mental health and neurodevelopmental conditions and disorders in the population.
  • Estimate the proportion of people with each disorder in receipt of treatment and services.
  • Produce trends in disorder and treatment through comparisons with previous surveys in the series.
  • Enable the circumstances of people with different mental disorders to be compared with those of people without disorder.

The first two surveys in the series were carried out by the Office for National Statistics (ONS) in 1993 and 2000, and covered England, Scotland and Wales. Since 2007, the APMS has covered England only and has had no upper age limit for participation (the limit was 64 in 1993 and 74 in 2000). The 2007, 2014 and 2023/4 surveys were designed and carried out by the National Centre for Social Research (NatCen), the University of Leicester, and City St George's, University of London, on behalf of NHS England (previously NHS Digital, the trading name of the Health and Social Care Information Centre).

1.2 The 2023/4 survey

1.2.1 Summary of survey design

APMS 2023/4 is a random probability face-to-face survey, covering adults living in residential households in England. To enable comparisons over time, APMS 2023/4 largely replicated the design of previous surveys in the series.

Fieldwork for APMS took place between March 2023 and July 2024. One adult aged 16 and over was randomly selected from one eligible household at each selected address. For further information on the sample design, see Section 2.2 of this chapter. The survey consisted of two phases. The first phase was a face-to-face interview conducted by NatCen interviewers. A remote telephone interview was offered if a participant was not able to have a face-to-face visit. Most (96.5%) participants proceeded with a face-to-face interview. A sub-sample of phase one participants were selected to take part in phase two, which was a face-to-face assessment conducted by clinically trained interviewers coordinated by the University of Leicester.

The phase one questionnaire included screening tools and diagnostic schedules for different mental health disorders, as well as questions on service use, aspects of life related to mental health and participants’ sociodemographic characteristics. Around a third of the interview, including the most sensitive questions, was completed by participants on interviewers’ laptops. The rest of the interview was administered by an interviewer.

A household response rate of 29.4% was achieved, with 6,912 people interviewed for phase one. Of these, 1,742 (25.2%) participants were issued to phase two and 887 (50.9% of those issued) took part in a phase two examination.

The chapters in this publication provide prevalence estimates of common and rare mental conditions and disorders for people aged 16 and over living in England. Given known associations between disadvantage and mental health, deprived areas were oversampled in APMS 2023/4. The data in this report have been weighted to present a representative picture of mental health across adults in England.

1.2.2 The APMS 2023/4 report

Findings from APMS 2023/4 are published in two parts, both of which will be available here.

Excel tables accompany each chapter and can be found via the data tables links on the summary page or in each chapter.

Part 1 includes the following chapters:

1. Common mental health conditions

2. Mental health treatment and service use

3. Posttraumatic stress disorder

4. Suicidal thoughts, suicide attempts and non-suicidal self-harm

7. Gambling behaviour

9. Attention deficit hyperactivity disorder

 

Part 2 includes the following chapters:

5. Alcohol: hazardous, harmful and dependent patterns of drinking

6. Drug use and dependence

8. Personality disorder

10. Autism spectrum disorder

11. Bipolar disorder

12. Psychotic disorder

13. Eating disorders

1.2.3 Availability of datasets

APMS is a long survey, and only some of the results are included in this report. Pseudonymised, disclosure-controlled datasets for all surveys in the series become available a few months after publication, for specific secondary analysis projects, under a Special User Licence agreement from the UK Data Service at https://ukdataservice.ac.uk/


2. Survey design

2.1 Overview of the survey design

Each survey in the APMS series involved interviewing a large, stratified, probability sample of the general population, covering people living in residential households. The two-phase survey design involved an initial phase one interview with the whole sample, followed up with a phase two semi-structured assessment carried out by clinically trained interviewers with a subset of participants (McManus et al. 2020).

Participants were screened for a range of different types of mental health conditions, from common conditions like depression and anxiety disorder through to rarer neurological and mental health conditions such as psychotic disorder, attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). The long phase one questionnaire also covered many aspects of people’s lives that are linked to mental health, and this information can be used to profile the circumstances and inequalities experienced by people with mental disorders.

Design strengths

This design has several strengths:

  • By sampling from the general population rather than from lists of patients, APMS data can be used to examine the ‘treatment gap’. That is, the survey data can be used to explore what proportion of people with a condition are not in contact with services or in receipt of any treatment, or who are in receipt of inappropriate treatment.
  • The use of validated mental disorder screens and assessments allows for identification of people with sub-threshold symptoms and those with an undiagnosed disorder.
  • Consistent methodology and coverage over time allows for trends in a number of conditions to be monitored.
  • Collection of a large amount of data on a range of topics so that relationships between different domains can be examined. In particular, the questionnaire covers detailed and current information about people’s social and economic circumstances, which does not tend to be collected in a consistent or comprehensive way in administrative datasets.
  • The use of a self-completion mode to cover the most sensitive topics – such as suicide attempts, illegal behaviours, and experience of abuse and violence – means that the survey includes information that some participants may have never disclosed before.
  • At the end of the survey a question is asked about permission for follow-up. The study therefore presents an opportunity for longitudinal data collection and a sampling frame that allows a random sample of people with very specific experiences, who may not otherwise have been identifiable, to be invited for further research. 
  • The APMS dataset is deposited with the UK Data Service and is designed to be suitable for extensive further analysis. There is only scope for a small part of the data collected to be covered in this publication.

Design limitations

The design also has its limitations:

  • The sampling frame covers only those living in private households, and therefore those living in institutional settings such as large residential care homes, offender institutions, prisons or temporary housing (such as hostels or bed and breakfasts), or sleeping rough, did not have a chance of selection. People living in such settings are likely to have worse mental health than those living in private households (Bebbington et al. 2021; Chilman et al. 2024). However, the proportion of the overall population not living in private households is so small that this would have little, if any, impact on the prevalence estimates for the disorders covered by APMS (Brugha et al. 2012, 2016).
  • Some people selected for the survey were not able to take part in a long interview. These included people with serious physical health conditions, who may have felt unwell or been staying in hospital during the fieldwork period, and those whose capability was impaired, for example due to cognitive decline caused by dementia or injury, or because of a learning impairment. Reasonable adjustments could be made, such as dividing the interview between two visits.
  • Some people selected for the survey could not be contacted or refused to take part. The response rate achieved (29.4%) was lower than in previous surveys in the series (57% in 2014), consistent with broader trends in survey non-response (de Leeuw, Hox & Luiten 2018; Williams & Brick 2018), which may be partially attributable to the increased number of survey requests people receive (Eggleston 2024). A problem for all such studies is how to take account of those who do not take part, either because contact could not be established with the selected household or individual or because they refused. The weighting (outlined in Section 4) addresses this to some extent.
  • Younger age groups were underrepresented in the achieved sample. Participation in surveys has always tended to be lower among younger age groups and higher among older age groups. An estimated 13% of England's population were aged 16 to 24 in 2022, whereas 4.6% of survey responses came from this age group. Males aged 16 to 24 were the most underrepresented age-sex group, accounting for 3.3% of survey responses compared with an estimated 6.6% of England's population. Conversely, 7.6% of responses were from males aged 75 and over and 9% from females aged 75 and over, while the respective population proportions were 4.8% and 6.3%. Although weighting (Section 4) addressed this to some extent, confidence intervals were fairly wide for some subgroup analyses among younger adults.
  • The mental health assessments used are not as valid as a clinical interview. In a clinical interview, a trained psychologist or psychiatrist may take many sessions and much explorative questioning and clinical judgement to reach a diagnosis. In the context of a questionnaire administered by a lay interviewer, this is not possible. However, phase two interviewers undertook extensive clinical interview training as part of their role. In addition, the assessments used have been validated and are among the best available for the purpose in hand (Brugha et al. 1999, 2012).
  • Socially undesirable or stigmatised feelings and behaviours may be underreported. While this is a risk for any study based on self-report data, the study goes some way to minimise this by collecting particularly sensitive information in a self-completion format.
  • As for all surveys, it should be acknowledged that prevalence rates reported are only estimates. If everyone in the population had been assessed, the rate found may have been higher or lower than the survey estimate. Confidence intervals are given for key estimates in a design effects table in the Excel tables accompanying each chapter. For low prevalence disorders, relatively few positive cases were identified. Particular attention should be given to uncertainty around these estimates and to any subgroup analysis based on these small samples. All comparisons made in the text have been statistically tested and only statistically significant differences are described.
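The sampling-error point above can be made concrete. A minimal sketch of a design-effect-adjusted confidence interval for a prevalence estimate follows; the function and the figures in the example are illustrative assumptions, not values taken from the survey:

```python
import math

def prevalence_ci(p, n, deff=1.0, z=1.96):
    """Approximate 95% confidence interval for a prevalence estimate p
    from n interviews, inflating the simple-random-sample variance by
    the design effect (deff) to reflect clustering and weighting."""
    se = math.sqrt(deff * p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: a 2% prevalence estimate from 6,912 interviews,
# with an assumed design effect of 1.5.
low, high = prevalence_ci(0.02, 6912, deff=1.5)
```

For a low-prevalence disorder, the interval is wide relative to the estimate itself, which is why the text above urges caution with subgroup analyses based on small numbers of positive cases.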

Changes to the 2023/4 survey

While the aim was to keep the 2023/4 survey as similar to the previous surveys in the series as possible to allow for trend analysis, some methodological and content changes were planned for APMS 2023/4 from the outset. These included:

  • Changes to certain parts of the questionnaire following a review of the survey content, including a consultation on the content and outputs.
  • Development of an alternative interview mode following the COVID-19 pandemic.
  • Inclusion of a clinical eating disorders comparison study carried out by the University of Leicester. The aims were to clinically calibrate the analytical thresholds used within the Schedules for Clinical Assessment in Neuropsychiatry (Section 9), which covers eating disorders, and to inform how data on possible and probable eating disorders are collected in the future.

Initially the intention was to achieve a sample of about 11,000 participants resident in England, comprising:

  • A general population sample of adults aged 16 and over. Target sample size 8,000, including oversampled addresses from the most deprived quintile of the Index of Multiple Deprivation 2019 (IMD).
  • An ethnic minority boost sample. Target sample size 3,000.

Ethnic minority boost

Mental health is known to vary by ethnicity, but due to the low prevalence of some ethnic groups in the overall population, most surveys contain too few participants to be able to robustly report on specific groups. Instead, participants from different minoritised groups tend to be combined into larger heterogeneous categories for analysis. Recognising the importance of mental health data that could be analysed by granular ethnic categories, an ethnic minority boost was designed to run alongside the main survey. The goal was to provide an evidence base to inform planning and design of mental health services to be inclusive of the needs of ethnic minority groups.

The ethnic minority boost fieldwork ran in parallel to the main survey fieldwork for four months, after which it was discontinued, as the actual ‘screening in’ rates differed too much from the assumptions to make the boost economically viable. The data from the ethnic minority boost will be archived but have not been included in the APMS 2023/4 report. The design and delivery of the ethnic minority boost will be described in separate technical documentation.

Ethical approval

For APMS 2023/4, ethical approval for phase one was obtained from NatCen’s internal Research Ethics Committee (NatCen: 1st November 2022), phase two from University of Leicester Medicine and Biological Sciences Research Ethics Committee (29th June 2022), and the clinical eating disorders comparison study from the NHS Health Research Authority (18th April 2023).

2.2 Sample design

Overview of the sample design

The sample for APMS 2023/4 was designed to be representative of the population living in private households (that is, people not living in communal establishments or sleeping rough) in England. As with previous surveys in the series, APMS 2023/4 adopted a multi-stage stratified probability sampling design. At the first stage, a random sample of primary sampling units (PSUs), based on postcode sectors, was selected. Within each selected PSU, a random sample of postal addresses (known as delivery points) was then drawn.

The sample consisted of the core and deprived area boost samples. The deprived area boost sample was drawn from Lower layer Super Output Areas (LSOAs) in the most deprived quintile of England’s Index of Multiple Deprivation (IMD).

Additional sample was drawn from the core and deprived area boost samples and assigned as reserve sample, which was later issued when fieldwork did not progress as expected. Due to the lower-than-expected response rate, a further reserve sample for both samples was drawn during the fieldwork period.

Sampling frame

The sampling frame used was the small user Postcode Address File (PAF) because it has excellent coverage of private households in England. The small user PAF consists of those Royal Mail delivery points which receive fewer than 50 items of mail each day. Therefore, most large institutions and businesses are excluded from the sample, but businesses and institutions that receive fewer than 50 items each day are included. When interviewers visited an address that did not contain a private household, they recorded it as ineligible. The small proportion of households living at addresses not on the PAF (estimated to be less than 1%) were not covered by the sample frame (ONS 2023b).

As with previous surveys in the series, APMS 2023/4 did not include adults not residing in private households. People living in communal or institutional establishments tend to be either aged 16 to 24 years (and living in higher education halls of residence) or aged 65 years or over (and living in a nursing or care home setting) (ONS 2023a). Older people living in communal settings are likely to have worse mental health than older people living in private housing, and this should be borne in mind when considering the survey’s account of the older population’s mental health. Overall, communal establishment residents represented less than 2% of all usual residents in England (ONS 2023a).

Selection of primary sampling units

Core sample

PSUs were defined at the postcode sector level. Postcode sectors with fewer than 500 PAF addresses were combined with neighbouring sectors to form the PSUs. This was done to prevent addresses being too clustered within a PSU.

Before selection, the list of PSUs in the population was ordered (stratified) by a number of strata and a systematic random sample was selected from the ordered list. This ensures the different strata in the population are correctly represented and increases the precision of survey estimates. APMS 2023/4 core sample used a sampling methodology that was consistent with previous surveys in the series, and very similar to that used in 2014.

The sampling frame was sorted by three stratification variables in the following order:

  • 9-category former Government Office Region.
  • quintiles of the percentage of the population in National Statistics Socio-economic Classification (NS-SEC) categories 1 and 2 (which denote professional and higher technical occupations).
  • population density at grouped postcode sector level.

When the initial core sample was drawn, Census 2021 data was not available, so Census 2011 data was used to create the stratification variables. When the additional reserve sample was subsequently drawn, Census 2021 data had been released, so this was used to update the stratification variables used. PSUs were systematically selected with probability proportional to the delivery point count of each grouped postcode sector.

710 PSUs were drawn for the initial core sample (64 of which formed the initial reserve); a further 200 were drawn during fieldwork for the additional reserve, giving a total of 910 core PSUs. Prior to selecting the additional reserve, postcode sectors already included in the main or deprived area boost samples were excluded from the sampling frame to avoid duplication.
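The selection procedure described above, a systematic draw from a stratified, ordered frame with probability proportional to size, can be sketched as follows. This is a minimal illustration, not the survey's actual sampling code: `units` and `sizes` stand in for grouped postcode sectors and their delivery point counts, and the frame is assumed to be already sorted by the stratifiers.

```python
import random

def systematic_pps(units, sizes, n):
    """Systematic probability-proportional-to-size (PPS) selection:
    lay the units end to end along a line scaled by size, then take n
    equally spaced selection points from a random start."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    picks, cum, i = [], 0.0, 0
    for k in range(n):
        point = start + k * interval
        while cum + sizes[i] <= point:  # advance to the unit spanning this point
            cum += sizes[i]
            i += 1
        picks.append(units[i])
    return picks
```

With equal sizes this reduces to ordinary systematic sampling. A unit larger than the sampling interval can be selected more than once; real designs typically handle such units by selecting them with certainty.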

Deprived areas boost sample

The deprived areas boost sampling frame was based on LSOAs. LSOAs were ranked on the IMD score (Index of Multiple Deprivation 2019) and divided into quintiles. For the deprived areas boost the sample was restricted to the most deprived quintile.

First, any LSOAs already selected for the core sample were excluded. LSOAs were then selected from the most deprived quintile of IMD with probability proportional to the delivery point count of each LSOA. Prior to selection of PSUs, the sampling frame was sorted by three variables in the following order:

  • 9-category government office region (consistent with the core sample).
  • quintiles of IMD rank.
  • population density per square kilometre.

When the initial deprived areas boost sample was selected, LSOA boundaries for Census 2021 were not yet available, so Census 2011 LSOA boundaries were used. When the additional deprived area boost LSOAs were sampled, Census 2021 boundaries were available, so these were used. 180 LSOAs were systematically selected for the initial deprived area boost (16 of which were put in reserve) and 50 LSOAs for the additional reserve, a total of 230 deprived areas boost PSUs. LSOAs that had already been selected for the core or initial deprived areas boost sample were excluded from the sampling frame for the additional deprived areas boost, to avoid duplication.  

For more information: Table 1

Sampling addresses and households

In the second stage of sampling, 22 delivery points were randomly selected within each selected PSU or LSOA. The total issued sample was 25,080 addresses across 1,140 PSUs.

Interviewers visited the addresses to identify private households with at least one resident aged 16 or over. When visited by an interviewer, 1,580 of the selected addresses were found not to contain private households. These addresses were ineligible and were excluded from the survey sample. At eligible addresses found to contain more than one dwelling/household, interviewers entered the number of dwelling units or households into the electronic address record form and one dwelling/household was automatically selected at random.

Sample design: Addresses issued by sample type

 

Sampling individuals

One adult aged 16 years or over was randomly selected for interview in each eligible household. This was preferred over interviewing all eligible adults because:

  • It helped interviewers to conduct the interview in privacy and thereby obtain more reliable information.
  • Individuals within households tend to be similar to each other and the resultant clustering can lead to an increase in standard errors around survey estimates. By selecting one person in each household, this clustering effect was overcome.
  • Given the length of the interview, interviewing one household member helped to reduce the burden placed on each household.
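The within-household selection step can be sketched as follows. The structure of `household` is a hypothetical stand-in, and the returned probability illustrates the kind of quantity that feeds the selection weighting outlined in Section 4:

```python
import random

def select_one_adult(household):
    """Randomly select one adult aged 16 or over from a household's
    residents. Returns the chosen member and their within-household
    selection probability (1 / number of eligible adults)."""
    eligible = [m for m in household if m["age"] >= 16]
    if not eligible:
        return None, 0.0  # household contains no eligible adults
    chosen = random.choice(eligible)
    return chosen, 1.0 / len(eligible)
```

Because adults in larger households have a lower chance of selection, the inverse of this probability is one component of the weights applied in analysis.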

Selecting participants for phase two

Some participants who responded to the phase one interview (and agreed to be followed up) were invited to take part in phase two of the survey. Participants who, based on their phase one answers, were more likely to have autism, psychosis or an eating disorder were eligible for the phase two examination. This differed from the 2007 and 2014 surveys, in which only those more likely to have autism or psychosis were eligible.

For each phase one participant, the probability of selection for a phase two assessment was calculated as the highest of three disorder-specific probabilities: the psychosis probability, the autism probability and the eating disorder probability. The probabilities were generated from participants’ responses to screening questions in the phase one questionnaire. Participants were selected for phase two on this basis, and those who agreed to be contacted by the University of Leicester were selected for a phase two visit. The screening criteria used for APMS 2023/4 are summarised below.
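The max-of-three rule can be sketched directly. The Bernoulli draw below is an assumption about how the probabilities were applied, since the exact selection mechanics are not described here:

```python
import random

def phase_two_probability(p_psychosis, p_autism, p_eating):
    """A participant's phase two selection probability is the highest of
    the three disorder-specific probabilities from phase one screening."""
    return max(p_psychosis, p_autism, p_eating)

def draw_phase_two(participants):
    """Bernoulli selection of phase one participants, given a mapping of
    participant id -> (psychosis, autism, eating disorder) probabilities.
    In the survey itself, selection was also conditional on consent."""
    return [pid for pid, probs in participants.items()
            if random.random() < phase_two_probability(*probs)]
```

Taking the maximum (rather than, say, the sum) keeps the selection probability a valid probability while still guaranteeing that anyone screening positive for any one of the three conditions has at least that condition's chance of a phase two visit.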

Psychotic disorder phase two selection criteria

Participants needed to meet one or more of the following criteria:

  • Currently taking antipsychotic medication.
  • Reporting an inpatient stay for a mental or emotional problem in the past 3 months.
  • Positive response to 5a in Psychosis Screening Questionnaire (PSQ).
  • Self-reported diagnosis or symptoms of psychotic disorder.
  • Self-reported identification of panic attacks.

Autism phase two selection criteria

  • All adults scoring 8 or more on AQ-17.
  • 16% of adults scoring 4 to 7 on AQ-17.

Eating disorders phase two selection criteria

  • All adults scoring 2 or more on the SCOFF questionnaire.

Phase two: Participant selection

 

Further detail on selection, definitions, analysis and changes across survey years can be found in Chapter 10 Autism spectrum disorder, Chapter 12 Psychotic disorder and Chapter 13 Eating disorders.

2.3 Piloting and questionnaire development

Guidance and consultation

The APMS series is long-established and the 2023/4 survey design was based on that used in previous surveys in the series. The survey development that did take place, to ensure that the survey meets current needs, drew on the expertise of a wide range of advisors and data users. These included:

  • Project oversight and management from NHS England.
  • A Steering Group comprised of representatives from NHS England, Department of Health and Social Care, Office for Health Improvement and Disparities, Royal College of Psychiatrists, Gambling Commission, charities (Mind and Beat Eating disorders), and academic leads in psychiatric epidemiology. This group was coordinated by NHS England.
  • An APMS Academic Group, co-ordinated by the research team, and drawing on the expertise of leading academics from a range of universities and medical schools.
  • A group of senior NatCen interviewers with practical experience of survey delivery in the field.
  • Consultation focus groups with different stakeholder groups, including government research and policy officials, third sector organisations focused on mental health or mental health risk factors, data analysts, academics, health care providers and people with lived experience (Gill et al. 2021), along with some guided feedback interviews. The focus groups and interviews aimed to understand the priority of different topics included, or to be included, in the questionnaire.
  • A public consultation survey on the questionnaire content, where anyone with interest in the topic could respond.

Based on feedback from these sources, a few changes were made to the survey content. These are summarised in Section 2.4. Participants were able to take part by a remote telephone interview when a face-to-face interview was not possible. Some questions asked in the face-to-face interview were not included in the remote telephone interview to minimise participant burden. 

Full detail on topic coverage is included in Section 2.4.

Cognitive testing

Cognitive interviewing methods provide an insight into the mental processes used by participants when answering survey questions, thus helping to identify problems with question wording and design (Collins 2015). These methods investigate four cognitive stages:

  • How participants understand and interpret survey questions.
  • How they recall information that applies to the question.
  • The judgements they make as to what information to use when formulating their answers.
  • How they map their answer onto the response options provided.

Cognitive testing was carried out in 2021 to test new and adapted questions. Not all new or adapted questions could be tested; priority was given to those identified as potentially problematic and suited to cognitive testing methods.

Cognitive interviews were conducted with 18 people. The participants varied in terms of gender, age, mental health, substance use, gambling behaviour and whether they received care.

Data were thematically analysed and a debrief session was conducted where recommendations for alterations were discussed and agreed. A written report on the findings and recommendations was submitted to NHS England and amendments were made to questions where agreed.

Dress rehearsal

Following the cognitive testing, the questionnaire was refined in preparation for a full dress rehearsal. At this stage, an alternative telephone mode was added for participants who could not take part in a face-to-face interview for COVID-19 related reasons. The dress rehearsal enabled testing of the flow, alternative telephone mode, content and timings of the interview as a whole, and of individual modules, together with the operation of fieldwork procedures and materials. Recommendations were implemented for the mainstage where appropriate.

The dress rehearsal tested both phase one and phase two procedures.

For phase one, 13 postcode sectors were selected for convenience based on interviewer availability. 22 addresses were then selected at random per postcode sector. 71 full interviews were completed, 9 of which were completed by phone.

At the end of the phase one interview, participants were asked whether they were happy to be contacted about a phase two interview. 42 (76%) phase one participants agreed to be contacted about phase two. 11 phase two interviews were completed.

Phase two interviews were conducted by clinically trained interviewers co-ordinated by the University of Leicester. The phase two dress rehearsal sample included both men and women of a range of ages. A report covering the dress rehearsal was submitted to NHS England.

Overall, the dress rehearsal ran smoothly with no major issues and received positive feedback from the interviewers. Other findings from the dress rehearsal were:

Phase one 

  • The face-to-face interview took an average of 99 minutes to complete. This was longer than the target length of 90 minutes. As a result, questionnaire content needed to be reduced.
  • Minor amendments to the interview were recommended to improve the flow, including ‘emergency exits’ in the violence module for participants who preferred not to complete the questions.
  • Clarifications and explanations for interviewers during the training were suggested.
  • Minor changes were recommended for participant communications.

Phase two 

  • Practical issues were identified that needed to be resolved for the mainstage, related to kit and entering administrative information.
  • Interviewers provided feedback for administrative improvements to be made to the interview software.

2.4 Interview mode and topic coverage

Phase one interview

Interview mode

In APMS 2023/4 the phase one interview could either be conducted face-to-face or remotely over the phone. Face-to-face interviews consisted of interviewers asking questions and recording answers in-person, with a self-completion section partway through the interview.

An alternative remote interview by telephone was implemented following the COVID-19 pandemic, for participants who felt unable to take part in person. If a participant needed to use an alternative mode, they agreed a suitable date and time with the interviewer, who then contacted them by phone. Before leaving, the interviewer left behind showcards and other survey materials, so that participants could refer to these during the interview. Participants completing the survey remotely had the option to complete the self-completion section of the questionnaire online using a web questionnaire. While they completed this, the interviewer waited on the line for them to finish, optimising response and minimising non-completion. To reduce participant burden and encourage active participation, remote interviews did not include all the modules included in the face-to-face interview.

In person interview structure

Part 1 (face-to-face): Interviewer asked questions and recorded answers using Computer Assisted Personal Interviewing (CAPI).

Part 2 (self-completion): Participant encouraged to answer these questions using Computer Assisted Self Interviewing (CASI). If the participant refused or needed assistance, the interviewer could read out the questions instead.

Part 3 (face-to-face): Final section of the interview conducted by the interviewer using CAPI.

Remote interview structure

Remote interviews included a subset of questions.

Part 1 (telephone interview): Interviewer asked questions over the phone and recorded answers using Computer Assisted Personal Interviewing (CAPI).

Part 2 (online questionnaire): Participant encouraged to answer these questions using an online questionnaire. A web link and unique identifier were sent to the participant by email, which they used to access the questionnaire. The interviewer remained on the phone while they completed the web questionnaire.

Part 3 (telephone interview): Final section of the interview conducted by the interviewer over the phone using CAPI.

Interview content

The table below summarises the topic coverage of the phase one interviews by interview section and mode. The interview consisted of initial modules of questions administered by the interviewer using Computer Assisted Personal Interviewing (CAPI), a self-completion section (Computer Assisted Self Interviewing or CASI), and further interviewer administered modules. A few sections were asked only of particular age groups; for example, questions on cognitive function were restricted to those aged 60 or over.

The full phase one questionnaire is reproduced in Appendix C, and documentation to be lodged with the UK Data Service describes each of the survey items.

Phase one, Part 1 interview content (interviewer administered)

Questionnaire module Included in remote interview (y/n) Age inclusion criteria
Details of household members and relationships y All
General health and activities of daily living y All
Caring responsibilities n All
Mental wellbeing (SWEMWBS) n All
Physical health conditions y All
Sensory impairment n All
Self-reported diagnoses of mental health conditions y All
Self-reported height and weight y All
Treatment and service use y All
Common mental disorders y All
Suicidal behaviour and self-harm y All
Psychosis screening questionnaire y All
Attention deficit hyperactivity disorder y All
Work related stress y Aged 16-69
Tobacco use and vaping  n All
Alcohol – any drinking y All

Phase one, Part 2 interview content (self-completion)

Questionnaire module Included in remote interview (y/n) Age inclusion criteria
Alcohol (AUDIT) y All
Drug use and dependence y All
Gambling behaviour y All
Eating disorders y All
Personality disorder n All
Social functioning y All
Bipolar disorder y All
Autism spectrum disorder y All
Posttraumatic stress disorder y All
Military experience y All
Interpersonal violence and abuse y All
Child neglect n All
Suicidal behaviour and self-harm y All
Discrimination n Aged 16-69
Prison experience n All
Sexual orientation and behaviour n All

Phase one, Part 3 interview content (interviewer administered)

Questionnaire module Included in remote interview (y/n) Age inclusion criteria
Cognitive and intellectual functioning: TICS-M n Aged 60 or over
Cognitive and intellectual functioning: National Adult Reading Test n All
Stressful life events y All
Parenting n All
Social support networks n All
Religion n All
Social capital and participation n All
Socio-demographics n All
Consents (for data linkage, phase two contact and follow-up research) y All

Mental health conditions covered by APMS 2023/4

A summary of the measures used to assess or screen for each of the mental health conditions and disorders included in APMS 2023/4 phases one and two is listed below, with further technical detail in Appendix A. To enable meaningful temporal trends to be captured, the ways in which measures are scored have not been changed in line with changes in diagnostic classification systems. However, additional items have often been included to enable researchers to apply updated criteria in secondary analyses.

Measures used to assess and screen for mental disorder.

Condition Diagnostic status Classification system Assessment tool Survey phase Reference period
Generalised anxiety disorder (GAD) Present to diagnostic criteria International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10) (World Health Organization 1993) Clinical Interview Schedule – Revised (CIS-R) (Lewis et al. 1992) 1 Past week
Common mental health condition (CMHC) not otherwise specified (NOS) Present to diagnostic criteria ICD-10 CIS-R 1 Past week
Obsessive-compulsive disorder (OCD) Present to diagnostic criteria ICD-10 CIS-R 1 Past week
Depressive episode Present to diagnostic criteria ICD-10 CIS-R 1 Past week
Panic disorder Present to diagnostic criteria ICD-10 CIS-R 1 Past week
Phobia Present to diagnostic criteria ICD-10 CIS-R 1 Past week
Posttraumatic stress disorder (PTSD) Screen positive Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV, American Psychiatric Association 1994) PTSD Check List (Blanchard et al 1996) 1 Past week
Attempted suicide Occurrence of behaviour - Self-completion 1 Past year
Alcohol use disorders Screen positive ICD-10 Alcohol Use Disorder Identification Test (AUDIT) (Saunders et al. 1993) 1 Past six months
Problem gambling Screen positive DSM-IV Problem Gambling Severity Index (PGSI) (Ferris and Wynne 2001) 1 Past year
Drug dependence Screen positive - Based on Diagnostic Interview Schedule (DIS) (Malgady et al. 1992) 1 Past year
General personality disorder traits Screen positive International classification of diseases for mortality and morbidity statistics 11th Revision (ICD-11, WHO 2019) Standardised Assessment of Personality (SAPAS) (Moran et al. 2003) 1 Lifetime
Borderline personality disorder (BPD) Present to diagnostic criteria DSM-IV Self-completion Structured Clinical Interview for DSM IV (SCID-II-Q) (First 1997) 1 Lifetime
Antisocial personality disorder (ASPD) Present to diagnostic criteria DSM-IV Self-report SCID-II-Q 1 Lifetime
Attention deficit hyper-activity disorder (ADHD) Screen positive DSM-IV Adult ADHD Self-Report Scale (ASRS v1.1) (WHO 2003) 1/2 Past six months
Autism Present to diagnostic criteria DSM-V Autism Diagnostic Observation Schedule (ADOS-2 Module 4: Lord et al. 2012) 1/2 Lifetime
Bipolar disorder (BD) Screen positive DSM-IV Mood Disorder Questionnaire (MDQ) (Hirschfeld et al. 2000) 1 Lifetime
Psychotic disorder Present to diagnostic criteria ICD-10 Schedules of Clinical Assessment in Neuropsychiatry (SCAN) (WHO 1999) 1/2 Past year
Eating disorders Screen positive/ Present to diagnostic criteria ICD-10 SCOFF eating disorders questionnaire (Morgan et al. 1999), Eating Disorder Examination – Questionnaire Short (EDE-QS) (Gideon et al. 2016), Schedules for Clinical Assessment in Neuropsychiatry (SCAN) 3.0 (WHO in press) 1/2 Past year and past week

Coverage of the 1993, 2000, 2007, 2014 and 2023/4 APMS interviews

The following table summarises the topic coverage of the 1993, 2000, 2007, 2014 and 2023/4 APMS phase one questionnaires. In 1993 the survey was administered by paper and pen; from 2000 a computer assisted interviewing approach was used. The aim has been to have consistent core coverage, with additional modules covered in different years.

Summary of APMS coverage in 1993, 2000, 2007, 2014 and 2023/4: face to face interview

Face-to-face interview 1993 2000 2007 2014 2023/4
General health  
Activities of daily living    
Caring responsibilities    
Service use and medication a
Self-reported height and weight      
Common mental disorders
Suicidal behaviour and self-harm b
Psychosis screening questionnaire
Attention deficit hyperactivity disorder    
Work related stress    
Smoking
Drinking
Intellectual functioning:          
    TICS-M  
    National Adult Reading Test (NART)  
    Animal naming test    
Key life events
Social support networks
Religion    
Social capital and participation    
Socio-demographics


Table notes

a In APMS 1993 only participants who screened positive for CMHC were asked about use of services and receipt of treatment.

b In APMS 1993 only participants with depression in the past week were asked about suicidal behaviour.

Summary of APMS coverage in 1993, 2000, 2007, 2014 and 2023/4: self-completion


Self-completion 1993 2000 2007 2014 2023/4
Problem drinking a
Drug use and dependence
Personality disorder  
Social functioning    
Gambling behaviour      
Autism    
Posttraumatic stress disorder    
Military experience    
Bipolar disorder      
Interpersonal violence, abuse and neglect    
Suicidal behaviour and self-harm (repeated)    
Eating disorder      
Discrimination    
Prison experience      
Table notes

a APMS 1993 data on problem drinking is not compatible with that collected in 2000, 2007, 2014 and 2023/4.

Topics added  

The following topics not in APMS 2014 were reintroduced in APMS 2023/4 as a result of the survey consultation: 

  • Self-reported height and weight was added back for 2023/4. It was last included in 2007.
  • Gambling was added back for 2023/4. It was last included in 2007; however, the PGSI was used in 2023/4 instead of the DSM-IV based measure used in 2007.
  • Eating disorders were added back for 2023/4. They were last included in 2007; both years included the SCOFF questionnaire, and in 2023/4 the EDE-QS was also included.

Summary of amendments to existing modules 

All questionnaire changes are described in the archived dataset documentation, including information on the rationale for changes, and summarised below: 

  • Household demographics: Participants were asked about the gender identity of all residents in the household. Sex at birth was asked only of the selected participant. Relationship status options were updated. 
  • General health: Additional question on receiving care at home in the past month.  
  • Caring responsibilities: Additional question on type of condition or impairment the person they provide care for has.
  • Mental wellbeing: The 14 item Warwick Edinburgh Mental Well-Being Scale (WEMWBS) (Tennant et al. 2007) was replaced with the 7 item Short Warwick Edinburgh Mental Well-Being Scale (SWEMWBS) (Stewart-Brown et al. 2009; Ng Fat et al. 2017).

  • Physical health conditions: Some additional health conditions were added to the list and additional questions covering long COVID were included.

  • Mental health condition diagnoses: Some additional mental, emotional and neurological conditions were added to the list.

  • Mental health treatment: Additional questions on barriers to treatment and waiting times for treatment.

  • Common mental health conditions: Minor changes were made to the CIS-R to improve comparability with the ICD-11 diagnostic classification.

  • Work related stress: Additional questions on remote working.

  • Tobacco: New questions on vaping and smoking cessation.

  • Alcohol: Additional questions on whether non-drinkers have ever drunk alcohol and reasons for not drinking. New questions covering treatment received for drinking. Some questions updated asking about units drunk in line with latest AUDIT. Severity of Alcohol Dependence Questionnaire (SADQ) not included.

  • Drug use: New questions included on use of synthetic cannabinoids, methamphetamine, khat, fentanyl, morphine, oxycodone, tramadol and nitrous oxide. Additional questions on consequences of and treatment for drug use.

  • Autism: New question on sensory sensitivity added.

  • PTSD: Added 5 items from the PCL-5 questionnaire (Blevins et al. 2015).

  • Military experience: Additional questions on the branch of the forces served in and impact of serving in the armed forces.

  • Interpersonal violence and abuse: The module was extensively revised, with changes to the language and overall structure, and additional questions covering a wider range of types and characteristics of perpetrators and dimensions of violence. Participants were also able to click a link taking them to the BBC News website as an escape from the questionnaire if required, and could exit the series of questions if they preferred not to answer.

  • Childhood abuse and neglect: 5 additional questions were added so all 10 of the core ACEs (Adverse Childhood Experiences) were covered.

  • Suicidal thoughts, attempts and self-harm: Additional questions on help sought and received after self-harm, repetition of self-harm or suicide attempts, and age of onset.

  • Discrimination: Additional questions on the type of discrimination and where the discrimination took place.

  • Prison experience: Moved from a list of life events that may have been experienced, asked by the interviewer, to the self-completion section. Additional questions on prison experience were also asked including number of times, length of sentence, receipt of mental health treatment and whether they were put into segregation.

  • Sexual orientation and behaviour: Sexual orientation was asked of all adults, not just those under the age of 65 years. New question on whether participant identifies as trans or has trans history.

  • Stressful life events: Questions added on whether the life event had happened more than once, the age they were when the event happened, whether any family had experienced a stressful life event and if so the impact on participant.

  • Parenting: An additional question asking the age of the participant's youngest child.

  • Social support: 5 additional questions to measure loneliness were included.

  • Religion: New questions on how important religion is to participant and frequency of attending religious services.

  • Social capital and participation: Additional questions including access to green spaces and social media use.

  • Ethnicity and migration: Questions included on country of birth of parents and immigration status.

  • Education: Questions were aligned to the Government Statistical Service (GSS) harmonised education questions.

  • Employment: The module was extensively revised. Questions on current employment and student status were updated. Additional questions on shift patterns and remote working were included.

  • Financial and housing circumstances: Questions on sources of household income and benefits received were revised to make sure they were up to date.

Phase two examination

The phase two examination assessed psychotic disorder, autism and symptoms of eating disorders. The approach taken for autism and psychotic disorder will be described in further detail in the relevant chapters in Part 2 of the report.

The phase two examination consisted of semi-structured standardised clinical assessments. The measures collected in phase two, which are included in the APMS chapters on autism, psychotic disorders and eating disorders, were:

  • Schedules for Clinical Assessment in Neuropsychiatry (SCAN) (WHO 2014).
  • Module 4 of the Autism Diagnostic Observation Schedule (ADOS-2 Mod4) (Lord et al. 2012).
  • Schedules for Clinical Assessment in Neuropsychiatry 3.0 (WHO in press).

In addition, SCAN items covering ADHD and autism were included in the examination but are not used for analysis within this report.

2.5 Fieldwork procedures

Training and supervision of interviewers

Phase one interviewers  

NatCen interviewers were made aware of the sensitive nature of the interviews before choosing to work on the first phase of the survey. All interviewers selected to work on the survey were briefed on its administration. Topics covered on the 1-day survey-specific training included introducing the survey, questionnaire content, confidentiality and responding to participant distress.  

Interviewers were provided with written instructions. As the fieldwork took place over the course of a year, training sessions for new interviewers starting on the project were conducted throughout the fieldwork period. Bi-weekly drop-in sessions were also provided for interviewers if further support was required. Less experienced interviewers were accompanied by a project supervisor during the early stages of their fieldwork to ensure that the interviews were administered correctly. Routine supervision of 10% of interviewer work was subsequently carried out for quality control purposes.

Phase two interviewers  

The phase two interviewers were recruited and co-ordinated by the University of Leicester. The interviewers had either previous interviewing and/or clinical experience in a medical, social science or related field and some had a degree in behavioural sciences. Phase two interviewers received an extensive, month-long induction and training programme, run by a senior research psychologist and a psychiatrist. They also received training sessions from NatCen on using computer assisted interviewing. Whilst in the field, these interviewers received regular supervision sessions and technical support.

Contacting participants

An advance letter was sent to each sampled address before fieldwork began. This introduced the survey and stated that an interviewer would be visiting the address to seek permission to interview one person aged 16 or over living there. A £10 voucher was attached to the bottom of the advance letter. A sample advance letter is provided in Appendix D.

Once the advance letters had been sent, interviewers were asked to visit all addresses in their sample. The interviewer used an electronic Address Record Form to record all attempts to visit the address. At initial contact, the interviewer established the number of households at the address and made any selections necessary (see Section 2.2 Sample design). The interviewer randomly selected one adult per household, and then attempted to book either a face-to-face, or as a last resort a telephone interview, with that person. As in previous waves of the survey, the survey title used in the field was the ‘National Study of Health and Wellbeing’. This was felt to be more readily understandable than ‘psychiatric morbidity’. Interviewers had various materials they could use on the doorstep and leave with participants, including a survey leaflet that introduced the study and provided a number that people could call for more information (see Appendix D).

If the selected participant was not capable of undertaking the interview alone, for reasons of mental or physical incapacity, no information was collected from them in the 2023/4 survey. This differed from previous surveys, where proxy reporting by a family member or carer was allowed.

Collecting the data

The phase one interview took around 90 minutes to complete on average, although some were shorter, and others took as long as three hours. The interview involved computer assisted personal interviewing (CAPI) where interviewers asked the questions using a laptop and entered participants’ responses. A third of the interview was collected by self-completion. Partway through the interview, participants used the interviewer laptop (CASI) to complete the self-completion section. For remote interviews, the self-completion was reduced slightly and was completed using an online questionnaire (known as computer assisted web interview or CAWI).  

At the end of the phase one interview, signed consent was sought for the participant’s survey responses to be linked with other health datasets, including medical and health records held by NHS Digital (now NHS England)/UK Health Security Agency. The documentation for this is included in Appendix D. Written consent was also requested for participants to be contacted by NHS Digital and/or NatCen to take part in future research.  

Verbal permission was sought for a University of Leicester interviewer to contact the participant again in order to explain the phase two interview, should they be selected: 70.4% agreed. 

The gap between phase one interview completion and phase two contact was minimised as far as possible, although inevitably there were some delays between the visits. Phase two interviewers were provided with some background information about participants such as age, gender, marital status and household composition. The interviewers contacted participants by phone to arrange an appointment.  

All phase two interviews took place in person and took 90 minutes to complete on average. Some of the phase two content was collected using CAPI software, but the majority of the interview involved clinical assessments using the SCAN v3 standalone software. All participants had the same examination, regardless of the condition on which their selection for phase two was based.

Incentives

A voucher that could be exchanged for £10 in cash at any Post Office was attached to all advance letters, to make the letter more memorable and to generate interest. Due to lower than expected response rates, from November 2023 until the end of fieldwork an additional £10 high street voucher was given to those who took part in a phase one interview, as part of a response improvement campaign. Those who were selected for and took part in a phase two interview were also given a £10 high street voucher at the end of the interview.

Signposting to further support

At the end of the phase one and phase two interviews, all participants were offered a list of helpline numbers and websites. These included the details of organisations providing information about the conditions covered by the survey, as well as organisations providing support to people in crisis. The leaflet also emphasised contacting a GP for support and advice as a first step (see Appendix D).

Translation

The phase one computer-assisted questionnaire was translated into Urdu, which has in recent years been the most used alternative language in face-to-face surveys in England, to facilitate participation from those who did not feel confident responding in English. Interviewers who were Urdu speakers underwent an accreditation process and acted as interpreters where required. Urdu speakers were also included in the phase two assessment team at the University of Leicester.


3. Survey response

3.1 Phase one response

Of the 25,080 addresses issued, 23,500 (93.7%) were found to include at least one private household and were therefore eligible for participation. Of these, 8,571 (36.5%) were refusals in the field and 1,498 (6.4%) were refusals direct to the office; 3,086 (13.1%) were coded as non-contacts and 3,433 (14.6%) were unproductive for another reason. 6,912 productive interviews were achieved, representing a 29.4% response rate. This included 42 (0.2%) partial interviews, where the participant completed the treatment, service use and CIS-R modules but did not reach the end of the interview.

Response rates at phase one

  Number Percentage
Potentially eligible households 23,500  
Field refusals 8,571 36.5%
Office refusals 1,498 6.4%
Non-contacts 3,086 13.1%
Other unable/unproductive 3,433 14.6%
Productive adults 6,912 29.4%
Full interviews 6,870 29.2%
Partial interviews 42 0.2%
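As an arithmetic check, the outcome rates in the table above can be reproduced from the counts. A minimal sketch (all percentages use the 23,500 potentially eligible households as the base):

```python
# Reproduce the phase one outcome rates reported above.
# All percentages are taken over the 23,500 potentially eligible households.
eligible = 23_500
outcomes = {
    "field_refusals": 8_571,
    "office_refusals": 1_498,
    "non_contacts": 3_086,
    "other_unproductive": 3_433,
    "productive": 6_912,
}

# The outcome categories are exhaustive and mutually exclusive.
assert sum(outcomes.values()) == eligible

rates = {k: round(100 * v / eligible, 1) for k, v in outcomes.items()}
print(rates["productive"])  # response rate: 29.4
```

The counts sum exactly to the eligible base, confirming the published category percentages are internally consistent.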

Of the 6,912 productive interviews, 6,669 (96.5%) were carried out in-person and 243 (3.5%) were carried out remotely over the phone.  

Despite the self-completion section being very long, 89.0% of participants completed it. 68.9% of participants completed this entirely by themselves (via CASI or CAWI). In 20.1% of interviews the participant was assisted, either by the interviewer reading out the questions and entering the participant’s responses, or the interviewer reading out the questions, but the participant entering their own responses. 11.0% of participants did not complete the self-completion section of the interview at all. Most of these participants were older. 

72.6% of participants gave permission to be contacted for follow-up research and 68.1% consented to data linkage.

For more information: Table 2 and Table 3

Phase one interview mode

  Number of adults Percentage
Productive interviews 6,912  
In-person interviews 6,669 96.5%
Remote (telephone) interviews 243 3.5%
Productive self-completion 6,153 89.0%
Independent self-completion 4,761 68.9%
Interviewer assisted self-completion 1,392 20.1%
Consents    
Agreed to follow up research 5,019 72.6%
Agreed to data linkage 4,708 68.1%

Response rates by age and sex  

Comparison of the unweighted responding sample profile with ONS 2022 mid-year population estimates demonstrates that survey response was lower among younger age groups (ONS 2022b). An estimated 13% of England's population were aged 16 to 24 in 2022, whereas 4.6% of survey responses came from this age group. The divergence between population estimates and response profile was slightly higher for males than females, suggesting that males aged 16 to 24 were the age-sex group least likely to respond to the survey. An estimated 6.6% of England’s population were males aged 16 to 24, compared with 3.3% of survey responses coming from this group. 

Conversely, older age groups were overrepresented in the achieved sample. 7.6% of responses were from males aged 75 and over and 9% from females aged 75 and over, while the respective population proportions were 4.8% and 6.3%. The groups with response most closely aligned to population estimates were males aged 55 to 64 and females aged 45 to 54. The proportion of responses from both of these groups differed by 0.5 percentage points or less from population estimates.

For more information: Table 7

3.2 Phase two response

6,912 participants provided a phase one interview. A probability of selection was calculated for each participant based on their answers to the phase one screening questions on psychosis, autism and eating disorders, as outlined in Section 2.2 Sample design. Overall, 70.4% of phase one participants agreed to be contacted about the phase two interview. After the application of the highest of the three disorder-specific sampling fractions, 1,742 participants were issued for a phase two interview. Phase two interviews were conducted with 887 of these (50.9%); there were 364 refusals and 491 non-contacts. 

For more information: Table 4

Response rates of adults at phase two

  Number of adults Percentage
Agreed to contact about phase two 4,863 70.4%
Selected for phase two 2,330 33.7%
Issued for phase two 1,742 25.2%
Refusal 364 20.9%
Non-contacts 491 28.2%
Productive interviews 887 50.9%
Full interviews 880 50.5%
Partial interviews 7 0.4%

Table note: percentages in the first three rows are based on all 6,912 phase one participants; percentages in the remaining rows are based on the 1,742 participants issued for phase two.

4. Weighting the data

4.1 Weighting the phase one data

The survey data from the main and deprived area boost samples were weighted to take account of selection probabilities and non-response, so that the results were representative of the household population aged 16 years and over. Weighting occurred in four steps.

First, selection weights (wt1) were applied to take account of the differential selection probabilities of addresses and households within addresses. For each of the 25,080 addresses sampled from a total of 1,140 sampled PSUs, the address selection weight (wta) was calculated as follows:

\(wt_a = \frac{1}{\big(\frac{a_R}{a_{PAF}}\big)}\)

aR = number of addresses selected from the region and IMD quintile

aPAF = number of addresses on the Postcode Address File (PAF) in the region and IMD quintile

The selection probability varied by IMD quintile and region because of the deprived area boost. As the number of addresses selected from each PSU did not vary, this weight could be calculated at address level.
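The address selection weight can be illustrated with a short sketch. The stratum counts below are hypothetical, not actual APMS figures:

```python
# Illustrative address selection weight (wt_a): the inverse of the
# sampling fraction within one region-by-IMD-quintile stratum.
# Both counts are hypothetical, not actual APMS figures.
a_R = 500        # addresses selected from this region/IMD quintile
a_PAF = 250_000  # PAF addresses in this region/IMD quintile

wt_a = a_PAF / a_R  # equivalently 1 / (a_R / a_PAF), as in the formula above
print(wt_a)  # 500.0
```

Because the number of addresses selected per PSU did not vary, this weight is constant within a stratum and can be computed at address level.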

Household selection weights were then calculated to take account of addresses in which the interviewer found multiple dwelling units and/or households. The dwelling unit selection weight (wtd) consisted of the number of dwelling units within the address. The household selection weight (wth) consisted of the number of households within the dwelling unit. These two weights were multiplied together to create the overall household selection weight and trimmed at 4 to avoid extreme weights. The address and overall household selection weights were then multiplied together to produce wt1, the selection weight.
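Putting these pieces together, a sketch of the overall selection weight for one household (all values hypothetical; trimming at 4 as described above):

```python
# Illustrative selection weight (wt1) for one sampled household.
# All input values are hypothetical.
wt_a = 500.0   # address selection weight for this stratum
wt_d = 2       # dwelling units found at the address
wt_h = 3       # households within the selected dwelling unit

# Combined household selection weight, trimmed at 4 to avoid extreme weights.
wt_dh = min(wt_d * wt_h, 4)

wt1 = wt_a * wt_dh
print(wt1)  # 2000.0
```

Here the untrimmed household selection weight would be 6, so the cap at 4 binds.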

Second, to reduce household non-response bias, a household level non-response model was fitted using forward and backward stepwise logistic regression. Deadwood and known ineligible addresses were not included in the model's base. The dependent variable was whether or not the household responded. The independent variables considered for inclusion in the model included interviewer observations of physical barriers to entry (e.g. a locked common entrance or the presence of security staff), small area-level Census 2021 measures, population density, Government Office Region (GOR), Census 2021 Output Area Classifications (OAC), urban-rural status, and IMD quintiles, all available for responding and non-responding households. Census 2021 area-level measures tested included the percentage of owner occupiers, percentage of adults educated to degree level, percentage of adults in work, percentage of households with children, and percentage of households with access to a car or van. All models were run weighted by wt1, the selection weight.

Not all the variables tested were retained for the final model: variables not significantly related to the propensity of households to respond were dropped. The variables in the final model were: GOR, IMD quintiles, whether there were entry barriers to the selected address, quintiles of postcode sector-level population density, urban-rural status, OAC group, Local Authority-level population density, percentage of adults in managerial occupations (NS-SEC categories 1 and 2), percentage of adults educated to degree level, percentage of persons aged 55 or older, percentage of persons who were not white, percentage of households with children, and percentage of households with access to a car or van. 

The non-response weight (wt2) for each eligible household was calculated as the inverse of the probability of response estimated from the final model. The non-response weight was trimmed at the 99th percentile to improve efficiency and reduce the design effect.
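The inverse-probability construction of wt2, with trimming at the 99th percentile, can be sketched as follows. The predicted response probabilities here are simulated stand-ins, not outputs of the actual fitted model:

```python
import numpy as np

# Illustrative household non-response weight (wt2): the inverse of the
# response probability predicted by the logistic model, trimmed at the
# 99th percentile. Probabilities are simulated, not from the real model.
rng = np.random.default_rng(0)
p_response = rng.uniform(0.05, 0.9, size=1_000)  # hypothetical predictions

wt2 = 1.0 / p_response
cap = np.percentile(wt2, 99)
wt2 = np.minimum(wt2, cap)  # trim extreme weights to improve efficiency
```

Trimming caps the most extreme inverse-probability weights, trading a little bias for a smaller design effect.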

Third, selection weights (wt3) were applied to take account of the different probabilities of selecting participants in different sized households. The weight was equal to the number of adults (16+) in the household, the inverse of the probability of selection. This was trimmed at 4 to avoid a small number of very high weights which would inflate the standard errors, reduce the precision of the survey estimates and cause the weighted sample to be less efficient.

The composite weight for selection and participation was calculated as the product of the weights from the previous stages:

\(wt_4 = wt_1 \times wt_2 \times wt_3\)

This composite weight was then checked for outliers and the top three weights trimmed to improve efficiency and reduce the design effect.
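The composite weight and the outlier check can be sketched as follows. Inputs are simulated, and "top three weights trimmed" is read here as capping them at the fourth-largest value, one plausible interpretation:

```python
import numpy as np

# Illustrative composite weight wt4 = wt1 * wt2 * wt3, with the three
# largest values capped at the fourth-largest. All inputs are simulated.
rng = np.random.default_rng(1)
wt1 = rng.uniform(200, 600, 1_000)                 # selection weights
wt2 = rng.uniform(1.0, 3.0, 1_000)                 # non-response weights
wt3 = np.minimum(rng.integers(1, 6, 1_000), 4)     # adults in household, trimmed at 4

wt4 = wt1 * wt2 * wt3
cap = np.sort(wt4)[-4]      # fourth-largest composite weight
wt4 = np.minimum(wt4, cap)  # cap the top three weights
```

After capping, no weight exceeds the fourth-largest original value, which limits the influence of outlier households on variance.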

The final stage of the weighting was to adjust the composite weight (wt4) using calibration weighting. Calibration takes an initial weight (in this case wt4) and adjusts (or calibrates) it to given control totals. The process generates a weight which produces survey estimates that exactly match the population for the specific characteristics (control totals) used in the adjustment. Calibration reduces any residual non-response bias and any impact of sampling and coverage error for the measures used in the adjustment.

The population control totals used were the ONS 2022 mid-year population estimates for age-by-sex and region, as well as Labour Force Survey Q4 2023 estimates of ethnicity and tenure (ONS 2025). After calibration, the APMS 2023/4 weighted data matched the estimated population in terms of age-by-sex, as shown in Table 7. Outliers were trimmed after calibration to improve efficiency, so totals do not match exactly for the other calibration variables; however, the maximum residual bias is less than 0.1 percentage points. Adjusting the response profile to match population estimates required assigning higher weights to the youngest survey participants and lower weights to the oldest. See Section 3 Survey response for more information. 
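Calibration of this kind is often implemented by raking (iterative proportional fitting). A minimal sketch, with two hypothetical margins standing in for the age-by-sex and region controls described above:

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch: adjust initial
# weights so weighted margins match external control totals. The groups
# and totals below are hypothetical stand-ins for the real controls.
rng = np.random.default_rng(2)
n = 2_000
age_sex = rng.integers(0, 4, n)  # 4 hypothetical age-by-sex groups
region = rng.integers(0, 3, n)   # 3 hypothetical regions
w = np.ones(n)                   # initial (composite) weights

targets_age_sex = np.array([600, 500, 500, 400], dtype=float)  # sums to 2,000
targets_region = np.array([800, 700, 500], dtype=float)        # sums to 2,000

for _ in range(50):  # rake each margin in turn until convergence
    for groups, targets in ((age_sex, targets_age_sex), (region, targets_region)):
        current = np.bincount(groups, weights=w, minlength=len(targets))
        w *= (targets / current)[groups]

# Weighted margins now match the control totals (to numerical precision).
assert np.allclose(np.bincount(age_sex, weights=w), targets_age_sex, rtol=1e-6)
assert np.allclose(np.bincount(region, weights=w), targets_region, rtol=1e-6)
```

Production calibration software offers more general distance functions and bounds on the weight adjustments, but the margin-matching idea is the same.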

The final weights for the main and deprived area boost data have a DEFF of 1.75, a NEFF of 3,951, and efficiency of 57%.  
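The three quantities quoted above are linked by Kish's approximation, under which NEFF = (Σw)² / Σw², DEFF = n / NEFF, and efficiency = NEFF / n. The published figures satisfy these identities up to rounding:

```python
# Check the published DEFF, NEFF and efficiency figures against
# the standard identities NEFF = n / DEFF and efficiency = NEFF / n.
n = 6_912     # productive phase one interviews
deff = 1.75   # published design effect

neff = n / deff
efficiency = neff / n

print(round(neff))              # 3950, matching the reported 3,951 up to rounding
print(round(100 * efficiency))  # 57
```

The one-unit difference from the reported NEFF of 3,951 is consistent with the published DEFF being rounded to two decimal places.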

For more information: Table 5, Table 6 and Table 7

4.2 Weighting the phase two data

Three weighting variables were developed specifically for use when analysing outcomes derived from phase two data: presence of psychosis, presence of autism, and presence of probable eating disorder. These weights were designed to generate condition-specific datasets that are representative of the general population, based on all the participants with relevant information. Participants received a phase two weight if they were eligible for phase two, were selected, and then responded. 

For analysis of prevalence of each of the disorders assessed at phase two (autism, psychosis, and eating disorders), a new dataset was created combining phase two participants with phase two weight and phase one participants not eligible for phase two with their phase one weight. Participants who were selected for phase two, but did not respond, were removed. Phase one participants who were not eligible for phase two for a specific disorder were assumed not to have that specific disorder.  

The phase two weights account for two factors:

  1. Not all those eligible for phase two were selected with equal probability: all those screened in with a positive psychosis score or a SCOFF questionnaire score of 2+ were selected, as were all adults scoring 8 or more on the Autism-Spectrum Quotient (AQ-17). For adults scoring 4-7 on the AQ-17, sub-sampling was used to select 16% for phase two. 
  2. Some of the eligible phase one participants did not agree to be contacted for phase two during their phase one interview, so were automatically excluded from the phase two selection. Others were selected for phase two but then declined to take part. These refusals introduce the possibility of phase two non-response bias. The phase two weights incorporate a non-response adjustment to ensure that those responding have a similar weighted profile to those eligible.

The phase two weights were calculated by modelling, via stepwise logistic regression, the probability of being selected and responding to phase two, conditional on being eligible for selection. The weight per phase two participant was then calculated as the inverse of the predicted probability from the model, multiplied by their phase one weight. The predicted probabilities simultaneously account for selection probabilities and for observable non-response biases.
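The inverse-probability step described above can be sketched as follows. The predicted probabilities and phase one weights are hypothetical values standing in for output of the stepwise logistic regression; the trimming shown mirrors the 99th-percentile trimming applied to the non-response weights (described below).

```python
import numpy as np

# Hypothetical predicted probabilities of being selected AND responding
# to phase two, taken from a logistic regression fitted elsewhere
p_hat = np.array([0.80, 0.25, 0.50, 0.10])
phase1_weight = np.array([1.2, 0.9, 1.5, 2.0])

# Phase two weight: phase one weight scaled by the inverse predicted probability
phase2_weight = phase1_weight / p_hat

# Trim extreme weights at the 99th percentile to reduce the design effect
cap = np.quantile(phase2_weight, 0.99)
phase2_weight = np.minimum(phase2_weight, cap)
```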

The variables included in the final phase two non-response model were:  

  • IMD quintiles. 
  • Whether participant was screened in for psychosis. 
  • Whether participant was screened in for eating disorders. 
  • AQ-17 scores by sex. 
  • Ethnic group (five categories). 
  • Equivalised income quintiles.

Other variables, such as age, employment status, qualification, and marital status, were tested in the regression model, but were excluded because they were not significantly associated with response to phase two. The non-response weights were trimmed at the 99th percentile to remove extreme weights and reduce the design effect. 

To create the final phase two weights for each condition, phase one participants ineligible for phase two were assigned their phase one weight. Cases screened into phase two that did not respond were assigned a weight of 0. Cases screened into phase two that did respond were initially assigned a weight equal to their phase one weight multiplied by the phase two non-response weight described above. This composite weight was then rescaled up, so that the sum of weights for phase two participants in each phase two stratum matched the sum of weights of all eligible cases in that stratum who responded in phase one. For example, the phase two participants screened in for psychosis had the sum of their phase two psychosis weights rescaled from 140 to 606, as this is the sum of phase one weights for those screened in for psychosis (including phase two non-respondents). For the autism weights, this rescaling was split by the four phase two sampling strata, as the probability of selection for phase two varied by AQ-17 score and gender.
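The rescaling step can be sketched with a hypothetical helper. The target sum of 606 for the psychosis stratum comes from the example above; the individual case weights are made up.

```python
import numpy as np

def rescale_by_stratum(w2, stratum, target_sums):
    """Scale phase two composite weights so the weight sum in each
    sampling stratum matches the phase one weight sum of all eligible
    cases in that stratum (illustrative helper)."""
    w = np.asarray(w2, dtype=float).copy()
    for s, target in target_sums.items():
        mask = stratum == s
        w[mask] *= target / w[mask].sum()
    return w

# Hypothetical: three psychosis-stratum cases with composite weights
# summing to 140, rescaled so the stratum total becomes 606
w2 = np.array([20.0, 50.0, 70.0])
stratum = np.array(["psychosis", "psychosis", "psychosis"])
w = rescale_by_stratum(w2, stratum, {"psychosis": 606.0})
```

Because every weight in a stratum is multiplied by the same factor, the relative sizes of the weights within the stratum are preserved.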

The three sets of final weights were checked for efficiency, residual bias, and impact on phase two survey outcomes. The final weights were not trimmed to preserve the accuracy of the rescaling.


5. Data analysis and reporting

5.1 Introduction

APMS 2023/4 is a cross-sectional survey of the general population. While it allows for associations between mental disorder and personal characteristics and behaviour to be explored, it is important to emphasise that such associations cannot be assumed to imply causality. A list of the variables used in the analysis in this report will be included in the archived dataset.

5.2 Weighted analysis and bases

As outlined in Section 4 Weighting the data above, all the data presented in the substantive chapters of this report are weighted to account for likelihood of selection and non-response. Bases are presented as weighted and unweighted. The unweighted bases show the number of participants included. The weighted bases show the relative size of the various sample elements after weighting, reflecting their proportions in the population in England.

5.3 Age-standardisation

Data have been age-standardised in most tables to allow comparisons between groups after adjusting for the effects of any differences in their age distributions. When different sub-groups are compared in respect of a variable on which age has an important influence, any differences in age distributions between these sub-groups are likely to affect the observed differences in the proportions of interest. 

Observed data can be used to examine actual prevalence or mean values within a group, as needed, for example, for planning services; these are provided in the accompanying tables. Where data have been age-standardised, the age-standardised estimates are discussed in the report text. 

Age and sex were unknown for a small number of participants, which means they could not be included in the age-standardised analysis. As a result, the base sizes for age-standardised and observed tables are slightly different.

All age-standardised analyses in the report are presented separately for men and women, and age standardisation was undertaken within each sex, standardising male data to the overall male population and female data to the overall female population. When comparing data for the two sexes, it should be remembered that no standardisation has been applied to remove the effects of the sexes’ different age distributions.

Age standardisation was carried out using the direct standardisation method. The standard population to which the age distribution of sub-groups was adjusted was the mid-year 2022 population estimates for England. The age-standardised proportion p’ was calculated as follows, where pi is the age-specific proportion in age group i and Ni is the standard population size in age group i:

\(p' = \frac {\sum_i N_i p_i}{\sum_i N_i}\)

Therefore p’ can be viewed as a weighted mean of pi using the weights Ni. Age standardisation was carried out using the age groups 16 to 24, 25 to 34, 35 to 44, 45 to 54, 55 to 64, 65 to 74 and 75 and over. The variance of the standardised proportion can be estimated by:

\(var(p') = \frac {\sum_i ( N_i^2 p_i q_i / n_i)}{(\sum_i N_i)^2} \)

where qi = 1 – pi and ni is the sample number in age-sex group i. 
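The two formulas above can be applied directly. The sketch below uses hypothetical age-specific proportions, sample sizes and standard population counts for two age groups; in the survey itself the seven age groups and the ONS mid-2022 estimates would be used.

```python
import numpy as np

def age_standardise(p, n, N):
    """Direct age standardisation.
    p : age-specific proportions p_i
    n : sample sizes n_i in each age group
    N : standard population sizes N_i
    Returns the standardised proportion p' and its estimated variance."""
    p, n, N = (np.asarray(a, dtype=float) for a in (p, n, N))
    p_std = np.sum(N * p) / np.sum(N)          # weighted mean of p_i with weights N_i
    var = np.sum(N**2 * p * (1 - p) / n) / np.sum(N)**2
    return p_std, var

# Hypothetical two-age-group example
p_std, var = age_standardise(p=[0.10, 0.20], n=[400, 600],
                             N=[5_000_000, 7_000_000])
```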

Both observed and age-standardised data are provided in tables by ethnic group and gender, IMD and gender, and region and gender. Only age-standardised data are presented in tables by employment status, problem debt, limiting physical health conditions, and common mental health conditions.

5.4 Standard analysis breaks

Most of the mental health conditions covered in this report are analysed by a core set of breaks: age, gender, ethnic group, employment status, problem debt, Index of Multiple Deprivation (IMD) quintile, comorbidity and region. These are described briefly below and defined in more detail in Appendix B Glossary.

Gender and sex

Participants were asked their gender identity in the household demographics section of the interview. A follow-up question also asked participants about their sex at birth. In this report, results are mostly broken down by gender, that is whether participants identified as a man, a woman or as another category such as non-binary. Where data are presented by gender, the number of men and the number of women combined may total less than the number of all adults. Those identifying in another way, such as non-binary, are not included as a separate group due to small base sizes and disclosure risk. They are included in totals and in tables without a gender break. Note that the base for all tables that cover the whole population is described as ‘all adults’. 

Age-standardisation calculations use ONS 2022 mid-year population estimates, which are based on sex (ONS 2022b). Age-standardised results exclude adults for whom age or sex is missing. This group are included in the observed tables. 

Data in trend tables are presented by sex (defined as male or female at birth) to enable comparisons with previous surveys in the series. For key estimates, the report includes estimated population size totals, which are calculated using ONS 2022 mid-year population estimates and therefore were also based on sex.

Ethnic group

Participants identified their ethnicity according to one of fifteen groups presented on a show card, including ‘other – please state’. These groups are based on those used in the 2021 Census and are drawn from the Government Statistical Service (GSS) ethnicity harmonised standard, for use on national surveys. The groups were subsumed under four headings: White; Black/Black British; Asian/Asian British; and those who reported their ethnic group as mixed, multiple or other. For some analyses by ethnic group the White group was further divided into ‘White British’ (which included those giving their ethnic group as White and English, Scottish, Welsh or from Northern Ireland) and White other.

About 20% of the sample (1,375 participants) identified with an ethnic group other than White British. This is in line with the combined prevalence of these groups in the adult population resident in England (18%) (ONS 2022a). It should be noted that these small groups are highly heterogeneous, for example the ‘Black/Black British’ group could include both recent migrants from Somalia and Black people born in Britain to British parents. The results of analysis by ethnic group should therefore be treated with caution.

Employment status

Detailed information was collected from participants on the nature of their employment status in the previous week. Participants were classified as either employed (including working in a family business); unemployed (and therefore looking and available for work); or economically inactive (including those who are unable to work due to disability or illness, students, retired, or looking after the home). The standard International Labour Organization definition was used and is described fully in Appendix B Glossary. Where this analysis break has been used, the base has been restricted to participants aged between 16 and 64.

Problem debt

Participants were classified as having a ‘problem debt’ if they indicated being ‘seriously’ behind on paying at least one type of bill and/or had been disconnected from their gas or electricity because they could not afford it, in the past year.

Area-level deprivation

Area-level deprivation has been defined using the English Indices of Deprivation 2019, commonly known as the Index of Multiple Deprivation (IMD).    

IMD is the official measure of relative deprivation for Lower Super Output Areas (LSOAs) in England. LSOAs comprise between 400 and 1,200 households and usually have a resident population between 1,000 and 3,000 persons. IMD ranks every LSOA in England from 1 (most deprived area) to 32,844 (least deprived area). Deprivation quintiles are calculated by ranking the 32,844 neighbourhoods in England from most deprived to least deprived and dividing them into five equal groups. These range from the most deprived 20% of neighbourhoods nationally to the least deprived 20% of neighbourhoods nationally.  

The Index of Multiple Deprivation (IMD) was revised in 2019. It combines several indicators, chosen to cover a range of economic, social, and housing issues, into a single deprivation score for each small area in England. Seven distinct domains have been identified in the English Indices of Deprivation:

  • income 
  • employment 
  • health deprivation and disability 
  • education, skills and training 
  • crime 
  • barriers to housing and services 
  • living environment. 

In this report, quintiles of IMD are used to give an area-level measure of deprivation related to where participants live. 

For further information see: English indices of deprivation 2019
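The rank-to-quintile mapping described above can be sketched as follows. This is an illustrative helper, not code used in the survey; because 32,844 is not divisible by five, the quintile boundaries shown are one plausible convention.

```python
def imd_quintile(rank, n_lsoas=32_844):
    """Map an IMD rank (1 = most deprived LSOA, 32,844 = least deprived)
    to a deprivation quintile: 1 = most deprived 20%, 5 = least deprived 20%."""
    return min(5, (rank - 1) * 5 // n_lsoas + 1)

# The most deprived neighbourhood falls in quintile 1,
# the least deprived in quintile 5
```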

Region

Analysis by region is based on the nine former Government Office Regions (GORs). 

Base sizes for regions can be relatively small, and caution should be exercised in examining regional differences.

Comorbidity

Within each chapter, age-standardised analyses are presented by presence of one or more limiting physical health conditions and by presence of one or more common mental health conditions (CMHCs). 

Participants were asked if, since the age of 16, they had had any of 25 physical health conditions listed on a card, including asthma, cancer, diabetes, epilepsy and high blood pressure. Participants were coded as having a limiting physical health condition if they reported having one or more physical health conditions in the past 12 months that had been diagnosed by a doctor and that limited their ability to carry out day-to-day activities. 

The revised Clinical Interview Schedule (CIS-R) was used to assess six types of CMHCs: depression, generalised anxiety disorder, panic disorder, phobias, obsessive compulsive disorder, and CMHC not otherwise specified. Participants identified with at least one of these were defined as having a CMHC. 

Treatment and service use

See Chapter 2 Mental Health Treatment and Service Use for a description of how the various forms of treatment and service use were derived. These include forms of treatment and service use provided, for a mental or emotional condition, by the NHS, private or other providers. When looking at treatment and service use, participants who screened positive for each disorder were compared with those who did not. Because of the relatively low prevalence of many of the disorders assessed in APMS 2023/4, the base size for the group with the disorder was usually small. Age-standardising a small group can be problematic, for the reasons outlined above, so the treatment and service use analyses were not age-standardised.

5.5 Testing for statistical significance

Significance testing was carried out on the results in the 2023/4 report. The term ‘significant’ refers to statistical significance at the 95% level and is not intended to imply substantive importance. The significance tests were carried out in order to test the relationship between variables in a table, usually an outcome variable nested within gender, by an explanatory variable such as age (in categories), employment status and region. The test is for the main effects only (using a Wald test). For example, the test might examine whether there is a statistically significant relationship between bipolar disorder and age (after controlling for gender) and between bipolar disorder and gender (after controlling for age).

More about the Wald test

The Wald test is used to assess the statistical significance of parameters in a statistical model. In this report it is applied to APMS data to establish whether the association between particular variables is statistically significant: for example, it tests the significance of parameters in a logistic regression model of the prevalence of common mental health conditions, to establish whether age and gender are significantly associated with that prevalence.

It is worth noting that the test does not establish whether there is a statistically significant difference between any particular pair of subgroups (e.g. the second and third IMD quintiles). Rather it seeks to establish whether there is a significant pattern of association across a variable’s categories (e.g. across the five IMD quintiles). Significance testing gives some insight into whether the variation in the outcome between groups that is observed could have happened by chance or whether it is likely to reflect some 'real' differences in the population. 
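In matrix form, the joint test of a variable's categories is \(W = \hat\beta^{T} \hat V^{-1} \hat\beta\), referred to a chi-square distribution with degrees of freedom equal to the number of parameters tested. An illustrative calculation with made-up coefficients and covariance matrix (not APMS model output):

```python
import numpy as np

def wald_statistic(beta, cov):
    """Wald test of H0: all parameters in `beta` are zero.
    beta : estimated coefficients for the categories of one variable
    cov  : their estimated covariance matrix
    Returns the chi-square statistic (df = len(beta))."""
    beta = np.asarray(beta, dtype=float)
    return float(beta @ np.linalg.solve(np.asarray(cov, dtype=float), beta))

# Hypothetical: two age-category coefficients from a logistic model
w = wald_statistic([0.40, 0.15], [[0.010, 0.002], [0.002, 0.008]])
# Compare against the chi-square critical value for df=2 at the 5% level (5.991)
```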

A p-value is the probability of the observed difference in results occurring due to chance alone. A probability of less than 5% is conventionally taken to indicate a statistically significant result (p<0.05). It should be noted that the p-value is dependent on the sample size, so that with large samples, differences or associations which are very small may still be statistically significant.

Using this method of statistical testing, differences which are significant at the 5% level indicate that there is sufficient evidence in the data to suggest that the differences in the sample reflect a true difference in the population. 

A second test of significance looks at the interaction between gender and the variable under consideration. If the interaction is statistically significant (p<0.05) this indicates that there is likely to be an underlying difference in the pattern of results for men and women, and this will normally be commented on in the report text. 

For key estimates in each chapter, the text and tables also include confidence intervals, to give readers more information on statistical significance.

5.6 Time series

In most chapters, 2023/4 results are compared to results from previous survey years. To allow for comparisons over time, the 2023/4 results in trend tables use sex rather than gender, as gender was not explicitly collected in previous surveys. The results in trend tables are also sometimes restricted to specific age ranges depending on the estimate, as the 1993 (upper limit of 64 years) and 2000 (74 years) surveys had upper age limits. This means that 2023/4 estimates in trend tables are not directly comparable to tables disaggregated by gender and age within the same chapter. 

Estimates from different survey years are considered significantly different if the confidence intervals for the two estimates do not overlap. In some instances, slightly overlapping confidence intervals might still correspond to a statistically significant difference according to a generated p-value, which provides an alternative indication of statistical significance; where this occurs, it is explained in the commentary. This is a cautious but transparent approach to assessing change over time in this report.
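The non-overlap rule used for trend comparisons can be expressed as a simple check. The example intervals below are made up for illustration.

```python
def significantly_different(ci_a, ci_b):
    """Cautious trend test: two survey estimates are treated as
    significantly different when their 95% confidence intervals
    do not overlap. Each ci is a (lower, upper) pair."""
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]

# Hypothetical: an earlier survey CI of (10.1, 12.3) versus a later
# CI of (13.0, 15.4) do not overlap, so the change is flagged
```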

5.7 Sampling errors and design factors

The percentages quoted in the main report are estimates for the population based on the information from the sample of people who took part in this survey. All such survey estimates are subject to some degree of error. The confidence interval (CI) is calculated from the sampling error, which is a measure of how such a survey estimate would vary if it were calculated for many different samples. If the survey was repeated many times, such a 95% CI would contain the true value 95% of the time. For this survey, a multi-phase stratified design was used, rather than a simple random sample, and the sampling errors need to reflect this.

The effect of a complex sample design on estimates is quantified by the design factor (deft). It is the ratio of the standard error for a complex design to the standard error which would have resulted from a simple random sample. A deft of 2, for example, indicates that the standard errors are twice as large as they would have been had the sample design been a simple random sample. The sampling errors, design effects and CI for key prevalence variables can be found in accompanying data tables for each chapter.
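The role of the design factor can be illustrated by inflating the simple-random-sample standard error of a proportion when forming a confidence interval. The prevalence, sample size and deft below are hypothetical values, not figures from the survey.

```python
import math

def deft_ci(p, n, deft, z=1.96):
    """95% CI for a proportion under a complex design: the
    simple-random-sample standard error is multiplied by deft."""
    se_srs = math.sqrt(p * (1 - p) / n)   # SE under simple random sampling
    se = deft * se_srs                    # SE allowing for the complex design
    return p - z * se, p + z * se

# Hypothetical: 20% prevalence, n = 4,000, deft = 1.3
lo, hi = deft_ci(p=0.20, n=4000, deft=1.3)
```

A deft above 1 widens the interval relative to a simple random sample of the same size; a deft of 2 would double the standard error, as noted above.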


6. Quality assurance

Quality assurance processes were adhered to throughout APMS 2023/4, from preparation and sampling through data collection and data analysis to report writing, as detailed in this chapter. NatCen has a quality management system with sets of procedures that were followed throughout. The purpose of establishing standard procedures, as highlighted by the WHO in relation to its World Health Survey (Üstun et al. 2005), is to help ensure that:

  • Data collection is relevant and meaningful. 
  • Data can be compared across surveys and between subgroups. 
  • Practical implementation of the survey adheres to proper practice. 
  • Errors in data collection are minimised. 
  • Data collection capability is improved over time.

Examples of quality control measures that were built into, or checked after, the survey process included:

  • The computer program used by interviewers had in-built soft checks (which can be suppressed) and hard checks (which cannot be suppressed). These included querying uncommon or unlikely answers, and answers outside the acceptable range.  
  • For phase one interviewers, telephone checks were carried out with participants at 10% of productive households to ensure that the interview had been conducted in a proper manner. 
  • The phase two interview was less structured and required clinical skill and assessment. The work of the phase two interviewers was supervised by a senior research psychologist, who also accompanied all interviewers on at least one of their participant visits three months into fieldwork, to ensure that they were conducting the interview as per protocol and to validate the coding. If a further supervised visit was felt necessary, this was also carried out.  
  • All phase two interviewers were observed on a number of occasions during fieldwork, for their ADOS interviewing to be validated, in addition to weekly meetings to discuss individual cases and queries throughout.  
  • During the analysis stage, the table syntax and accompanying text were reviewed by both the author and a second checker. In the final stages, the complete report underwent quality assurance by the quality assurance lead. 

7. References

American Psychiatric Association. (1994). Diagnostic and Statistical Manual of Mental Disorders (4th ed.). American Psychiatric Association. 

Bebbington, P. E., McManus, S., Coid, J. W., Garside, R., & Brugha, T. (2021). The mental health of ex-prisoners: analysis of the 2014 English National Survey of Psychiatric Morbidity. Social Psychiatry and Psychiatric Epidemiology, 1-11. 

Blanchard, E. B., Jones-Alexander, J., Buckley, T. C., & Forneris, C. A. (1996). Psychometric properties of the PTSD Checklist (PCL). Behaviour Research and Therapy, 34(8), 669-673. 

Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The posttraumatic stress disorder checklist for DSM‐5 (PCL‐5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28(6), 489-498. 

Brugha, T. S., McManus, S., Smith, J., Scott, F. J., Meltzer, H., Purdon, S., ... & Bankart, J. (2012). Validating two survey methods for identifying cases of autism spectrum disorder among adults in the community. Psychological Medicine, 42(3), 647-656. 

Brugha, T. S., Nienhuis, F., Bagchi, D., Smith, J., & Meltzer, H. (1999). The survey form of SCAN: the feasibility of using experienced lay survey interviewers to administer a semi-structured systematic clinical assessment of psychotic and non-psychotic disorders. Psychological Medicine, 29(3), 703-711. 

Brugha, T. S., Spiers, N., Bankart, J., Cooper, S. A., McManus, S., Scott, F. J., ... & Tyrer, F. (2016). Epidemiology of autism in adults across age groups and ability levels. The British Journal of Psychiatry, 209(6), 498-503. 

Chilman, N., Schofield, P., McManus, S., Ronaldson, A., Stagg, A., & Das-Munshi, J. (2024). The public health significance of prior homelessness: findings on multimorbidity and mental health from a nationally representative survey. Epidemiology and Psychiatric Sciences, 33, e63. 

Collins, D. (ed.) (2015). Cognitive Interviewing Practice. SAGE Publications Ltd. 

de Leeuw, E., Hox, J., & Luiten, A. (2018). International nonresponse trends across countries and years: An analysis of 36 years of labour force survey data. Survey Methods: Insights from the Field, 1-11. 

Eggleston, J. (2024). Frequent survey requests and declining response rates: Evidence from the 2020 census and household surveys. Journal of Survey Statistics and Methodology, 12(5), 1138-1156. 

Ferris, J., & Wynne, H. (2001). The Canadian Problem Gambling Index: Final report. Ottawa: Canadian Centre on Substance Abuse. 

First, M. B. (1997). Structured clinical interview for DSM-IV axis II personality disorders, (SCID-II). American Psychiatric Press. 

Gideon, N., Hawkes, N., Mond, J., Saunders, R., Tchanturia, K., & Serpell, L. (2016). Development and psychometric validation of the EDE-QS, a 12 item short form of the Eating Disorder Examination Questionnaire (EDE-Q). PloS One, 11(5), e0152744. 

Gill, V., Wilson, H., & McManus, S. (2021). Adult Psychiatric Morbidity Survey 2022: Survey Consultation Findings. NHS Digital. https://digital.nhs.uk/data-and-information/areas-of-interest/public-health/national-study-of-health-and-wellbeing/adult-psychiatric-morbidity-survey-2022-survey-consultation-findings  

Hirschfeld, R. M., Williams, J. B., Spitzer, R. L., Calabrese, J. R., Flynn, L., Keck Jr, P. E., ... & Zajecka, J. (2000). Development and validation of a screening instrument for bipolar spectrum disorder: the Mood Disorder Questionnaire. American Journal of Psychiatry, 157(11), 1873-1875. 

Lewis, G., Pelosi, A. J., Araya, R., & Dunn, G. (1992). Measuring psychiatric disorder in the community: a standardized assessment for use by lay interviewers. Psychological Medicine, 22(2), 465-486. 

Lord, C., Rutter, M., DiLavore, P., Risi, S., Gotham, K., & Bishop, S. (2012). Autism diagnostic observation schedule–2nd edition (ADOS-2). Los Angeles, CA: Western Psychological Corporation, 284, 474-478. 

Malgady, R. G., Rogler, L. H., & Tryon, W. W. (1992). Issues of validity in the Diagnostic Interview Schedule. Journal of Psychiatric Research, 26(1), 59-67. 

McManus, S., Bebbington, P. E., Jenkins, R., Morgan, Z., Brown, L., Collinson, D., & Brugha, T. (2020). Data resource profile: adult psychiatric morbidity survey (APMS). International Journal of Epidemiology, 49(2), 361-362e. 

Moran, P., Leese, M., Lee, T., Walters, P., Thornicroft, G., & Mann, A. (2003). Standardised Assessment of Personality – Abbreviated Scale (SAPAS): Preliminary validation of a brief screen for personality disorder. British Journal of Psychiatry, 183(3), 228–232.  

Morgan, J. F., Reid, F., & Lacey, J. H. (1999). The SCOFF questionnaire: assessment of a new screening tool for eating disorders. BMJ, 319(7223), 1467-1468. 

Ng Fat, L., Scholes, S., Boniface, S., Mindell, J., & Stewart-Brown, S. (2017). Evaluating and establishing national norms for mental wellbeing using the short Warwick–Edinburgh Mental Well-being Scale (SWEMWBS): findings from the Health Survey for England. Quality of Life Research, 26, 1129-1144.  

Office for National Statistics. (2022a). Ethnic group, England and Wales: Census 2021. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/culturalidentity/ethnicity/bulletins/ethnicgroupenglandandwales/census2021

Office for National Statistics. (2022b). Mid-year population estimates, 2022. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/bulletins/annualmidyearpopulationestimates/mid2022 

Office for National Statistics. (2023a). Communal establishment residents, England and Wales: Census 2021. Office for National Statistics. Retrieved from https://www.ons.gov.uk/peoplepopulationandcommunity/housing/bulletins/communalestablishmentresidentsenglandandwales/census2021

Office for National Statistics. (2023b). Evaluation of addressing quality: Census 2021. Retrieved from https://www.beta.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/methodologies/evaluationofaddressingqualitycensus2021

Office for National Statistics. (2025). Quarterly Labour Force Survey, October - December, 2023 [Data collection] (3rd ed.). UK Data Service.  

Saunders, J. B., Aasland, O. G., Babor, T. F., De la Fuente, J. R., & Grant, M. (1993). Development of the alcohol use disorders identification test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption‐II. Addiction, 88(6), 791-804. 

Stewart-Brown, S., Tennant, A., Tennant, R., Platt, S., Parkinson, J., & Weich, S. (2009). Internal construct validity of the Warwick-Edinburgh mental well-being scale (WEMWBS): a Rasch analysis using data from the Scottish health education population survey. Health and Quality of Life Outcomes, 7, 1-8. https://link.springer.com/article/10.1186/1477-7525-7-15

Tennant, R., Hiller, L., Fishwick, R., Platt, S., Joseph, S., Weich, S., ... & Stewart-Brown, S. (2007). The Warwick-Edinburgh mental well-being scale (WEMWBS): development and UK validation. Health and Quality of life Outcomes, 5, 1-13. https://hqlo.biomedcentral.com/articles/10.1186/1477-7525-5-63

Üstun TB., Chatterji S., Mechbal A., Murray C.J.L. (2005). Quality assurance in surveys: standards, guidelines and procedures. World Health Organization. Geneva: Switzerland.  

Williams, D., & Brick, J. M. (2018). Trends in US face-to-face household survey nonresponse and level of effort. Journal of Survey Statistics and Methodology, 6(2), 186-211. 

World Health Organization. (1993). The ICD-10 classification of mental and behavioural disorders: Clinical descriptions and diagnostic guidelines. World Health Organization. 

World Health Organization. (1999). SCAN Schedules for Clinical Assessment in Neuropsychiatry Version 2.1. World Health Organization. 

World Health Organization. (2003). Adult ADHD Self-Report Scale-V1.1 (ASRS-V1.1) Screen. WHO Composite International Diagnostic Interview. World Health Organization. 

World Health Organization. (2014). Schedules for Clinical Assessment in Neuropsychiatry (SCAN). World Health Organization. 

World Health Organization. (2019). International classification of diseases for mortality and morbidity statistics (11th ed.). World Health Organization.


8. Citation

Please cite this chapter as:

Ridout, K., Keyes, A., Maxineanu, I., Morris, S., Timpe, A., Toomse-Smith, M., Brugha, T., Morgan, Z., Tromans, S., & McManus, S. (2025). Methods. In Morris, S., Hill, S., Brugha, T., McManus, S. (Eds.), Adult Psychiatric Morbidity Survey: Survey of Mental Health and Wellbeing, England, 2023/4. NHS England.


Last edited: 26 June 2025 9:31 am