This review examines the methodological quality of the published research most frequently cited to dismiss astrology as pseudoscience. It identifies a consistent pattern in the literature: studies producing negative findings about astrology are accepted without serious methodological scrutiny, while studies producing positive findings are suppressed, reframed, or attacked with unfalsifiable alternative explanations. The review critiques the standard instruments used to measure astrological belief, documents the suppression of statistically significant positive findings across multiple independent studies, and situates these failures within the broader metascience literature on publication bias, self-censorship, and epistemic injustice. The evidence does not demonstrate that astrology has been disproven. It demonstrates that the question has not been honestly investigated.
The scientific study of astrology has a problem, and the problem is not astrology. It is the researchers.
Over the past four decades, a consistent pattern has emerged in the academic literature. Studies that produce negative findings about astrology are accepted into prestigious journals without serious methodological scrutiny. Studies that produce positive findings are attacked, reframed, or suppressed. The instruments used to measure "belief in astrology" do not measure anything a serious astrologer would recognise. The samples are small, culturally homogeneous, and drawn from populations that do not represent the people being studied. And the conclusions drawn from this evidence base are treated, by journalists and the public alike, as settled science.
They are not settled science. They are settled prejudice with institutional backing.
What follows is not a defence of astrology. It is an examination of the research. Every claim made here is sourced, verifiable, and drawn from the published academic record. The argument is simple: when you read the studies that are used to dismiss astrology, with the same critical attention you would bring to any other field, the case falls apart. Not because astrology is proven correct, but because the research that claims to disprove it does not meet the standards the scientific establishment claims to uphold.
In 2022, Andersson, Persson, and Kajonius published a study in Personality and Individual Differences titled "Even the stars think that I am superior."[1] The study recruited 264 participants from a single Facebook group, a sample composed predominantly of middle-aged women. From this, the researchers concluded that belief in astrology is associated with narcissism and lower intelligence. The findings were reported by Psychology Today, PsyPost, the Daily Mail, and dozens of other outlets. None of them questioned the methodology.
Consider what would happen if a researcher recruited 264 people from a single Christian Facebook group and published a paper concluding that Christians are narcissistic and unintelligent. The study would not survive peer review. It would not survive an ethics board. It would not survive a first-year research methods seminar. The sample is self-selected, culturally homogeneous, and far too small to generalise from. But because the target was astrology, the paper sailed through review at a respected Elsevier journal and became international news.
In 2025, Edwards, March, Willoughby, and Giannelis published a study in the Journal of Individual Differences using General Social Survey data from 8,553 Americans, reporting that intelligence (measured by a vocabulary test) was the strongest negative predictor of astrological belief.[2] The larger sample is an improvement. The instrument is not. The study makes no distinction between someone who reads a newspaper horoscope and someone who analyses natal charts using Swiss Ephemeris data. It conflates serious astrological practice with pop astrology, and it does so because the instruments available to measure "astrological belief" are designed to conflate them.
The Belief in Astrology Inventory, developed by Chico and Lorenzo-Seva in 2006, is the standard psychometric scale used across the literature.[3] It asks participants to rate statements like "A person's zodiac sign determines how he behaves" and "If I get the chance, I read my daily horoscope." A natal chart contains hundreds of interacting data points: planetary positions across twelve houses, aspects at precise angular distances, dignities and debilities, sect conditions, retrograde cycles, mutual receptions, element and modality distributions. No serious astrologer would recognise "a person's zodiac sign determines how he behaves" as a statement about astrology. It is a statement about the researchers' understanding of astrology, which is to say, it is a statement about their ignorance.
The standard response to any positive astrological finding is the Barnum effect, first demonstrated by Bertram Forer in 1949.[4] Forer showed that people accept vague personality descriptions as highly accurate when told they are personalised. This is real and well-documented. It explains why newspaper horoscopes feel relevant. It does not explain why, in controlled studies where participants evaluate chart-specific readings against matched controls, they sometimes identify the real reading at above-chance rates. The Barnum effect accounts for acceptance of generic statements. It does not account for discrimination between specific and non-specific statements. These are different cognitive tasks, and treating them as identical is a methodological error that the literature has been repeating for decades.
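The difference between the two tasks is easy to state in code. The sketch below, written in Python with purely illustrative numbers rather than data from any study cited here, scores a Barnum-style acceptance task (everyone rates the same generic profile) and a forced-choice discrimination task (each participant picks their own chart-based reading from three candidates, so chance is one in three) side by side.

```python
# Two different measurements, two different questions. All numbers below are
# illustrative; they are not data from any study cited in this review.
from math import comb

def binomial_tail(successes: int, trials: int, chance: float) -> float:
    """One-sided P(X >= successes) under a chance-level binomial model."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Task 1 -- Barnum-style acceptance: every participant rates one generic profile.
# A high mean here reflects the persuasiveness of vague statements, nothing more.
acceptance_ratings = [4, 5, 4, 4, 5, 3, 4, 5]            # 1-5 scale, illustrative
mean_acceptance = sum(acceptance_ratings) / len(acceptance_ratings)

# Task 2 -- discrimination: each participant picks their own chart-based reading
# out of three candidates, so chance performance is 1 in 3.
n_participants = 90
n_correct = 40                                            # illustrative count
p_value = binomial_tail(n_correct, n_participants, chance=1 / 3)

print(f"Mean acceptance of the generic profile: {mean_acceptance:.2f} / 5")
print(f"Discrimination hits: {n_correct}/{n_participants} "
      f"(chance = 1/3), one-sided p = {p_value:.3f}")
```

A high mean on the first measure is fully compatible with chance performance on the second; only the second bears on whether chart-specific readings carry information.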
In 1985, Shawn Carlson published what remains the most cited study in astrology research: a double-blind test in Nature claiming to demonstrate that astrologers cannot match natal charts to personality profiles above chance rates.[5] The study has been treated as definitive for four decades. It appeared in Nature's commentary section, not the peer-reviewed research section. It is also wrong.
When the psychologist Suitbert Ertel reanalysed Carlson's raw data in 2009, he found that Carlson had divided his sample into five sub-groups and tested each separately, a procedure that reduces statistical power and makes it harder to detect effects that exist. When Ertel pooled the data correctly and analysed the full sample, both tests favoured the astrologers: p = .054 for the three-way forced choice, marginally significant, and p = .04 for the rating scale, with a medium effect size.[6] Carlson's study did not demonstrate that astrology fails. It demonstrated that splitting your sample into fragments and testing each one separately is an effective way to make a real effect disappear.
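The power argument is easy to demonstrate by simulation. The sketch below uses invented parameters, not Carlson's actual design or data: it assumes a modest real effect (a 42% hit rate where chance is one in three) and compares how often a pooled test of 100 participants reaches significance with how often a single sub-group of 20 does.

```python
# Illustrative power simulation -- invented parameters, not Carlson's data.
# A modest real effect (42% hit rate where chance is 1/3) is far more likely to
# reach significance in a pooled sample of 100 than in any sub-group of 20.
import random
from math import comb

def binomial_tail(successes, trials, chance):
    """One-sided P(X >= successes) under a chance-level binomial model."""
    return sum(comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
               for k in range(successes, trials + 1))

def significant(hits, n, chance=1 / 3, alpha=0.05):
    return binomial_tail(hits, n, chance) < alpha

random.seed(1)
TRUE_HIT_RATE = 0.42                      # assumed real effect; chance = 1/3
N_TOTAL, N_GROUPS, RUNS = 100, 5, 2000
GROUP_SIZE = N_TOTAL // N_GROUPS

pooled_wins = subgroup_wins = 0
for _ in range(RUNS):
    outcomes = [random.random() < TRUE_HIT_RATE for _ in range(N_TOTAL)]
    if significant(sum(outcomes), N_TOTAL):
        pooled_wins += 1
    # Test each fragment of 20 separately, as a sub-group analysis would.
    for i in range(N_GROUPS):
        group = outcomes[i * GROUP_SIZE:(i + 1) * GROUP_SIZE]
        if significant(sum(group), GROUP_SIZE):
            subgroup_wins += 1

print(f"Power of the pooled test (N={N_TOTAL}): {pooled_wins / RUNS:.0%}")
print(f"Power of a single sub-group test (n={GROUP_SIZE}): "
      f"{subgroup_wins / (RUNS * N_GROUPS):.0%}")
```

Under these assumptions the pooled test detects the effect several times more often than any individual fragment does, which is the statistical point at issue.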
In 1978, Mayo, White, and Eysenck published results from a large study (N = 2,324) in the Journal of Social Psychology that found significant support for astrological predictions regarding extraversion and introversion by zodiac sign.[7] The findings were initially celebrated as evidence worth investigating. They were then explained away. A follow-up study proposed that the results were an artefact of "self-attribution": participants who knew their zodiac sign's supposed traits had attributed those traits to themselves. The self-attribution explanation was accepted without the same scrutiny that had been applied to the original positive findings. The logical circularity is instructive: if participants score in a way that matches astrological predictions, it is because they already believed the predictions. If they do not score that way, astrology is disproven. There is no possible outcome in which astrology receives a fair hearing.
The most instructive case is the Gauquelin planetary effect. Michel Gauquelin analysed over 25,000 birth records across multiple European countries and found that Mars appeared in angular house positions significantly more often in the birth charts of eminent athletes than chance would predict.[8] The finding replicated across independent national datasets. The Belgian Comité Para, a sceptical organisation established specifically to investigate such claims, tested the Mars effect in 1967 and confirmed it. They then sat on the results for eight years. When the findings were eventually made public, the committee claimed "demographic errors" had contaminated the data, without specifying what those errors were or producing evidence for them. Committee member Luc de Marré resigned in protest. Independent analyses in the 1980s confirmed the statistical methodology was sound.
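For readers who want to see what "more often than chance would predict" means operationally, here is a minimal sketch with invented counts, not Gauquelin's actual figures. It assumes a baseline rate at which Mars falls in the key sectors (the 17% figure is an assumption for illustration) and applies a normal-approximation z-test to the observed excess.

```python
# Illustrative check of an excess-over-chance claim. The counts and the 17%
# baseline are assumptions for demonstration, not Gauquelin's actual figures.
from math import sqrt
from statistics import NormalDist

chance_rate = 0.17       # assumed share of time Mars spends in the key sectors
n_charts = 2000          # illustrative sample of eminent athletes' charts
observed_hits = 420      # illustrative number of charts with Mars in those sectors

observed_rate = observed_hits / n_charts
standard_error = sqrt(chance_rate * (1 - chance_rate) / n_charts)
z = (observed_rate - chance_rate) / standard_error
p_one_sided = 1 - NormalDist().cdf(z)

print(f"Observed {observed_rate:.1%} vs expected {chance_rate:.1%}: "
      f"z = {z:.2f}, one-sided p = {p_one_sided:.2g}")
```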
Dean and Kelly published a meta-analysis in 2003 in the Journal of Consciousness Studies, aggregating over 40 controlled studies and concluding that astrologers perform at chance level.[9] McRitchie's 2016 critique in the same journal demonstrated that Dean and Kelly had conflated methodologically disparate studies, mixing rigorous research with poorly designed experiments to produce a straw-man version of astrology that was easy to knock down.[10] When you combine a study that tests whether sun signs predict personality with a study that tests whether full natal chart analysis produces discriminable readings, the former drowns the latter. This is not meta-analysis. It is dilution.
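The dilution mechanism can be shown with a toy fixed-effect pooling. The numbers below are invented for illustration and are not Dean and Kelly's data: one chart-level discrimination study with a non-trivial effect is averaged, inverse-variance weighted, with nine sun-sign studies that test a different and weaker question, and the pooled estimate collapses toward zero.

```python
# Toy fixed-effect pooling with invented numbers (not Dean & Kelly's data):
# one chart-level discrimination study with a real effect is averaged together
# with nine sun-sign studies that test a different, weaker question.
from math import sqrt

# Each study is (effect size, standard error); all values are illustrative.
whole_chart_studies = [(0.30, 0.10)]
sun_sign_studies = [(0.00, 0.05)] * 9

def fixed_effect_pool(studies):
    """Inverse-variance weighted mean effect size and its standard error."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    return pooled, sqrt(1 / sum(weights))

alone, _ = fixed_effect_pool(whole_chart_studies)
lumped, lumped_se = fixed_effect_pool(whole_chart_studies + sun_sign_studies)

print(f"Chart-level study on its own:    effect = {alone:.2f}")
print(f"All ten studies lumped together: effect = {lumped:.2f} (SE = {lumped_se:.2f})")
```

The aggregate says almost nothing about the one study that asked the stronger question; heterogeneity of this kind is exactly what a responsible meta-analysis has to model rather than average away.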
Figure 1. Chronological overview of landmark studies in astrology research. Positive findings are consistently suppressed, reframed with unfalsifiable alternative explanations, or left unreplicated. Negative findings are accepted into the literature without commensurate methodological scrutiny. The pattern spans seven decades.
In 2010, Henrich, Heine, and Norenzayan published one of the most cited papers in behavioural science, demonstrating that 96% of participants in psychology research come from Western, Educated, Industrialised, Rich, and Democratic populations, representing roughly 12% of the global population.[11] By 2018, the problem had barely improved: 95% of research samples were still drawn from these populations, and Africa, representing 17% of the world's people, contributed less than 1% of research participants.[12]
Every astrology study cited in the mainstream literature was conducted within this WEIRD framework. The researchers are Western. The samples are Western. The instruments measure Western pop-cultural understanding of astrology. Indian Jyotish, Chinese BaZi, Hellenistic, Medieval, and Renaissance traditions are invisible. When researchers conclude that "astrology" does not work, they mean that the thin, culturally specific version of astrology they measured in their culturally specific sample did not produce the culturally specific result they were testing for. The confidence with which this conclusion is generalised to all astrological practice, across all traditions, is not supported by the evidence. It is supported by the assumption that Western academic psychology is the only framework that counts.
The American Psychological Association publishes explicit guidance requiring practitioners to respect clients' religious and spiritual beliefs as a core dimension of human diversity.[13] Astrology, despite functioning as a deeply held framework for meaning-making for millions of people across multiple cultures, receives no such protection. Prayer is studied with ethical guardrails and cultural sensitivity. Meditation is studied with respect. Astrology is studied with the explicit assumption that it is false, by researchers who design their instruments to confirm that assumption, and publish their findings in journals that have no incentive to question the premise.
The temptation is to treat the astrology research record as an isolated case. It is not. The dynamics that produce bad astrology research are the same dynamics that have been documented, by some of the most respected scholars in metascience, as systemic failures across the entire scientific enterprise.
Thomas Kuhn argued in 1962 that normal science actively resists anomalies and paradigm-contradicting evidence until a crisis forces revolutionary change.[14] The institutions of normal science do not reward curiosity about inconvenient findings. They reward conformity to the existing paradigm. Max Planck put it more bluntly: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."[15]
In 2005, John Ioannidis published "Why Most Published Research Findings Are False" in PLOS Medicine, demonstrating that the majority of published research contains results that cannot be replicated, driven by systemic incentives that reward novel positive findings over rigorous methodology.[16] A decade later, Brian Nosek and the Open Science Collaboration attempted to replicate 100 published psychology experiments and found that fewer than half produced the same results.[17] Daniele Fanelli's 2017 meta-assessment of over 68,000 meta-analyses in PNAS confirmed that publication selection bias systematically contaminates scientific fields, with small studies and early studies consistently overestimating effects because journals prefer publishable, positive, novel results.[18]
The problem extends beyond methodology to what researchers are willing to study at all. In 2008, Joanna Kempner documented the "chilling effect" of controversy on research agendas: scientists self-censor and avoid entire research programmes when those programmes are associated with social or political stigma.[19] In 2025, Clark and colleagues published empirical evidence confirming what Kempner observed: 78% of U.S. psychology professors report fear of social sanctions for researching taboo topics, and tenure provides no protection whatsoever. Tenured faculty report the same level of fear as their untenured colleagues. The mechanisms scientists fear most are not institutional penalties but reputational damage, colleague ostracism, and social media attacks.[20]
Miranda Fricker's concept of epistemic injustice describes how social power structures systematically exclude or delegitimise certain knowledge claims based on the identity of the speaker rather than the quality of the evidence.[21] David Hess and colleagues have documented how institutional power structures leave entire areas of research unfunded and unexamined when they conflict with prevailing assumptions, a phenomenon they call "undone science."[22] The astrology research record sits at the intersection of every one of these documented failures: a stigmatised topic that researchers avoid for fear of professional consequences, studied using instruments that reflect ignorance of the subject matter, with positive findings suppressed and negative findings amplified, while the people who seek meaning from the practice are pathologised by researchers who would never apply the same treatment to populations defined by religious belief.
This is not a conspiracy. It is something worse: a set of incentive structures that produce biased outcomes without anyone needing to conspire. The researchers are not lying. They are doing what the system rewards them for doing. The system is the problem.
People do not seek astrology because they lack critical thinking skills. They seek it because they are experiencing uncertainty, difficulty, or pain, and they are looking for a framework that helps them understand what they are going through. Research consistently shows that engagement with astrological and metaphysical frameworks increases during periods of instability, stress, and social upheaval.[23] The first newspaper horoscope column was commissioned in August 1930, during the Great Depression. Astrology consultation spiked during the 2008 financial crisis. The pattern is as old as the practice itself.
When a journalist publishes "People who believe in astrology are narcissists" based on a study of 264 self-selected Facebook users, and that headline circulates to millions of readers, the effect is not neutral. It shames people for seeking meaning during the most difficult periods of their lives. It reinforces the social stigma that prevents people from discussing metaphysical beliefs openly, even with therapists and counsellors who are ethically obligated to respect those beliefs. And it does so under the banner of "science," borrowing the authority of empirical research without having done the work that authority requires.
We do not know, because no one has studied it, how this stigma affects the wellbeing of people who hold astrological beliefs. We do not know whether it suppresses help-seeking behaviour. We do not know whether it damages self-determination, autonomy, or the capacity to relate to others. These are basic research ethics questions that would be asked automatically if the population in question were defined by religious belief, ethnicity, or any other recognised dimension of diversity. The reason no one has asked them is the same reason the research is bad: fear of professional consequences for being associated with an unfashionable subject.
Honest inquiry would start by building instruments that measure what serious astrologers actually do, not what researchers imagine they do. It would use full natal charts, not sun signs. It would test whether chart-specific analysis produces discriminable readings, not whether horoscopes feel personally relevant. It would recruit participants from multiple cultural traditions, not a single Facebook group in a single country. It would apply the same methodological standards to positive and negative findings alike. And it would subject its conclusions to the same scrutiny it demands of the practices it investigates.
Honest inquiry would also require researchers willing to risk their reputations on the question. Kempner's chilling effect is real. Clark's data on self-censorship is real. The institutional cost of studying a stigmatised subject is real. But the alternative is what we have now: a research record that tells us more about academic culture than it does about astrology, and a public that has been told, with unearned confidence, that the question is settled.
It is not settled. The research has not been done properly. And until it is, the people who cite that research to dismiss, shame, and pathologise millions of people who find genuine value in astrological frameworks are not defending science. They are defending their own comfort. The evidence, when you read it with the care it deserves, supports exactly one conclusion: that the scientific study of astrology has been conducted with less rigour, less honesty, and less intellectual courage than the subject demands.
Beaufort Intelligence exists because someone has to do it properly.
1. Andersson, I., Persson, J., & Kajonius, P. (2022). "Even the stars think that I am superior: Personality, intelligence and belief in astrology." Personality and Individual Differences, 187, 111389. doi.org/10.1016/j.paid.2021.111389
2. Edwards, T., March, M. J., Willoughby, E. A., & Giannelis, A. (2025). "Intelligence and individual differences in astrological belief." Journal of Individual Differences, 46(1), 50–57. econtent.hogrefe.com/doi/abs/10.1027/1614-0001/a000434
3. Chico, E., & Lorenzo-Seva, U. (2006). "Belief in Astrology Inventory: Development and Validation." Psychological Reports, 99(3), 851–863. doi.org/10.2466/PR0.99.3.851-863
4. Forer, B. R. (1949). "The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility." The Journal of Abnormal and Social Psychology, 44(1), 118–123. doi.org/10.1037/h0059240
5. Carlson, S. (1985). "A double-blind test of astrology." Nature, 318, 419–425. doi.org/10.1038/318419a0
6. Ertel, S. (2009). "Appraisal of Shawn Carlson's Renowned Astrology Tests." Journal of Scientific Exploration, 23(2), 125–137. journalofscientificexploration.org
7. Mayo, J., White, O., & Eysenck, H. J. (1978). "An empirical study of the relation between astrological factors and personality." The Journal of Social Psychology, 105(2), 229–236.
8. Gauquelin, M. (1955). L'Influence des Astres: Étude Critique et Expérimentale. Paris: Le Dauphin. See also: Ertel, S. (1988). "Raising the Hurdle for the Athletes' Mars Effect." Journal of Scientific Exploration, 2(1), 53–82.
9. Dean, G., & Kelly, I. W. (2003). "Is astrology relevant to consciousness and psi?" Journal of Consciousness Studies, 10(6–7), 175–198.
10. McRitchie, K. (2016). "Clearing the Logjam in Astrological Research." Journal of Consciousness Studies, 23(9–10), 153–179. philarchive.org/archive/MCRCTL
11. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). "The Weirdest People in the World?" Behavioral and Brain Sciences, 33(2–3), 61–83. doi.org/10.1017/S0140525X0999152X
12. Rad, M. S., Martingano, A. J., & Ginges, J. (2018). "Toward a psychology of Homo sapiens: Making psychological science more representative." Proceedings of the National Academy of Sciences, 115(45), 11401–11405. doi.org/10.1073/pnas.1721165115
13. American Psychological Association. (2017). "Ethical Principles of Psychologists and Code of Conduct," Principle E: Respect for People's Rights and Dignity. apa.org/ethics/code
14. Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
15. Planck, M. (1949). A Scientific Autobiography. Trans. Frank Gaynor. Williams & Wilkins. Originally published as Wissenschaftliche Selbstbiographie (1948).
16. Ioannidis, J. P. A. (2005). "Why Most Published Research Findings Are False." PLOS Medicine, 2(8), e124. doi.org/10.1371/journal.pmed.0020124
17. Open Science Collaboration. (2015). "Estimating the reproducibility of psychological science." Science, 349(6251), aac4716. doi.org/10.1126/science.aac4716
18. Fanelli, D., Costas, R., & Ioannidis, J. P. A. (2017). "Meta-assessment of bias in science." Proceedings of the National Academy of Sciences, 114(14), 3714–3719. doi.org/10.1073/pnas.1618569114
19. Kempner, J. (2008). "The Chilling Effect: How Do Researchers React to Controversy?" PLoS Biology, 6(11), e287. doi.org/10.1371/journal.pbio.0060287
20. Clark, C. J., Fjeldmark, M., Lu, L., Baumeister, R. F., Ceci, S. J., Frey, K., Miller, G. F., Reilly, W., Tice, D., von Hippel, W., Williams, W. M., Winegard, B. M., & Tetlock, P. E. (2025). "Taboos and Self-Censorship Among U.S. Psychology Professors." Perspectives on Psychological Science. doi.org/10.1177/17456916241252085
21. Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
22. Hess, D. J., et al. (2010). "Undone Science: Charting Social Movement and Civil Society Challenges to Research Agenda Setting." Science, Technology & Human Values, 35(4), 444–473. doi.org/10.1177/0162243909345836
23. Tyson, G. A. (1982). "People who consult astrologers: A profile." Personality and Individual Differences, 3(2), 119–126. doi.org/10.1016/0191-8869(82)90026-5